TrueNAS Scale 24.10 - Very slow SMB performance Windows 11 client


bugacha

Active Member
Sep 21, 2024
232
55
28
This has been driving me nuts this week. My Terramaster F4-424 Max (RAID 1 with NVMe cache) is 2x faster at reading from its SMB share than my TrueNAS RAID 1 NVMe dataset.

My setup is:

VM: TrueNAS Scale 24.10.2 on Proxmox (q35, VirtIO SCSI single)
8 cores of an Epyc 7402
96 GB RAM for TrueNAS
2x NVMe PM983, PCIe passthrough
1x Mellanox ConnectX-4 Lx 25 GbE, PCIe passthrough

I have a RAID 1 mirror of the two PM983s exported as an SMB Multichannel share.

Samba 4.20 - super modern, definitely supports SMB Multichannel and io_uring.

It's perfectly capable of determining interface speed and RSS support on its own, but I also tried setting it explicitly via the CLI:
Code:
interfaces = "x.x.x.x;capabilities=RSS,speed=25....lots of zeros"
via
Code:
service smb update smb_options=
with subsequent smbd restart
I also tried the following (though from what I read, these are legacy options, totally useless with modern Samba):
Code:
# also tried: aio read size = 16384
aio read size = 1
use sendfile = yes
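Put together, the multichannel-related bits of smb.conf I was experimenting with look roughly like this (a sketch, not my exact config: the IP is my server's, and speed is in bits per second, so 25 GbE would be 25000000000; `server multi channel support` already defaults to yes on modern Samba):

```
[global]
    # On by default in modern Samba, but can be set explicitly
    server multi channel support = yes
    # Advertise RSS capability and link speed (bits/s) for a specific interface
    interfaces = "192.168.1.40;capabilities=RSS,speed=25000000000"
```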

The NVMe is fast; fio reports this for sequential writes:

Code:
$ fio --name=fio_test --ioengine=libaio --iodepth=16 --direct=1 --thread --rw=write --size=100M --bs=4M --numjobs=1 --time_based --runtime=60
Run status group 0 (all jobs):
  WRITE: bw=6334MiB/s (6642MB/s), 6334MiB/s-6334MiB/s (6642MB/s-6642MB/s), io=371GiB (399GB), run=60001-60001msec
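Since the slow path is reads, the read-direction version of the same test looks like this (same parameters, just `--rw=read`; a sketch of the command, not verbatim output):

```
fio --name=fio_read --ioengine=libaio --iodepth=16 --direct=1 --thread \
    --rw=read --size=100M --bs=4M --numjobs=1 --time_based --runtime=60
```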
Network: iperf3 hits 10 Gbps easily.

Code:
Accepted connection from 192.168.1.16, port 60898
[  5] local 192.168.1.40 port 5201 connected to 192.168.1.16 port 60899
[  8] local 192.168.1.40 port 5201 connected to 192.168.1.16 port 60900
[ 10] local 192.168.1.40 port 5201 connected to 192.168.1.16 port 60901
[ 12] local 192.168.1.40 port 5201 connected to 192.168.1.16 port 60902
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   299 MBytes  2.51 Gbits/sec
[  8]   0.00-1.00   sec   299 MBytes  2.51 Gbits/sec
[ 10]   0.00-1.00   sec   292 MBytes  2.45 Gbits/sec
[ 12]   0.00-1.00   sec   292 MBytes  2.45 Gbits/sec
[SUM]   0.00-1.00   sec  1.15 GBytes  9.91 Gbits/sec
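The four data connections above come from a parallel-stream client run, something like the following (the `-P 4` flag is an assumption based on the four accepted connections in the output):

```
# Client side: open 4 parallel TCP streams to the TrueNAS VM
iperf3 -c 192.168.1.40 -P 4
```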
It's a super basic setup, as you can see; everything is fast and should be fast for clients.

But it is not: the max I get copying files from the SMB share is 470 MB/s.


Client: Windows 11 Pro with a Mellanox ConnectX-4 Lx.

SMB Multichannel: 100% enabled and used (I even wasted a day looking at the negotiation packets in Wireshark).

Code:
PS C:\Windows\System32>  Get-SmbConnection

ServerName   ShareName  UserName     Credential  Dialect NumOpens
----------   ---------  --------     ----------  ------- --------
192.168.1.40 video_fast KORESH\admin KORESH\otec 3.1.1   2

PS C:\Windows\System32> Get-SmbMultichannelConnection

Server Name  Selected Client IP    Server IP    Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
-----------  -------- ---------    ---------    ---------------------- ---------------------- ------------------ -------------------
192.168.1.40 True     192.168.1.16 192.168.1.40 6                      2                      True               False
That's it. I tried numerous sysctl changes, but there's no point since iperf3 already shows a perfect 10 Gbps.

When I connect the same Windows 11 client to the Terramaster F4-424 Max (running a 1x 10 Gbps RJ45 Marvell AQtion), I immediately get 1.15 GB/s.

RAID1 + NVMe cache

The Terramaster runs Samba 4.15 with the same settings.






WHY IS TRUENAS SO SLOW??

I did try version 22 and even the 25 nightly. Same speed.

Is this a side effect of running under a Proxmox VM? Then why do I get perfect 10 Gbps saturation and 6 GB/s NVMe speed inside the VM using specialized tools?
 

gea

Well-Known Member
Dec 31, 2010
3,432
1,335
113
DE
The problem is Samba + transfer over IP.
Kernel-based SMB servers over IP (ksmbd on Linux, Solaris/illumos, or Windows) are a little faster.

If you really want SMB at 3-10 GByte/s (25-100G NICs), the only option is SMB Direct/RDMA.
While ksmbd claims SMB Direct support, I have not seen a single success report, so for now only Windows Server (e.g. 2022/2025 Essentials) offers working SMB Direct.
 

pimposh

hardware pimp
Nov 19, 2022
329
188
43
Nah, 1.15 GB/s is always doable on typical smbd as long as it's not run on some clumsy hardware. The problem has to be elsewhere.
 

bugacha

Active Member
Sep 21, 2024
232
55
28
The problem is Samba + transfer over IP.
Kernel-based SMB servers over IP (ksmbd on Linux, Solaris/illumos, or Windows) are a little faster.

If you really want SMB at 3-10 GByte/s (25-100G NICs), the only option is SMB Direct/RDMA.
While ksmbd claims SMB Direct support, I have not seen a single success report, so for now only Windows Server (e.g. 2022/2025 Essentials) offers working SMB Direct.
I hear you. As step 1, I just want to get the 1.15 GB/s I already get from Samba 4.15 on the Terramaster.

I'm thinking about Windows Server. Just need to find a source of cheap licenses first.
 

gea

Well-Known Member
Dec 31, 2010
3,432
1,335
113
DE
On Proxmox you can install ksmbd; it should be the fastest SMB option under Proxmox.

Its settings are nearly identical to Samba's smb.conf.
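For anyone wanting to try it, a rough sketch on a Debian-based Proxmox host (package name, config path, and service name are the ksmbd-tools defaults as I understand them; the share path and user are hypothetical):

```
# On the Proxmox (Debian) host
apt install ksmbd-tools
# Create /etc/ksmbd/ksmbd.conf with an smb.conf-style share, e.g.:
#   [video_fast]
#       path = /tank/video_fast
#       read only = no
ksmbd.adduser -a smbuser      # hypothetical user
systemctl enable --now ksmbd
```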
 

kapone

Well-Known Member
May 23, 2015
1,285
738
113
But it is not: the max I get copying files from the SMB share is 470 MB/s.

View attachment 41640

Client: Windows 11 Pro with a Mellanox ConnectX-4 Lx.
You're copying TO "blaze(D:)"...

SMB Multichannel: 100% enabled and used (I even wasted a day looking at the negotiation packets in Wireshark).

Code:
PS C:\Windows\System32>  Get-SmbConnection

ServerName   ShareName  UserName     Credential  Dialect NumOpens
----------   ---------  --------     ----------  ------- --------
192.168.1.40 video_fast KORESH\admin KORESH\otec 3.1.1   2

PS C:\Windows\System32> Get-SmbMultichannelConnection

Server Name  Selected Client IP    Server IP    Client Interface Index Server Interface Index Client RSS Capable Client RDMA Capable
-----------  -------- ---------    ---------    ---------------------- ---------------------- ------------------ -------------------
192.168.1.40 True     192.168.1.16 192.168.1.40 6                      2                      True               False
Am I reading this correctly? It says FALSE for RDMA capable? (True for RSS, not RDMA)

You're copying TO "Downloads"...

Are these two destinations (Downloads and blaze on D:) the same drive on your client?
 

bugacha

Active Member
Sep 21, 2024
232
55
28
You're copying TO "blaze(D:)"...
Am I reading this correctly? It says FALSE for RDMA capable? (True for RSS, not RDMA)
You're copying TO "Downloads"...
Are these two destinations (Downloads and blaze on D:) the same drive on your client?
In both cases I copy to the NVMe on Windows 11; blaze is the disk, Downloads is a folder on that disk. (I did thousands of these copies this week, so pardon the random screenshots, but the destination is the same.)

Samba doesn't support RDMA (as @gea mentioned above), so my best bet is SMB Multichannel, which requires either RSS support or multiple NICs. (I did try multiple NICs; same speed.)

Samba and the client definitely negotiate SMB 3.1.1 + SMB Multichannel; I saw it in the SMB2 negotiation packets in tcpdump. It's 100% enabled with both TrueNAS and the Terramaster.
 

kapone

Well-Known Member
May 23, 2015
1,285
738
113
In both cases I copy to the NVMe on Windows 11; blaze is the disk, Downloads is a folder on that disk. (I did thousands of these copies this week, so pardon the random screenshots, but the destination is the same.)

Samba doesn't support RDMA (as @gea mentioned above), so my best bet is SMB Multichannel, which requires either RSS support or multiple NICs. (I did try multiple NICs; same speed.)

Samba and the client definitely negotiate SMB 3.1.1 + SMB Multichannel; I saw it in the SMB2 negotiation packets in tcpdump. It's 100% enabled with both TrueNAS and the Terramaster.
Got it.

So, really, the difference is...something on bare metal vs something virtualized on the other end...

Hmm... Install TrueNAS bare metal on that Proxmox hardware and see if it makes a difference?
 

bugacha

Active Member
Sep 21, 2024
232
55
28
Got it.

So, really, the difference is...something on bare metal vs something virtualized on the other end...
I initially thought so too, but the fio and iperf3 benchmarks show the NVMe and NIC working perfectly fine inside the Proxmox VM. So I don't think it has anything to do with Proxmox.
 

bugacha

Active Member
Sep 21, 2024
232
55
28
Got it.

So, really, the difference is...something on bare metal vs something virtualized on the other end...

Hmm... Install TrueNAS bare metal on that Proxmox hardware and see if it makes a difference?
Turns out this was Proxmox after all, specifically the CPU type of the TrueNAS VM.

The TrueNAS VM was set up with the QEMU x86-64-v2-AES CPU model.

Switching this to the host CPU type immediately fixes the problem. I guess it exposes CPU instructions that Samba clearly uses and that the generic model lacks.

I'm back to 10 Gbps saturation with Samba 4.20 and TrueNAS SCALE 24.10.2.
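For anyone hitting the same thing, the change is one setting on the Proxmox host (VM ID 100 here is a placeholder for your own):

```
# Switch the VM's CPU model from x86-64-v2-AES to host passthrough
qm set 100 --cpu host
# Power-cycle the VM so the new CPU model takes effect
qm stop 100 && qm start 100
```

The same change can be made in the GUI under Hardware > Processors > Type.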

 