VERY odd SMB performance issue


ptyork

New Member
Jul 12, 2021
So this is weird.

I have a Windows PC connected via a Vimin 10g/2.5g switch. No network issues whatsoever in general.

BUT, SMB performance from the Windows PC via the switch is awful. As in 100x (no joke) slower. On the very same PC I can open a WSL session, mount a CIFS share to the exact same server+share, and I get full performance. No issues. I can also take a long ethernet cable and bypass the Vimin switch and achieve full performance.

So it's the switch (or maybe something with the fiber transceivers), but the issue ONLY affects Windows native SMB traffic. Which makes me think there's a configuration thing. I mean, unless the switch has some kind of insidious SMB sniffing happening that has bugged out.

Short of starting down the rabbit hole of buying and swapping out parts, is there anything you might suggest that I look at? I don't think that there's much to configure in the switch, but maybe something with packet sizes or buffers or something in Windows that could somehow be "incompatible" with this switch???
 

CyklonDX

Well-Known Member
Nov 8, 2022
Depends on the Windows configuration. By default it uses the default reliable buffers of 32k; try changing the buffers to 10240k and see if it changes anything.
(That's the Windows config, not the NIC config.)
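
To see what you're starting from, the current client-side defaults can be dumped with the standard cmdlet (which of these maps to that 32k buffer depends on your Windows version, so treat this as a starting point):

Code:
# Peek at the buffer/throughput-related client defaults before changing anything
Get-SmbClientConfiguration |
    Select-Object WindowSizeThreshold, MaxCmds, EnableBandwidthThrottling, EnableLargeMtu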
 

ca3y6

Active Member
Apr 3, 2021
Also, I get a lot of aggravation from SMB3 with mandatory encryption. Try disabling mandatory encryption (if it is enabled) or downgrading to SMB2 to see if it helps.
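
Before changing anything, you can check what actually got negotiated from the Windows side (standard cmdlet; the Encrypted/Signed columns are there on recent Windows 10/11). On the TrueNAS/Samba side the knob is usually the "server smb encrypt" auxiliary parameter, though that depends on the Samba version:

Code:
# Show the negotiated dialect and whether signing/encryption is active per connection
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect, Signed, Encrypted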
 

ptyork

New Member
Jul 12, 2021
Thanks, all. I left out that the server is TrueNAS SCALE, so I suspect a number of the options being suggested are LanmanServer (server-side) options rather than LanmanWorkstation (client-side) options.

I have tried a few things. Disabling SMBv2/v3 was not helpful, or at least I was never quite able to get SMBv1 working independently. It took a lot of reboots, and now my WSL environment is reporting errors mounting drvfs (though the errors don't actually seem to cause any problems).
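
For reference, what I tried roughly follows Microsoft's documented client-side toggles (the sc.exe lines are from their SMBv1/v2/v3 detection article; forcing SMBv1 like this is deprecated and not something I plan to leave in place):

Code:
# Check whether the SMB1 client feature is even installed
Get-WindowsOptionalFeature -Online -FeatureName SMB1Protocol

# Documented way to disable SMBv2/v3 on the client side (forces SMBv1)
sc.exe config lanmanworkstation depend= bowser/mrxsmb10/nsi
sc.exe config mrxsmb20 start= disabled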

In truth, though, I could use one more level of hand-holding.

@CyklonDX, where would I change that buffer size?

I'm looking at Get-SmbServerConfiguration (probably not relevant) and Get-SmbClientConfiguration on my Windows 11 client. I'm not seeing anything that jumps out at me:

Code:
CompressibilitySamplingSize           : 524288000
CompressibleThreshold                 : 104857600
ConnectionCountPerRssNetworkInterface : 4
DirectoryCacheEntriesMax              : 16
DirectoryCacheEntrySizeMax            : 65536
DirectoryCacheLifetime                : 10
DisableCompression                    : False
DormantFileLimit                      : 1023
EnableBandwidthThrottling             : True
EnableByteRangeLockingOnReadOnlyFiles : True
EnableCompressibilitySampling         : False
EnableInsecureGuestLogons             : True
EnableLargeMtu                        : True
EnableLoadBalanceScaleOut             : True
EnableMultiChannel                    : True
EnableSecuritySignature               : True
EncryptionCiphers                     : AES_128_GCM, AES_128_CCM, AES_256_GCM, AES_256_CCM
ExtendedSessionTimeout                : 1000
FileInfoCacheEntriesMax               : 64
FileInfoCacheLifetime                 : 10
FileNotFoundCacheEntriesMax           : 128
FileNotFoundCacheLifetime             : 5
ForceSMBEncryptionOverQuic            : False
KeepConn                              : 600
MaxCmds                               : 50
MaximumConnectionCountPerServer       : 32
OplocksDisabled                       : False
RequestCompression                    : False
RequireSecuritySignature              : False
SessionTimeout                        : 60
SkipCertificateCheck                  : False
UseOpportunisticLocking               : True
WindowSizeThreshold                   : 8
Is there anything on the client side to set with regard to SMB multichannel and SMB direct?
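
The cmdlets I've found for poking at that from the client side (assuming these are the right ones):

Code:
# Is multichannel actually in use for current connections?
Get-SmbMultichannelConnection

# Which local NICs SMB considers usable (link speed, RSS, RDMA)
Get-SmbClientNetworkInterface

# Multichannel can be toggled on the client if needed
Set-SmbClientConfiguration -EnableMultiChannel $true -Force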
 

acquacow

Well-Known Member
Feb 15, 2017
On the FreeNAS side, I have this:
[screenshot: SMB service settings]

And I have all the usual tunables for 10GbE (similar settings in Windows as well):
[screenshot: 10GbE tunables]

You can enable SMB Direct in windows via:
Enable-WindowsOptionalFeature -Online -FeatureName SMBDirect
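
You can sanity-check afterwards that RDMA is actually visible to SMB (standard cmdlets; whether your NICs support RDMA at all is another story):

Code:
# Confirm the optional feature took
Get-WindowsOptionalFeature -Online -FeatureName SmbDirect

# Check whether the NICs expose RDMA, and whether SMB sees it
Get-NetAdapterRdma
Get-SmbClientNetworkInterface | Where-Object RdmaCapable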


-- Dave
 

CyklonDX

Well-Known Member
Nov 8, 2022
where would I change that buffer size?
Registry

Use this page [link], and also this one [link].

You can test and find the right sizes for your setup by playing with Total Commander:
[screenshot: Total Commander transfer settings]
It lets you change the settings on the fly, without rebooting to apply them, so once you've found your sweet spot you can move those settings to the registry.
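
Once you've found your numbers, a sketch of persisting them (these value names are documented LanmanWorkstation parameters; the values below are examples only, not recommendations, and you need an admin shell):

Code:
# Persist tuned client-side SMB values, then restart the Workstation service
$p = 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters'
Set-ItemProperty -Path $p -Name DisableBandwidthThrottling -Value 1 -Type DWord
Set-ItemProperty -Path $p -Name MaxCmds -Value 2048 -Type DWord
Restart-Service -Name LanmanWorkstation -Force  # -Force restarts dependent services too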
 

gea

Well-Known Member
Dec 31, 2010
You can enable SMB Direct in windows via:
Enable-WindowsOptionalFeature -Online -FeatureName SMBDirect


-- Dave
AFAIK SMB Direct is still not available/stable in Samba.
You currently need a recent Windows Server as the SMB server, Windows 10/11 as the client, and RDMA-capable NICs like Mellanox ConnectX-4/5 (20-100G). Then and only then can you use SMB Direct, and yes, SMB Direct is superior with regard to high performance, low latency, and CPU load.

I really hope ZFS on Windows becomes stable enough soon, for exactly this reason.

Besides that: try another LAN cable, a NIC-to-NIC connection, enabling multichannel, toggling jumbo frames on/off, and some NIC settings tweaks.
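
For the jumbo frames on/off test, the NIC advanced property can be flipped from PowerShell (display names and value strings vary by driver, and 'Ethernet' here is a placeholder adapter name):

Code:
# List the driver's advanced properties to find the exact display name first
Get-NetAdapterAdvancedProperty -Name 'Ethernet'

# Typical naming on Intel/Mellanox drivers; adjust to what the list shows
Set-NetAdapterAdvancedProperty -Name 'Ethernet' -DisplayName 'Jumbo Packet' -DisplayValue '9014 Bytes'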
 

ptyork

New Member
Jul 12, 2021
22
4
3
Sorry, Hurricane Helene knocked me out for a couple of weeks. Finally back online... mostly.

Thanks for the advice. I didn't really get a chance to play with it much before I was rudely interrupted by umpteen trees on my house (thankfully "on", not "in"). But I just upgraded from TrueNAS SCALE 24.10 RC1 to RC2, and the problem seems to have resolved itself. Who knows what weird combination of strangeness caused this issue to surface; whether it was "fixed" by iX or something broke during the initial upgrade to RC1, I guess we'll never know. :)

Thanks again!
 