Poor SMB Performance


hagak

Member
Oct 22, 2012
I had posted this to the [H] forums while ServeTheHome was down but got hardly any responses, so below are the posts I made there.

OK, so background on the setup first. I have an all-in-one ESXi setup with ZFS providing the SAN as a guest machine. I have two Server 2012 guest machines on the host (each with two vmxnet3 NICs with RSS enabled).

I wanted to play with SMB3 and multichannel and see how it works. So I created a COMSTAR iSCSI volume, mounted it on one of the Server 2012 VMs, and then shared that mount point from within the guest. (Yes, I know there is overhead and I could share the drive directly from ZFS, but that does not support SMB3.)
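(For context, provisioning a COMSTAR-backed LU on an illumos-style ZFS box generally looks like the sketch below. The pool/zvol names are hypothetical stand-ins, not the OP's actual config:)

```shell
# Back the LU with a zvol (hypothetical names: pool "tank", volume "iscsi-vol1")
zfs create -V 100G tank/iscsi-vol1

# Enable the STMF framework and register the zvol as a logical unit
svcadm enable stmf
sbdadm create-lu /dev/zvol/rdsk/tank/iscsi-vol1

# Expose the LU (GUID comes from the sbdadm output) and create an iSCSI target
stmfadm add-view <GUID-from-sbdadm-output>
itadm create-target
```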

To get a baseline for the iSCSI volume within that VM guest, I ran CrystalDiskMark and got ~400 MB/s sequential read and ~340 MB/s sequential write. About what I would expect.

Where this goes bad is that from the other Server 2012 guest I only get ~80 MB/s sequential read and ~90 MB/s sequential write.

I have confirmed that RSS is enabled on both machines and that SMB is using multiple connections. But heck, I am not even seeing the performance I would expect from a single channel, let alone multichannel.

NOTE: since these tests are done between VMs on the same host, any switch or wiring issues are eliminated, and ESXi with vmxnet3 drivers should get really good network speeds between guests.
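(For anyone verifying the same thing: the RSS and multichannel state described above can be checked from an elevated PowerShell prompt on the Server 2012 guests with the built-in SMB cmdlets. This is a diagnostic sketch, not the OP's exact commands:)

```shell
# Per-adapter RSS state and queue count
Get-NetAdapterRss | Format-Table Name, Enabled, NumberOfReceiveQueues

# One row per active SMB3 connection; multiple rows per server = multichannel is working
Get-SmbMultichannelConnection

# How the SMB client itself sees each NIC (RSS/RDMA capability, speed)
Get-SmbClientNetworkInterface
```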

Then, after some tweaking, I improved performance a bit by adjusting these registry values:

TCPIP parameter:
GlobalMaxTcpWindowSize = 0xffff

LANMANSERVER parameters:
SizReqBuf = 0x8000
MaxMpxCt = 0x2000
MaxWorkItems = 0x8000
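(Applied via elevated PowerShell, those tweaks would look roughly like this; the values mirror the list above, and the server service needs a restart or reboot to pick them up:)

```shell
# TCP window size (Tcpip parameters)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' `
    -Name GlobalMaxTcpWindowSize -Value 0xffff -Type DWord

# LanmanServer tuning values
$lanman = 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters'
Set-ItemProperty -Path $lanman -Name SizReqBuf    -Value 0x8000 -Type DWord
Set-ItemProperty -Path $lanman -Name MaxMpxCt     -Value 0x2000 -Type DWord
Set-ItemProperty -Path $lanman -Name MaxWorkItems -Value 0x8000 -Type DWord

# Restart the server service so the new values take effect
Restart-Service LanmanServer -Force
```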

These changes increased the throughput of my SMB transfers between VM guests to ~190MB/s for reads and writes.

I tested performance from an external machine (not a VM) with two Gbit NICs, running Windows 8.1. Read and write performance was ~140 MB/s, up from the ~110 MB/s I got hitting the CIFS share directly on the ZFS server that provides the storage, which is basically saturation of a single Gbit NIC. So I do see a performance boost going from sharing the storage directly from ZFS to sharing it from ZFS through Server 2012 and out.

Anyone have other ideas I should look into?
 

mrkrad

Well-Known Member
Oct 13, 2012
Do you have enough vCPUs for the Windows machine, and have you enabled multiple vCPUs for networking (and storage)?
 

Darkytoo

Member
Jan 2, 2014
On the iSCSI adapters, are all protocols unbound except TCP/IPv4? Jumbo frames? Did you disable the Nagle algorithm?
 

hagak

Member
Oct 22, 2012
Not using jumbo frames yet. Wanted to see how far I could get without going to that.
Not familiar with where/how to configure the Nagle algorithm.
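(For reference: on Windows, Nagle and delayed ACK are typically disabled per interface in the registry. A hedged sketch below; `{GUID}` stands in for the iSCSI adapter's interface GUID, which you can find under the `Interfaces` key or via `Get-NetAdapter`:)

```shell
# Per-interface TCP tuning commonly used on dedicated iSCSI NICs
# {GUID} is a placeholder for the adapter's interface GUID
$if = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}'

New-ItemProperty -Path $if -Name TcpAckFrequency -Value 1 -Type DWord  # disable delayed ACK
New-ItemProperty -Path $if -Name TcpNoDelay      -Value 1 -Type DWord  # disable Nagle

# A reboot is needed for these to take effect
```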