I'm working with one of our engineers to troubleshoot poor SMB performance from a node running Ubuntu 14.04 LTS connecting to one of our Solaris 11.3 ZFS storage arrays. Our CentOS 6.7 nodes are 2-3x faster on average on identical hardware, and I'm hoping we're missing something simple like an SMB version mismatch or a driver configuration error.
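In case it's the dialect, here's roughly how I plan to check and pin the SMB version on the Linux side (the share name and mount point are placeholders, and the DebugData output format varies by kernel version):

```
# Show what the kernel cifs client negotiated (format varies by kernel)
cat /proc/fs/cifs/DebugData

# Remount with an explicit dialect to rule out a mismatch;
# //server/share and /mnt/zfs stand in for our actual paths
sudo umount /mnt/zfs
sudo mount -t cifs //server/share /mnt/zfs -o vers=2.1,username=myuser
```

Worth noting that kernels of this vintage default cifs to vers=1.0 unless told otherwise.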
Hardware:
Solaris 11.3: Dual E5 v3, 512GB DDR3, ConnectX-3 VPI cards, 90x 4TB HDDs.
Nodes: Dual E5 v3, 128GB DDR3, ConnectX-3 VPI cards, Intel DC S3500 SSDs.
5035 switch with subnet manager enabled.
Benchmarks: 3GB or 10GB file moved via dd to/from the storage (commands sketched after this list). Averages listed:
CentOS 6.7: 430MB/s (up to 1.3GB/s once cached); iperf at 25.6Gb/s.
Ubuntu 14.04 LTS: 223MB/s (does not appear to cache?); iperf at 12.4Gb/s.
Windows 7 Pro: 450MB/s (maxes the local SSD); iperf at 20.2Gb/s.
Node to node (Ubuntu to CentOS): 400-600MB/s (SSDs maxed); iperf at 20Gb/s.
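The tests were along these lines; the mount point, target address, and exact dd flags are placeholders rather than our literal invocations:

```
# Write 10GB to the SMB mount, flushing to disk before reporting a rate
dd if=/dev/zero of=/mnt/zfs/testfile bs=1M count=10240 conv=fdatasync

# Drop the local page cache so the read test measures the network, not RAM
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
dd if=/mnt/zfs/testfile of=/dev/null bs=1M

# Raw TCP throughput to the array (address is a placeholder)
iperf -c 192.168.0.10 -t 30 -P 4
```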
Both Linux nodes are running the mlx4_core module with static addresses and a 65520 MTU. On a side note, we're seeing roughly 40% CPU iowait while running our application on the Ubuntu nodes, but not on the CentOS nodes.
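Since a 65520 MTU only works with IPoIB in connected mode, here's the sanity check I'll run on both distros (assuming the interface is ib0):

```
# Should print "connected"; datagram mode caps the IPoIB MTU at 4092
cat /sys/class/net/ib0/mode

# Confirm the MTU actually took effect
ip link show ib0

# Watch iowait and per-device utilization while the application runs
iostat -x 2
```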
Any input is welcome.