SMB 3.0 and Bandwidth Aggregation


bp_968

New Member
Dec 23, 2012
Yes, but your slowest link is still iSCSI, so SMB3 multi-path may not be of help. I can see doing OI with vmxnet3 to Windows 2012 with vmxnet3 to leverage multiple 1GbE links. If you only have one or two clients, going with QDR IB from the OI server to the client seems like a less complex and higher-throughput solution, though.
Why specifically QDR IB? Are we discussing the virtual link between VMs? (I haven't used ESX since 3.5, so maybe they now have a QDR IB driver?) I ask because I have SDR and DDR IB equipment I'm setting up to use on the server so clients have decent speed to the array (my tests with SDR hit 700MB/s+). I have an IB switch (SDR) I'll probably use, but if I went point-to-point it could be done with a DDR-speed link.

The main reason I was thinking about a weird roundabout SMB3 attempt was so one particular PC that is outside the range of a CX4 IB cable (it's just too long a run) could still get fast speeds with a bundle of Cat5 ;) Honestly, I'll either move it so IB can reach, or later on I'll pick up some 10GbE fiber gear and make a fiber run, since its max length is far, far more than I'd ever need.
 

cactus

Moderator
Jan 25, 2011
CA
Why specifically QDR IB? Are we discussing the virtual link between VMs? (I haven't used ESX since 3.5, so maybe they now have a QDR IB driver?) I ask because I have SDR and DDR IB equipment I'm setting up to use on the server so clients have decent speed to the array (my tests with SDR hit 700MB/s+). I have an IB switch (SDR) I'll probably use, but if I went point-to-point it could be done with a DDR-speed link.

The main reason I was thinking about a weird roundabout SMB3 attempt was so one particular PC that is outside the range of a CX4 IB cable (it's just too long a run) could still get fast speeds with a bundle of Cat5 ;) Honestly, I'll either move it so IB can reach, or later on I'll pick up some 10GbE fiber gear and make a fiber run, since its max length is far, far more than I'd ever need.
I should have said the more generic InfiniBand in place of QDR IB. I had ESXi on my mind, so my solution was ESXi-dependent. The vmxnet3 NIC is VMware's 10GbE virtual adapter.
 

cactus

Moderator
Jan 25, 2011
CA
Did a little test with dual QDR links and SMB Direct.
Server:
i7-920
24GB DDR3-1600
Client:
Opteron 6164 HE
32GB DDR3-1333

CDM (CrystalDiskMark, MB/s)
RAM Disk
Code:
            Read    |   Write
Seq         5433        7148
512k        5174        6755
4k          677.2       646
4k QD32     616.2       626.2
Mapped Network Drive
Code:
            Read    |   Write
Seq         899.7       754.1
512k        720.8       682.9
4k          19.6        20.42
4k QD32     89.24       80.08
Anvil (RAM disk and mapped network drive results posted as screenshots)
 

MrFlppy

Member
Jun 11, 2016
Hi,
I hope I'm not disturbing the peace by replying to such an old thread. For some time now I've been interested in using SMB 3.0 Multichannel at home, but I'm uncertain whether my 1 GbE hardware meets the requirements:

- PC 1 has an Intel I350-T2V2, connected to Switch 1 with two 1 Gbps connections
- PC 2 has an Intel I350-T4V2 connected to Switch 2 with four 1 Gbps connections
- Switch 1 and 2 are connected to each other by four 1 Gbps connections (static LAG)

Is it possible to use SMB Multichannel for a single file transfer from PC 1 to PC 2, resulting in approx. 220 MB/s transfer speed?

The NICs do support RSS but not RDMA. Is RDMA a must-have for SMB Multichannel, or only something that comes into play when using 10 GbE adapters?

Thank you very much for your advice!
 

cesmith9999

Well-Known Member
Mar 26, 2013
RDMA is not a requirement for SMB Multichannel.

It works where I work. When we have old systems that need more bandwidth, we just add 1-3 more 1 GbE ports... and it just works. I'm not certain how your LAG configuration will help or hurt SMB Multichannel, though.
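
If you want to confirm Multichannel is actually kicking in, something like this should show it (elevated PowerShell, run while a large copy is going; these are the stock SMB cmdlets on Windows 8 / Server 2012 and later):

Code:
# NICs that SMB considers usable, with link speed and RSS/RDMA capability
Get-SmbClientNetworkInterface

# TCP connections Multichannel has opened for the active sessions
Get-SmbMultichannelConnection

# confirm Multichannel hasn't been disabled on the client
Get-SmbClientConfiguration | Select-Object EnableMultiChannel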

Chris
 

ultradense

Member
Feb 2, 2015
One connection can only go through one switch port. That's why SMB Multichannel sets up multiple connections for a single data transfer and thus can scale across multiple NICs. LACP, however, in its default setting won't load-balance connections coming from the same IP and going to the same IP. You should check your switch manual for how to configure the switch's LACP links so they are load-balanced correctly for SMB Multichannel.
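
If you want to see what the switch actually has to balance, a quick way to list those connections on the client (the cmdlet is standard on Windows 8 / Server 2012 and later; the column selection is just a suggestion):

Code:
# Each SMB Multichannel channel is its own TCP connection to port 445,
# so this shows what the LAG hash has to spread across
Get-NetTCPConnection -RemotePort 445 -State Established |
    Select-Object LocalAddress, LocalPort, RemoteAddress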

Sent from my ONE A2003 using Tapatalk
 

Pete L.

Member
Nov 8, 2015
Beantown, MA
Is there a way to configure a LAG group to allow SMB Multi-Path / Multichannel (would that be a MAC vs. IP setting)? I've never seen Multichannel work whenever a LAG / LACP group is in between the switches. I would love to see this implemented more widely; I have a few Synology NASes that have 4 ports each and support SMB 3.0 but not Multichannel. They claim it will come with an update sometime later this year **fingers crossed**.

That said, I have seen Multichannel / Multi-Path work in a Windows Server 2012 / Windows 8 / Windows 10 environment. It is really cool to see the file copies / transfers just pick up speed as additional links are added. Amazing that Microsoft of all companies has had this implemented for this long, and others don't seem to want to, or are so slow to, implement it when it is a "standard".
 

MrFlppy

Member
Jun 11, 2016
Thank you all for your input!

I took some screenshots from a switch with two 1 Gbps connections (to another switch) set up as a static LAG (just as an example). Should there be any changes made to the LACP section?
 

Attachments: [switch LACP configuration screenshots]

Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
Is there a way to configure a LAG group to allow SMB Multi-Path / Multichannel (would that be a MAC vs. IP setting)?
Of the simpler LACP distribution methods, you probably want a source port/destination port hash. A group only has a single IP address, so using that won't work. And on at least some switches (I just checked a Cisco 4948-10GE) the LACP group uses the MAC address of the first member port, so that won't work either.

If this is on one of the Quanta switches, the following info may be useful. However, I don't know if the Quanta uses this silicon or something else, nor if their software supports this:

LAG Hashing

The purpose of link aggregation is to increase bandwidth between two switches. It is achieved by aggregating multiple ports in one logical group. A common problem with port channels is the possibility of changing packet order within a particular TCP session. The resolution of this problem is correct selection of a physical port within the port channel for transmitting the packet, to keep the original packet order.

The hashing algorithm is configurable for each LAG. Typically, an administrator is able to choose from hash algorithms utilizing the following attributes of a packet to determine the outgoing port:

• Source MAC, VLAN, EtherType, and incoming port associated with the packet.

• Source IP and Source TCP/UDP fields of the packet.

• Destination MAC, VLAN, EtherType, and incoming port associated with the packet.

• Source MAC, Destination MAC, VLAN, EtherType, and incoming port associated with the packet.

• Destination IP and Destination TCP/UDP Port fields of the packet.

• Source/Destination MAC, VLAN, EtherType, and incoming port associated with the packet.

• Source/Destination IP and source/destination TCP/UDP Port fields of the packet.

Enhanced LAG Hashing

Devices based on Broadcom XGS-IV silicon support configuration of hashing algorithms for each LAG interface. The hashing algorithm is used to distribute traffic load among the physical ports of the LAG while preserving the per-flow packet order.

One limitation with earlier LAG hashing techniques is that the packet attributes were fixed for all type of packets. Also, there was no MODULO-N operation involved, which can result in poor load balancing performance.

As part of Release 4.0, the LAG hashing support is extended to support an Enhanced hashing mode, which has the following advantages:

• MODULO-N operation based on the number of ports in the LAG.

• Packet attributes selection based on the packet type. For L2 packets, Source and Destination MAC address are used for hash computation. For IP packets, Source IP, Destination IP address, TCP/UDP ports are used.

• Non-Unicast traffic and Unicast traffic are hashed using a common hash algorithm.

• Excellent load balancing performance."
 

azev

Well-Known Member
Jan 18, 2013
I am trying to do some testing on SMB Multichannel but I'm still unsure what the correct way to set it up is.
Let's say I have a storage server with a 10Gb link, and my desktop is configured with an Intel dual-port NIC.
I have both NICs cabled up to the switch, and here is where I am a bit lost. It was said that no LACP is needed to get this to work, but then do I have to configure both NICs with an IP address & gateway, etc.? Do I need to configure both ports on the same subnet? A different subnet? The server is on a separate subnet from the workstation.

Thanks.
 

cesmith9999

Well-Known Member
Mar 26, 2013
If all of the servers/clients are attached to the same switch, there is no configuration needed.

it "just" works

You may need to have multiple TCP sessions to see the difference.

robocopy <source> <destination> /e /mt:16 /nfl /ndl /r:1 /w:1 /log:c:\log.txt

Chris
 

Pete L.

Member
Nov 8, 2015
Beantown, MA
I am trying to do some testing on SMB Multichannel but I'm still unsure what the correct way to set it up is.
Let's say I have a storage server with a 10Gb link, and my desktop is configured with an Intel dual-port NIC.
I have both NICs cabled up to the switch, and here is where I am a bit lost. It was said that no LACP is needed to get this to work, but then do I have to configure both NICs with an IP address & gateway, etc.? Do I need to configure both ports on the same subnet? A different subnet? The server is on a separate subnet from the workstation.

Thanks.
Typically you would want the NICs to have their own IP in the same network as the server; the server will "discover" the multiple "paths" to your client and do its thing. So: different IPs / same subnet / same gateway, and for simplicity's sake no LACP / LAG / bond.
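
A minimal sketch of that client-side setup in PowerShell. The adapter aliases and addresses below are made up, so substitute your own; setting the default gateway on just one of the two ports is usually enough:

Code:
# Hypothetical adapter aliases and addresses - replace with your own
# Each port gets its own IP in the same subnet; no teaming, no LACP
New-NetIPAddress -InterfaceAlias "Ethernet 1" -IPAddress 192.168.1.21 -PrefixLength 24 -DefaultGateway 192.168.1.1
New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 192.168.1.22 -PrefixLength 24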
 

MrFlppy

Member
Jun 11, 2016
Regarding the topic of SMB Multichannel between systems that are connected via multiple switches, there seems to be hope:

Aggregating Switch Ports & NIC's to Double Bandwidth

"Dr.Why" wrote in the comment section:

"SMB Multichannel did the trick! No configuration needed on the switches, in fact I had to remove the trunk groups that I had previously created for the ports that went to the servers. I used LACP between the two HP switches. I did not team the NIC's together in the OS, I just gave them each their own static IP. SMB multichannel is enabled by default, so as soon as the Windows OS sees two NICS with different IP's on the same network, it just works. Results are below.


Thanks everyone!"

Are the LACP features of these HP switches comparable to my TP-Link TL-SG2210P?
 

ultradense

Member
Feb 2, 2015
That's true. It works when connecting each NIC to a different switch. You should do this for the server and the client.

When you want the client connected to switch A with multiple NICs, switch A connected to switch B with multiple links, and switch B connected to the server with multiple links, there are some things to consider:
The link between switch A and B should be in a LAG (like LACP).
The LAG should be set to load-balance on IP & port.
The connections from the switches to either client or server should NOT be set to LACP.
Give each NIC a different IP in the subnet, so the server should have two IPs and the client as well.

That might make it work.

Sent from my ONE A2003 using Tapatalk
 

aero

Active Member
Apr 27, 2016
Regarding the topic of SMB Multichannel between systems that are connected via multiple switches, there seems to be hope:

Aggregating Switch Ports & NIC's to Double Bandwidth

"Dr.Why" wrote in the comment section:

"SMB Multichannel did the trick! No configuration needed on the switches, in fact I had to remove the trunk groups that I had previously created for the ports that went to the servers. I used LACP between the two HP switches. I did not team the NIC's together in the OS, I just gave them each their own static IP. SMB multichannel is enabled by default, so as soon as the Windows OS sees two NICS with different IP's on the same network, it just works. Results are below.


Thanks everyone!"

Are the LACP features of these HP switches comparable to my TP-Link TL-SG2210P?
Refer to page 41 of the manual for your switch.
http://www.tp-link.us/resources/document/TL-SG2210P_V1_UG.pdf

You can set the hash algorithm for the LACP LAG to "SRC IP + DST IP".
 

MrFlppy

Member
Jun 11, 2016
@ultradense

I finally got some time to try it - I checked each point on your list and SMB Multichannel is not working on my systems :-(

Testing configuration:

- PC 1 has an Intel I350-T2V2, connected to Switch 1 with two 1 Gbps connections (Windows 8.1 Pro), no LAG/LACP
- PC 2 has an Intel I350-T4V2 connected to Switch 2 with two 1 Gbps connections (Windows 10 Pro), no LAG/LACP
- Switch 1 and 2 are connected to each other by two 1 Gbps connections (LACP (SRC + DEST IP))

Each NIC has RSS enabled and each NIC can ping all the others. IPs are set manually in the same subnet. Stuck at 118 MB/s.

Does anyone have an idea on how to proceed?
 

PigLover

Moderator
Jan 26, 2011
You need to gather some more information about where you are stuck.

Do a long transfer. Look at traffic rates on each link. Which ones are carrying traffic? And how much.

On the Windows machines this is easy - bring up Task Manager, open the "Networking" tab and watch the utilization.
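
If you'd rather watch it from a prompt, a rough equivalent using the built-in performance counters (standard counter path; adjust the interval and sample count to taste):

Code:
# Sample per-adapter throughput every 2 seconds (15 samples) while the copy runs
Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' -SampleInterval 2 -MaxSamples 15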

Usage on the LAG between your switches will be tougher to see.

My guess - a total WAG, but based on experience with similar systems: SMB Multipath is working fine. On both PCs you'll see traffic on each of the two links between the PC and the switch, but they'll each be pegged at 50% utilization (about 500 Mbps each). And when you look at the LAG between the switches, you'll see one link totally pegged and the other one unused. The problem is traffic hashing onto the LAG. Again - this is a totally wild a.. guess and could be way off.