I'm having a particular issue with NIC Teaming, LACP, and SMB.
At first I was getting ready to team all the NICs across my servers for increased throughput to/from nodes/storage. Come to find out that LACP only increases aggregate bandwidth across multiple TCP/IP sessions (a single session gets hashed onto one link), and what I actually wanted was something similar to MPIO, but not literally MPIO.
I learned that SMB 3.0 supports Multichannel out of the box under a few conditions: the NICs have to support RSS, or they have to support RDMA, or they have to be teamed, and that's pretty much it.
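In case it helps anyone reproduce this, these are the cmdlets I'd use to check whether SMB actually sees the NICs as Multichannel-capable (just diagnostic reads, nothing here changes config):

```powershell
# What SMB sees on the client side: per-NIC RSS/RDMA capability flags.
# Multichannel only spreads load across interfaces it considers capable.
Get-SmbClientNetworkInterface

# Same view from the server side of the connection.
Get-SmbServerNetworkInterface

# RSS can be supported but disabled; confirm it's actually on per adapter.
Get-NetAdapterRss | Select-Object Name, Enabled
```

If a NIC shows RssCapable and RdmaCapable both False and it isn't in a team, SMB won't use it for Multichannel.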
So all night I've been testing this to try to get the magic to happen, and I'm completely stumped and am seeking y'all's advice. I'll go over all of the scenarios I've tested:
I've teamed all available NICs on each of my test beds using combinations of all of these settings:
At the OS level: "Switch Independent / Dynamic", "Switch Independent / Address Hash", "Static Teaming / Dynamic", "Static Teaming / Address Hash", "LACP / Dynamic", "LACP / Address Hash"
Each pair of options is a teaming mode / load-balancing mode combination I've tried on both the client and the server. But nothing seems to work. LACP is definitely functioning as a protocol: Windows Server reports the team as failed until I go into my switch and enable the LAG group for those ports, so I know that's not the issue.
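For reference, here's the PowerShell equivalent of those GUI combinations, in case anyone wants to compare notes (team and member names are placeholders for whatever Get-NetAdapter shows on your box; as I understand it, the GUI's "Address Hash" maps to the TransportPorts algorithm in the cmdlet):

```powershell
# Example: the "LACP / Dynamic" combination via LBFO.
# "Team1", "NIC1", "NIC2" are placeholder names.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

# The other combinations swap in:
#   -TeamingMode            SwitchIndependent | Static | Lacp
#   -LoadBalancingAlgorithm Dynamic | TransportPorts   # TransportPorts ~= "Address Hash"
```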
With every one of these combinations I still only see about 1 Gbps of throughput when copying files.
Now here is what DOES work:
Leaving both sets of NICs un-teamed, un-paired, etc. Basically unconfigured. As soon as I start copying a large file from the SMB share I immediately notice it's transferring at over 2 Gbps.
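This is how I'm verifying Multichannel is actually in play, for anyone who wants to check their own setup:

```powershell
# Run on the client while a large copy is in flight.
# One row per (client NIC, server NIC) pair in use; multiple rows
# across different interface indexes means SMB Multichannel is
# splitting the traffic across links.
Get-SmbMultichannelConnection
```

In the un-teamed scenario I see rows for each NIC pair; in the teamed scenarios I only ever see the one.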
What ALSO works is teaming ONLY the client set of NICs and leaving the server NICs un-teamed. I get combined interface throughput that way.
Nothing else works at all and I've been researching this for hours now. Can anyone chime in on their experience with this?