Can you daisy chain 4 servers with 2x10Gb SFP+ connections?


iLya

Member
Jun 22, 2016
Has anyone tried, or does anyone know, if it would be possible to connect 4 servers together without a switch by simply plugging an SFP+ cable from one server to the next, given that each server has a dual 10Gb NIC?
If possible, would it be something like this?
Server 1 Port 1 -> Server 2 Port 1 -> Server 2 Port 2 -> Server 3 Port 1 -> Server 3 Port 2 -> Server 4 Port 1 -> Server 4 Port 2 -> Server 1 Port 2?

Do I need the Server 4 Port 2 -> Server 1 Port 2?
Am I just saying something stupid? o_O
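If it can work at all, I assume each cable would be its own little point-to-point subnet, something like the sketch below (the interface alias and addresses are just made up for illustration):
Code:
# One /30 subnet per cable in the ring (made-up addressing):
#   Link A: Server1 Port1 <-> Server2 Port1  ->  10.0.12.0/30
#   Link B: Server2 Port2 <-> Server3 Port1  ->  10.0.23.0/30
#   Link C: Server3 Port2 <-> Server4 Port1  ->  10.0.34.0/30
#   Link D: Server4 Port2 <-> Server1 Port2  ->  10.0.41.0/30

# For example, on Server 1, Port 1 ("10G-Port1" is a placeholder alias):
New-NetIPAddress -InterfaceAlias "10G-Port1" -IPAddress 10.0.12.1 -PrefixLength 30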
 

Pete L.

Member
Nov 8, 2015
Beantown, MA
I am going to say not really. If you want all of the systems to be able to talk to each other, you will run into issues unless you can set up some kind of routing on the servers to forward traffic from one to the next, and even then you may run into issues.
 

iLya

Member
Jun 22, 2016
So I have a choice of either investing in a 10Gb switch and some SFP+ cables, or replacing the dual 10Gb NICs with single-port ConnectX-3 RDMA mezzanine cards and getting a Voltaire InfiniBand switch.
I can't seem to find any reasonable deals on eBay for the 10Gb switches; they all seem to be ~$1200.
The ConnectX-3 cards are ~$99 each (I need 4) and the InfiniBand switch is ~$300 plus the cables.
That puts me at ~$800 for the InfiniBand config (4 x $99 + $300, plus cables), which gives me ~40Gbps throughput vs ~20Gbps with the dual 10Gb NICs, if I can configure them correctly.
There is also the option of going with dual-port ConnectX-2 cards, which are QDR (40Gbps), for even greater throughput, but I don't know if they are compatible with my setup, and I am not sure about their support under Windows since these cards are getting old.
 

Pete L.

Member
Nov 8, 2015
Beantown, MA
Personally I have very little IB experience, so that is better left for others to give their opinion on. Yes, 10G is just way overpriced, and while a lot of people seem to think that this year is the year of 10G, I personally don't think you will see the price points you would like anytime soon. If you can find a used Dell 4012 or something like that, maybe it will get down there in price, but more than 4 ports seems to carry quite the premium these days. It is funny that 10G seems to be the new big thing, yet big companies and data centers are already well past 10G and moving beyond it.

Too bad SMB3 Multi-Channel isn't fully implemented everywhere, as copper 1G ports are so much cheaper than 10G ports are.

Anyway, make sure you check out Fiber Store (www.fs.com) for your cabling / SFP+ needs. They are awesome and very reasonably priced for quality items that they stand behind.
 

cesmith9999

Well-Known Member
Mar 26, 2013
Too bad SMB3 Multi-Channel isn't fully implemented everywhere, as copper 1G ports are so much cheaper than 10G ports are.
It is. What makes you think that it is not? I have over 100 servers that use multiple 1Gb ports and do so with SMB Multi-Channel.
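If you want to sanity-check it on your own boxes, this is roughly what I look at first (Multi-Channel is on by default in Windows 8 / Server 2012 and later):
Code:
# Quick check that multichannel is enabled on both ends:
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
Get-SmbServerConfiguration | Select-Object EnableMultiChannel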

Chris
 

Pete L.

Member
Nov 8, 2015
Beantown, MA
It is. What makes you think that it is not? I have over 100 servers that use multiple 1Gb ports and do so with SMB Multi-Channel.

Chris
SMB3 Multi-Channel is only supported on relatively new OSes and on very few NAS units. Multi-Channel only comes into play on Windows 8 or Server 2012 (or higher) in the Windows world, and Samba, which runs on a lot of the NAS units like Synology, does not support it yet, which to me is shocking. It also makes me wonder if companies are purposely holding back on implementing it so that you would buy more expensive hardware with higher-speed interfaces. I'm sure no company would do that, would they? =)
 

iLya

Member
Jun 22, 2016
It is. What makes you think that it is not? I have over 100 servers that use multiple 1Gb ports and do so with SMB Multi-Channel.

Chris
There is another thread where someone is asking questions about SMB and MPIO, and I was wondering: with your configuration, are you able to start a normal file copy from one server to another with both NICs being used, or does that only work when you have an SMB share?
 

fractal

Active Member
Jun 7, 2016
Has anyone tried, or does anyone know, if it would be possible to connect 4 servers together without a switch by simply plugging an SFP+ cable from one server to the next, given that each server has a dual 10Gb NIC?
If possible, would it be something like this?
Server 1 Port 1 -> Server 2 Port 1 -> Server 2 Port 2 -> Server 3 Port 1 -> Server 3 Port 2 -> Server 4 Port 1 -> Server 4 Port 2 -> Server 1 Port 2?

Do I need the Server 4 Port 2 -> Server 1 Port 2?
Am I just saying something stupid? o_O
I see no reason why it should NOT work if you enable routing on each of the intermediate servers and set up the routes properly. Traffic between non-adjacent machines would take a latency hit, and the intermediate machines would have to do some work forwarding the packets, so there would be a bit of impact across the board. I haven't done things like this since 100BASE-T was expensive, but it worked then, so it should work now. It was a nightmare to maintain, though, and we were glad when we got our first switch.

Is there any chance you can declare one of the servers the "master" and put two of the dual-port NICs in it? It would have to route any non-directly-connected traffic, but this might be less intrusive depending on your traffic patterns.

You could also set up an 8-port 10G router out of a spare box with four dual-port 10G NICs. It won't be as fast as a switch but might be less expensive. You can find boards with six x8 (electrical) PCIe slots and a low-power processor for under a hundred dollars if you shop around, and dual-port 10G NICs seem to go for 50 bucks or less. Something like this could save you money until 10G switches drop in price, at which point one would be a direct drop-in replacement.

You don't NEED Server 4 Port 2 -> Server 1 Port 2 in your first suggestion. You can close the ring if you are careful with your routes, but it is really easy to create loops if you aren't.
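To make the routing part concrete, here is a rough Windows-flavoured sketch of what I mean; it is untested on my end, and the aliases and subnets are the made-up ones from the first post:
Code:
# On each intermediate server, enable forwarding on both 10G ports
# (the aliases are placeholders - match them to your own):
Set-NetIPInterface -InterfaceAlias "10G-Port1" -Forwarding Enabled
Set-NetIPInterface -InterfaceAlias "10G-Port2" -Forwarding Enabled

# Then give every server a static route for each subnet it is not
# directly cabled to. For example, on Server 1, reach the
# Server3-Server4 link (10.0.34.0/30) via Server 2:
New-NetRoute -DestinationPrefix "10.0.34.0/30" -InterfaceAlias "10G-Port1" -NextHop 10.0.12.2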
 

iLya

Member
Jun 22, 2016
@fractal, interesting idea about having another server do the routing. I was just starting to look into Windows Server Gateway, which is used in the Hyper-V space for routing between subnets, and this is the same idea, just in physical space.

Now I have to go play around with Windows Server Gateway to see if I can make it work; if so, I can build my own switch by getting 4 x dual-port 10GbE NICs. But thinking out loud, it might still be better to go the InfiniBand route for performance and probably reliability.
Imagine having to patch that Windows Server with updates and reboot it; there goes the storage on my hyperconverged cluster :eek:
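From what I have read so far, turning a Windows box into the "switch" looks to be roughly this; completely untested on my end, and the routing role may be overkill if plain IP forwarding is enough:
Code:
# Option 1: install the LAN routing role (no VPN):
Install-WindowsFeature RemoteAccess, Routing -IncludeManagementTools
Install-RemoteAccess -VpnType RoutingOnly

# Option 2: reportedly plain per-interface IP forwarding is enough for
# static routing ("10G*" is a placeholder alias pattern):
Get-NetIPInterface -AddressFamily IPv4 |
    Where-Object InterfaceAlias -like "10G*" |
    Set-NetIPInterface -Forwarding Enabled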
 

cesmith9999

Well-Known Member
Mar 26, 2013
There is another thread where someone is asking questions about SMB and MPIO, and I was wondering: with your configuration, are you able to start a normal file copy from one server to another with both NICs being used, or does that only work when you have an SMB share?
I am at a loss as to what you are asking here. I work in a 99% Windows shop, and all of the servers I work on are Windows based. I have many servers with multiple 1/10/40Gb ports, and SMB Multi-Channel is operational on all of them. The group I work in transfers 2PB of data a day over SMB; 99% of that is in the form of robocopy, and all of my servers have more than one port open.

MPIO is not SMB Multi-Channel. They have similar results but work fundamentally very differently.
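If you want to watch it happen during a plain copy, something like this works (the paths and share name are placeholders):
Code:
# In one window, start an ordinary copy:
robocopy C:\data \\otherserver\share /E

# In another window, see which connections SMB actually opened; with
# multichannel working you get one row per NIC (or more with RSS):
Get-SmbMultichannelConnection
Get-SmbClientNetworkInterface   # the local NICs SMB is willing to use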

SMB3 Multi-Channel is only supported on relatively new OSes and on very few NAS units. Multi-Channel only comes into play on Windows 8 or Server 2012 (or higher) in the Windows world, and Samba, which runs on a lot of the NAS units like Synology, does not support it yet, which to me is shocking. It also makes me wonder if companies are purposely holding back on implementing it so that you would buy more expensive hardware with higher-speed interfaces. I'm sure no company would do that, would they? =)
This is true. SMB Multi-Channel has not been ported to Samba yet (see Roadmap - SambaWiki). I cannot say why Samba is not supporting Multi-Channel right now; you may want to talk to the Samba guys to get a more definitive timeline. The major vendors are just waiting on Samba to be upgraded before they support it. Until then, Multi-Channel will be a Windows-only feature.

Chris
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
As the roadmap says, multi-channel support is funded, but of course it isn't in Samba right now because it hasn't been written yet.

Edit: actually, it has. According to this presentation from the Linux Foundation in April covering SMB3 and multi-channel, it's apparently in the 4.4 branch, and looking at the current release notes there has been a fair flurry of fixes going in for it since it was introduced in 4.4.0. So anyone fancying the bleeding edge of Samba should be able to give it a whirl if they're so inclined.
Code:
EXPERIMENTAL FEATURES
=====================

SMB3 Multi-Channel
------------------

Samba 4.4.0 adds *experimental* support for SMB3 Multi-Channel.
Multi-Channel is an SMB3 protocol feature that allows the client
to bind multiple transport connections into one authenticated
SMB session. This allows for increased fault tolerance and
throughput. The client chooses transport connections as reported
by the server and also chooses over which of the bound transport
connections to send traffic. I/O operations for a given file
handle can span multiple network connections this way.
An SMB multi-channel session will be valid as long as at least
one of its channels are up.

In Samba, multi-channel can be enabled by setting the new
smb.conf option "server multi channel support" to "yes".
It is disabled by default.

Samba has to report interface speeds and some capabilities to
the client. On Linux, Samba can auto-detect the speed of an
interface. But to support other platforms, and in order to be
able to manually override the detected values, the "interfaces"
smb.conf option has been given an extended syntax, by which an
interface specification can additionally carry speed and
capability information. The extended syntax looks like this
for setting the speed to 1 gigabit per second:

    interfaces = 192.168.1.42;speed=1000000000

This extension should be used with care and are mainly intended
for testing. See the smb.conf manual page for details.

CAVEAT: While this should be working without problems mostly,
there are still corner cases in the treatment of channel failures
that may result in DATA CORRUPTION when these race conditions hit.
It is hence

    NOT RECOMMENDED TO USE MULTI-CHANNEL IN PRODUCTION

at this stage. This situation can be expected to improve during
the life-time of the 4.4 release. Feed-back from test-setups is
highly welcome.
 

iLya

Member
Jun 22, 2016
@cesmith9999, somehow it's working now, but as I was posting my question to you, I was only seeing traffic on a single NIC while the other was not being utilized. It magically started working, and the only thing I remember changing was disabling the IPv6 option on the NICs.
I can tell you that it works great now and I am really happy; I just hope I can get it working out of the box during my next install/configuration :)
 

cesmith9999

Well-Known Member
Mar 26, 2013
Congratulations.

Disabling IPv6 may have done it, since multichannel may not have been using the broadcast address. We have a lot of IPv6 here (and at my house - thanks Comcast).
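If it ever acts up again, the first thing I would check is whether both NICs are actually being offered to SMB on each side:
Code:
# Both NICs should show up on each end with the expected speed and
# capabilities (RSS/RDMA) before multichannel will bind them together:
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface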

Chris
 