Recommend/Summarize Hardware for 10Gb Ethernet


brad

New Member
Dec 22, 2015
Hello,

The more I read about this, the more confused I am about what we actually need for our network setup.

We have:
Asus P9D WS mainboard (2x PCIe 3.0 x4 SSDs and 1x IBM M1015 card in a PCIe 2.0 x8 slot already installed)
D-Link DGS-1024D 24-port gigabit switch

We want:
Between the server and the switch: 2x 10Gb Ethernet, so that multiple clients can each get a full 2Gbit (full duplex) at the same time. The switch needs at least 24 ports; 48 ports wouldn't be bad if the price doesn't skyrocket. And a 19" rack mount is a must. Besides that, we want a managed switch so we can bond some of the 1Gb ports to the clients.

I assume that we will need a PCIe dual 10Gb NIC and a switch with two 10Gb ports.
However, the details are not clear to me:
- exactly which ports the parts should have
- which cables are needed
- what the deal is with these transceivers
The server and the switch are located right next to each other, so the cable length will be 1m tops. We could probably get it down to 0.5m if needed.

Actual question:
Could someone please recommend reasonable parts for our setup?
We want to stay below 1000€; the cheaper the better, as long as performance doesn't suffer.
Used parts are also very welcome.

Thank you and Merry Christmas everyone!
 

j_h_o

Active Member
Apr 21, 2015
California, US
Depending on what OS you're running on the server, you may want a different card (Intel X520-DA1 or X520-DA2, etc.) so that you have the driver support you need. These will also work with a DAC.
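For what it's worth, once a card like the X520 is in the box it only takes a minute to confirm that Linux sees it and binds the right driver. A minimal sketch, assuming an Intel X520 (which uses the in-tree ixgbe driver); nothing here is specific to your exact setup:

# list PCI Ethernet devices and the kernel driver bound to each
lspci -nnk | grep -A3 -i ethernet

# confirm the ixgbe driver is available and which version ships with the kernel
modinfo ixgbe | grep -E '^(filename|version|description)'

Most mainstream distributions ship ixgbe in the stock kernel, so these cards generally work out of the box.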
 

brad

New Member
Dec 22, 2015
That's a lot cheaper than I expected. Thanks a lot.

Our server is running Ubuntu 14.04.
 

TuxDude

Well-Known Member
Sep 17, 2011
Yup - for a very short distance like that, use a direct-attach cable (aka DAC, twinax, etc.) that has SFP+ connectors on both ends, and look for cards/switches with SFP+ ports. With an SFP+ port you have the option of either using a cable with SFP+ ends, or an SFP+ transceiver that converts the electrical signal to an optical one - optical is great if you need to go long distances or work in areas with a ton of electrical interference, but in your case it would just be more expensive.

The other option for 10GbE is 10GBase-T, which at that short range you could probably run over regular Cat5e with no problems (or Cat6a out to 100m). 10GBase-T is nice for backwards compatibility with older Ethernet standards, but it uses more power and has more latency than SFP+-based gear.

The other thing to keep in mind is that link bonding might not help you at all. A single client talking to a single server will only use one path through the bonded NICs - assuming 1G NICs in the client, that client will max out at 1G of bandwidth. It's not quite technically correct, but if you think of NIC bonding as a way to load-balance connections instead of packets, the result is pretty close - you need lots of connections (lots of clients) to really get more than just increased reliability from NIC bonding.
 

brad

New Member
Dec 22, 2015
@link bonding:
The idea was to use 2x 1Gbit ports on some clients, connected to two 1Gbit ports on the switch. The switch would then bond the two links into effectively a 2Gbit link between the client and the server.
 

TuxDude

Well-Known Member
Sep 17, 2011
@link bonding:
The idea was to use 2x 1Gbit ports on some clients, connected to two 1Gbit ports on the switch. The switch would then bond the two links into effectively a 2Gbit link between the client and the server.
Yes - the OS on that client will see a single link with 2G as the speed. However, the bonding driver still needs to decide which physical 1G port to send every outgoing packet to, and unfortunately the methods for picking a port are all pretty dumb and don't consider utilization as a factor (and the same thing happens inside the switch for packets coming in). There are different algorithms available (in the Linux bonding driver, look at the 'xmit_hash_policy' option if you want to override it), but it's almost certainly going to be based on some combination of source/destination addresses (MACs or IPs). So traffic between a single client and server all ends up going over only a single NIC in the bond.
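For reference, since the server runs Ubuntu 14.04, a bond there would typically be configured through ifupdown with the ifenslave package. A minimal sketch; the interface names, the address, and the 'layer3+4' policy are just example choices, and the switch would need a matching LACP group on those ports:

# /etc/network/interfaces (excerpt), assuming the ifenslave package is installed
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad               # LACP - requires a matching LAG on the switch
    bond-miimon 100                 # link-monitoring interval in ms
    bond-xmit-hash-policy layer3+4  # hash on IP + port - still per-flow, never per-packet

Even with the layer3+4 policy, a single TCP stream between one client and the server still lands on one physical link; the extra bandwidth only shows up when many flows run in parallel.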
 

Quasduco

Active Member
Nov 16, 2015
Tennessee
@link bonding:
The idea was to use 2x 1Gbit ports on some clients, connected to two 1Gbit ports on the switch. The switch would then bond the two links into effectively a 2Gbit link between the client and the server.
Warning, broad strokes view incoming.

Link bonding (a.k.a. trunking, LACP, LAGG, 802.3ad, etc.) is not intended as a 1:1 speed increase, but rather a 1:many speed increase. Meaning one client maxes out at 1Gbit, regardless of how many ports you throw at it.

If you use it as intended, you end up with potentially n>1 clients getting 1Gbit *each*.

If you want faster links to your clients, your clients need faster cards, and you need a different switch.
 

brad

New Member
Dec 22, 2015
OK, I understand the problems with link bonding. That part is optional anyway.
The main problem at the moment is that many clients hammer the single 1Gbit server interface.
The primary goal is to give each client potentially 1Gbit (as long as the ZFS mirror is able to serve data fast enough).

Cheap 1m SFP+ DAC cables are hard to find here in Germany. Any problems with these longer 3m cables?
 

TuxDude

Well-Known Member
Sep 17, 2011
Probably should be OK, though that does bring up another downside of the SFP+ ecosystem - sometimes devices (cards and/or switches) are picky about which SFP+ cables/transceivers they will work with. E.g. a Cisco switch will only accept a Cisco-approved cable/transceiver (those are Cisco cables you linked to). I would recommend doing some googling before purchasing anything to make sure that whatever card/switch you end up with will be happy with the cables.

I would also recommend staying under 7m of cable. At 7m some vendors are fine with passive cables, some want active cables, some vendors wait until 10m before they care about active vs. passive, etc. It all just gets more complicated, and I've seen more forum posts about problems with the longer cables. Stick with the shortest you can find cheaply, and have less slack cable to deal with as a bonus.
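As a side note, once the parts arrive it's easy to check from the Linux side whether the card actually accepts the DAC and the link comes up at 10G. A rough sketch; the interface name and the iperf target address are placeholders:

# read the SFP+ module/DAC EEPROM - this usually fails if the card rejects the cable
ethtool -m eth2

# confirm the negotiated speed and link state
ethtool eth2 | grep -E 'Speed|Link detected'

# optional end-to-end throughput check against a host running 'iperf -s'
iperf -c 192.168.1.20 -t 30

Unsupported-module complaints also tend to show up in dmesg, so that's worth a look if the link refuses to come up.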
 

brad

New Member
Dec 22, 2015
I spent a few hours reading up on that Quanta LB4M switch. The price is superb and technically it seems good too, but the 60-70 dB of fan noise might be a problem here.
Are there any alternatives?
We need/want just a layer 2 switch - no routing functions whatsoever.
At least 24 ports, but 48 would be better.
And 2x 10Gbit SFP+ ports, both connecting to a dual 10Gbit NIC.
Another thing to consider: the switch will be running 24/7, and each watt costs us around 2.26€ per year. The Quanta draws around 60-70 W (~150€ per year).
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
There are some threads on here about switches under $500 with 10Gb - check those. There are many, many choices, since dual 10G is really easy to find. Remember that most of these switches are meant to live in a datacenter, so fan noise is usually high, but you can fan-mod them.
 

whitey

Moderator
Jun 30, 2014
Yes - the OS on that client will see a single link with 2G as the speed. However, the bonding driver still needs to decide which physical 1G port to send every outgoing packet to, and unfortunately the methods for picking a port are all pretty dumb and don't consider utilization as a factor (and the same thing happens inside the switch for packets coming in). There are different algorithms available (in the Linux bonding driver, look at the 'xmit_hash_policy' option if you want to override it), but it's almost certainly going to be based on some combination of source/destination addresses (MACs or IPs). So traffic between a single client and server all ends up going over only a single NIC in the bond.
In vSphere, check out 'Route based on physical NIC load', a.k.a. LBT / load-based teaming.

Frank Denneman has a great post on it. I fully agree that most people mistakenly think LACP/bonding/aggregation will be their saving grace, when in reality it takes a LOT of planning and a sophisticated setup to implement and scale that config properly... not to mention getting far away from the KISS principle. With LBT configured on a VMware vDS, you just use a regular ol' access/trunk switchport setup, and at 75% port utilization for more than 30 seconds it will balance/migrate connections/sessions to the other uplink NIC. No fuss, no muss... I will admit I think they bundle this feature at Enterprise licensing, so boo there unless you've got the keys to the kingdom. :-D

Load Based Teaming - frankdenneman.nl
IP-Hash versus LBT - frankdenneman.nl