10GbE NIC suggestions


kingmouf

Member
Jun 15, 2016
Hi all, I need your help since my knowledge of networking equipment is pretty basic. I have a Threadripper workstation (Linux/Windows) and I recently converted my previous Xeon workstation to a server/NAS device. Both workstations are connected through 1GbE to my router and from there to the internet. As I need very fast access to large files on the NAS (incl. large photo and video files), I am thinking of installing two additional 10GbE NICs, one in each workstation, and connecting them directly.

- is there any reason to consider SFP+ over BaseT solutions? The two workstations are sitting next to each other, so the cables can be very short (even 1m suffices).
- can you propose any reasonably priced NICs? For some reason the NICs that I find for consumer setups seem to be on the pretty expensive side (100+ euros per NIC with only one port). I need to mention that the Threadripper workstation has PCIe 4.0 slots available while the Xeon workstation has PCIe 3.0 slots available.
- looking at the used market (e.g. eBay), I find a plethora of server cards available, but I am not sure whether these cards can be installed in a typical workstation rather than a server, especially NICs from server manufacturers (HP, Dell, Lenovo etc.). Is this concern valid, or can these cards be used?

Any help / proposal would be highly appreciated!

Thank you!
Andreas
 

i386

Well-Known Member
Mar 18, 2016
Germany
for me sfp+ based solutions are more "futureproof" (thanks to the backwards compatibility of (q)sfp28/56) and offer flexibility: cheap dac cables up to 3 meters, cheap optical transceivers (even used original cisco ones!) and fibers for distances >10m

for 10gbe and faster I would look for mellanox nics. The older cx-3 can be had for <50€ each and newer cx-4 for ~100€ each (in EU).
these cards are far better than anything you will find among the consumer 10gbe cards.
most of these cards on ebay are oem variants from hpe, dell, lenovo etc. and can be flashed with vanilla firmware (search the forums for mellanox firmware, there are plenty of threads). no problems using them in "normal" systems*

pcie 3.0 or 4.0 won't make a difference for 10gbe :D

*I have a first gen threadripper mainboard from asus that needed a bios update: with a mellanox nic with an x16 connector installed, the system didn't boot until after the update
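
if you go the crossflash route, it's roughly this with mstflint - a rough sketch only, the pci address and firmware filename below are placeholders and the exact image depends on your board/psid (see the firmware threads here for details):
```
# find the mellanox card's pci address
lspci | grep -i mellanox

# check the current firmware version and psid (oem cards report an hpe/dell/lenovo psid)
mstflint -d 04:00.0 query

# burn the stock mellanox image for your exact board variant;
# -allow_psid_change is needed when replacing an oem psid with the stock one
mstflint -d 04:00.0 -i fw-ConnectX3-rel.bin -allow_psid_change burn

# reboot (or reload the mlx4 driver) afterwards and query again to verify
```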
 

sko

Active Member
Jun 11, 2021
- is there any reason to consider SFP+ over BaseT solutions? The two workstations are sitting next to each other, so the cables can be very short (even 1m suffices).
Optical transceivers have MUCH less power draw and heat dissipation than copper ports.

- can you propose any reasonably priced NICs? For some reason the NICs that I find for consumer setups seem to be on the pretty expensive side (100+ euros per NIC with only one port). I need to mention that the Threadripper workstation has PCIe 4.0 slots available while the Xeon workstation has PCIe 3.0 slots available.
If price is the primary factor, look for X520-da2 NICs. If you want to have something more recent and more power-efficient, try to get a good deal on X710-da2.
Another option might be Mellanox ConnectX-3 or later. Especially if you aren't planning on much surrounding infrastructure (i.e. just host-to-host cabling), you could also jump directly to 40Gbps on a budget.
For DACs or transceivers and cabling I recommend fs.com - we've been using their products for years without any problems; delivery is super fast and support is excellent.

- looking at the used market (e.g. eBay), I find a plethora of server cards available, but I am not sure whether these cards can be installed in a typical workstation rather than a server, especially NICs from server manufacturers (HP, Dell, Lenovo etc.). Is this concern valid, or can these cards be used?
Why not? As long as you stick to PCIe cards and not some proprietary interface cards, they work in anything that has PCIe...
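
For a direct host-to-host link you don't need anything beyond the two NICs and a cable. A minimal sketch on Linux could look like this (interface names and addresses are just examples, and this isn't persistent across reboots - use your distro's network config for that):
```
# workstation A (run as root / with sudo; adjust the interface name to what your system shows)
ip addr add 10.10.10.1/30 dev enp5s0
ip link set dev enp5s0 mtu 9000 up   # jumbo frames are optional, but both ends must match

# workstation B
ip addr add 10.10.10.2/30 dev enp5s0
ip link set dev enp5s0 mtu 9000 up

# quick sanity check from workstation A
ping -c 3 10.10.10.2
```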
 

kingmouf

Member
Jun 15, 2016
Thank you both for the prompt answers!

I will look into your proposed cards and see what deals I can find. The argument about power draw and heat dissipation is a very important one that I will have to take into consideration. Regarding cost, I was under the impression that SFP cabling was significantly more expensive than copper. Seems I will have to do my homework!

As for the question on the server cards, I was mostly thinking about cooling requirements (server chassis vs normal chassis), drivers and specific firmware issues that might make these cards suitable only for builds from the same manufacturer versus generic builds with off-the-shelf components. Flashing vanilla firmware, as the first responder suggested, may make this point irrelevant.

Thanx again!!! I will come back once I have researched what you suggested!!
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
If we are talking Linux here, then with two ConnectX3 cards flashed from QCBT to FCBT, a recent kernel and a qualified FDR DAC cable, you can get 44-48 Gbit/s net iperf throughput with 4-5 streams. FDR means 14 Gbit/s per stream times 4, voila, 56 Gbps. Cards need cooling, but 10 Gbps cards need that too. One stream between 10 Gbps NICs will only be 2500 Mbps; you need 4-5 to saturate there as well.
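
To reproduce numbers like that once the hosts are cabled up, something along these lines (addresses below are just examples):
```
# on one host: run the iperf3 server
iperf3 -s

# on the other host: single stream first, then 4 parallel streams
iperf3 -c 10.10.10.2 -t 30
iperf3 -c 10.10.10.2 -t 30 -P 4
```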
 

kingmouf

Member
Jun 15, 2016
If we are talking Linux here, then with two ConnectX3 cards flashed from QCBT to FCBT, a recent kernel and a qualified FDR DAC cable, you can get 44-48 Gbit/s net iperf throughput with 4-5 streams. FDR means 14 Gbit/s per stream times 4, voila, 56 Gbps. Cards need cooling, but 10 Gbps cards need that too. One stream between 10 Gbps NICs will only be 2500 Mbps; you need 4-5 to saturate there as well.
Wow, I completely lost you here. Why would I only get 2500 Mbps from a 10Gbps NIC?

Also, ConnectX-3 cards seem to be passively cooled, and the manual I could find (https://network.nvidia.com/pdf/user...d_Dual_SFP+_Port_Adapter_Card_User_Manual.pdf) does not mention special cooling requirements. It does mention a max power (for the 1-port variant) in the order of 5 watts, if I read this information right (page 47), which seems quite low and I guess can be covered by the typical airflow in a ventilated workstation case.

I also found this comparison table: ESPCommunity

If I read this right, the practical difference between ConnectX-3 and ConnectX-3 Pro is the support for RoCE. If I connect the two NICs directly to each other, though, I can go with IB, and both the plain and the Pro ConnectX-3 NICs seem to support RDMA.
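
I guess that once the cards are installed I could verify the RDMA side with something like the following, if I understand the tooling correctly - just a sketch:
```
# list rdma-capable devices (needs the rdma-core / libibverbs utilities)
ibv_devices

# show details, including the link layer (InfiniBand vs Ethernet) and port state
ibv_devinfo

# or, with a newer iproute2
rdma link show
```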

From a first quick search on eBay, I can find multiple offers for pairs of ConnectX-3 NICs with Cisco cables included for around 120-130 euros (e.g. https://www.ebay.com/itm/133641218351?hash=item1f1da3292f:g:tXgAAOSwszBgBWGX ). Do you think this is a valid choice?
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
Because 10 Gbps ethernet is one device, at nominal speed 10 Gbps, but physically it is four 2.5G streams bolted onto each other and the ethernet network chip hides that from you and balances the traffic. If you only have one stream, there is nothing to balance and you will only reach 2.5 Gbps. One stream is like some file copy in Explorer or Nemo. Same with 40/56G of a CX3, but here, one stream is already 10 or 14 Gbps.

Passively cooled does not imply "no airflow". Imagine an 80mm fan within 50mm distance blowing at 800rpm straight at the heatsink; it will need that much. Otherwise the chip will get too hot or die prematurely because you run it close to 100degC all the time. The last 10 Gbps card I had in my hands, a BCM 57810S dual-SFP+, also needs around that amount of airflow.

I wasn't talking about the low-end single-port CX3, I was talking about the high-end CX354 dual-port QSFP+, like an Oracle 7046442, which can be flashed to FCBT so you have full 40/56 Gbps ethernet. For 40 Gbps any DAC cable should do; for 56 Gbps you need a certified cable from Mellanox or EMC. See section 1.2.5 of https://network.nvidia.com/pdf/firmware/ConnectX3-FW-2_42_5000-release_notes.pdf
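
If you want to check whether your case airflow is sufficient, the Mellanox MFT tools can read the chip temperature, roughly like this (the device name is only an example, mst status will show yours):
```
# start the mellanox software tools service and list detected devices
mst start
mst status

# read the ASIC temperature in degC; watch it under sustained load
mget_temp -d /dev/mst/mt4099_pci_cr0
```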
 

kingmouf

Member
Jun 15, 2016
43
Because 10 Gbps ethernet is one device, at nominal speed 10 Gbps, but physically it is four 2.5G streams bolted onto each other and the ethernet network chip hides that from you and balances the traffic. If you only have one stream, there is nothing to balance and you will only reach 2.5 Gbps. One stream is like some file copy in Explorer or Nemo. Same with 40/56G of a CX3, but here, one stream is already 10 or 14 Gbps.
Hmm, I see. Thank you for the clarification. Is this something specific to the architecture of the Mellanox ConnectX chips, or does it generally apply to all 10Gbps networking solutions (even chipsets geared towards workstations, such as the Aquantia ones)? My application scenario is more geared towards one or two streams requiring high bandwidth rather than a typical server scenario where one would easily expect multiple streams.

Passively cooled does not imply "no airflow". Imagine an 80mm fan within 50mm distance blowing at 800rpm straight at the heatsink; it will need that much. Otherwise the chip will get too hot or die prematurely because you run it close to 100degC all the time. The last 10 Gbps card I had in my hands, a BCM 57810S dual-SFP+, also needs around that amount of airflow.

I wasn't talking about the low-end single-port CX3, I was talking about the high-end CX354 dual-port QSFP+, like an Oracle 7046442, which can be flashed to FCBT so you have full 40/56 Gbps ethernet. For 40 Gbps any DAC cable should do; for 56 Gbps you need a certified cable from Mellanox or EMC. See section 1.2.5 of https://network.nvidia.com/pdf/firmware/ConnectX3-FW-2_42_5000-release_notes.pdf
I understand that, and it is also why my initial post asked about server parts being used in a common workstation scenario. I have the same issue with the FPGA cards I am using - for server enclosures they come with passive heatsinks relying on the server's enclosure airflow, and for desktops they come with an active cooling solution. However, as I wrote earlier (and I am not 100% sure I am correct), from what I understood from the manual I read, the ConnectX-3 seems to draw something in the order of 5 watts, which seems low. Did I understand this correctly, or am I way off??
 

Rttg

Member
May 21, 2020
Yeah, I’m not so sure that prior information is right. QSFP+/28 uses four lanes to reach 40/100Gbps, but plain old SFP+ is 10Gbps on a single lane at the link layer. The only slight wrinkle is that if you’re running iperf3, you might need more than one stream (using the `-P` option) to saturate the link, given that single streams can be CPU-bound.
 

mattventura

Active Member
Nov 9, 2022
Because 10 Gbps ethernet is one device, at nominal speed 10 Gbps, but physically it is four 2.5G streams bolted onto each other and the ethernet network chip hides that from you and balances the traffic. If you only have one stream, there is nothing to balance and you will only reach 2.5 Gbps. One stream is like some file copy in Explorer or Nemo. Same with 40/56G of a CX3, but here, one stream is already 10 or 14 Gbps.
Sorta, depends on what layer you're talking about and which PHY type. Some are 4x2.5 (like -CX4 and XFP), others are 1x10 (like SFP+). But it does such a good job of hiding that from you that you can still get the full line rate out of a single stream. Though once you move up to 40g+ it's a lot harder for the host to actually send/receive that much data on a single stream. I have a couple hosts on QSFP interfaces and I can get more than 10 but still way less than 40 on a single stream.
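
If you're not sure what rate a link actually negotiated, ethtool will show it (interface name is just an example):
```
# "Speed:" shows the negotiated rate, e.g. 10000Mb/s or 40000Mb/s
ethtool enp5s0 | grep -i speed
```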