Crazy 100GbE adapter for $100


koifish59

Member
Sep 30, 2020
66
19
8
Mellanox has something similar called socket direct:
I don't understand this product. Why can't they just make a single card with 16x instead of needing two separate 8x lanes?

Maybe not enough PCB space so they needed a daughter board?
 
Last edited:

TRACKER

Active Member
Jan 14, 2019
178
54
28
This adapter does not look exactly like a normal NIC; it looks more like some custom routing/packet-filtering solution. And it has drivers for Linux only... I cannot even download them from the official vendor website, it says 'contact support'.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,803
113
I don't understand this product. Why can't they just make a single card with 16x instead of needing two separate 8x lanes?

Maybe not enough PCB space so they needed a daughter board?
Three easy answers in this case.

First, a PCIe Gen3 x16 slot cannot handle 200Gbps of bandwidth. You need Gen4 for that, since a Gen3 x16 slot can handle just over 100Gbps.

Second, you can split the connection across different CPUs, with each socket getting its own lanes to the NIC. Instead of going across sockets, traffic goes directly to the network.

Third, one NIC is less expensive than two, one for each CPU.

To me, the multiple hosts to one NIC is super cool as well.
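
Rough math on the first point, using nominal link rates and ignoring protocol overhead, so real throughput lands a bit lower:

```python
# Back-of-the-envelope PCIe slot bandwidth vs. a dual-port 100GbE NIC.
# Nominal link rates only; protocol overhead trims a few percent in practice.

ENCODING = 128 / 130                      # 128b/130b line coding (Gen3 and Gen4)
GT_PER_LANE = {"Gen3": 8.0, "Gen4": 16.0} # transfer rate per lane, GT/s

def slot_gbps(gen, lanes):
    """Approximate usable bandwidth of a PCIe slot in Gbps."""
    return GT_PER_LANE[gen] * ENCODING * lanes

need = 200  # two 100GbE ports
for gen, lanes in [("Gen3", 16), ("Gen4", 16)]:
    bw = slot_gbps(gen, lanes)
    verdict = "enough" if bw >= need else "not enough"
    print(f"PCIe {gen} x{lanes}: ~{bw:.0f} Gbps -> {verdict} for 2x100GbE")

# PCIe Gen3 x16: ~126 Gbps -> not enough for 2x100GbE
# PCIe Gen4 x16: ~252 Gbps -> enough for 2x100GbE
```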
 

Dreece

Active Member
Jan 22, 2019
503
160
43
It is another one of those 'has been' concepts; it was rolled out in a few places IIRC due to specific server constraints. I remember seeing them at a Fast centre in London years ago.

The Mellanox two-card solution was/is better; at least you get RDMA along with NVMe-oF support, so it is still technically viable and practical in today's modern homelabs and low-budget SMB designs.

Realistically, these Intel FM-series cards are good for tinkering with vintage hardware and old-hat datacentre networking concepts that never took off. Other than that, they are largely impractical in today's world, especially considering their power needs and lack of driver support in VMware and Hyper-V.

They should be priced lower; $100 is like selling gone-off milk.
 
  • Like
Reactions: Yunia

neobenedict

Member
Oct 2, 2020
64
21
8
They should be priced lower; $100 is like selling gone-off milk.
Even so, they're still (probably) 100G (or 200G) capable NICs... you don't really get those for $100. Do you mean lack of driver support in the hosts, or lack of passthrough support to VMs?

Definitely not something that should be used outside the homelab either way, but that's true for most eBay hardware.

Also, there are drivers written by Silicom for these cards in the Linux kernel. https://lore.kernel.org/dpdk-dev/CA...k=ztHZDNPW8qNqeaFdTpJApsJYA@mail.gmail.com/T/
 
Last edited:

Dreece

Active Member
Jan 22, 2019
503
160
43
It's simply a matter of one man's gold vs another man's junk.

It all depends on what people are going to be using 100G for. In most organised labs, whether home, research, or SMB, I can only imagine it being used for one thing: a data pipe from a storage box to an app box, etc. For those sorts of requirements, running a fabric without RDMA and additional offloading tech at 100G is seriously taxing on the CPUs; hell, even 40G pushes CPUs.
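
To put rough numbers on that (all assumed values, not measurements: MTU-sized frames, a 3 GHz core, and a guessed per-packet cost for a non-offloaded software path):

```python
# Rough feel for the CPU cost of pushing 100G without offloads.
# Every number here is an illustrative assumption, not a benchmark.

LINK_GBPS = 100
FRAME_BYTES = 1500                          # MTU-sized frames (small packets are far worse)
pps = LINK_GBPS * 1e9 / (FRAME_BYTES * 8)   # packets per second to fill the link

CPU_HZ = 3.0e9                              # one 3 GHz core
CYCLES_PER_PKT = 1500                       # ballpark guess for per-packet stack work

cores = pps * CYCLES_PER_PKT / CPU_HZ
print(f"~{pps / 1e6:.1f} Mpps at {LINK_GBPS}G with {FRAME_BYTES}B frames")
print(f"~{cores:.1f} cores at 3 GHz just shuffling packets, before the app does anything")
# -> roughly 8.3 Mpps and about 4 cores under these assumptions
```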

The point is, when you start going up in bandwidth, you need to offload as much of the work as possible unless you're actually dedicating CPUs to that work, which the average small lab doesn't really want (costs, power, etc.). And with a lot of these 'rare' cards, even though there is basic support in Linux, not all of the offloading features actually work; that needs investigation, and even then it's still not worth the hassle going forward, because the tech was thrown off the cliff a long while back and continued support for the driver is always in question. To really drive the point home: take eight x4-lane NVMe drives and slap a 100G crap NIC in front of them, good luck!

This is why Nvidia bought Mellanox, and what that whole article Patrick did on the subject was about. The future is about abstracting the flows, not consolidating them; hopefully one day CPUs will only do what they were meant to do, executing algorithms for the apps, and leave the rest of the workload to devices outside the CPU. Obviously companies like Intel never really liked this idea back in the day, because it means losing more control of the platform ($$$), but even they are now facing the music. The market is wiser today, and tech is going to leapfrog the limits of the CPU with or without the CPU manufacturers' support; the former is inevitable, especially with programmes like Gen-Z going on out there.

Anyway, going back to used hardware: I see a gazillion posts where people jump around with tech, "bargain this", "got that", bla bla bla. Half the stuff doesn't even function as advertised without customised patches in the later kernels, it cooks the inside of your server case, forcing fans to run louder and draw more power, and frankly the support is seriously lacking because the demand isn't there. Just because something 'works' doesn't necessarily mean it works how it was intended to work, with all its bells and whistles, unless you start knocking on the door of RHEL installations stuck on an older but still supported release for that particular driver version.

To avoid going off on a banter tangent, bottom line: if used prices of poor tech stay high, the good tech stays higher, and that has always been the case across the board. A great many more 100G NICs are going to hit the market very soon; the doors are opening, and 100G switches are already knocking around the 700-900 mark... The reality check here is that MOST people are barely cracking 25G loads, let alone 40G. Right now, 100G awards pub-bragging rights, not much else. Many of us have been on 100G for over a year now; for me it was only recently that I even made good use of it, and that is because of a business function, but as far as personal use goes, hell no.
 

chinesestunna

Active Member
Jan 23, 2015
621
194
43
56
Looking at that "daughter card" for additional PCIe lanes (presumably because x16 slots are less common in a multitude of server applications) makes my mind play out "We need for PCIe Lanes!" in the Starcraft voice haha
 

Dreece

Active Member
Jan 22, 2019
503
160
43
A colleague informed me that these cards are good for custom networking applications; their offload capabilities lack RDMA, but they do have the usual receive-side scaling along with segmentation offload and TCP/UDP/IP checksums. So I wouldn't totally discount them, just not something that I'd recommend for future-proofing.

I'm sure there are some labs out there doing bizarre and wonderful things with their networks where such cards can come in quite handy, as long as Linux is in play.
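
For anyone who picks one up, a quick way to see which of those offloads the driver actually reports under Linux; this assumes the port enumerates as a standard netdev, and the interface name below is hypothetical:

```python
# List the segmentation/checksum offloads a Linux netdev reports via ethtool.
import subprocess

IFACE = "ens1f0"  # hypothetical interface name; substitute whatever the driver creates

out = subprocess.run(["ethtool", "-k", IFACE],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    # Segmentation and checksum offloads show up here; RSS details live under
    # `ethtool -x` (indirection table) and `ethtool -l` (channel counts).
    if any(key in line for key in ("tcp-segmentation-offload",
                                   "tx-checksumming", "rx-checksumming")):
        print(line.strip())
```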
 
  • Like
Reactions: Yunia and Patrick

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,803
113
Mine arrived today. I could not see any sign of wear on mine, and it looked like the default packaging.

One thing I want to try is to see if I can get three nodes connected to the same adapter.
 
Last edited:
  • Like
Reactions: Yunia and Dreece

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,803
113
I spoke to someone at Silicom today. I think this is going to be a weekend project, but I am more excited about this card than ever.
 

Prof_G

Active Member
Jan 16, 2020
133
79
28
If it's going to be that type of weekend, make sure to start the smoker. Tasty brisket sammiches are good for dripping on server parts; it's like baptism for them.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,803
113
If it's going to be that type of weekend, make sure to start the smoker. Tasty brisket sammiches are good for dripping on server parts; it's like baptism for them.
Not that type of weekend. Spare the Air Alert today again. 48 days this year.

I am planning for my next brisket to be done via PoE.
 
  • Like
Reactions: bob_dvb

Prof_G

Active Member
Jan 16, 2020
133
79
28
With the amount of wildfires we have had, it doesn't surprise me. I'm not sure we get those notifications in Los Angeles County.
 
  • Like
Reactions: Patrick

Patrick

Administrator
Staff member
Dec 21, 2010
12,513
5,803
113
Yea @Prof_G, they just extended the current one again. No wood fire burning through Tuesday.

That should put us over the 50-day mark this year. When we hit 48 this year, it had already broken the record set in 2017 or 2018. I have lived in California for ~27 years. It never used to be like this.
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
I purchased 2 of these cards; it would be good to get feedback on whether anyone is able to get them to work, and in which type of host.
Best,
jp
 

james23

Active Member
Nov 18, 2014
441
122
43
52
... The reality check here is that MOST people are barely cracking 25G loads, let alone 40G. Right now, 100G awards pub-bragging rights, not much else. Many of us have been on 100G for over a year now; for me it was only recently that I even made good use of it, and that is because of a business function, but as far as personal use goes, hell no.
Thanks for this reply/quote, I was very glad to read it.

Out of curiosity, is there any chance you could share what that 100G use case is/was? Just in general terms, of course.

Network bandwidth always fascinates me. Some 100G use cases off the top of my head:
+ Uplinks for 10G network switches (most IME have 40G or multi-40G uplinks, however)
+ Data-center ISP/upstream uplinks or cross-connects (again, switches though)
+ HPC node-to-node / rack-to-rack links (CPU-related research tasks, or GPU-related tasks like ML or AI)

What are some other uses for a host/server needing beyond 40Gbit?

Can any layer-3 "router OS" even saturate a 100Gbit NIC/link? (i.e. a normal x86 server doing L3 routing tasks such as CG-NAT/QoS/filtering, not a $50k+ task-specific hardware/ASIC router)

thanks!
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
The PCIe 3.0 bus has been a bottleneck for a long time for my use case. It involves a software-based traffic generator for testing VNFs (virtual network functions). I have built several SFF appliances that are capable of generating the traffic of over 50M UEs (think cell phones) for testing 5G networks. Mellanox (now part of Nvidia) has made dual-port 100GbE, and now dual-port 200GbE, NICs for many years. With AMD Epyc Rome CPUs, we are now able to push the PCIe bus to its current limits (in the case of dual 200GbE NICs). Mellanox also makes NICs with a daughter-type card similar to this franken-NIC (I don't quite know what to think of this NIC yet!); they call it Socket Direct. In the Mellanox case, they can aggregate two x16 PCIe 3.0 slots so as to not oversubscribe the bus with their dual-port 100GbE NICs. I am still not clear how many combined lanes this Silicom card is using. Is it 2x x8? If so, that is only x16 of PCIe 3.0 in total, which still creates a bottleneck for the dual 100GbE ports. The Intel datasheet shows the FM10840 controller capable of 4x x8, which would theoretically not create a bottleneck.
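
Quick sanity math on that lane question, using nominal Gen3 rates and ignoring protocol overhead:

```python
# Does 2x x8 vs. 4x x8 of PCIe Gen3 keep up with two 100GbE ports?
# Nominal link rates only; real protocol overhead shaves a few percent off.

GEN3_LANE_GBPS = 8 * (128 / 130)    # ~7.88 Gbps per Gen3 lane after 128b/130b coding
PORTS_GBPS = 2 * 100                # dual 100GbE

for x8_links in (2, 4):
    host_bw = x8_links * 8 * GEN3_LANE_GBPS
    verdict = "no bottleneck" if host_bw >= PORTS_GBPS else "bottleneck"
    print(f"{x8_links} x Gen3 x8 = ~{host_bw:.0f} Gbps vs {PORTS_GBPS} Gbps of ports -> {verdict}")

# 2 x Gen3 x8 = ~126 Gbps vs 200 Gbps of ports -> bottleneck
# 4 x Gen3 x8 = ~252 Gbps vs 200 Gbps of ports -> no bottleneck
```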

With regard to how practical this is for home use, I don't think it is that far off for some of the folks on this forum. You can now get 32x 100G switches relatively cheap, and there are also 100G NICs that are getting more reasonable. There are still some issues to contend with, e.g. those switches are really loud, and most folks don't have fiber running through their house. 100G DACs are cheap (at least the passive QSFP28 ones). Even if you are able to sort out those two issues, your main question is still valid: how do you take advantage of your newfound bandwidth for regular apps? For me it is very helpful due to my job. I am not sure how other folks would take advantage of this bandwidth.