OK, what am I missing - *new* Dell 25Gig NIC with iWarp for $76?


jcl333

Active Member
May 28, 2011
Someone hit me with something, because I must be stupid, I *have* to be missing something here:

Dell QLogic QL45212HLCU 25GB Dual Port SFP PCIe 3.0 x8 Ethernet Adapter XV3MV (eBay listing)

This appears to be a Marvell FastLinQ 45000 series - QL45212HLCU, no?
https://www.marvell.com/content/dam...stlinq-45000-series-product-brief-2019-11.pdf

Chinese fake? Fan really loud? Drivers suck? Only works in Dell servers? Something?

There are actually many of them on eBay in the $80 - $200 range.... not even hard to find.

These are more what I would expect:
https://www.amazon.com/QLogic-QL45212HLCU-CK-SFP28-Network-Interface/dp/B01F14WVGK
PROVANTAGE: QLogic QL45212HLCU-CK 2 Port 25GB PCIE GEN3 SFP28 Ethernet Adapter

I was researching cards that could do iWarp for S2D, and came across this, but it doesn't make sense.
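
For what it's worth, here is a rough way to sanity-check whatever card arrives on a Linux box - a minimal sketch, assuming the RDMA driver is loaded (I believe it's the qedr module for these FastLinQ parts); nothing in it is specific to this card:

#!/usr/bin/env python3
"""List RDMA-capable devices and their link layer via Linux sysfs."""
from pathlib import Path

IB_SYSFS = Path("/sys/class/infiniband")  # RDMA devices register here

if not IB_SYSFS.exists():
    print("no RDMA devices registered (driver missing or card has no RDMA)")
else:
    for dev in sorted(IB_SYSFS.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            link = (port / "link_layer").read_text().strip()
            # iWARP and RoCE devices both report "Ethernet" here;
            # `rdma link show` from iproute2 tells you which protocol
            print(f"{dev.name} port {port.name}: link_layer={link}")

If nothing shows up under /sys/class/infiniband, the card (or its driver) isn't exposing RDMA at all, which would answer the "too good to be true" question quickly.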

-JCL
 

i386

Well-Known Member
Mar 18, 2016
My guess is it's similar to "nobody got fired for buying Intel/Cisco", but for NICs.
I associate Marvell with ICs for SOHO networking gear and "simple"/cheap SATA controllers...

(Actually, Mellanox can also do InfiniBand -> great for DL/ML and other applications where you want low latency and high throughput)
 

jcl333

Active Member
May 28, 2011
My guess is it's similar to "nobody got fired for buying Intel/Cisco", but for NICs.
I associate Marvell with ICs for SOHO networking gear and "simple"/cheap SATA controllers...

(Actually, Mellanox can also do InfiniBand -> great for DL/ML and other applications where you want low latency and high throughput)
I know, right? It is true that Marvell makes some cheap stuff, but they acquired QLogic, which makes mostly good stuff (they make the Cavium stuff that Microsoft loves for S2D). Intel actually seems a bit behind the times on RDMA, surprisingly. As you allude to, it is more the territory of Mellanox, Chelsio, etc.

So, this is then getting OEM'd to Dell and others.

It would be cool if this is a genuine find...

-JCL
 

jcl333

Active Member
May 28, 2011
Ordered for Rohit to play with.
OK, nice. So nothing glaringly suspicious jumps out at you either?

These can do both iWARP and RoCEv2 if they are legit.

Heh, then I discovered this thread by fohdeesha, so 10Gig iWarp might actually be on the table for me after all here.
Only 160 pages more to read.....

-JCL
 

PigLover

Moderator
Jan 26, 2011
Still need a RoCE-capable switch to do it right. Without honoring the low-level flow controls (DCB) and ECN markings, you'll get packet loss that will crush RDMA performance. Might work OK for 2-3 node deployments, but beyond that all the benefits of RDMA are likely lost.
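
A quick back-of-envelope model of why that loss hurts so much - my own simplification (go-back-N style recovery, made-up window and timeout values), not measured numbers:

# Crude model: on a drop, the QP stalls for a timeout and resends the
# whole window, so tiny loss rates eat a big slice of link time.
LINE_RATE = 25e9        # bits/s
MTU = 4096              # bytes per packet (assumed)
WINDOW = 64 * MTU       # bytes in flight per QP (assumed)
TIMEOUT = 1e-3          # seconds; assumed retransmit timeout

t_tx = WINDOW * 8 / LINE_RATE               # time to send one window
for loss in (0.0, 1e-4, 1e-3, 1e-2):
    n_pkts = WINDOW // MTU
    p_hit = 1 - (1 - loss) ** n_pkts        # window sees >= 1 drop
    t_avg = t_tx + p_hit * (TIMEOUT + t_tx) # stall + full resend
    print(f"loss={loss:.0e}: ~{WINDOW * 8 / t_avg / 1e9:.1f} Gbit/s effective")

On those assumptions, 0.1% loss already cuts an otherwise idle 25Gig link roughly in half, which is exactly why PFC/ECN keeping the fabric lossless matters.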
 

jcl333

Active Member
May 28, 2011
Still need a RoCE-capable switch to do it right. Without honoring the low-level flow controls (DCB) and ECN markings, you'll get packet loss that will crush RDMA performance. Might work OK for 2-3 node deployments, but beyond that all the benefits of RDMA are likely lost.
Right, but you could use iWarp instead, if your use case can utilize it.

I haven't read far enough into the fohdeesha thread to see if there are any of those cheap Brocades with RoCEv2 support; I am doubting it.

-JCL
 

fohdeesha

Kaini Industries
Nov 20, 2016
Right, but you could use iWarp instead, if your use case can utilize it.

I haven't read far enough into the fohdeesha thread to see if there are any of those cheap Brocades with RoCEv2 support; I am doubting it.

-JCL
The ICX 7250, ICX 7450, and ICX 7750 all support PFC/lossless Ethernet; can't remember what else RoCE requires, but none of them (nor the Mellanox mentioned above) support 25G (1G/10G/40G only).
 

jcl333

Active Member
May 28, 2011
The ICX 7250, ICX 7450, and ICX 7750 all support PFC/lossless Ethernet; can't remember what else RoCE requires, but none of them (nor the Mellanox mentioned above) support 25G (1G/10G/40G only).
Well hello! Speak of the devil... thanks for chiming in. So maybe the trade-off is you have to go with 10Gig, or suffer with 40Gig.... hehe.

I am torn on the information in your thread. Some of the features for the price there are indeed drool-worthy, but I used to work on my company's network team before I came over to the server team, and I'm not sure I want to dust off my Cisco hat..... maybe; time is my most valuable resource with my kids and everything. I don't think you will find a cheaper option for RoCE short of going with old InfiniBand stuff.

I was leaning toward just getting a cheap(ish) but easy-to-use 10Gig switch (like Ubiquiti) and living with being limited to just iWarp. That being said, I am reading that newer RoCE can be done without switch support. And I could potentially also just do iWarp with cables and no switch; Chelsio even has a solution that embraces that specifically, without switches: https://www.chelsio.com/wp-content/uploads/resources/s2d-ring-quickstartguide.pdf But I haven't started looking to see if anyone has tried it.

If these NICs turn out to be a thing then I can go either way and keep my options open.

-JCL
 

Dreece

Active Member
Jan 22, 2019
I've been playing with both setups (iWarp/RoCE). iWarp works out of the box, but Chelsio is really bad with driver support for anything outside of Windows/Red Hat/SUSE enterprise; plus their driver doesn't even compile on v5 kernels. I tried to patch up the code: a gazillion issues. Also, Chelsio support sucks, really only for enterprise contracts. Forum? Doesn't exist.

Mellanox only does RoCE (v1/v2 etc.), however their support is amazing: great forums, and drivers exist for virtually all popular distros.

Regarding Cavium/QLogic: indeed, Marvell bought them up, I believe, and I think you're going to find it quite difficult to get ongoing driver support, let alone current support as it is. If you look at their products page you'll notice they've recently released new cards which have inherited the tech from the Cavium/QLogic corner, and many of those cards don't come with publicly available drivers, i.e. specialist/OEM contracts with enterprise etc.

Bottom line... for driver support and tech-savvy homelabs, Mellanox is king, and they have a superb track record of supporting old-generation cards too. I'm sure this wide support of the homelab/tech community has helped them rapidly gain a huge chunk of the enterprise sector as well, simply because of their well-known, exceptional support.
Unfortunately, Chelsio throws away old gens very quickly. Cavium is dead. QLogic is dead. Marvell is a whole different beast altogether, and their entry into enterprise is mostly at an OEM level, so I'm not really sure how that will pan out in the future when OEMs jump ship, as far as continued driver support goes.

Stick with Mellanox if you can, that's my 2 pence. I'm a big fan of their ConnectX-4 Lx range: from Hyper-V to Linux guests to Proxmox to Windows guests, SR-IOV works across virtual platforms and the drivers are rock solid.
 

jcl333

Active Member
May 28, 2011
I've been playing with both setups (iWarp/RoCE). iWarp works out of the box, but Chelsio is really bad with driver support for anything outside of Windows/Red Hat/SUSE enterprise; plus their driver doesn't even compile on v5 kernels. I tried to patch up the code: a gazillion issues. Also, Chelsio support sucks, really only for enterprise contracts. Forum? Doesn't exist.

Mellanox only does RoCE (v1/v2 etc.), however their support is amazing: great forums, and drivers exist for virtually all popular distros.
Well, I am doing Windows, but I can certainly appreciate the driver frustration you speak of.
I use a lot of Mellanox FC HBAs at work and never had a problem with them, so I didn't know the support was so good ;-)

Regarding Cavium/QLogic: indeed, Marvell bought them up, I believe, and I think you're going to find it quite difficult to get ongoing driver support, let alone current support as it is. If you look at their products page you'll notice they've recently released new cards which have inherited the tech from the Cavium/QLogic corner, and many of those cards don't come with publicly available drivers, i.e. specialist/OEM contracts with enterprise etc.
Huh, I wonder why Microsoft is pushing these so much for S2D, then. Still, this is a Dell card, and HPE makes one as well, so I could likely use the drivers they have, and both of those companies tend to support things for a number of years. Also, if we are talking <$100 cards, they would not owe me much....

Bottom line... for driver support and tech-savvy homelabs, Mellanox is king, and they have a superb track record of supporting old-generation cards too. I'm sure this wide support of the homelab/tech community has helped them rapidly gain a huge chunk of the enterprise sector as well, simply because of their well-known, exceptional support.
Unfortunately, Chelsio throws away old gens very quickly. Cavium is dead. QLogic is dead. Marvell is a whole different beast altogether, and their entry into enterprise is mostly at an OEM level, so I'm not really sure how that will pan out in the future when OEMs jump ship, as far as continued driver support goes.

Stick with Mellanox if you can, that's my 2 pence. I'm a big fan of their ConnectX4-LX range, from Hyper-V to linux guests to Proxmox to Windows guests, SR-IOV works across virtual platforms and the drivers rock solid.
I certainly appreciate this kind of first-hand knowledge and will take it seriously.
If I can find good enough deals on them, maybe.
Mostly I was trying to avoid having to buy very expensive and complicated switches to support RoCE, but it is looking like people have had some success with either crossover cables or 2-4 nodes on their own VLAN without specific switch support.

-JCL
 

Dreece

Active Member
Jan 22, 2019
Microsoft pushes a lot of things; industry rarely grabs onto what they push for more than a year or two, those days have long gone. As homelab'ers, you want to buy into something where you can verify that driver support covers all the technology you're keen on using, functioning bug-free... i.e. RDMA from server to server, from server to clients (Win 10 Pro etc.), RSS, more offloading, Linux support (if needed)... but at a few dollars here and there, another facet of being a homelab'er is that you could be the guy who does the buying/testing and promotes it to the rest of us too!! As many on here have done in the past.
 

jcl333

Active Member
May 28, 2011
Don't worry, I am not a Microsoft loyalist; I hate all software and platforms equally, I don't have favorites.
They make some good stuff, some not, and as long as I am paid to learn and use it, that is what I do.

How much of a headache have you had with RoCE?
 

jcl333

Active Member
May 28, 2011
Actually, it looks like you can get the ConnectX-4 Lx dual-port 25Gig for around $130 if you shop around.
 

Dreece

Active Member
Jan 22, 2019
If you're sticking to Windows, Red Hat Enterprise, or SUSE Enterprise, then the Chelsio T5/T6 are great; my only concern with them is that if and when T7 comes out, based on history, they will probably stick two fingers up at T5 owners... guess we can call it capitalism.

If you want to work across platforms using virtualisation, be it Hyper-V or Proxmox (KVM), employing SR-IOV (which simply just works, as soon as you install the distro!) in Linux guests etc., then Mellanox is the one to go for, because they are heavily supported across the Linux hemisphere and not just the enterprise distros; thus, when any new kernel pops along, the Mellanox driver maintainers will throw out a driver upstream very quickly. Honestly, hardware companies could learn a lot from Mellanox and how well they integrate with the open-source Linux community - 'distro agnostic'.
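
For reference, spawning VFs on Linux doesn't need anything vendor-specific - a minimal sketch using the standard sysfs knob (the interface name is made up; needs root and a NIC/driver with SR-IOV enabled in firmware/BIOS):

#!/usr/bin/env python3
"""Spawn SR-IOV virtual functions via the standard Linux sysfs knob."""
from pathlib import Path

IFACE = "enp1s0f0"  # hypothetical interface name; substitute your own
dev = Path(f"/sys/class/net/{IFACE}/device")

total = int((dev / "sriov_totalvfs").read_text())
print(f"{IFACE}: up to {total} VFs supported")

# The kernel requires resetting to 0 before changing to a new VF count.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text("4")
print("VFs active:", (dev / "sriov_numvfs").read_text().strip())

The VFs then show up as ordinary PCI devices you can pass through to Hyper-V or KVM guests.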

With Mellanox, I only had to set traffic classes up on my switch and everything just worked, and even that configuration takes literally a couple of minutes; plenty of guides are up on each major switch manufacturer's forums for that, far from black magic. I haven't actually tried running RoCE across a switch without traffic priority etc.; there used to be packet-loss potential on that front, though maybe not in v2, so that is something I'd recommend looking into if you're not planning on investing in a more recent ToR switch.

Half the fun of RoCE IMO is getting in there and learning more about switches rather than just plugging into these magical silver/black boxes. Once you start talking Cisco or Juniper or whatever brand takes your fancy, your switch becomes your best buddy and you form a relationship you never thought possible; a managed switch which is truly managed is a happy, secure switch with a happy, relaxed master. I ssh in at least a couple of times a week just to keep her happy; just wish I had more work for her to do than throwing around a few files here and there.