Need recommendations for 10 Gbps NIC cards and switch for home lab

randman

New Member
May 3, 2020
25
3
3
I'm looking for recommendations to upgrade my home lab to 10 Gbps. I have three HPE ProLiant servers and am planning to also get a Synology NAS, so I will need four 10 Gbps NICs.

I was interested in the MikroTik CRS305-1G-4S+IN (4 10Gbps SFP+ ports) or MikroTik CRS309-1G-8S+IN (8 10Gbps SFP+ ports) from STH's reviews but can't seem to find SFP+ NICs that meet my needs.

I'm also open to, and have a slight preference for, switches/NICs that support Multi-gigabit Ethernet (10/5/2.5/1 Gbps). But so far, I haven't found an RJ-45 multi-gigabit switch that's as good a deal as the MikroTik SFP+ switches.

It also seems that, in general, the Multi-gigabit hardware is more recent than the hardware that supports SFP+?

My needs are:

1. The 10 Gbps NIC must work in a PCIe 3.0 x4 slot. A lot of the NICs that I see are x8 cards.

2. NIC should support VMware ESXi 6.7.

3. NIC should support, or plan to support, VMware ESXi 7.0.

4. Ideally, cost should be under $150. I like the Intel X550-T1 (RJ45 NBASE-T). However, it is very expensive (and cheaper deals on eBay entail ordering from overseas, which makes me a little nervous about authenticity, not to mention having to wait a month or so for shipment). I saw some cheaper NICs from TRENDnet and StarTech, but their support stops at ESXi 6.5; they don't support ESXi 6.7 or 7.0. Mellanox cards seem expensive, and the ones that I found interesting were EOL.

There are seemingly plenty of 10 Gbps NICs on VMware's hardware compatibility list, but it's hard to figure out which ones are good/inexpensive from the list.

Anyone have any suggestions on NICs and switches?
 

randman

New Member
May 3, 2020
25
3
3
I just saw Patrick's MikroTik CRS312-4C+8XG-RM review. Nice switch, and a good cost for what it provides. It looks like its fan noise is high, but this would be for basement use, so I can live with the fan. More challenging is finding 10 Gbps NICs that meet my aforementioned requirements (PCIe 3.0 x4 slot, VMware ESXi 6.7 & 7.0 support, and not too expensive).
 

Mithril

Member
Sep 13, 2019
96
19
8
Does it need to be x4 PHYSICAL? Most cards (I only say most because it's possible for cards to not follow the spec correctly :) ) should downgrade correctly in powers of 2, all the way down to a single lane. As for Gen 3: unless you are running dual-port, even 2.0 at x4 electrical gets you 16 Gbit/s. If your constraint is only "needs to work in an x4 electrical slot", that makes it a lot easier :)
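The lane/generation arithmetic above can be sketched in a few lines (rough numbers; this assumes only line-encoding overhead, 8b/10b for Gen 1/2 and 128b/130b for Gen 3, and ignores the rest of the PCIe protocol overhead):

```python
# Rough usable PCIe bandwidth per link, after line-encoding overhead.
# Gen 1/2 use 8b/10b encoding (80% efficient); Gen 3 uses 128b/130b (~98.5%).
GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0}          # gigatransfers/s per lane
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}  # encoding efficiency

def usable_gbps(gen: int, lanes: int) -> float:
    """Approximate usable Gbit/s for a PCIe link of this generation/width."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes

print(usable_gbps(2, 4))   # Gen 2 x4: 16.0 Gbit/s, matching the figure above
print(usable_gbps(3, 4))   # Gen 3 x4: ~31.5 Gbit/s
```

Real-world throughput lands a bit below these numbers once TLP headers and flow control are accounted for, but they're close enough for slot-shopping.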
 

randman

New Member
May 3, 2020
25
3
3
My x16 and x8 slots are already being used by an Nvidia GPU and an NVMe PCIe card. That leaves me with only two slots in my servers (HPE ML30 Gen9), which are both x4 physical and actually just x1 electrical. I know that if I use these slots I would theoretically only get 8 Gbps instead of 10 Gbps, but I can live with that (especially since the NAS's SSDs won't get close to that). Or, in the future, if the NAS with 10 Gbps is deemed fast enough, I may do away with the NVMe PCIe card and free up the x8 slot (which is actually x8 physical but only x4 electrical). But for now, I only have those two x4 physical slots available.

I'm happy with single port 10Gbps NICs. I don't need dual port, unless the price difference is small enough and I would still get 8Gbps on the x4 slot (when only using 1 of the 2 ports).
 

PigLover

Moderator
Jan 26, 2011
2,917
1,234
113
I can't give you specific NIC recommendations - the VMware thing takes it out of my area of expertise. But I will comment on a couple of other things you mentioned.

While I am a fan of Multi-gig (NBASE-T 2.5, 5, etc.), I don't think it's mature enough today to be driving your decisions. There aren't enough switches, the available NICs are immature, and - since you want to manage costs - there just isn't any flow of used/refurb gear on the market.

Also, you seem to be leaning towards 10Gbase-T solutions. But unless you have a reason to go there, I think you'll actually find SFP+ much better. I'm assuming your three servers and the NAS are reasonably co-located. You'll find that there is normally a lot of choice and selection in SFP+ equipment, whereas Base-T is more limited and generally more expensive - often a lot more expensive.

If you have to pull cables through the walls perhaps Cat-6 is easier to deal with than fiber - but otherwise there is not much of a reason for it today.

As others have noted, you can use most x8 NICs in an x4 slot as long as the card will physically fit. Even if the slot is x4 physical it can still work if the socket is open at the back and there are no other components in the way. Unless you are driving 2 links at full speed at the same time it shouldn't be an issue. And if the socket isn't open you can always open it yourself. It's an easy mod.

Lastly, you are making this jump at a time when the used/refurb channel is pretty drained. If times were "normal" I think you'd find a lot more product on eBay. Just bad timing. Sorry.
 

randman

New Member
May 3, 2020
25
3
3
Thanks, PigLover. The servers and NAS are all close to each other in the basement, so they can all use new cables. No need to use my existing in-wall Ethernet cables that go elsewhere in my home (most are Cat 6 or Cat 5e, but some are Cat 5, depending on the year I fished the wires). I guess 10Gbase-T seems more appealing in theory, but I'm open to using either one for now, since 10 Gbps use would remain within the basement. So, cost would be the overriding factor.

I checked my servers, and yes, all the PCIe slots take full-length cards (and have openings in the back). I wasn't sure if it was safe to assume that all (or most) x8 NICs would physically fit in the x4 slots. Maybe I read one too many old threads about people using their Dremel with their cards...
 

randman

New Member
May 3, 2020
25
3
3
Okay, so I found decent deals on the Intel X520-DA1 (single-port SFP+) and Intel X520-DA2 (dual-port SFP+). These are older Intel cards that are x8 lane, PCIe v2.0 5.0 GT/s cards. I'm a little rusty on how PCIe v2.0 cards work in PCIe v3.0 slots. My servers have a spare PCIe v3.0 slot that is x4 physical but x1 electrical. In theory, the x1 electrical will limit me to 8 Gbps, which is fine by me. But can someone confirm that, theoretically at least, if I put a PCIe v2.0 x8 NIC in my empty PCIe v3.0 slot (x4 physical, x1 electrical), I can get 8 Gbps out of the NIC?
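For what it's worth, a PCIe link trains to the lower of the card's and slot's generation and the narrower of their widths, so a Gen 2 x8 card in a Gen 3 x1-electrical slot should run at Gen 2 x1, not Gen 3 x1. A rough sketch of the arithmetic (assumes standard line encoding only; real protocol overhead shaves off a bit more):

```python
# Estimate the usable bandwidth a NIC gets after PCIe link training.
# The link negotiates the min generation and min width of card and slot.
RATE_GT = {1: 2.5, 2: 5.0, 3: 8.0}           # GT/s per lane
ENCODING = {1: 0.8, 2: 0.8, 3: 128 / 130}    # line-encoding efficiency

def negotiated_gbps(card_gen, card_lanes, slot_gen, slot_lanes):
    """Usable Gbit/s after link training, ignoring higher-level overhead."""
    gen = min(card_gen, slot_gen)
    lanes = min(card_lanes, slot_lanes)
    return RATE_GT[gen] * ENCODING[gen] * lanes

# An X520-style Gen 2 x8 card in a Gen 3 slot wired x1 electrical:
print(negotiated_gbps(card_gen=2, card_lanes=8, slot_gen=3, slot_lanes=1))
# 4.0 Gbit/s usable - well short of 8 Gbps, let alone 10
```

So a Gen 2 card would only get roughly half of what a Gen 3 card could pull through that same x1 electrical slot.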
 

PigLover

Moderator
Jan 26, 2011
2,917
1,234
113
I worry that the x1, PCIe 2.0 speeds would make this untenable. YMMV, but I think you'd be very disappointed. If it were x4 electrical, even at 2.0 speeds, then it would be useful.
 

randman

New Member
May 3, 2020
25
3
3
Yeah, I'm suspicious of using a PCIe 2.0 card in a PCIe 3.0 x1 electrical slot. Rather than being limited by PCIe 3.0, I suppose it would be limited by PCIe 2.0, which is ~500 MB/s per lane. That's still a lot better than 1GbE (125 MB/s) but not up to a PCIe 3.0 lane's ~1000 MB/s capability. I suppose I need to look at PCIe 3.0 cards.
 

PigLover

Moderator
Jan 26, 2011
2,917
1,234
113
2.0 on a 3.0 slot works fine 99.9% of the time. 3.0 card on a 2.0 slot works fine somewhat less often (always if they got the backward compatibility right and fully tested, YMMV).

Your bigger issue is with trying to do 10GbE (and the associated signalling/control for the card) over a PCIe x1 electrical pathway. It's going to be too slow, which will lead to packet loss on the link, which will lead to TCP timeouts and retransmissions, which will yield very poor performance, which will lead to hair loss...
 

RTM

Active Member
Jan 26, 2014
520
185
43
Maybe something like one of Mellanox's single-port ConnectX-3 10G cards then, like this one:
That looks to be PCIe x4 too, so it looks perfect.
 

vangoose

Active Member
May 21, 2019
177
49
28
The Intel X540-T1 single-port 10GbE is a PCIe 2.0 x4 card. If you have an x4 slot, whether it's PCIe 2 or 3, you will run full 10GbE bandwidth. The X540 is much cheaper than the X550.

You may also look at Aquantia cards. Aquantia has been bought by Marvell and now has drivers for ESXi 6.7/7.0.
 

randman

New Member
May 3, 2020
25
3
3
@PigLover - even if my NAS will have SSDs (NAS limited by its own SATA interface), I don't think I would saturate the capability of one PCIe 3.0 lane (8 Gbps). But yeah, with a PCIe 2.0 card using just one lane, it will be much more prone to retransmissions/timeouts. I suppose if performance were really good with a 10 Gbps network, I wouldn't need the (now smallish) PCIe NVMe card that is using up my x4 (x8 physical, x4 electrical) slot and I'd move the NIC over to that slot.

@vangoose: I don't see Aquantia cards in VMware's compatibility list. I see Marvell.

@RTM: thanks for the Mellanox ConnectX-3 tip. I looked at Mellanox earlier and after seeing how many different series of cards they had (ConnectX-3, 4, 5, etc.), my head started spinning. But the MCX311A-XCAT looks interesting. Online prices are low too, which makes it very appealing. I also saw the MCX311A-XCCT. Still trying to figure out what's different between them.
 

vangoose

Active Member
May 21, 2019
177
49
28
Marvell bought Aquantia
 

randman

New Member
May 3, 2020
25
3
3
Thanks for your feedback, folks!

I gotta say, Mellanox has some pretty good technical support. I had a couple of questions and they replied to my email in less than an hour. They confirmed that both the MCX311A-XCAT and MCX311A-XCCT are compatible with VMware ESXi 6.7 & 7.0. The MCX311A-XCCT is from their "Pro" family and costs more. I decided to go for the MCX311A-XCAT and ordered 3 of them. I also ordered the MikroTik CRS309-1G-8S+IN. Now I need to figure out what SFP+ cables to get (should only need < 6' length).
 

RTM

Active Member
Jan 26, 2014
520
185
43
If you only need a short distance, you should definitely go for DAC cables (rather than transceivers etc.).
You may find some for cheap on eBay; otherwise you can look at places like FS.com.
 

WANg

Well-Known Member
Jun 10, 2018
662
328
63
Are you planning to use SR-IOV on 6.7/7.0? One thing to be aware of with the ConnectX-3 series is that their native ESXi drivers do not support SR-IOV (just RDMA). If you need SR-IOV you'll need to blacklist the native drivers and use the old VMKLinux-only OFED drivers (which work from 5.x to 6.x and not above).
 

randman

New Member
May 3, 2020
25
3
3
Thanks for the info, WANg. The ConnectX-3 product brief claims SR-IOV support (https://www.mellanox.com/related-docs/prod_adapter_cards/PB_ConnectX3_EN_Card.pdf ). However, it was written a long time ago (it still talks about VMware ESXi 4.x and 5.x and Ubuntu 12.04). I guess the next series up, ConnectX-4, supports SR-IOV? But I imagine it wouldn't have the nice online deals that the ConnectX-3 NICs have.

So far, I don't think I have a use case that necessitates the performance advantage of SR-IOV (for example, I have low CPU utilization in my servers). In the future, it would be nice to use vMotion, which doesn't work with SR-IOV (although my Essentials license doesn't provide vMotion, and I haven't decided if I want to keep paying the annual VMUG fee to get access to it).
 

newabc

Member
Jan 20, 2019
31
8
8
Will you consider an Emulex OEM card like the HP NC552SFP?
The HP web page (link) shows it supports SR-IOV. But it will consume 3-5 watts more than the X520-DA2.