Recommended 10Gb NIC for Proxmox + Ceph Lab


VMman

Active Member
Jun 26, 2013
Hi All,

I'm currently in the planning phase of setting up a lab using 2 Proxmox hypervisors connected to 3 Ceph storage hosts.

I wanted to get everyone's opinion on what would be the best NIC, on value or otherwise, for connecting these servers together through a pair of ICX 6650s that I have.

I was considering dual-port 10Gbit cards from either Mellanox (ConnectX-3 Pro) or Chelsio (T520-CR), but I wanted to know if there were any other options I missed?
I would prefer a chipset that offers RDMA or similar technologies in case I repurpose the lab for something like MS Storage Spaces in the future, so I'm avoiding the older Intel X520 series and the like.
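
For context, this is roughly how I'm planning to split Ceph traffic across the two ports on each node. The subnets below are just placeholders; on Proxmox the file is managed at /etc/pve/ceph.conf:

[global]
    # front-side traffic (clients, monitors) on the first 10GbE port
    public_network = 10.10.10.0/24
    # OSD replication and heartbeat traffic on the second 10GbE port
    cluster_network = 10.10.20.0/24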

Thanks in advance
 

vincococka

Member
Sep 29, 2019
Slovakia
If possible, consider NICs from Chelsio; they're a wiser choice than the now-abandoned ConnectX-3 Pro.
Also, for Ceph (or any other clustered filesystem/application), consider using 25GbE.
 

VMman

Active Member
Jun 26, 2013
I agree, but I'd like to use the 10Gbit SFP+ switch that I already have for this build.
 

vincococka

Member
Sep 29, 2019
Slovakia
SFP28 is backwards compatible with SFP+... but yeah, for a home lab 10GbE is usually enough.
Not to mention the price of the newer cards.
 

danb35

Member
Nov 25, 2017
I have Chelsio T420s in three of my Proxmox nodes, and a Solarflare 5122 in the fourth. All are working well, but the Solarflare card will be cheaper.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
Just FYI since you mentioned RDMA in your OP: AFAIK Solarflare do not do RDMA, as they went a somewhat different direction toward the low-latency HFT market, optimizing TCP flows with OS support.

CX-3 do RoCE v1 (and IB), and CX-3 Pro and up do RoCE v2.

Ceph support for RDMA is kinda abandoned now.
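
If anyone still wants to experiment with it anyway, the RDMA messenger is switched on in ceph.conf roughly like this (the device name below is just an example for a ConnectX-3 port; check what ibv_devices reports on your hosts):

[global]
    # experimental async RDMA messenger, not something I'd run in production
    ms_type = async+rdma
    ms_async_rdma_device_name = mlx4_0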