Can I combine 40G and 100G networking?


hyltcasper

Member
May 1, 2020
Hi, I need a high-speed, low-latency storage network.

I will use DRBD with NVMe over RDMA. The replica count will be 2: the first copy is on the local host, the second on another host. My virtualization is based on VMware ESXi 7. RDMA is important because I can't reserve CPUs for storage at the expense of virtualization.
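What I have in mind per replicated volume is a DRBD 9 resource using the RDMA transport. This is only a rough sketch; the hostnames, IPs and device paths below are placeholders, not my real config:

# /etc/drbd.d/r0.res -- sketch only, names/IPs/devices are placeholders
resource r0 {
    device      /dev/drbd0;
    disk        /dev/nvme0n1;       # local NVMe backing device
    meta-disk   internal;

    net {
        transport rdma;             # DRBD 9 RDMA transport instead of TCP
        protocol  C;                # synchronous writes across both replicas
    }

    on r630-a {
        address 10.10.0.1:7789;
    }
    on r630-b {
        address 10.10.0.2:7789;
    }
}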

The cabinet has 5 Dell R630 servers, each with 768 GB of memory. There are also 4 overclocked i9 tower servers for single-core workloads, each with 256 GB of memory.

I found Mellanox ConnectX-4 dual-port 100Gbit cards for 300 USD each. There are only 5 in stock. I think they would be good for my R630s.

Since 100G switches are too expensive, a switchless (back-to-back) configuration is an option.
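On the Linux storage side, each back-to-back link would simply get its own tiny subnet; roughly like this, with interface names and addresses as placeholders:

# first host of the pair
ip addr add 10.10.1.1/30 dev ens1f0
ip link set ens1f0 up mtu 9000     # jumbo frames, optional

# peer host
ip addr add 10.10.1.2/30 dev ens1f0
ip link set ens1f0 up mtu 9000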

On eBay there are many ConnectX-3 Pro dual-port 40Gbit cards for 80 USD. These are a possible cheap solution for my i9 servers. NetApp 40Gbit DAC cables are also cheap on eBay.

I also need to scale this setup. I am planning to buy an HP c7000 v3 with 16 Gen9 blades. On eBay there are 40Gbit blade switches for 300 USD. Since there is no need to attach a Mellanox card to each blade server, it is a pretty cheap option.

The question is: what happens if I invest in a mixed 40G and 100G network? As far as I know, QSFP+ cables work at 40G when plugged into a QSFP28 100G port. Has anyone had experience with this?
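Once a mixed link is up, my plan would be to just read back what actually negotiated; something like this on the Linux side, with the interface name as a placeholder:

ethtool ens1f0 | grep -i speed      # what the link actually came up at
ethtool -m ens1f0 | head -n 20      # what the NIC reports about the QSFP module/DAC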
 

Mithril

Active Member
Sep 13, 2019

There used to be a much better option for 40GbE, so long as you don't mind reflashing the firmware (it uses legit stock firmware, just crossflashed). Search for "649281-B21" or "MCX354AQCBT". They should be 30-40 USD each.
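The crossflash itself is just mstflint; roughly like this, where the PCI address and firmware image name are placeholders that depend on your exact card:

mstflint -d 04:00.0 query                                             # current firmware + PSID
mstflint -d 04:00.0 -i fw-ConnectX3-rel.bin -allow_psid_change burn   # burn stock image, allow the OEM PSID to change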

Do you know if your application and hardware can actually push/use 100G? I'm not sure the CPUs in the Rx30 generation are up to the task.
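Worth measuring before spending the money; a quick test between two boxes would show it, e.g. with the perftest tools (device name and address are placeholders):

ib_write_bw -d mlx5_0 --report_gbits              # run on one host as the "server"
ib_write_bw -d mlx5_0 --report_gbits 10.10.0.1    # run on the other host, pointing at the first
iperf3 -c 10.10.0.1 -P 8 -t 30                    # plain TCP comparison, parallel streams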
 

i386

Well-Known Member
Mar 18, 2016
Germany
hyltcasper said: "As far as I know, QSFP+ cables work at 40G when plugged into a QSFP28 100G port. Has anyone had experience with this?"
Yes, it works:
[attached screenshot: Mellanox.jpg]
100GbE is backwards compatible, but 40GbE gear sometimes doesn't "know" the newer 100GbE standards and will fall back to the lowest common speed, which is usually 10GbE.
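If a link does come up at 10GbE, you can usually see and try to pin the rate from the Linux side (interface name is a placeholder, and not every NIC/switch combination will accept a forced speed):

ethtool -s eth2 speed 40000 autoneg off    # try forcing 40G if the link fell back to 10G
ethtool eth2 | grep -i speed               # re-check what the link is running at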
 

tsteine

Active Member
May 15, 2019
I have 3 servers running 100GbE and 3 servers running 40GbE on a Mellanox SN2700 switch, with ConnectX-3/5 adapters.
Works just fine.