Dumb Question Needs A Smart Answer


Quartzeye

New Member
Jul 29, 2013
I have some ConnectX-3 cards and I see that I can run either InfiniBand or Ethernet. I also see a lot of conflicting information about 40Gbit/s vs 10Gbit/s speeds.

Is Ethernet on these cards 1/5 the speed of InfiniBand?

If the speeds are equal, why would I use InfiniBand, given that it cannot be bridged like Ethernet?

The goal is to run my VMs on one server using disk storage from another server.
 

LodeRunner

Active Member
Apr 27, 2019
Quartzeye said:
> I have some ConnectX-3 cards and I see that I can run either InfiniBand or Ethernet. I also see a lot of conflicting information about 40Gbit/s vs 10Gbit/s speeds.
>
> Is Ethernet on these cards 1/5 the speed of InfiniBand?
>
> If the speeds are equal, why would I use InfiniBand, given that it cannot be bridged like Ethernet?
>
> The goal is to run my VMs on one server using disk storage from another server.
I can't think of a CX3 card that can do 40G IB but is limited to 10G Ethernet. Put it in Ethernet mode, put in proper transceivers, and it should be good to go.
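Switching the port mode is done with the `mlxconfig` tool from the Mellanox Firmware Tools (MFT) package. A rough sketch, assuming MFT is installed; the `/dev/mst/...` device path below is a typical ConnectX-3 name and will differ on your system, so check `mst status` first:

```shell
# Start the MST service and list the device paths it creates
mst start
mst status

# LINK_TYPE values on ConnectX-3: 1 = InfiniBand, 2 = Ethernet, 3 = VPI (auto-sense)
# Device path is an example -- substitute the one reported by `mst status`
mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

# Reboot (or reload the mlx4 driver) for the new port type to take effect
```

On Linux you can alternatively flip the mode at runtime through the mlx4 driver's sysfs interface, but the `mlxconfig` setting persists in firmware across reboots.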

IB has additional configuration requirements and, if set up properly, may be more performant; I can't recall which protocol has more overhead, but I suspect Ethernet does. IB was originally designed as a low-latency, low-overhead interconnect for clustered systems.

Unless your storage is stupid fast though (NVMe on PCIe 4), I suspect both will work equally well for you.
 

BeTeP

Well-Known Member
Mar 23, 2019
The CX3 cards are capable of running both protocols at the same speed. The reason some dual-QSFP+ models were marketed as "40G IB + 10G Ethernet" is that two 40Gbps links would oversubscribe the card's PCIe 3.0 x8 uplink.
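The arithmetic behind that marketing is quick to check: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so an x8 slot tops out around 63 Gbit/s of usable bandwidth, while two 40G ports could demand 80 Gbit/s. A back-of-the-envelope sketch:

```shell
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~7.88 Gbit/s usable per lane
pcie3_x8=$(awk 'BEGIN { printf "%.1f", 8 * 128/130 * 8 }')   # x8 slot
two_links=80                                                  # 2 x 40 Gbit/s ports
echo "PCIe 3.0 x8 usable: ${pcie3_x8} Gbit/s vs ${two_links} Gbit/s of port capacity"
```

So a single 40G link fits comfortably in an x8 slot, but two running flat out cannot both be fed at line rate.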

The main reason people use IB for storage is its lower latency. If you do not need that (or the rest of your hardware can't benefit from it), you do not have to use it.