Mellanox ConnectX-5 VPI 100GbE and EDR InfiniBand Review


i386

Well-Known Member
Mar 18, 2016
Germany
Do you know if Mellanox will release >40GbE VPI switches?
I can't find any information about EDR VPI switches or ICs.
 

PigLover

Moderator
Jan 26, 2011
VPI isn't really the killer feature on CX5/CX6 boards - outside of labs, almost nobody ever needs to swap between IB and Ethernet. VPI is mainly an advantage for Mellanox because it lets them manage fewer SKUs (the same stock item regardless of which kind of network the customer runs).

The REAL killer feature of CX5/CX6 cards is their near-complete implementation of TC flower offload, which allows pushing vSwitch and iptables rules into the card. Coupled with SR-IOV, this allows line-rate applications with security groups and SDN features.
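A minimal sketch of what that looks like in practice, assuming a ConnectX-5 port named ens1f0 (the interface name, address, and port below are just examples): skip_sw tells the driver the flower rule must be programmed into the NIC rather than handled in software.

Code:
# Enable hardware TC offload on the port (interface name is an example)
ethtool -K ens1f0 hw-tc-offload on
# Attach an ingress qdisc so filters can be added
tc qdisc add dev ens1f0 ingress
# Offload a flower rule: drop TCP traffic to 10.0.0.5:443 in hardware.
# skip_sw makes the command fail if the rule cannot be offloaded.
tc filter add dev ens1f0 ingress protocol ip flower \
    ip_proto tcp dst_ip 10.0.0.5 dst_port 443 skip_sw action drop
# Confirm the filter is marked in_hw
tc -s filter show dev ens1f0 ingress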
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
I'd tend to agree with @PigLover - the offload capabilities of CX5 are huge. We've got an NVMe-oF solution a customer is running that chews CPU cycles on Intel NICs and works great on Mellanox CX5.
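For anyone curious what that looks like from the initiator side, a rough sketch with the standard nvme-cli tooling; the target address and NQN below are placeholders:

Code:
# Load the NVMe-oF RDMA initiator
modprobe nvme-rdma
# Discover subsystems exported by the target (address/port are placeholders)
nvme discover -t rdma -a 192.168.10.10 -s 4420
# Connect; RDMA moves the data path off the host CPU
nvme connect -t rdma -a 192.168.10.10 -s 4420 \
    -n nqn.2016-06.io.example:nvmeof-target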

I can also say we've got a customer who uses them exactly as you describe: one IB and one Ethernet port for their GPU cluster. They don't care as much about switching; they do care about running both, and that's certainly in production.
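Setting a VPI card up that way is a one-time firmware option via mlxconfig from the Mellanox MFT tools; the MST device path below is an example (LINK_TYPE values: 1 = IB, 2 = ETH):

Code:
# Expose the device through the Mellanox Software Tools service
mst start
# Port 1 as InfiniBand, port 2 as Ethernet (device path is an example)
mlxconfig -d /dev/mst/mt4119_pciconf0 set LINK_TYPE_P1=1 LINK_TYPE_P2=2
# Apply with a firmware reset (or reboot)
mlxfwreset -d /dev/mst/mt4119_pciconf0 reset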
 

oojingoo

New Member
Jun 17, 2015
Two neat features of the CX5s are the ability to do host chaining (switchless networking, save $$$) and Mellanox's kernel-bypass offload library, libvma. Also, the MCX512A-ACAT is a steal as far as I'm concerned: all the CX5 features, dual-port 25GbE, for $300.
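For anyone who wants to try either one, a quick sketch: host chaining is a firmware option set through mlxconfig (device path is an example again), and libvma is loaded via LD_PRELOAD:

Code:
# Enable host chaining so hosts can be cabled port-to-port without a switch
mlxconfig -d /dev/mst/mt4119_pciconf0 set HOST_CHAINING_MODE=1
# (reboot, then cable the hosts in a chain/ring)

# Run an unmodified socket application over libvma's kernel-bypass path
LD_PRELOAD=libvma.so ./latency_sensitive_app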
 

s0lid

Active Member
Feb 25, 2013
Tampere, Finland
For science, you should get a Mellanox MAM1Q00A-QSA28 QSFP28-to-SFP28 adapter and a 10/100/1000BASE-T SFP, then connect it to a 10Mbps full-duplex switch port.
 

necr

Active Member
Dec 27, 2017
Do you know if Mellanox will release >40GbE VPI switches?
I can't find any information about EDR VPI switches or ICs.
Unfortunately, there aren't any new ICs with VPI. SwitchX-2 had 170/220ns InfiniBand/Ethernet latency, while Switch-IB/Switch-IB 2 is at 90ns. They even allow disabling FEC to decrease latency on the latest systems. It seems like the best bet is to build a CX5-based gateway system.
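A minimal sketch of such a gateway, assuming one VPI port set to IB (ib0, running IPoIB) and the other to Ethernet (ens1f1); the interface names and subnets are examples, and this routes IP between the two fabrics rather than bridging them:

Code:
# IPoIB side (interface names and subnets are examples)
ip addr add 10.10.0.1/24 dev ib0
ip link set ib0 up
# Ethernet side
ip addr add 10.20.0.1/24 dev ens1f1
ip link set ens1f1 up
# Forward IP traffic between the two subnets
sysctl -w net.ipv4.ip_forward=1
# Hosts on each side then point their route for the other subnet at this box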
 