ConnectX-3 vs ConnectX-3 Pro?


frogtech

Well-Known Member
Jan 4, 2016
What's the difference between these two feature-wise (EN models only)? It seems like only the ConnectX-3 Pro does RDMA over Converged Ethernet (RoCE), whereas the standard ConnectX-3 does plain RDMA? I'm not sure there's even a difference between plain RDMA and RoCE.

Thanks.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018

frogtech

Well-Known Member
Jan 4, 2016
While we're still on the topic... I'm getting a little confused by the whole RDMA, RoCE v1, v2, iWARP thing. It seems like some of these run on different protocols. In common scenarios you would want RDMA for Storage Spaces Direct or other hyper-converged environments. But how do you know which protocol to go with, and do you really need data center bridging on a switch for it to work? If you do routing at a switch with static routes, do you need RoCE v2, which has routable RoCE packets? Does VMware vSAN benefit from RDMA?
 

i386

Well-Known Member
Mar 18, 2016
While we're on the topic: Does anyone know if the hardware is identical between the CX-3 and the CX-3 Pro?
I think the CX-3 Pro cards have a slightly newer ASIC to support RoCE v2.
It's similar on the switch side: Mellanox has an Ethernet switch that's basically an SX6036 and can only do RoCE v1, while a newer revision with a new ASIC can do RoCE v2!
I'm getting a little confused by the whole RDMA, RoCE v1, v2, iWARP thing
RDMA means Remote Direct Memory Access: reading and writing directly to another host's memory without involving the CPU or the TCP/IP stack.

RoCE = RDMA over Converged Ethernet, Mellanox's implementation of RDMA for Ethernet.
There are currently two versions of that protocol, v1 and v2. v1 carries RoCE in raw Ethernet frames and is limited to the "local" network (not routable); v2 carries RoCE in IP/UDP packets, making it a routable protocol.
RoCE v1 is supported by CX-2 and CX-3 cards.
RoCE v2 is supported by CX-3 Pro and newer CX cards (they are backward compatible with RoCE v1).

iWARP is the standards-based protocol for RDMA over Ethernet networks, but besides Chelsio nobody uses it (not even Intel, who helped establish that standard!)
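The routable-vs-not distinction comes down to framing. Here's a rough sketch, not real RoCE traffic, of why a router can forward v2 but not v1: the Ethertype 0x8915 and UDP port 4791 are the real assigned values, but the addresses and header contents below are purely illustrative.

```python
import struct

ROCE_V1_ETHERTYPE = 0x8915  # dedicated Ethertype: RoCE payload sits directly in the frame
ROCE_V2_UDP_DPORT = 4791    # IANA-assigned UDP port: RoCE rides inside IP/UDP
IPV4_ETHERTYPE = 0x0800

def eth_header(ethertype, dst=b"\x02" * 6, src=b"\x04" * 6):
    # Ethernet II header: dst MAC (6) + src MAC (6) + Ethertype (2)
    return dst + src + struct.pack("!H", ethertype)

def ipv4_udp_header(dport):
    # Minimal illustrative IPv4 header (no options, zero checksums) + UDP header
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 28, 0, 0, 64, 17, 0,
                     b"\x0a\x00\x00\x01", b"\x0a\x00\x01\x01")
    udp = struct.pack("!HHHH", 12345, dport, 8, 0)
    return ip + udp

# In real traffic the InfiniBand transport headers would follow these
roce_v1_frame = eth_header(ROCE_V1_ETHERTYPE)
roce_v2_frame = eth_header(IPV4_ETHERTYPE) + ipv4_udp_header(ROCE_V2_UDP_DPORT)

def is_routable(frame):
    """A router only forwards what it understands: IP. RoCE v1's private
    Ethertype means there is no IP header to route on."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    return ethertype == IPV4_ETHERTYPE

print(is_routable(roce_v1_frame))  # False: L2 only, stays in the local network
print(is_routable(roce_v2_frame))  # True: an ordinary UDP flow, routes anywhere
```

To a router, a v2 packet is just UDP to port 4791, which is exactly why it can cross subnets while v1 can't leave its broadcast domain.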
 

frogtech

Well-Known Member
Jan 4, 2016
There are currently two versions of that protocol, v1 and v2. v1 carries RoCE in raw Ethernet frames and is limited to the "local" network (not routable); v2 carries RoCE in IP/UDP packets, making it a routable protocol.
So would local mean devices/subnets that are directly connected on the same switch?

If RoCE is mellanox's implementation, was there ever "just RDMA", or was iWARP the first implementation? I guess to me it sounds like RDMA is a standard, and these consortiums implement their own value adds or product features based on them.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
So would local mean devices/subnets that are directly connected on the same switch?

If RoCE is mellanox's implementation, was there ever "just RDMA", or was iWARP the first implementation? I guess to me it sounds like RDMA is a standard, and these consortiums implement their own value adds or product features based on them.
RDMA functions over Infiniband networks. RoCE and iWARP are the implementations that allow RDMA to occur over Ethernet.
 

zxv

The more I C, the less I see.
Sep 10, 2017
But how do you know which protocol to go with, and do you really need data center bridging on a switch for it to work? If you do routing at a switch with static routes, do you need RoCE v2, which has routable RoCE packets? Does VMware vSAN benefit from RDMA?
Many common RDMA protocols assume a lossless network, and will lock up or degrade dramatically if there is any RDMA packet loss. One way to detect the issue is to run a bandwidth test such as ib_send_bw -a, which ramps up from small to large message sizes. If throughput ramps up to 95% of line rate and stays there, great. If it ramps up to 50% and then gets stuck at 0%, it can be a flow control issue.
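That ramp-then-collapse symptom can be spotted mechanically. A hypothetical sketch (the sample numbers are made up, not real ib_send_bw output, and this heuristic is not part of the perftest tools):

```python
def flow_control_suspect(bw_samples, collapse_ratio=0.1):
    """True if bandwidth ramps up and then collapses toward zero,
    the classic lock-up pattern on a lossy fabric."""
    peak = max(bw_samples)
    return peak > 0 and bw_samples[-1] < peak * collapse_ratio

healthy = [10, 40, 80, 93, 95, 95]  # % of line rate: ramps up and holds
lossy   = [10, 35, 50, 12, 0, 0]    # ramps up, then stalls at zero

print(flow_control_suspect(healthy))  # False
print(flow_control_suspect(lossy))    # True
```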

Two ways for an Ethernet switch to support lossless RDMA are global pause and priority flow control (PFC). In simple terms, priority flow control can pause an individual traffic class, while global pause affects all traffic on the port.

Data center bridging includes PFC, but PFC and ECN (Explicit Congestion Notification) are the crucial requirements.
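For what it's worth, ECN lives in the bottom two bits of the IP TOS/Traffic Class byte: the sender marks its packets ECN-capable, and a congested switch flips that mark to CE instead of dropping, so the endpoints slow down. A minimal sketch of the codepoints (the DSCP value below is just an arbitrary example):

```python
# ECN codepoints (RFC 3168): the low 2 bits of the IPv4 TOS byte
NOT_ECT = 0b00  # sender not ECN-capable: a congested switch must drop instead
ECT_0   = 0b10  # ECN-Capable Transport
ECT_1   = 0b01  # ECN-Capable Transport (alternate codepoint)
CE      = 0b11  # Congestion Experienced: set by the switch in place of a drop

def mark_ce(tos):
    """What a congested ECN-aware switch does to an ECN-capable packet."""
    if tos & 0b11 == NOT_ECT:
        raise ValueError("not ECN-capable: switch would have to drop")
    return (tos & ~0b11) | CE

tos = (26 << 2) | ECT_0          # DSCP 26 with ECT(0) (illustrative values)
print(bin(mark_ce(tos) & 0b11))  # 0b11 -> marked CE, DSCP bits untouched
```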

Some software (e.g. Windows SMB Direct, and vSphere NVMe-oF, vMotion and iSER) requires PFC, and what it is really requiring is lossless RDMA.

The link below shows that ECN and PFC are required for a lossless network with Mellanox cards. That offers the widest support for various protocols. Even if you limit use to a Layer 2 isolated network with no other traffic, you still need flow control for a lossless network.

Mellanox has a table comparing lossy and lossless configurations.

A CX3 Pro can be used for a lossless Ethernet network, whereas I don't think the CX3 can. But it really depends on the whole software and hardware recipe, because the two ends and the switch all need to be configured for a compatible flow control method.