Mellanox ConnectX-3 2-Port MCX354A-FCBT (56Gb IB / 40GbE) $150


Cipher

Member
These look nice. For someone who uses Ethernet, and not IB, I have two questions regarding these cards:

1) How does using these cards in an Ethernet setup compare to using something like the latest Intel Fortville adaptors?

2) To connect these to the 40GbE ports on my Gnodal GS4008, am I just looking at a single cable with QSFP+ at both ends?
 

Chuckleb

Moderator
@Cipher
1) This chipset runs hotter than the Fortville cards, but they are much more readily available. They work fine in an Ethernet environment.
2) Yes, you just need a QSFP+ cable.

Really good deal for the cards. I have too many already, or I would buy more.
 

Scott Laird

Active Member
One *minor* issue--some of the older firmware on the MCX353s (and presumably also the dual-port 354s like these) doesn't recognize 40GbE over DAC cables for some reason. Flashing newer firmware fixes this. Other than that, they're nice cards, and this is a good price.
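If anyone needs to check or reflash, the routine is roughly the sketch below, wrapped in a little Python around mstflint. The PCI address and image filename are placeholders for whatever your card and the Mellanox download actually are.

Code:
import subprocess

PCI_ADDR = "04:00.0"           # your card's address from lspci (placeholder)
FW_IMAGE = "fw-ConnectX3.bin"  # firmware image downloaded from Mellanox (placeholder)

# Show the firmware version currently on the card.
print(subprocess.run(["mstflint", "-d", PCI_ADDR, "query"],
                     capture_output=True, text=True).stdout)

# Uncomment to burn the new image; reload the driver or reboot afterwards.
# subprocess.run(["mstflint", "-d", PCI_ADDR, "-i", FW_IMAGE, "burn"], check=True)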
 

T_Minus

Build. Break. Fix. Repeat
If all you want is 40Gig with DACs, why would you spend $150 when you could get 3 ConnectX-2 VPI / 40Gig cards for $50 or less... do these cards do anything else special, other than 56Gb IB?
 

Scott Laird

Active Member
The ConnectX3 VPI cards do 40 GbE, for a start, not just 40/56 Gb IB. On the Ethernet front, they support RoCE, which the ConnectX2 doesn't. Mind you, that's basically Infiniband over Ethernet, so it's of somewhat limited use (and perhaps questionable taste), but it gives you everything on one network.

The biggest problem with IB in my mind is that it's hard to get IP off of it at reasonable speeds for reasonable prices. It's easy to get a 10GbE L3 switch (well, relatively easy), but dedicated IB<->Ethernet routers are rare. You're largely going to end up either (a) making a PC do the job (not horrible, but more maintenance) or (b) running two networks (ditto).
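For the curious, "making a PC do the job" is basically just the following on a Linux box with one leg in each network. Subnets, interface names, and the total lack of firewalling are all assumptions here; run it as root.

Code:
import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

# Turn the box into an IP router between the IPoIB side and the Ethernet side.
with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
    f.write("1\n")

# Assumed addressing: 10.10.0.0/24 on the IPoIB interface, 192.168.1.0/24 on Ethernet.
sh("ip", "addr", "add", "10.10.0.1/24", "dev", "ib0")
sh("ip", "addr", "add", "192.168.1.1/24", "dev", "eth0")
sh("ip", "link", "set", "ib0", "up")
sh("ip", "link", "set", "eth0", "up")

# Every other host then needs a route for the far subnet via this box, e.g. on the
# Ethernet-only machines:  ip route add 10.10.0.0/24 via 192.168.1.1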
 

Dajinn

Active Member
What exactly is the difference between Infiniband over Ethernet and something like IPoIB, which the ConnectX-2 does support?
 

PnoT

Active Member
I use these for lab, test, and dev environments: I throw IB on one port and Ethernet on the other and call it a day. I get RDMA and SMB speeds over IB to my storage and between hosts, and 10/40GbE to everything else.
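On Linux that split is basically a one-liner per port with the mlx4 driver; something like the sketch below (the PCI address is a placeholder, run as root, and Mellanox's own tools such as connectx_port_config / mlxconfig can make the change persistent).

Code:
# Port 1 as Infiniband, port 2 as Ethernet on a ConnectX-3 VPI card (mlx4 driver).
PCI_DEV = "/sys/bus/pci/devices/0000:04:00.0"  # replace with your card's address

for port, mode in (("mlx4_port1", "ib"), ("mlx4_port2", "eth")):
    with open(f"{PCI_DEV}/{port}", "w") as f:
        f.write(mode + "\n")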
 

Scott Laird

Active Member
You are in a maze of twisty protocols all alike.

There's
  • IPoIB (IP packets natively on top of Infiniband)
  • EoIB (Ethernet frames over Infiniband; doesn't seem to be well supported)
  • RoCE (Mellanox's way to do RDMA over Ethernet networks, largely reuses a big chunk of the IB stack)
  • iWARP (IETF-standard way to do RDMA over IP networks; supported by Chelsio and some Intel cards)
Are you building a network primarily for sending RDMA traffic around, or is just one of many things happening? If you only care about RDMA, then Infiniband is easy. If you only care about IP, then Ethernet is easy. If you want to mix two types of traffic on one network, then compromises must occur. The problem with Infiniband networks is that they're hard to interconnect to non-IB networks. The problem with RoCE is that it's really demanding on your switch(es) and can't cope with packet loss. At all. The problem with iWARP is that practically nothing supports it, including Intel's newer network cards.
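To make the IPoIB point concrete: once ib0 has an IP address, it's just another interface, and completely ordinary sockets work over it with no RDMA-aware code at all. A minimal sketch, assuming 10.10.0.1 is this box's IPoIB address:

Code:
import socket

# Listen on the (assumed) IPoIB address; any IP-speaking peer that can route to
# us can connect, whether it sits on the Infiniband or the Ethernet side.
srv = socket.create_server(("10.10.0.1", 5201))
conn, peer = srv.accept()
print("connection from", peer)
conn.sendall(b"hello over IPoIB\n")
conn.close()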

If you're playing along at home, there's nothing wrong with dropping cheap IB cards into a few machines and running an IB network alongside an Ethernet network. It ends up being a pain long-term, though, because you end up needing *some* way to say "please talk to machine X over IB, not Ethernet," and that doesn't scale very well beyond a couple of machines. You're better off attaching most systems to a single network, and IMHO IB's not a good choice for that. OTOH, mixing 10GbE into a 1000/100/10 network is trivial. The cards are relatively cheap. The switches hurt, but they only hurt once, while mixed IB/Ethernet networks just keep on hurting.
 

Chuckleb

Moderator
I agree with everything Scott said. We run many clusters, and for one of our groups we just ripped out all the IB and went to 10G for easy routing and connectivity. On the others, it's IB for HPC with RDMA. Data transfer nodes are dual-linked, with IB internally and 40GbE facing outward. Bridging the two networks can suck at scale.

For cheap at home, go IB because of switch cost. To bridge into the home network, go 10GbE.
 

T_Minus

Build. Break. Fix. Repeat
My plan is to use my ConnectX-2 VPI cards w/DACs to the IB switch for the 40Gig SAN and ESXi hosts, and each ESXi host also has up to 2 10GigE ports to the 'user' network. My 3rd network is a couple of PoE switches, with fiber, and fiber direct to a ConnectX-3 EN in an ESXi host passed through to the Win7/security VM. In addition, if needed, I have 2 or 3 six-port 1Gig NICs, but I don't see them being needed with 2-3 onboard 1Gig + 2x10Gig on the hosts.

Any better way?

I'm thinking the downfall here is ~300w idle from networking gear :-X

I'm really hoping the 40Gig IB latency/performance lets my NVME Fileserver shine :)
 

PnoT

Active Member
T_Minus said: "I'm really hoping the 40Gig IB latency/performance lets my NVME Fileserver shine :)"

NVM Express over Fabrics - Working Hard In IT
 

T_Minus

Build. Break. Fix. Repeat
@PnoT oooooh exciting!!!!!

Although, I'm going to find it hard to complain about ~$450 TOTAL for the 40Gig IB stuff (switch + NICs + DACs), since all of it cost less than 2 10Gig Intel NICs did 6 months ago :(
 

Dajinn

Active Member
T_Minus, did you pick up one of those IS5030 switches?
 

T_Minus

Build. Break. Fix. Repeat
I got mine in the deal thread (which is still good, by the way) for:

519571-B21 - HP Voltaire InfiniBand 4036 36Port 4xQDR Managed Switch - VLT-30111


($300 or $350, I forget which they accepted)
 

Chuckleb

Moderator
Confirming the $150 best-offer price. I went in for a quantity of 5 at $100 and that offer expired. Offered $140, was countered at $200, came back with $150, and he bit.
 

Aluminum

Active Member
Damn, and I thought I got a good deal getting the HP-branded versions for $180 two weeks ago (according to the Mellanox doc and the sticker, they are identical, so the same firmware works, etc.).

I did get my hands on an FDR managed switch (a "tabletop" SX6012), but I haven't checked what license level it is at; I doubt it has IB+Ethernet bridging, but one can hope. First I need to quiet it down with some homebrew fan modding. The thing sounds like a vacuum cleaner, but inside are 4 PWM fans, and I don't care if I run it with the cover off and large, slow fans instead of the 40mm screamers at 5,000 RPM.
 

Dajinn

Active Member
I got my switch earlier this week. Just waiting on my cards and qsfp cables before I go to town :).
 

dswartz

Active Member
I scored three ConnectX-3 cards a while back for $300 or so (all brand new). The issue I had for a while is that getting a switch that would support them was iffy (i.e. pricey). Keep in mind, these are the EN variant (i.e. Ethernet only). I have two vSphere boxes talking to a storage server. Since the cards are all dual-port, I had each vSphere host connected to a separate port on the SAN box. It worked, sorta, except they had to be in different subnets, which caused problems if I wanted to give guests vmxnet3 (10GbE) vNICs, due to the addresses changing if I vMotioned guests from one hypervisor to the other. I finally realized that I could put the two 40GbE interfaces on the SAN host in a bridge group and use one subnet, and now it all works fine...
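For anyone wanting to copy that, the bridge-group part on a Linux storage box boils down to something like the sketch below. Interface names and addressing are assumptions, and other SAN OSes have their own bridge tooling.

Code:
import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

# Enslave both 40GbE ports to one bridge and give the bridge a single address,
# so both vSphere hosts end up in the same subnet.
sh("ip", "link", "add", "name", "br40", "type", "bridge")
for port in ("enp3s0", "enp3s0d1"):       # the two ConnectX-3 ports (assumed names)
    sh("ip", "link", "set", port, "master", "br40")
    sh("ip", "link", "set", port, "up")
sh("ip", "addr", "add", "10.40.0.10/24", "dev", "br40")
sh("ip", "link", "set", "br40", "up")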
 

Dajinn

Active Member
Interesting. Can anyone shed some light on how you go about having, say, all of your compute/storage nodes communicate using only IB, while using only the GbE RJ45 ports for internet?