Beware of EMC switches sold as Mellanox SX6XXX on eBay


Poco

New Member
Nov 16, 2018
So, my replacement EMC SX6005 switch came in.
I thought something was wrong with the last switch since my QDR Amphenol fiber cable never picked up a link to it, which is why I returned it.

However, I'm curious whether anyone knows of cheap DAC cables that work with this switch and CX-2 40Gbps cards.

I kind of just bought this switch imagining it would be essentially like the Voltaire ISR9024D-M I bought a long time ago.
In short, I wish I had read this forum thread about the EMC switches beforehand.

Thanks in advance.
 

cecil1783

New Member
Aug 9, 2017
I'm not sure what you consider cheap, but I've been getting new Mellanox FDR DACs (1.5m and 2m) for around $35 on eBay; I'm sure FDR-10 cables would be less.

Are you running a subnet manager on an infiniband connected machine? The SX6005 is unmanaged, it's just a dumb collection of copper until something tells it how the network works (opensm). You won't see a connection on any port, from any machine, until you have a subnet manager running.
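
If it helps, this is roughly the sanity check I'd run on one of the Linux hosts (assuming the OFED / infiniband-diags tools are installed; service and adapter names may differ on your distro):

opensm -B        # start a subnet manager on this host (or enable the opensmd service instead)
ibstat           # port State should go from Initializing to Active once the SM sweeps the fabric
ibswitches       # the SX6005 should show up here once the fabric has been discovered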


 
  • Like
Reactions: Poco

Poco

New Member
Nov 16, 2018
I am running OpenSM from OFED, yes.
The cards are using the mlx4_core and mlx4_ib modules, not Ethernet (mlx4_en).

I'm just guessing it's my cables at this point.
They work point-to-point with a 40Gbps link, but oddly nothing on the switch end.
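
The way I've been trying to tell a cable problem from an SM problem (the adapter name is just what mine shows up as):

ibstat mlx4_0
# Physical state: LinkUp  -> the cable negotiated a link to the switch, even with no SM running
# Physical state: Polling -> no link at all, which would point at the cable or port
# State: Active           -> only once the subnet manager has actually configured the port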

I'll look around for some of those FDR cables though.
Also, am I right in assuming it's safest to just buy official Mellanox cables?
 

cecil1783

New Member
Aug 9, 2017
I really can't say if any other cables will work or not. I believe others here have successfully used non-Mellanox cables, but can't say for sure which ones. A lot of people seem to be having good luck with cables from fs.com. Hopefully someone else can chime in with the cables they've had the best luck with, if not Mellanox.


 
  • Like
Reactions: Poco

mpogr

Active Member
Jul 14, 2016
A remark on IPoIB vs EN mode: everything else being equal (FDR and 56GbE), you get SIGNIFICANTLY less network throughput in IPoIB mode compared to native EN. And when I say significantly, I mean half the throughput in the best-case scenario.

That said, as long as storage speed is the main concern, you get the best throughput/latency with IB over SRP. However, with SRP now deprecated in virtually every environment (except Linux on both ends), you have to move to iSER, which requires IP as the underlying layer, which means you pretty much have to use EN mode with flow control enabled in order to achieve speeds that get close to the theoretical 56Gbps limit.
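
To give a rough idea, on a Linux host with a ConnectX-3 that translates to something like this (the PCI address and interface name below are just examples, and some setups use per-priority PFC rather than global pause):

echo eth > /sys/bus/pci/devices/0000:04:00.0/mlx4_port1   # put the mlx4 port into Ethernet mode
ethtool -A ens1 rx on tx on                               # enable global pause flow control
ethtool -a ens1                                           # verify the pause settings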
 

Rand__

Well-Known Member
Mar 6, 2014
Is there an overview that describes the available options nowadays, with platforms and protocols (and potentially recommendations)?

Would still be looking for the fastest option for ESX and NFS on any ZFS-capable system (Solaris > FreeBSD > Linux in order of preference)...
 

mpogr

Active Member
Jul 14, 2016
I'm not aware of any overviews, but, AFAIK, if you want to stick with recent versions of ESXi (6.5 and above), your only option is Linux with ZoL and iSER (I use the SCST implementation, but the built-in one should work as well).

Edit: and yes, it is slower than pre-6.5 SRP had been...
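
For the built-in (LIO) target, a rough sketch with targetcli would look something like this. The IQN, zvol path and portal IP are placeholders, you still need the usual ACL/auth setup, recent targetcli auto-creates the 0.0.0.0 portal (hence the delete), and SCST is configured quite differently:

targetcli
/> /backstores/block create name=zvol1 dev=/dev/zvol/tank/vol1
/> /iscsi create iqn.2003-01.org.linux-iscsi.myhost:zvol1
/> /iscsi/iqn.2003-01.org.linux-iscsi.myhost:zvol1/tpg1/luns create /backstores/block/zvol1
/> /iscsi/iqn.2003-01.org.linux-iscsi.myhost:zvol1/tpg1/portals delete 0.0.0.0 3260
/> /iscsi/iqn.2003-01.org.linux-iscsi.myhost:zvol1/tpg1/portals create 192.168.40.10
/> /iscsi/iqn.2003-01.org.linux-iscsi.myhost:zvol1/tpg1/portals/192.168.40.10:3260 enable_iser true

Then point the ESXi iSER adapter at that portal.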
 

Terry Wallace

PsyOps SysOp
Aug 13, 2018
Central Time Zone
I've been using gtek cables from Amazon, and I have 2 Mellanox switches, 3 Quantas, and all my NICs are Mellanox ConnectX-3s flashed over from HP.
Never had a cable problem. I buy the cheap cables listed as open switch.
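
(For anyone wanting to do the same, the cross-flash is normally done with the Mellanox MFT tools, roughly like this; the device path and firmware file are just examples:)

mst start
mst status                                   # note the device name, e.g. /dev/mst/mt4099_pci_cr0
flint -d /dev/mst/mt4099_pci_cr0 query       # shows the current (HP) PSID and firmware version
flint -d /dev/mst/mt4099_pci_cr0 -i fw-ConnectX3-rel.bin -allow_psid_change burn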
 
  • Like
Reactions: Poco

trippehh

New Member
Oct 29, 2015
Anyone managed to get IPv6 routing on Ethernet going on these post-conversion?

The switch ASIC seems capable and so does the OS, but the CLI hides all IPv6 options except the ones used by the management network interface.

Same with both eth-single-switch and vpi-single-switch profiles.

"ipv6 enable" on the root node, gone
ipv6 options for the VRFs gone
ipv6 options for the interfaces gone

# show system resource table ipv6-uc
---------------------------------
Table-Id In-Use
---------------------------------
ipv6-uc 0
Mode: strict
Total configured entries: 0
Total free entries: 0
# show vrf anet
VRF Info:
Name: anet
RD: 100:100
Description: -
IP routing state: Enabled
IPv6 routing state: Disabled
IP multicast routing state: Disabled
Protocols: IPv4
Interfaces: vlan40
 

petreza

New Member
Dec 28, 2017
Successfully upgraded my MSX6012! Wanted to give a shout out and thanks to mpogr for putting together the guide!

I have 2 questions though:

1) I'm trying to check performance of the setup. I have 2 R610s with ConnectX-3s flashed with the latest Mellanox firmware. I'm running them as 40GbE, 9000 MTU (on the cards and the switch). However, when running iperf3, it shows a max of 14Gbps, even with parallel instances. Anything I should look at here?

2) My management interface goes down over time and won't come back automatically. The lights on the switch's management NIC go off as well. If I unplug the network patch cable from the management NIC and plug it back in, it comes back online, but it will go offline again after a day or so. Has anyone seen this?
That is the max speed you will get on PCIe 2.0 (if you have an x56xx-series Xeon), which the R610 is. The Rx20s and newer do PCIe 3.0 (8GT/s per lane).

After overhead, your speed should be sitting around 12.7Gbit/s (jumbo frames or not won't make a difference).

Also, make sure the card is not in the riser that the internal storage PERC card uses (which would put the card above the iDRAC module); that physical x8 slot runs as an x4. The physical x8 on the riser closest to the power supplies is the slot that actually runs at x8.
PCIe x8 2.0 should do 4GB/s one way. That's 32Gbps minus overhead. Why is autodestruct getting only 14Gbps (or 12.7Gbps)? Is it actually a limit imposed by the processor - X56xx?
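
For reference, my back-of-the-envelope numbers (please correct me if I'm off):

PCIe 2.0 = 5 GT/s per lane with 8b/10b encoding -> 4 Gbit/s usable per lane
x8 link: 8 x 4 Gbit/s = 32 Gbit/s (~4 GB/s) each way, before TLP/DLLP protocol overhead
x4 link: 4 x 4 Gbit/s = 16 Gbit/s (~2 GB/s) each way, maybe 12-13 Gbit/s after overhead - is that where the ~12.7Gbps figure comes from?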

I have two PowerEdge R810s with X7560 Xeons and three ConnectX-3 Pro EN cards (MCX314A-BCCT), which I cannot test right now. Would I be able to get close to the theoretical limit of PCIe 2.0 x8 with this setup, or am I also limited? I ask because I was planning to get a cheap, EMC-OSed Mellanox MSX6012 switch and go through the pain of converting it to Mellanox OS, but if my speed is limited to approx. 10GbE, then there is no point.
(As you can probably tell, I'm quite new to all this.)

Thanks for your help.
 

petreza

New Member
Dec 28, 2017
(newbie question)
Can a switch like the Mellanox SX6012 have some of its ports operate in Ethernet mode while others run InfiniBand? If so, is there any interference, or do the two modes function as if there were two switches in one chassis?
 

Rand__

Well-Known Member
Mar 6, 2014
Good to know, thanks.
Of course the 314 is an x8 card and the 354 would be an x16 card, so it's not a 'full' cross switch, but very good to know it works firmware-side.
Still looking to try that with my CX5-EN ;)
 

petreza

New Member
Dec 28, 2017
I forgot to put a ? at the end (fixed now) - I am asking, not reporting.
Sorry
 

Rand__

Well-Known Member
Mar 6, 2014
Ah. Usually it worked on CX-3s, also with EN to VPI IIRC, but I have not tried it. It probably works within the same family (x8 to x8, x16 to x16), but I'm not sure about LX/EN to VPI.
I always hoped so, but in reality not even LX/ENs are cheap enough that it makes sense to get them over VPIs.
 

mpogr

Active Member
Jul 14, 2016
Guys, is there any reasonable (price-wise) substitute for CX-3 cards available now in the wild? My CX-3s are fine, but they don't perform as well with RDMA-over-IP as newer cards would, impacting iSER performance compared to what SRP used to provide... Has to be at least 56Gbps in EN mode though...
 

Rand__

Well-Known Member
Mar 6, 2014
No.
CX-4/5 are expensive most of the time, although you can get deals on CX-4s for around $150-200. You'll only get 40Gbps out of them on an FDR switch though; for 56 you need CX-5s. Both can of course do 100Gbps if you get the right model and switch.
 

oxynazin

Member
Dec 10, 2018
Hi. Thanks for the guide. Still waiting for the switches (they will arrive in a month or two, maybe even three).

Did you do benchmarks for iSER vs SRP? How much faster was SRP in your case? I found this article: iSCSI vs iSER vs SRP on Ethernet & InfiniBand.
It seems that iSER on Ethernet is not so bad. And they used ConnectX-3 cards (MCX354A-FCBT).

BTW what is RDMA-over-IP? I can find some articles about "RDMA-over-IP", but no protocol definition or standard, and the first link is about RoCE.