Brocade ICX6610-48P-PE PoE+ 48 port - $175 shipped

arglebargle

Hello World
Jul 15, 2018
Well, I mentioned it because they're SAS cables (and quite old ones) for disk shelves, not Ethernet cables - their rated speed for the application they were sold for is 6 Gbit/s per channel, and 40GbE is nearly twice that. Like I said, though, I'd imagine it would still work given they're not crazy long and you have good transceivers on both ends (e.g. not cheap switches/cards)
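For rough context, the lane math looks like this (figures below are the standard QSFP+/SAS-2 and 40GBASE-CR4 ratings, not measurements from this thread):

```python
# Back-of-envelope lane math for running 40GbE over a SAS-2 QSFP cable.
# All figures are nominal spec values, assumed for illustration.
LANES = 4                 # QSFP carries 4 differential pairs per direction
SAS2_LANE_GBPS = 6.0      # SAS-2 rating per lane
CR4_LANE_GBPS = 10.3125   # 40GBASE-CR4 signaling rate per lane

print(f"SAS-2 aggregate:    {SAS2_LANE_GBPS * LANES:.2f} Gbit/s")    # 24.00
print(f"40GbE aggregate:    {CR4_LANE_GBPS * LANES:.4f} Gbit/s")     # 41.2500
print(f"per-lane overdrive: {CR4_LANE_GBPS / SAS2_LANE_GBPS:.2f}x")  # 1.72x
```

So each pair is being driven roughly 1.7x past its rating, which is why a short run with good transceivers can still get away with it.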

I think you'll be the first one to try out the 15 foot long version, so post your results somewhere so we get STH in google results for even more niche questions :p
I can post numbers from both of those cables in a bit, I just got my desktop booting with a C-X3 installed. I have an FDR infiniband cable on the way too, so I can post numbers from that later this week.
 

arglebargle

Here's the 2M Netapp QSFP cable:

[screenshot: iperf3 CX3 netapp SAS 2m mtu 1500.png]

And here's the 5m:

[screenshot: iperf3 CX3 netapp SAS 5m mtu 1500.png]

This is Windows 10 as the client and CentOS 7 serving. The 5m cable is around 10% slower than the 2m, and I'm CPU bound on the client machine. MTU is 1500; switching to 9000 dropped performance down to ~4 Gb/s.
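For anyone reproducing the MTU comparison, this is the usual Linux-side sequence (the interface name `enp1s0` is a placeholder; both ends of the link have to agree on the MTU or you'll see worse numbers than either setting alone):

```shell
# check the current MTU (placeholder interface name)
ip link show enp1s0

# jumbo frames: set 9000 on BOTH machines, then re-run iperf
sudo ip link set dev enp1s0 mtu 9000

# back to standard frames
sudo ip link set dev enp1s0 mtu 1500
```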
 

fohdeesha

Kaini Industries
Nov 20, 2016
fohdeesha.com
Hmm, interesting. Could you post the output of "show int e 1/2/1" from the Brocade, substituting the port these are actually plugged into? I'm curious if there are any CRC/framing/link-recovery errors that would explain the speed difference. That's assuming these two machines aren't directly connected, which, re-reading your post, it seems they might be :p


If you use iperf (not iperf3) with the -P 8 argument, it'll run 8 parallel streams multithreaded, which usually gets me past any CPU-bound issues; I don't have a problem hitting 39 Gbit/s on a modern CPU. iperf3 supports the parallel argument as well, but all the streams inexplicably still run on the same core, so you won't see any difference
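The two invocations being compared look like this (the server address 10.0.0.2 is a placeholder; in iperf2 each -P stream gets its own thread, while in the iperf3 builds current at the time all streams share one thread):

```shell
# iperf2 - server side
iperf -s

# iperf2 - client side: 8 parallel streams, each on its own thread
iperf -c 10.0.0.2 -P 8

# iperf3 equivalent: 8 parallel streams, but still single-threaded
iperf3 -s                  # server
iperf3 -c 10.0.0.2 -P 8    # client
```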
 

arglebargle

The machines are directly connected. Ultimately I'm going to have them talking either 40GbE or FDR IB to each other on one port, and I'll plug the other into the Brocade.

Well, 8 threads of iperf yields better performance than iperf3, but I'm still CPU bound, this time on the other end. Witness the awesome power of the Pentium G4400:

2m Netapp SAS:

[screenshot: iperf 40gb 2m SAS.png]

5m Netapp SAS:

[screenshot: iperf 40gb 5m SAS.png]

~35 Gb/s from both cables, with the Linux machine's CPU bottlenecking throughput. I'll run some InfiniBand tests soon too.

TL;DR -- the SAS cables work pretty darn well at 40GbE.
 

arglebargle

Anyone have a link to stacking cables that work for these switches? I'm going to stack a few of em for my lab.
I bought a few of these recently... :)

NEW NetApp 112-00177 X6558-R6 External QSFP-QSFP SAS Cable, 2M | eBay

Edit - I'm only using the 40gb ports on the 6610s to stack.
I'm using the same cable for a direct connection between two CX-3s; it should work just fine as a stacking cable. There's a link above to a 5m cable for the same price if you need a longer length; it works perfectly as well.
 

PnoT

Active Member
Mar 1, 2015
Both of the links in the OP are no longer valid. Does anyone have a new one, since the seller's name was removed?