Looking to go beyond 10g but very cheaply.


boe

New Member
Apr 7, 2019
20
1
3
I have no idea what to get at this point. I currently have 10G at home but that isn't enough: my RAID controllers are pushing 10G to the limit when copying between systems. (I don't have a switch; a single quad-port NIC connects to 3 other systems.) When I look at throughput on the ports, they are maxed out. I was considering 40G, but as far as I know there are no quad-port 40G NICs, and 40G switches are too expensive for me. I also looked at 25G, 50G, and 100G NICs but couldn't find the right option, which was simple with 10G. Does anyone know of any cheap 25/50/100G switches, or quad-port 25/50/100G NICs? It isn't a crisis or anything, but I'd like to take my network to the next level. I often copy several TB at a time (up to 50TB) and it would be nice to do it more quickly.

Thanks in advance for your help. (I'm sure I could do this easily with unlimited funds, but I'm hoping to connect 4 PCs at 25G or faster for under $2,000 total.)
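For a sense of what's at stake with those 50TB copies, here's a back-of-envelope calculation of copy time versus link speed (line rate only; real transfers will be slower due to protocol overhead and disk limits):

```shell
# Rough copy-time estimate: seconds = terabytes * 8 * 1000 / Gbps
# (1 TB = 8 Tb = 8000 Gb). Ignores protocol overhead and disk speed.
tb=50
for gbps in 10 25 40 100; do
  secs=$(( tb * 8 * 1000 / gbps ))
  printf '%3d Gbps: ~%5d s (~%d h)\n' "$gbps" "$secs" $(( secs / 3600 ))
done
```

At line rate, 50TB is roughly 11 hours over 10G but under 3 hours over 40G, so the upgrade pays off quickly if the arrays can feed it.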
 

Terry Wallace

PsyOps SysOp
Aug 13, 2018
197
118
43
Central Time Zone
You can get dual-port 40Gb ConnectX cards for $70 or less easily. Pair that with a 12-18 port Mellanox 40/56Gb switch and you're under $600 total for 4 PCs on 40Gb Ethernet.
 
Last edited:

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Or get two 100G Mellanox EN cards; even a single port should beat 4x 10G easily (if you run multiple connections, at least).
What does your RAID push? And what's the setup (out of curiosity)?
 

boe

New Member
Apr 7, 2019
20
1
3
You can get dual-port 40Gb ConnectX cards for $70 or less easily. Pair that with a 12-18 port Mellanox 40/56Gb switch and you're under $600 total for 4 PCs on 40Gb Ethernet.
Thanks!!! Can you give a model or link for that switch? If there's more than one, ideally I'd get one with no fan, although that may not be an option. The ones I'm finding are in the $6,000 range, so I'm obviously looking in the wrong place.
 
Last edited:

boe

New Member
Apr 7, 2019
20
1
3
Or get two 100G Mellanox EN cards; even a single port should beat 4x 10G easily (if you run multiple connections, at least).
What does your RAID push? And what's the setup (out of curiosity)?
Thanks!! If you have any links I would greatly appreciate it. I found some Mellanox MCX456A-ECAT NICs but have no idea what transceivers to get. I'll need some 12' runs. I was thinking I'd go fiber instead of DAC, just for the heck of it.

My 2 main systems have i7s, 16GB RAM, Adaptec 3154 controllers, 16x 12TB Seagate ST12000NM0007 drives (RAID 5), and Intel 10G Ethernet NICs, running Win 10 Pro.

I managed to get them to 16Gbps throughput briefly through a combination of cards; I have no idea if they could exceed that. I might be able to get higher if I switched to RAID 50. A lot of people hate RAID 50 for some reason, but I've gotten some fast copies with it. I also need to rebuild the boot partition from scratch at some point, as I've bogged the systems down with junk over time.

My other 2 systems are similar but with only 8 drives and LSI controllers.
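As a sanity check on whether the arrays can feed more than 10G, here's a rough sequential ceiling for a 16-drive RAID 5 array. The ~250 MB/s per-drive figure is an assumption for illustration, not a measured number from this thread:

```shell
# Hypothetical sequential ceiling for a 16x 12TB RAID 5 array.
# ~250 MB/s per drive is an assumed large-block sequential rate.
drives=16; per_drive_mb=250
data_drives=$(( drives - 1 ))          # RAID 5: one drive's worth goes to parity
mb_s=$(( data_drives * per_drive_mb ))
gbps=$(( mb_s * 8 / 1000 ))
echo "~${mb_s} MB/s (~${gbps} Gbps) sequential ceiling"
```

Under those assumptions the big array tops out around 30Gbps sequential, which is consistent with the observed 16Gbps being network- rather than disk-limited, and with 25/40G being a sensible next step.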
 
Last edited:

boe

New Member
Apr 7, 2019
20
1
3
Thanks for the help!


I think I nearly have a solution if anyone has any further suggestions in case I've gone wrong somewhere...


4x MCX456A-ECAT network cards: $350 x 4 = $1,400

2x AMQ28-SR4-M1 100G transceivers: $90 x 2 = $180

1x Karono MPO female to MPO female patch cord, 12-core, Type B, 16.5 ft (5m), OM3 multimode fiber: $50


I was thinking about a 5m DAC splitter, but I can't find a 5m or even a 4m one. Or something like this: 100G QSFP28 to 4X 25G SFP28 AOC - Fiberon Technologies
 
Last edited:

boe

New Member
Apr 7, 2019
20
1
3
Thanks. I thought I had half the equation complete with the NICs, the 2x AMQ28-SR4-M1 100G transceivers ($90 x 2 = $180), and the Karono MPO female-to-female patch cord. Doesn't that only leave the two going to the other port on the NIC to deal with? I imagine I'll need a transceiver for the other port on server 1, plus one each for servers 3 and 4 (4 transceivers total) and a fanout cable. I'm just not sure if I can buy them separately, or if I need some AOC, or if I'll have to go DAC.



Maybe something like this, although I hope it's available somewhere for less, and it splits 100G into 2x 50G: 24-Fiber Single MTP/MPO to Dual MTP/MPO Fiber Optic Fanout Cable, Multimode OM4 | Cables Plus USA
 
Last edited:

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Ah, misread your initial post; I missed "I have a single quad port connected to 3 other systems."
The 4:1 splitter cables indeed won't run on the NIC side.

Cheapest would be to get an EMC 6012/8 and convert it (see the forum thread; that gets you 56GbE).
Alternatively there are Brocade or Arista switches to be had for around $300-400, I think, which have (4?) 40G ports.
100G might be difficult for 4 boxes (especially since you need a PCIe 3.0 x16 slot in each box):
- one option is to get CX5s and set up a ring
- you can also do 2x CX4 in one box and then do point-to-point if you have 2 x16 slots left
You could do 25G with CX4-Lx cards (SFP28), which have an x8 interface; then the 2-per-node point-to-point setup might be more feasible and still sufficient for your current setup. I imagine the jump from 16Gbps to something exceeding 25G (per remote box!) is a big one...
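For the point-to-point option, no switch means each direct link just needs its own small subnet. A minimal addressing sketch for the central box, shown with Linux `ip` for brevity (on Windows you'd set the equivalent static IPs per adapter); interface names and addresses here are examples, not from the thread:

```shell
# Hypothetical point-to-point addressing: one /30 subnet per direct link.
# On the central (quad-port) box:
ip addr add 10.0.1.1/30 dev enp4s0f0   # link to box B, which gets 10.0.1.2/30
ip addr add 10.0.2.1/30 dev enp4s0f1   # link to box C, which gets 10.0.2.2/30
ip link set enp4s0f0 up
ip link set enp4s0f1 up
```

Each remote box then talks to the central box's address on its own link, with no routing or switching in between.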
 
  • Like
Reactions: boe

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
657
244
43
I'm cheap, so I'd do this for under $600.

I'd go with either an EMC SX6012, hacking the firmware to run the full Mellanox firmware (quite a bit of work, a real chance of failure, and the possibility of bricking the device), or a Brocade ICX6650. In both cases I'd pick inexpensive ConnectX-3 cards; they're power-efficient and the drivers are great on every OS I've used them on.

If you opt to hack an EMC 6012 you can LAG 40Gb (or 56Gb using FDR cables, which gets pricey at $25-40 per DAC) from everyone's favorite $25 ConnectX-3 VPI. Or you can opt for the Brocade ICX6650 and use the 4 rear 40Gb ports: link each machine at 40Gb to a rear port, with a backup 10Gb connection to one of the breakout ports (or just use one of the front 10Gb ports).

If you go all-Mellanox you can mix and match your transport modes; they'll do 40/56Gb InfiniBand as well as 40/56Gb Ethernet.

Everything listed above will work with $7 NetApp QSFP DACs; they won't do FDR modes, but they'll do 40Gb no sweat.

I actually have a similar setup, though my 40Gb is limited to the two ports on the ICX6610 at the moment; I haven't had time to start hacking on my SX6012 yet. That's a longer-term project I want to get to after I move at the end of May.

The nice thing about the Mellanox switches is that even if you hose one, they're so cheap at ~$120 that you could try and fail to convert more than one and still be $500-1,000 ahead of a single 40Gb switch purchase.
 

Terry Wallace

PsyOps SysOp
Aug 13, 2018
197
118
43
Central Time Zone
I personally have both a Mellanox SX6012 and an SX6018 converted. I run ConnectX-3 VPI dual-port cards in ETH mode. I run a Proxmox cluster and FreeNAS storage nodes (ten nodes total), all on 40GbE. Sunk costs: $160 switch + 8 DAC cables from Gtek in various lengths for $130 + 8 cards @ $37.50 = grand total $620, and I still have room to plug a few more in.
 
  • Like
Reactions: amalurk and boe

boe

New Member
Apr 7, 2019
20
1
3
I really appreciate everyone's help here - I'm going to start a new thread as I'm wandering all over the place as you try and bring a complete newbie up to speed. Again thanks so much everyone!
 

747builder

Active Member
Dec 17, 2017
112
58
28
Consider the Brocade ICX6650: ~$300 on eBay, 1U, and you can get it fully licensed from a forum member for free. 56x 10GbE ports and 6x 40GbE ports (2 of which you can break out into an extra 8x 10GbE ports).
 

Yves

Member
Apr 4, 2017
65
15
8
38
I don't want to be rude and hijack this thread, but I ran into a similar question lately. I am trying to upgrade my homelab with a 40Gb QSFP+ switch. I saw a very, very nice Cisco (N3K-C3164Q-40GE), but I am struggling with the network cards, since I've never used more than SFP+ or 10GBase-T. Do the Mellanox ConnectX-3 VPI cards only do InfiniBand, or do they also work as QSFP+ 40GbE? What would you recommend?
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
The VPIs can be configured for Ethernet too. Not sure if they're compatible with the Cisco, but they usually don't pose any issues.
 

Yves

Member
Apr 4, 2017
65
15
8
38
@Rand__ long time no see ;-) Thanks for the quick response. What if I used Cisco DACs, or reprogrammed the DACs for Cisco? They "should" work...
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
657
244
43
@Rand__ long time no see ;-) Thanks for the quick response. What if I used Cisco DACs, or reprogrammed the DACs for Cisco? They "should" work...
Mellanox cards very rarely have issues with transceivers or DACs; as long as what you've got works with the Cisco switch, it should work with the CX3 without issue. You can either put the NIC in Ethernet mode with the firmware utils or add the override parameter to a modprobe definition, like:

echo "options mlx4_core port_type_array=2,2 num_vfs=4,1 probe_vf=4,1" > /etc/modprobe.d/mlx4_core.conf

You only care about "port_type_array=2,2" here (2 = Ethernet on both ports; the num_vfs/probe_vf options are SR-IOV settings you can drop).
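For the firmware-utils route mentioned above, a sketch assuming the Mellanox Firmware Tools (MFT) package is installed; this writes the port type into the card's configuration rather than overriding it at driver load. The /dev/mst device path varies per card and is an example here:

```shell
# Set both ports of a ConnectX-3 to Ethernet via MFT.
# Device path below is an example; check `mst status` for yours.
mst start
mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
# 1 = IB, 2 = ETH, 3 = VPI (auto); reboot or reload the driver to apply.
```

Either approach works; the modprobe override is Linux-only, while the mlxconfig change follows the card.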
 
  • Like
Reactions: Yves

Yves

Member
Apr 4, 2017
65
15
8
38
Yep, you have been absent for a while ;) vSAN still going? ;)
Yeah, I was working on a few business projects; not a lot of me time. Well, vSAN... a topic I don't like to talk about :rolleyes: I had to kill it :( and moved to making hourly backups of my lab machines. The performance hit was just too big with vSAN.

Mellanox cards very rarely have issues with transceivers or DACs; as long as what you've got works with the Cisco switch, it should work with the CX3 without issue. You can either put the NIC in Ethernet mode with the firmware utils or add the override parameter to a modprobe definition, like:

echo "options mlx4_core port_type_array=2,2 num_vfs=4,1 probe_vf=4,1" > /etc/modprobe.d/mlx4_core.conf

You only care about "port_type_array=2,2" here (2 = Ethernet on both ports; the num_vfs/probe_vf options are SR-IOV settings you can drop).
Wow, perfect. Thanks a lot for the detailed feedback. So I could go for the ConnectX-3; the only pitfalls then are the Cisco switches.