Debating 10Gb for the home lab


chappys4life

New Member
Oct 19, 2015
I am debating 10Gb in my home lab but trying to figure out the best route. Currently I have two ESXi 6 hosts with quad-port 1Gb Intel NICs, plus an N54L with an M5015 acting as my NAS/iSCSI target using StarWind Virtual SAN.

I am considering adding a single-port Mellanox ConnectX-2 to each ESXi host and directly connecting them, then using an EMC VNX VM on one host to hold my ESXi datastores, or going VSAN.

My question is: how would I add a third host?
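For what it's worth, once the two hosts are cabled back to back it is worth checking that the link actually moves something close to 10Gb before putting storage on it. Below is a minimal single-stream TCP throughput sketch in Python, an iperf-style sanity check rather than a real benchmark; the port number, chunk size, and 4 GiB transfer size are arbitrary values picked for illustration, and it assumes you can run it on a machine or VM at each end of the link.

# Minimal single-stream TCP throughput check between two hosts on the
# direct 10Gb link. Port and sizes are made-up defaults; a real tool
# such as iperf3 will give better numbers (multiple streams, etc.).
import socket
import sys
import time

PORT = 5201          # arbitrary test port
CHUNK = 1 << 20      # 1 MiB per send/recv
TOTAL = 4 << 30      # move 4 GiB per run

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            print(f"client connected from {addr}")
            received = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.monotonic() - start
            print(f"received {received / 2**30:.2f} GiB at "
                  f"{received * 8 / elapsed / 1e9:.2f} Gbit/s")

def client(host: str) -> None:
    payload = b"\x00" * CHUNK
    sent = 0
    start = time.monotonic()
    with socket.create_connection((host, PORT)) as conn:
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)
    elapsed = time.monotonic() - start
    print(f"sent {sent / 2**30:.2f} GiB at "
          f"{sent * 8 / elapsed / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    # usage:  python net_check.py server
    #         python net_check.py client <server-ip>
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])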
 

dswartz

Active Member
Jul 14, 2011
Beware if you try to use a vanilla ConnectX-2: in the newer drivers, Mellanox removed SRP support, and (AFAIK) only the VPI models will do Ethernet, so you are stuck with IPoIB, which is not as good performance-wise. If you can score ConnectX-3 EN cards, QDR+ cables let you connect them in point-to-point mode. A third host is an issue :( I have avoided dealing with it due to lack of funds right now. Any kind of switch is going to be kind of pricey, even on fleabay...
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
Actually IPoIB is pretty good... But 10GbE is easier to intermix with normal networks. Going IB lets you use cheap IB switches and 40Gb.
 

dswartz

Active Member
Jul 14, 2011
Good compared to what? SRP has better performance and lower CPU load. iSER is supposed to bring that as a replacement, but a lot of platforms and drivers don't have iSER yet :( InfiniBand supposedly also has lower latency and lower power consumption than 10GbE. The interoperability point is fair, though...
 

chappys4life

New Member
Oct 19, 2015
Is there a different card I should look at? It doesn't have to be the Mellanox; that is just the one I have read about the most.
 
Sep 22, 2015
dswartz said: Beware if you try to use a vanilla ConnectX-2. In the newer drivers, Mellanox removed SRP support, and (AFAIK) only the VPI models will do Ethernet...
Are you sure about this? The Mellanox ConnectX-2 I got off eBay for 30 bucks identifies itself only as an Ethernet card and talks to my 10Gb MikroTik switch without any config. I had to download a driver for Windows, but ESXi was plug and play.
 

darklight

New Member
Oct 20, 2015
What is the speed of your internal storage? If it is near 2-3 Gbit, there is not much sense in investing in 10Gbit infrastructure, since storage will still bottleneck everything.

Mellanox dual-port cards should be fine for sure; however, there are some difficulties in running a three-node configuration switchless. As already mentioned, one option is to connect the hosts in a "chain" or "ring": node1.port1 -> node2.port1, node1.port2 -> node3.port1 and node2.port2 -> node3.port2, so each server is connected to both of the others, and use these networks for synchronization. iSCSI could be carried over another 1Gbit network, or you can VLAN both sync and iSCSI traffic onto the same physical 10Gbit network, though that is not recommended. Of course, each port needs its own IP address on the appropriate network, so you end up configuring 12 (or two times 12 if going the VLAN route) different IPs across the VMkernel interfaces and StarWind VMs.
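For what it's worth, here is a rough sketch of what that addressing plan looks like for the three-link ring; the 172.16.10.0/24 range and the /28 per link are made-up values for illustration. The point is just that each point-to-point link gets its own small subnet, and each end of a link needs an address for both the ESXi VMkernel port and the StarWind VM, which is where the 12 IPs come from (3 links x 2 ends x 2 addresses).

import ipaddress

# Three point-to-point links forming the "ring" described above.
links = [
    ("node1.port1", "node2.port1"),
    ("node1.port2", "node3.port1"),
    ("node2.port2", "node3.port2"),
]

# Made-up parent range; each link is carved out as its own /28.
base = ipaddress.ip_network("172.16.10.0/24")

total = 0
for subnet, (end_a, end_b) in zip(base.subnets(new_prefix=28), links):
    hosts = list(subnet.hosts())
    plan = {
        f"{end_a} VMkernel": hosts[0],
        f"{end_a} StarWind VM": hosts[1],
        f"{end_b} VMkernel": hosts[2],
        f"{end_b} StarWind VM": hosts[3],
    }
    print(f"link {end_a} <-> {end_b}  ({subnet})")
    for role, ip in plan.items():
        print(f"  {role}: {ip}")
    total += len(plan)

print(f"total addresses to configure: {total}")  # 12, matching the estimate above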
 

stupidcomputers

New Member
May 27, 2013
Seattle, WA
darklight said: What is the speed of your internal storage? If it is near 2-3 Gbit, there is not much sense in investing in 10Gbit infrastructure... one option is to connect the hosts in a "chain" or "ring"...
I am running EMC ScaleIO on my 3 nodes and can already saturate 230 MB/sec through 2x 1Gb ports. Looking to push the speeds even higher. I will definitely start investigating what it takes to set up a ring topology using IPoIB.
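For reference, 230 MB/sec is right at the ceiling of two bonded gigabit ports; the napkin math below uses nominal line rates and ignores protocol overhead, so the real-world headroom will be a bit less.

# Rough bandwidth arithmetic for the figure above.
mb_per_s = 230                      # observed ScaleIO throughput, MB/s
gbit_per_s = mb_per_s * 8 / 1000    # ~1.84 Gbit/s

print(f"{mb_per_s} MB/s is about {gbit_per_s:.2f} Gbit/s")
print(f"2x 1Gb ports:     {gbit_per_s / 2:.0%} of nominal capacity")   # ~92%, i.e. saturated
print(f"single 10Gb link: {gbit_per_s / 10:.0%} of nominal capacity")  # plenty of headroom
print(f"single 40Gb link: {gbit_per_s / 40:.0%} of nominal capacity")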

Does anyone know if the Mellanox ConnectX-2 VPI Dual-Port 40Gb Adapter Card (HCA-30024 700Ex2-Q) is compatible with ESXi 6 Update 1a?
 

chappys4life

New Member
Oct 19, 2015
darklight said: Mellanox dual-port cards should be fine for sure; however, there are some difficulties in running a three-node configuration switchless... you end up configuring 12 (or two times 12 if going the VLAN route) different IPs...
I am starting to think the work involved is probably not worth it. I was thinking 10Gb for storage and vMotion would be nice, but the quad-port gigabit NICs I have are probably enough.

I will keep an eye on cheaper 10Gb switches and more than likely just wait.
 

Aluminum

Active Member
Sep 7, 2012
40Gb IB (QDR QSFP) switches are down as low as $400 now, some "tabletop" 8-port units and the usual 18/36-port 1Us.

$40 for those dual-port cards is a good deal; not sure about ESXi. Three hosts work fine with dual-port cards in each host, no special configuration needed: just make three independent networks. (Not sure where/why people are getting the 12 IPs or ring ideas from.)

Prices on everything are likely to keep trending down as well.
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
I can't see a good reason for IB in the home. The fact that it doesn't talk to Ethernet, and that no laptop or Mac has it, means it would be relegated to the PC/server world.

At least 10GbE can talk to 1GbE without a router.
 

CreoleLakerFan

Active Member
Oct 29, 2013
Keljian said: I can't see a good reason for IB in the home. The fact that it doesn't talk to Ethernet... means it would be relegated to the PC/server world.
It's great for adding high-speed networking to a lab environment. I use it for my "SAN" and for vMotion between ESXi hosts in my lab. Cheaper than 10GbE, and faster to boot!
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
Honestly though, I could say the same thing about 10GbE. It's really hard to push 10Gb of traffic; most laptops and users would have a hard time pushing 1Gb. IB is a backend network, and where people use it is to connect fast clusters of either VMs or compute.

If possible, definitely go the Ethernet route to save headaches, but you won't find good prices for switches with lots of ports. That's where the IB world is cheaper for now. In a year or two, 10GbE will be cheap and people will complain about 40GbE and why they can't get 4-port switches under $1,500... and the IB world will be at $200 for a 36-port 40Gb switch.

Back to my original point: it is really hard to hit >1Gbps for normal laptop and desktop use. Laptops are mostly concerned with wireless speeds, not wired speeds.
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
Chuckleb said: If possible, definitely go the Ethernet route to save headaches, but you won't find good prices for switches with lots of ports...
Agreed!
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
Chuckleb said: It's really hard to push 10Gb of traffic; most laptops and users would have a hard time pushing 1Gb... it is really hard to hit >1Gbps for normal laptop and desktop use.
I got 10Gb at home for the following reasons:
1. Home file serving, so the storage almost acts like local storage.
2. Latency for "desktop" VMs (so I can work on cheap laptops and capitalise on the power in the server).
 

ecosse

Active Member
Jul 2, 2013
I have a Gnodal switch with 8x 40GbE ports and 40x 10GbE, plus a couple of D-Link switches and a MikroTik with 2x 10GbE uplinks dotted around the house, plus a 24-port QDR switch that I did use for my storage backbone before moving to 40GbE, which is more supportable and compatible. A mixture of Mellanox ConnectX-2/3 and Solarflare cards. I bought them for the following reasons:

1. I could :)

OK, copying files from my media center downstairs is quicker, but 1GbE streams just fine with decent NICs. I had no real use case; it just seemed a cool thing to do :) Oh, and the reduction in cabling is a nice fringe benefit.