Going to 10GbE+


BThunderW

Active Member
Jul 8, 2013
242
25
28
Canada, eh?
www.copyerror.com
I'm setting up a new VM environment and I'm working on bumping the SAN network up to greater than 1GbE speeds.

I'll have 4 VM servers (2 ESXi, 2 ProxMox) and one SAN server running NexentaStor (or another ZFS-based OS).

At present I'm running quad 1GbE NICs in each server and using iSCSI with round-robin MPIO to get above 1GbE speeds. I like the simplicity of NFS, but iSCSI gives me an additional performance boost via the VAAI extensions.
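For reference, the ESXi side of that is just the stock round-robin path selection policy; something like this sets it up (the naa. device ID is a placeholder for your own LUN, and the IOPS tweak at the end is optional):

  # Find the iSCSI LUN's naa. identifier
  esxcli storage nmp device list

  # Set the path selection policy for that device to round-robin
  esxcli storage nmp device set --device naa.600144f0XXXXXXXX --psp VMW_PSP_RR

  # Optionally switch paths every I/O instead of the default 1000 IOPS
  esxcli storage nmp psp roundrobin deviceconfig set --device naa.600144f0XXXXXXXX --type iops --iops 1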

I know there are quite a few options. The only one I'm really familiar with is copper Ethernet.
Based on some quick research, I could go to 10GbE using 5x 10GbE NICs (EXPX9501AT or XR997) for about $210 a pop on average and an XS708E switch for about $900. That would give me 10GbE for just under $2K. I could also go with a cheaper NIC like the RK375 for $150/ea, though I don't know how good those cards are.

What other options are there within that budget? Fiber? IB? I don't have experience with either and don't know how compatible they are with ESXi or Nexenta/OmniOS.

Thoughts?
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Your config is a perfect candidate for Infiniband.

Used 36-port QDR switch for under $1000 on eBay. Search "Voltaire 4036".
QDR NICs for <$200.
Copper QSFP cables for pretty much the same price as 10GbE SFP+ copper.

You'll spend less money and get QDR Infiniband speeds (32Gbps for IPoIB, 40Gbps native).

Works perfectly with NFS and SMB.

...posted by a guy with ~30 active 10GbE links on a 10GbE switch, wishing I had done it differently....
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
The appeal of 10G Base-T Ethernet is familiarity - this new version of Ethernet works just like regular Ethernet, and even sort of uses the same cables, but is ten times faster.

The downsides to 10G Base-T are expense and power consumption. Those older single port EXPX9501AT cards, for example, draw something like 25 watts each - ouch! Newer cards draw less power, but are way more expensive. The eight-port switch you have your eye on is reasonably priced, but when you end up needing more ports - and you will - then your costs will go up dramatically.

One alternative is switching from RJ45-based Base-T Ethernet to 10G Ethernet switches and cards with SFP+ connectors. Power consumption is far lower, as is latency, and the cards are cheaper - I bought a few dual-port cards for $90 each. The switches are generally 24 ports or more and very expensive, but can sometimes be found reasonably priced on eBay. You can often buy an older 10G switch with CX4 connectors for under $1K and then use CX4-to-SFP+ cables.

Another possible option is, of course, Infiniband. The appeal of QDR Infiniband is massive speed (3-4 times faster than 10GbE in the real world) and very low cost when purchased used - as in $100-$150 for dual-port cards and around $1,000 for a 32-port switch. The downside to Infiniband is that it isn't Ethernet and may take more work to configure. You can go with IP over Infiniband, which works extremely well if your combination of OS and driver version get along and you can use RDMA, but then you may need to figure out how to route between your IB network and your Ethernet network. I haven't used IB with VMware or ProxMox, so do your homework if you choose this route.
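To give a rough idea of what IPoIB configuration involves on a Linux box (the OP's ProxMox nodes are Debian-based; this is only a sketch assuming the in-box mlx4/ib_ipoib drivers, and the interface name and addresses are just examples):

  # Load the IPoIB module (usually automatic once the HCA driver is loaded)
  modprobe ib_ipoib

  # Optional: connected mode allows a much larger MTU than datagram mode
  echo connected > /sys/class/net/ib0/mode
  ip link set ib0 mtu 65520

  # Put the IPoIB interface on a separate storage subnet and bring it up
  ip addr add 10.10.10.11/24 dev ib0
  ip link set ib0 up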

I chose QDR Infiniband as my Hyper-V VM migration network, using IPoIB to make configuration a breeze and RDMA/SMB3 for speed. Because this is a standalone network used only for VM migration, I don't bother connecting it to my main 1G Ethernet network at all. I prefer Infiniband over 10GbE because I really need the speed. If 10G speeds were good enough for me, I'd prefer the familiarity of Ethernet instead.



 

BThunderW

Active Member
Jul 8, 2013
242
25
28
Canada, eh?
www.copyerror.com
Other than the obvious speed difference, what are the issues with 10GbE Ethernet?

Also, can you recommend some decent QDR NICs to look out for?

 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
...Also, can you recommend some decent QDR NICs to look out for?
Any Mellanox ConnectX-2 VPI card. The ConnectX-2 is the second generation of the card and is greatly preferred over the original ConnectX generation. VPI means the card can be switched between Infiniband and 10G Ethernet modes.
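To illustrate the VPI part: on a Linux host with the in-box mlx4 driver you can flip a port between Infiniband and Ethernet through sysfs. The PCI address and port below are placeholders, and Mellanox OFED also ships a connectx_port_config script that does the same job:

  # Find the HCA's PCI address
  lspci | grep -i mellanox

  # Show the current mode of port 1, then switch it to Ethernet (or back to "ib")
  cat /sys/bus/pci/devices/0000:03:00.0/mlx4_port1
  echo eth > /sys/bus/pci/devices/0000:03:00.0/mlx4_port1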

And if you don't want to spend the money to get QDR speeds, DDR Infiniband is REALLY cheap and yet still twice as fast as 10GbE. I was able to get 1,900MB/s file sharing using Mellanox DDR cards.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Speed is the main reason to look at IB. Cost is another, at least if you are buying used. There is a lot of enterprise-grade QDR equipment on the used/off-lease market right now. 10GbE is there too, but not in such large quantities, and good 24/48-port SFP+ switches are still about twice the cost of a Voltaire 4036.

For NICs it's as simple as searching "QDR HCA" on eBay. The Mellanox and HP cards are the same and quite good. I don't know if the QLogic cards are any good.

Also, dba is correct: 10GbE has the advantage of familiarity. You just plug things in and it works the same as what you used for 1GbE, just faster. That can be a huge advantage if you don't really need the speed.
 

BThunderW

Active Member
Jul 8, 2013
242
25
28
Canada, eh?
www.copyerror.com
Ok. Thinking out loud here. Pricing IB out.

1x Voltaire 4036 (VLT-30111): $1000
4x Dell JR3P1: $600 (For C6100)
1x HP 592520-B21: $180 (For SAN)
5x QSFP Copper Cables: $165

Total is about $2K, so about the same as going with 10GbE Ethernet but with 4x the speed.

Digging into NexentaStor support, there's a mention of needing a subnet manager running to get IPoIB working. Does the Voltaire switch provide subnet manager functionality?

Looks like Infiniband is fully supported with vSphere 5.1, though I don't yet know the effort required to configure it.

So much to read... RDMA, iSER, SRP, datagram, connected mode... sheesh
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
One of the things I enjoy is the unified driver, simple firmware (1 ufi file), ESXi flashing, Windows client (7/8) support - shit just works.

You will find that Intel leads as far as "shit just works" goes, followed by Emulex. For real.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
You can very easily run a software subnet manager, but with the Mellanox/Voltaire 4036 you don't need to; the switch has one built in.
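If you ever do need a software subnet manager - say, for a back-to-back link with no switch - it's just the opensm package on Linux. A minimal sketch:

  # Debian/Ubuntu/ProxMox
  apt-get install opensm infiniband-diags
  service opensm start

  # Verify that a subnet manager is active and the ports came up
  sminfo
  ibstat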

 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Yep, ESXi and Solaris do not have subnet managers available. Linux and Windows do, but the Voltaire should have one built in if you can get to its command line.

DDR switches are usually around US$500, Connect-X cards can be had for US$75 each, and the cables go for around US$25 each. Set up SRP between the Solaris and ESXi boxes and it's happy days. The cards need to be on firmware 2.7 (2.9 is the latest) to work correctly.

For ESXi you just need to start the SSH server and install the VIB available from Mellanox.
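Something along these lines, with the bundle filename as a placeholder for whatever driver/OFED version you grab from Mellanox:

  # Copy the bundle to the host (e.g. /tmp) over SSH/SCP first, then:
  esxcli software vib install -d /tmp/MLNX-OFED-ESX-bundle.zip

  # Reboot, then confirm the VIB is installed and the new uplinks are visible
  esxcli software vib list | grep -i mellanox
  esxcli network nic list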

The Solaris setup is also a piece of cake (rough command sketch below):
Create a zpool.
Create a ZFS FS.
Create a LUN.
Make the LUN available.
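Roughly, the commands look like this (pool/volume names and sizes are only examples; the LU GUID comes from the create-lu output):

  # Create the pool and a zvol to export as the LUN
  zpool create tank mirror c0t0d0 c0t1d0
  zfs create -V 500G tank/vmstore

  # Turn the zvol into a COMSTAR logical unit and expose it to all hosts
  svcadm enable -r stmf
  stmfadm create-lu /dev/zvol/rdsk/tank/vmstore
  stmfadm add-view 600144F0XXXXXXXXXXXXXXXXXXXXXXXX

  # Enable the SRP target service so ESXi can see it over IB
  svcadm enable -r ibsrp/target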

The drives will just appear on the Storage Adapters tab in vSphere (vCenter) under the connected IB port. From there it is just a case of finding the drives in the Storage tab. Of course, you can also use the LUNs via RDM so they end up formatted in the VM's OS rather than as VMFS.

ConnectX-2 is a nice option if the cash is available.
I have just picked up a couple of low-profile Mellanox MHRH2A-XSR cards for US$65 each on eBay. There are some more out there and they tend to be quite cheap compared to other ConnectX-2 cards. Of course, the mezzanine cards for the Dell C6100s leave the PCIe x16 slot free, and at that price they are a really good deal. Use them with dba's article on applying the firmware to get RDMA and you should be zooming away.
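If you do end up reflashing for RDMA, the flint tool from the Mellanox MFT package handles it (the device name and firmware image below are just examples - check dba's write-up for the right image/PSID for your card):

  # Start the Mellanox software tools and list the devices
  mst start
  mst status

  # Check the current firmware and PSID, then burn the new image
  flint -d /dev/mst/mt26428_pci_cr0 query
  flint -d /dev/mst/mt26428_pci_cr0 -i fw-ConnectX2.bin burn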

The Voltaire 4036 I have at the moment is very noisy (like an unmodified C6100). It can run without fans, but it gets hot pretty quickly. I also have a Flextronics F-X430046 (from dba) which is working well and is fairly quiet, and a SilverStorm 9024 which is also not very noisy, but I have not actually hooked that one up to any servers yet.

RB
 

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
Thanks for all the extra details guys, it really helps us IB newbs ;) I keep waffling on trying to go 10GbE with SFP+, because that's what ... I know from work :) But if I can go 40Gb IB, that has a lot of attraction as well. Plus, I'm here for the same reasons everyone else is - to do stuff that most people wouldn't. So thanks again, there's never enough information!
 

shindo

New Member
May 7, 2013
26
0
1
WI, USA
I just wanted to chime in and say that I was completely unaware that IB even existed before coming here, but I was able to get IPoIB running on ESXi 5.1 pretty easily. Getting it to work on other machines without a switch/subnet manager was more complex, and I still need to figure out how to get RDMA working, but the ESXi part was quite straightforward. Give it a go, it's fantastic to see 40000 Full speed in vCenter/vSphere ;)
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Is QDR cross compatible with DDR? If I had a QDR Switch could I mix DDR and QDR cards connected to it?
Yep, it is backwards compatible across cards and switches, so it will just fall back to the lowest common speed.

I have not tried a QDR switch with multiple DDR cards and two QDR cards. I suspect they will all run at DDR though.

RB
 

BThunderW

Active Member
Jul 8, 2013
242
25
28
Canada, eh?
www.copyerror.com
Oh. I take it you mean that if I connect a DDR card to the switch, all cards will revert to DDR? If that's the case, that's not good. Will definitely have to make sure everything is QDR.

 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
...was able to get IPoIB running on ESXi 5.1 pretty easily...
Which cards are you using?