Fast networking for C6100


root

New Member
Nov 19, 2013
23
0
1
I am working on a separate project (the first one was just a single C6100 for a company-in-a-chassis setup) that will include two C6100 XS23-TY3s (8 nodes total):

3x VMware hosts
3x physical database servers with local storage
2x nodes set as shared storage (NAS) for vSphere

The question now is how to connect them so they can communicate at faster-than-gigabit speeds.

We have two options:

Intel 82599 10GbE Daughter Card - TCK99
Mellanox dual port QDR Infiniband Daughter Card - JR3P1

Which option is better?

Please note that the only switch I currently have is a 48-port gigabit switch with no 10GbE functionality, and I do not have any prior experience with 10GbE or Infiniband. Price is also a factor (that's why all of us buy off-lease C6100s, right?).

What I was thinking is that I could install either of these fast cards in both shared storage nodes alongside the LSI MegaRAID 9260-8i card, and probably also in all 3 VMware host nodes that don't need local storage (I plan to boot ESXi either from USB or, once I'm ready, install it on an iSCSI target off the storage node). I have a license for vSphere Essentials and may later purchase the "Plus" upgrade, which will give me vMotion and HA. That makes 2 (or 5) 10GbE or Infiniband controllers, so I guess I'll need a switch for them that also has to be connected to the existing network.

So, what do you guys recommend? It would be great if you could help me choose the right switch and cables for a particular adapter, and share your thoughts on the whole setup and the specific cards.

Thanks!
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
If you end up wanting Infiniband but can't quite stomach the cost of a QDR switch, take a look at 20Gbit DDR Infiniband instead. DDR is about 2/3 as fast as QDR on a PCIe2 server, the HBAs cost half, and the switches are close to free.
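To put rough numbers on the "about 2/3" claim, here is a back-of-the-envelope sketch only: it assumes 8b/10b encoding on the IB links and knocks roughly 20% off PCIe 2.0 x8 for protocol overhead, both of which are ballpark assumptions rather than measurements.

Code:
# Back-of-the-envelope check of "DDR is ~2/3 of QDR on a PCIe 2.0 server".
# Assumptions: IB 4x link rates with 8b/10b encoding; PCIe 2.0 x8 at
# 5 GT/s per lane, 8b/10b, minus ~20% for TLP/protocol overhead.

def ib_data_rate_gbps(signal_gbps):
    """Usable data rate of an InfiniBand 4x link after 8b/10b encoding."""
    return signal_gbps * 8 / 10

sdr, ddr, qdr = ib_data_rate_gbps(10), ib_data_rate_gbps(20), ib_data_rate_gbps(40)

pcie2_x8_raw = 8 * 5 * 8 / 10            # 32 Gbit/s after encoding
pcie2_x8_practical = pcie2_x8_raw * 0.8  # rough overhead allowance, ~25.6 Gbit/s

qdr_on_pcie2 = min(qdr, pcie2_x8_practical)   # QDR is slot-limited
ddr_on_pcie2 = min(ddr, pcie2_x8_practical)   # DDR is link-limited

print(f"SDR/DDR/QDR data rates: {sdr:.0f} / {ddr:.0f} / {qdr:.0f} Gbit/s")
print(f"QDR capped by PCIe 2.0 x8: ~{qdr_on_pcie2:.1f} Gbit/s")
print(f"DDR vs QDR on PCIe 2.0:    ~{ddr_on_pcie2 / qdr_on_pcie2:.2f}x")  # ~0.6

With those assumptions QDR lands around 25 Gbit/s on a PCIe 2.0 x8 slot while DDR delivers its full 16 Gbit/s, which is where the roughly two-thirds figure comes from.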

 

root

New Member
Nov 19, 2013
23
0
1
If you end up wanting Infiniband but can't quite stomach the cost of a QDR switch, take a look at 20Gbit DDR Infiniband instead. DDR is about 2/3 as fast as QDR on a PCIe2 server, the HBAs cost half, and the switches are close to free.
So you think that Infiniband is better than 10GbE (not just in terms of speed, but in general)?

Could you please share a URL for a QDR switch and a slower one (what are they called, SDR, DDR?) that are compatible with the above adapter?
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
My point is that the little-known and little-discussed Infiniband products are faster and have lower latency than 10GbE, and in many cases can be had for much less money. Drivers and configuration are a bit more complex than with Ethernet, but not by much, and those who really need the performance will find the extra work more than pays off.

Read up, and feel free to ask additional, more specific questions of the STH community. Some URLs:

Voltaire ISR 9024 ISR9024 ISR9024D 24 Port Grid Infiniband SDR DDR Switch 10GB | eBay
VMware KB: Configuring Mellanox RDMA I/O Drivers for ESXi 5.x (Partner Verified and Support)

The excellent Mellanox ConnectX-2 cards use modern QSFP connectors while most inexpensive DDR switches use CX4 connectors. Fear not: just buy cables with QSFP on one end and CX4 on the other.

 

root

New Member
Nov 19, 2013
23
0
1

Good point :).

I don't mind playing a bit more with driver installs if I can get more out of a technology that is available, officially supported under ESXi 5.1+, and faster than 10GbE. I can actually see the benefit of using Infiniband vs Ethernet after reading a couple of articles on Wikipedia and Network World.

The Infiniband card is about $80 cheaper than the Intel 10GbE one, and the switch you sent me is also less expensive than a comparable 10GbE switch, as far as I can see.


OK, so on the VMware hosts I'll have to install the IB driver. How about shared storage? What do you recommend using/setting up so I can boot my ESXi from storage connected via the Mellanox ConnectX-2 cards? I was originally thinking about setting up a NAS and using iSCSI.

Also, if the storage is built using 7200rpm drives, isn't it too "slow" for IB? Maybe I should just get a quad-port gigabit card or 10GbE? I'm sure there is a benefit to using IB with SSDs, but with mechanical drives...?
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Good point :).

...
How about shared storage? What do you recommend using/setting up so I can boot my ESXi from storage connected via the Mellanox ConnectX-2 cards? I was originally thinking about setting up a NAS and using iSCSI.

Also, if the storage is built using 7200rpm drives, isn't it too "slow" for IB?
...
Whatever shared storage you use, perhaps ZFS or VMware virtual SAN or StarWind, it will definitely use RAM as a cache, and you will likely decide to add an SSD or two as additional cache, both of which will greatly improve throughput and IOPS far above simple disk speed.
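As a very rough illustration of why the spindles themselves aren't the limiting factor, here is a sketch that assumes something like 150 MB/s sequential per 7,200rpm drive and an 8-drive array behind the 9260-8i; both figures are placeholders, not measurements, and the link rates are idealised.

Code:
# Rough sketch: can an array of 7,200rpm drives make use of a >1 Gbit link?
# Assumptions: ~150 MB/s sequential per drive, 8 drives in the array,
# and idealised link capacities converted to MB/s.

drive_seq_mb_s = 150      # assumed per-drive sequential throughput
drives = 8                # assumed array size on the RAID controller
array_mb_s = drive_seq_mb_s * drives

links_mb_s = {
    "1 GbE":         125,
    "10 GbE":       1250,
    "DDR IB (16G)": 2000,
    "QDR IB (32G)": 4000,
}

print(f"Array sequential throughput: ~{array_mb_s} MB/s")
for name, cap in links_mb_s.items():
    status = "the bottleneck" if cap < array_mb_s else "not the bottleneck"
    print(f"{name:>13}: {cap:>5} MB/s -> link is {status}")

Those are sequential numbers only; random I/O from spinning disks is far lower, which is exactly where the RAM and SSD cache earn their keep.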
 

tjk

Active Member
Mar 3, 2013
481
199
43
OSNexus/QuantaStor has native SRP support for Infiniband, or you can do iSCSI or NFS using IPoIB.

I've used them in the past and it is stable; it's based on Ubuntu 12.04. They have a 30-day trial you can download.

Tom
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Solaris has all the packages already installed apart from SRP, which is very easy to add. It just takes a few commands to get the targets out there and available. The 'free' storage appliances/flavours should also have most of the functionality. Linux requires quite a bit more work if you want iSER/SRP.

You need to have a subnet manager running on one of your boxes for the IB network to run (it manages the routing). There is not one as standard for Solaris or ESXi, but someone has taken the time and trouble to compile one and has made it available for ESXi 5.1. I posted about it here, so just do a search. From what I hear it is not compatible with ESXi 5.5, though.

Windows has a subnet manager as part of its IB drivers but does not support SRP (the drivers were removed from the packages). Linux also has a subnet manager available.

The advantage of using something like RDMA/SRP/iSER is that you bypass the kernel IP stack, which reduces latency significantly and makes it great for remote shared storage.

RB
 

33_viper_33

Member
Aug 3, 2013
204
3
18
DBA,

Awesome knowledge and advice as always. The VMware KB link was very informative. Looking forward to giving this a try with the two cards you sent me under ESXi.

I had a chance to install them on two C6100 nodes last night, running two instances of Windows 7 bare metal with Primo Ramdisk. I'm seeing 6-7Gbps throughput, or about 800-900MB/s, copying 14.5GB worth of files from one to the other. Still not as fast as I was looking for, or as fast as you achieved. Also note that I haven't done any tweaking whatsoever yet; suggestions welcomed. I'm guessing my speed issues have to do with not using RDMA or the like to keep the kernel out of the mix. I need to learn how to use Iometer and do some further testing. I checked the IRQs, which appear OK. The card uses IRQ -3 through -8 (best recollection without having it in front of me). The negative numbers threw me for a loop, since I haven't played with IRQs since pre-Windows XP days.
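Quick sanity check on those numbers, treating MB/GB as decimal units and taking 850 MB/s as the midpoint of the reported range (assumptions, not new measurements):

Code:
# Unit-conversion sanity check of the figures reported above.
copy_gb = 14.5          # size of the test copy
observed_mb_s = 850.0   # midpoint of the reported 800-900 MB/s

gbps = observed_mb_s * 8 / 1000
seconds = copy_gb * 1000 / observed_mb_s

print(f"{observed_mb_s:.0f} MB/s ~= {gbps:.1f} Gbit/s")           # ~6.8 Gbit/s
print(f"{copy_gb} GB at that rate ~= {seconds:.0f} s")            # ~17 s
print(f"Headroom vs QDR data rate (32 Gbit/s): ~{32 - gbps:.1f} Gbit/s")

So 800-900 MB/s and 6-7 Gbps are the same measurement, and there is still a lot of headroom below the QDR data rate.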

I'm assuming that setting up RDMA on ESXi is only one step in addition to the guest OS's support/setup, correct? Do the vNICs support RDMA, or do I need to use SR-IOV?
 

33_viper_33

Member
Aug 3, 2013
204
3
18
From what I hear it is not compatible with ESXi 5.5, though.
I have the OpenSM package working under ESXi 5.5.
 

root

New Member
Nov 19, 2013
23
0
1
What's the difference between these: Voltaire ISR 9024, Voltaire ISR 9024M, Voltaire ISR 9024S and Voltaire ISR 9024D-M? The Voltaire ISR 9024 and ISR 9024S are the cheapest on fleabay. The "D" version appears to be 20Gbps... From what I've read, I need a subnet manager, which should be either integrated in the switch or running on a server.
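For reference, the SDR/DDR/QDR names map onto 4x link rates roughly like this; these are generic InfiniBand figures with 8b/10b encoding, nothing specific to the Voltaire model suffixes:

Code:
# Generic InfiniBand 4x link rates: per-lane signaling x 4 lanes,
# with the 8b/10b encoding overhead removed to get the usable data rate.
rates_per_lane_gbps = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

for gen, lane in rates_per_lane_gbps.items():
    signal = lane * 4        # a 4x link, as used by these mezzanine cards
    data = signal * 8 / 10   # usable data rate after 8b/10b encoding
    print(f"{gen}: {signal:.0f} Gbit/s signaling, ~{data:.0f} Gbit/s data")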

Can I reset the password on these switches if it is unknown? Some sellers cannot guarantee that the switch comes with the default password.

One more question: if I connect all the hosts/SAN/NAS to the 9024, how will they communicate with the "standard" gigabit network? Via the Ethernet controller(s) on the hosts?

How about using 10GbE Dell mezzanine cards instead of IB (I have two hosts and two SANs on a single C6100) and connecting them to an HP H3C A5120-24G EI Layer 3 1U switch or something similar? Wouldn't that be easier to set up and manage? Or maybe I could go directly with these Intel E10G42BT controllers and the Netgear XS712T 10Gbit switch (not actually sure if that NIC will fit)?
 