10-Gigabit Ethernet (10GbE) Networking - NICs, Switches, etc.


33_viper_33

Member
Aug 3, 2013
mrkrad,

Thanks for the info. In reference to the CNA adapters, can I use any SFP+ optic, or are there compatibility issues to watch out for? It appears that SFP+ copper can be used for short runs, but fiber is required for longer runs.

I've been using 2x Intel X540-T2 cards with great success. That setup is awesome, but I would now like to link my Dell C6100's 4 nodes via 10GbE. These Intel cards are a bit over my price range for 4 nodes, so I'm just looking for other options.

-V
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
33_viper_33 said: (quoted above)
You can sell those cards and buy DDR InfiniBand: 20Gbps, $75 for dual-port cards, and $200 to $300 for a switch.
 

mrkrad

Well-Known Member
Oct 13, 2012
Brocade cards are pissy about optics - avoid them. Some older QLogic cards require active cables at longer lengths (>5m).

Otherwise you only need to be concerned with the switch's requirements (if relevant).

A DAC cable is just 4 well-shielded, high-quality wires.

Intel/Emulex - you won't have to worry about driver or firmware support for the next 6 years.
 

snclawson

Member
Feb 7, 2013
mrkrad said: (quoted above)
Ok, so avoid the Brocade 1020's?

Do you know anything about the difference in functionality between the Emulex LightPulse LP21002 and the OneConnect OCe10102-FM cards, other than the OneConnect being much more compact and newer, and the fact that the former is currently $100 and the latter $185 (with SFPs!)? All I really need at the moment is 10G Ethernet connectivity, but FCoE/PFC/QCN support would be a plus - not so much to actually use, but for testing switches to make sure they actually support it.

Thanks!
 

mrkrad

Well-Known Member
Oct 13, 2012
OneConnect is an Ethernet card with extra functionality unlocked by license keys; LightPulse is an HBA.

You can turn everything off on a OneConnect (FCoE/iSCSI), but I'm not sure you can do Ethernet on an HBA.

Try the OEM variants (I flashed my Dell X320m models with straight-up Emulex firmware and it works fine).
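If you're not sure which silicon an OEM card actually carries before flashing, the PCI class tells the two families apart: OneConnect enumerates as an Ethernet controller, LightPulse as a Fibre Channel HBA. A quick sketch from any Linux box (the vendor strings are assumptions - OneConnect silicon often reports as ServerEngines rather than Emulex):

# OneConnect shows up as an Ethernet controller; a LightPulse HBA shows
# up as a Fibre Channel controller, so the PCI class separates them.
lspci -nn | grep -i -e emulex -e serverengines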
 

snclawson

Member
Feb 7, 2013
They're both CNAs, so you should be able to do plain Ethernet with the LightPulse card.

Looking through the installation instructions, I came across a description of it. All it is is a card with a 10Gb Ethernet chip, a 4Gb FC chip, an FCoE-to-Ethernet bridge chip, and a PCIe switch! Yikes! That's certainly one way to do it, but it's got to be a bit of a pig energy-wise with all that hardware on there!

The OneConnect cards integrate everything into a single chip, so they're definitely the way to go. I'll look for some OEM versions. Thanks!
 

mrkrad

Well-Known Member
Oct 13, 2012
Okay, let me explain: DAC means direct-attach copper - no SR or LR optics involved.

The SFP+ are just modules, aka GBICs. VERY EXPENSIVE modules. Each one converts the signal in a different way (10.3Gbps SR, 4x 2.5Gbps CX4, 10Gbps scrambled 10GBASE-T) - or you just use DAC. DAC is just 4 wires. DUMB MODE. No scrambling, no serialization, just 4 wires with magical unicorn powder that can push 10Gb pretty far (gigabit Ethernet BASE-T uses 8 wires!!).

Remember, Emulex sold HBAs until OneConnect. The older cards may not do Ethernet.

OneConnect rocks: one driver for all cards, one firmware per generation. All of them support DAC or any optics (MSA compliant).

I picked up 24 DAC cables new for $120 :) - nice green-colored ones, a mix of 1, 3, and 5 meter. Talk about the deal of the century for brand new!!

The OCe10102 comes as a NIC, plus keys to enable FCoE/iSCSI.

The OCe11102 is the same plus SR-IOV -> virtual functions, i.e. each port presents 4 NICs, each tagged with a VLAN ID, and your server thinks each one is a real NIC. Nice in theory, but trouble in reality.


Truth is, the CNA functions suck and are buggy. Use the CNA as just a plain old NIC and it will rock. The best part, unlike other CNAs, is that you can disable FCoE/iSCSI so the card doesn't boot up and DHCP an IP address on its own.


OneConnect:
1. One driver to rule them all.
2. Flash from ESXi, Linux, Windows, or DOS.
3. vCenter plug-in to manage all features from vCenter!! Awesome!
4. One flash package per series. Ability to flash any OEM card to standard Emulex firmware (newer drivers).
5. Windows 7/8 support (ahem, Broadcom/QLogic and their stupid Server 2008 requirement).
6. Solid as a rock - I don't use the fancy features anymore, since the NICs are so damn cheap and the wiring is so damn cheap.
7. Works in an x4 PCIe slot no problem. Just run one port active and one standby - but you have the option! (See the sketch after this list.)
8. No fan!
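For point 7, a minimal sketch of the one-active/one-standby layout on a standard vSwitch in ESXi 5.x (vSwitch1, vmnic2, and vmnic3 are placeholder names for your OneConnect uplinks):

# Run one OneConnect port active and the other standby on the vSwitch.
esxcli network vswitch standard policy failover set \
    --vswitch-name vSwitch1 \
    --active-uplinks vmnic2 \
    --standby-uplinks vmnic3

# Verify the resulting policy.
esxcli network vswitch standard policy failover get --vswitch-name vSwitch1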

Avoid the older LightPulse, man, trust me. Get the OneConnect, investigate all the OEM options, and you'll find what you need.

BE WARNED! There are two custom OEM cards that will not work. The HP card with FUNCTION LEVEL RESET may not work (or will work at x4 at best), and the IBM "x9" card's edge connector is longer than standard PCIe (count the pins) - they use the extra x1 worth of pins to do IPMI/iLO over the NICs.
 

BThunderW

Active Member
Jul 8, 2013
Canada, eh?
www.copyerror.com
The best bang for the buck is probably the Netgear XS708E. They're selling at retail for less than most switches on eBay - I've seen them for about $800 on sale. It's only an 8-port switch, but it's a good entry point, and buying new at that price gets you a full warranty, which you won't get on eBay.

This is what I was planning to go with until I got converted to the dark side (IB).

Anyone manage to find a cheap 10GbE switch on eBay?
 

mini-me01

New Member
Aug 6, 2013
mrkrad said: (quoted above)
I have two OCe10102-F cards that I picked up on eBay and flashed to the latest Emulex firmware. I have them directly connected to each other with a DAC cable: one in my ZFS box and one in my ESXi box. Do you think I should be using the software iSCSI client in VMware with the CNA as a NIC, or turn on the iSCSI HW config in these cards?
 

mrkrad

Well-Known Member
Oct 13, 2012
Yes - use the setting to disable iSCSI/FCoE mode. Set the card to NIC ONLY.

1. Make sure you also match the driver to the firmware! The regular driver is 4.6 and the OEM one is 4.2 -> make sure your version numbers match!!

2. Yes, use the software iSCSI client - it supports VAAI and round robin, definitely. With the $70 L5639 you can afford the CPU, and the more cores you have to process the RX/TX rings, the better! The iSCSI HBA mode is crap, trust me! You can follow the multipathing guide to do round robin to ZFS, and dial in the round robin by setting how many IOPS and/or bytes go down a path before flopping over to the next (see the command sketch after this list).

3. Disable flow control. If you are using the E1000e virtual NIC, tweak it by disabling flow control and increasing the buffers and ring queues.

4. Don't bother with jumbo frames; the Emulex NIC is very good at coalescing interrupts.

5. PRO TIP -> align IRQs in the BIOS. It matters. Disable everything you don't need (e.g. serial ports, SATA CD-ROM), push all the idle junk to lower-priority IRQs (10, 11), move the NIC and RAID controller to 5 and 7, and try to keep them separated (it's hard, but trust me here!).

6. Disable all power management (C-states, P-states, C1E), aka set the BIOS to "MAX MAX MAX" - and make sure the card stays cool: they will disable themselves or throttle if they get too hot!

7. Disable power management in the guest OS too!

8. SSH into ESXi 5.1 and check your messages (dmesg, cat /var/log/vmkernel.log and /var/log/vmkwarning.log) and look for issues - e.g. it will tell you if you put the card in an x4 slot.

9. Learn ethtool (ethtool -S vmnic0, etc.). Set up a VM to run the Emulex OneConnect manager for ESXi and set up the plug-in so you can manage the NIC in vCenter!

10. Read the docs! Pinning the VMs to the same cores as the card can give massive performance gains. Check out SPEC benchmark disclosures for tips on setting up advanced features. Google things like low-latency tuning for ESXi 5.1 (if necessary!) - and obviously make sure VMware Tools is up to date.
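A command sketch for the round-robin tuning in point 2 and the checks in points 8 and 9, as it would look on ESXi 5.1; the naa.* device ID and the vmnic number are placeholders for your own setup:

# Point 2: set the path selection policy to round robin for the ZFS LUN
# (get the real device ID from 'esxcli storage nmp device list').
esxcli storage nmp device set --device naa.600144f0xxxxxxxx --psp VMW_PSP_RR

# Flop over to the next path every N I/Os; 1 is a common starting point,
# then dial it in by IOPS or bytes as described above.
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.600144f0xxxxxxxx --type iops --iops 1

# Points 8 and 9: scan the logs and per-NIC counters for trouble.
grep -i emulex /var/log/vmkernel.log
tail /var/log/vmkwarning.log
ethtool -S vmnic2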

VMXNET3 defaults to one core in the guest NIC configuration, and it doesn't have guest flow control. It is faster in most cases, but not all.

Tuning round robin is a PITA, since you are tuning for either sequential IOPS or random IOPS - most folks who want benchmark-queen numbers will tune for sequential at the cost of random I/O.

So yeah, use the software iSCSI - it has VAAI and is superior, since you probably have more CPU than you need.

Use a RAM disk (StarWind has a free one) and benchmarks like Anvil, ATTO, or CrystalDiskMark to tune settings, since an SSD is usually too slow.

What I found was that the fancy LSI MegaRAID was slow at random I/O. The Intel 6-port SATA controller is fastest (HP VSA), then the dumb LSI 2308 (HP VSA), then the MegaRAID 9260/9266/9271.

Let me know what you are trying to do (benchmark queen, SQL Server, etc.) so I can give you more specific tips.

It's EASY to make big numbers using ATTO, but that is not realistic for most folks!!

Remember there is overhead for VLAN/QoS/SIOC/NIOC - baseline without any of that first!

Great cards, but as with all cards, getting real-world peak performance is quite a bit harder than using ATTO's sequential I/O to prove your card goes super fast, lol.
 

mrkrad

Well-Known Member
Oct 13, 2012
Compare iSCSI to NFS and let us know what you get!!

Remember, ESXi is designed out of the box for many VMs running at once - not one VM going fast. Always baseline with 1 VM on 1 host, then start adding more hosts! A quick way to stand up the NFS side for the comparison is sketched below.
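A minimal sketch for mounting a ZFS NFS export as a datastore with ESXi 5.x esxcli (the host IP, share path, and volume name are all placeholders):

# Mount the ZFS box's NFS export as a datastore for the comparison.
esxcli storage nfs add --host 10.0.0.5 --share /tank/vmstore --volume-name zfs-nfs

# Confirm it mounted.
esxcli storage nfs list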
 

33_viper_33

Member
Aug 3, 2013
How do these FC and IB cards interact with ESXi? Does ESXi see them as a NIC or a storage controller? Is there a way to attach them to the vSwitch?

-V


EDIT: Found the answer. RTFM! "When such adapter is installed, your host detects and can use both CNA components. In the vSphere Client, the networking component appears as a standard network adapter (vmnic) and the Fibre Channel component as a FCoE adapter (vmhba). You do not need to configure the hardware FCoE adapter to be able to use it."
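For anyone else looking, a quick sketch of how the two CNA components show up from the ESXi 5.x shell (adapter numbers will vary):

# The networking component appears as a vmnic:
esxcli network nic list

# The FCoE/storage component appears as a vmhba:
esxcli storage core adapter list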
 

mrkrad

Well-Known Member
Oct 13, 2012
10GbE NICs over copper Cat6 seem expensive?
Any cables except Cat6 seem expensive!! Go price a pair of 10GBASE-SR optics plus fiber (OM3, laser-optimized), or some InfiniBand cable, and you'll be really shell-shocked!
 

mini-me01

New Member
Aug 6, 2013
mrkrad said: (quoted above)
Thanks! This is good advice and I will report back when I benchmark my options.