Just so we're clear - CNAs (converged network adapters) from Brocade and Emulex are awesome performers. Nobody wants them because most people don't know what a CNA is and don't want FCoE/iSCSI.
But they are stupid fast, and dirt cheap.
$75 to $100 without SFP+ transceivers, $150-200 with SR.
"But they require DCB TOR (top of rack) switches! for FCoE/ISCSI" - yes and they only work with really expensive switches for a few SAN's.
But who the feck cares? You can pipe in a passive HP SFP+ DAC cable and plug it into the 10GbE SFP+ ports on that switch.
What switch?
Dude, there are tons of $200-500 switches on eBay with 24 to 48 gigabit ports and 2 or 4 10GbE SFP+ uplink ports! Duh!
"I just want to hook up two pc's and go fast!" - yeah so what? Get the supported DAC cable or 10GBASE-SR transceivers and fiber and go to town, they all work fine in PC to PC mode.
What about iSCSI/FCoE? Fuggedaboutit. Unless you have a Brocade/Nexus FCoE switch that breaks out FC to a supported SAN, just pretend that part of the card doesn't exist.
So why are they so cheap?
"Because people are dumb and don't realize like LSI controllers you can cross flash them, and you can very much use them as standard accelerated tcp/ip"
Anything else I can do with these?
"Yes, some have custom APPS/Firmware, to do userland SDK programming, for traders on wall street. The main cool feature is multiple vnic's. A dual port nic will present itself as 8 nic's to your o/s, and you can configure the bandwidth. WITHOUT DCB (most of the time)".
"I heard about this with HP VirtualConnect and IBM Virtual fabric! - duh because both use the same emulex OneConnect nic which has that feature, 4 vnics per physical nic, plus ISCSI or FCOE if you want it (DCB required!)"
"So basically brocade,qlogic, and emulex just have co-branded adapters?" - Yes and unlike raid cards, they even post the cross-reference OEM part to their own part. No shame there.
Why would you want 4 vNICs? Because in virtualization, it is much faster to bind a VM to its own "adapter" with its own LUN.
Think about divvying up a RAID card or a network card: there has to be a fair-share method that deals with bandwidth hogs, and it causes world switches when you use up your "share".
So if RAID cards virtualized their storage like these CNA cards, we'd get far superior performance?
Pretty much. Anyone who has put multiple RAID cards in a VM host realizes the contention comes from sharing one RAID card. Give each VM its own RAID card and disks and watch each VM run at 95% of native physical speed. Same for Ethernet. Sharing has tremendous overhead that a 1:1 mapping can just ignore completely.
So every time you pass up that "fabric adapter" for $100, remember it is a $1000+ card that only a few of us know works just fine as a regular NIC if you can disregard the FCoE/iSCSI section. Heck, some cards may not even have those features enabled, because they can be sold under a "feature key" $$$ model. A few to look for:
Brocade 1010/1020
HP NC550/552
QLogic 8152
Emulex OneConnect
What else can a vNIC card do? Well, the newer SR-IOV ones can bypass the vswitch, or run a simple vswitch in the card itself. If you are trying to poop iSCSI out to others, bypassing the vswitch may give a latency advantage, since you can go directly from the VM (VSA, Nexenta, OpenFiler, FreeNAS) to the network. Much like VT-d to a RAID card.
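On Linux, the SR-IOV carve-up is just a sysfs poke. A minimal sketch, assuming the card's driver supports SR-IOV and enumerates as "eth0" (substitute yours), run as root:

```python
from pathlib import Path

dev = Path("/sys/class/net/eth0/device")

# How many virtual functions does the card advertise?
total = int((dev / "sriov_totalvfs").read_text())
print(f"card supports up to {total} virtual functions")

# Carve out 4 VFs; each appears as its own PCI device you can hand
# straight to a VM, skipping the vswitch entirely.
(dev / "sriov_numvfs").write_text("4")
```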
Is DCB required for vNICs? Not always; most cards just VLAN-tag the packets for you. Assuming you can afford that $200 24-port gigabit switch with dual SFP+ 10GbE uplinks, it can strip the VLAN tags off and route them to separate ports/port groups, or pass them on.
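No DCB anywhere? You can fake the same separation in software with plain VLAN sub-interfaces. A sketch with the pyroute2 library - the "eth0" name, VLAN ID 10, and the address are all made up for the example:

```python
from pyroute2 import IPRoute  # pip install pyroute2

ipr = IPRoute()
parent = ipr.link_lookup(ifname="eth0")[0]

# Create a tagged sub-interface; the switch can strip the tag and map
# VLAN 10 to its own port group, or just pass it along.
ipr.link("add", ifname="eth0.10", kind="vlan", link=parent, vlan_id=10)
vlan = ipr.link_lookup(ifname="eth0.10")[0]
ipr.addr("add", index=vlan, address="192.168.10.1", prefixlen=24)
ipr.link("set", index=vlan, state="up")
```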
Why does 10GBASE-T suck? Power. It used to take 25 watts per port to drive it. That LSI 9266 or P420 RAID card uses 14 watts; go put your hand on the heatsink of the RAID chip and get back to me.
What else makes 10GBASE-T suck? Latency: 8 times (or more) that of DAC or fiber. Latency kills speed once you start adding up the hops, and it keeps you from ever hitting peak throughput.
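Some napkin math on why hops hurt: synchronous I/O (think iSCSI 4K reads) waits out a full round trip per operation, so every microsecond of per-hop PHY latency comes straight out of your IOPS. All the numbers below are rough assumptions for illustration, not benchmarks:

```python
# Assumed per-traversal PHY latencies, ballpark spec-sheet figures.
PHY_10GBASE_T_US = 2.5   # early 10GBASE-T PHYs
PHY_DAC_US = 0.3         # SFP+ direct-attach copper
BASE_RTT_US = 30.0       # assumed host stack + target stack + queueing

def sync_iops(phy_us: float, hops: int) -> float:
    # Each hop is crossed twice per round trip, one op per round trip.
    rtt = BASE_RTT_US + phy_us * hops * 2
    return 1_000_000 / rtt

for hops in (1, 2, 3):
    print(f"{hops} hop(s): DAC ~{sync_iops(PHY_DAC_US, hops):,.0f} IOPS, "
          f"10GBASE-T ~{sync_iops(PHY_10GBASE_T_US, hops):,.0f} IOPS")
```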
"But fiber is so expensive!" Bull-shizzle. SFP+ DAC cables are $29+ on eBay; they go in place of the transceivers that all (except 10GBASE-T) 10GbE NICs use. SFP+ transceivers are like $39+ on eBay: two of those plus a short fiber cable ($10) and you can rock out with 10GBASE-SR.
GBICs from the prior generation were $1000; SFP+ transceivers are dirt cheap. DAC cables are just twinax with an SFP+ plug on each end, and they look very much like an external SAS cable or the old SFF-8470 InfiniBand cables.
Always consider that there is $100 of value in getting two working, compatible transceivers; most sellers split the baby and sell the SFP+ transceivers separately.
Contrary to popular belief, most of the popular cards I've tested work with HP SFP+ DAC cables. The only thing I've run into that won't accept a non-HP cable, so far, is my HP switches.