10-Gigabit Ethernet (10GbE) Networking - NICs, Switches, etc.


PigLover

Moderator
Jan 26, 2011
3,188
1,548
113
This is actually one of the few "good" applications of link aggregation I've seen discussed on various forums. Collecting traffic from 20 rendering clients over 4x10GbE links at the shared server node should work exceptionally well.
 

renderfarmer

Member
Feb 22, 2013
249
1
18
New Jersey
This is actually one of the few "good" applications of link aggregation I've seen discussed on various forums. Collecting traffic from 20 rendering clients over 4x10GbE links at the shared server node should work exceptionally well.
Awesome! Finally some good news. Now I just hope that the Supermicro 4-port card doesn't cost a fortune... It's a brand-new product based on a new Broadcom IC, so I'm optimistic. I'll find out this week, hopefully.

Thanks for your encouraging feedback.
 

cactus

Moderator
Jan 25, 2011
830
75
28
CA
Can anyone clue me in on some of the pitfalls of teaming four 10GbE SFP+ ports to feed a cluster of 20 servers connected through a 10GbE switch?

I'm interested in connecting my file server to 10 Supermicro TwinBlade servers through a Supermicro 10GbE blade switch. It has 4 external SFP+ uplink ports - No 40GbE option is available...

I just discovered that Supermicro makes a 4-port 10GbE PCIe 3.0 card based on a Broadcom IC that could fit the bill, but I have no practical experience with link aggregation, let alone SFP+ link aggregation.

My application is 3D rendering in a Windows 2008R2 environment using TCP/IP.

Any advice is much appreciated.
With link aggregation your max speed will be that of a single link. Most of the time people try to use link aggregation to speed up a single computer/client to a single server. In that situation it will not improve throughput; one exception is SMB3. For you, link aggregation will work well, as PigLover said. With multiple clients, the load will be spread across the four links. You will still only get 10Gb from one client, but they will only have 10Gb links anyway. The only drawbacks I can see are the extra cabling and that you are limited to 4x 10Gb vs. dual 40Gb links to your file server. The latter is most likely a non-issue because after you initially kick off a job, not all 10 render nodes will be hitting the storage server, unless your application is really good at breaking a job up equally. Also, if you can saturate > 40Gbps, you have an amazing system.
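A rough way to picture that per-flow behavior: a minimal Python sketch (the IPs, ports, and the MD5 stand-in hash are placeholders, not any particular switch's or NIC team's algorithm) that maps flows onto a 4-link team. One client-to-server flow always lands on the same link, so it never exceeds 10Gb, while 20 render nodes get spread across all four links.

Code:
# Toy model of per-flow link-aggregation hashing - illustrative only; real
# switches/NIC teams use their own hash (e.g. src/dst MAC, IP and/or TCP port).
import hashlib
from collections import Counter

LINKS = 4  # the 4x 10GbE team feeding the file server

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Hash the flow's addressing fields onto one member link."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % LINKS

# A single client <-> server flow always hashes to the same link: capped at 10Gb.
print(pick_link("10.0.0.21", "10.0.0.1", 49152, 445))

# Twenty render nodes hitting the same server get spread across all four links.
nodes = [f"10.0.0.{i}" for i in range(21, 41)]
print(Counter(pick_link(ip, "10.0.0.1", 49152, 445) for ip in nodes))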
 

renderfarmer

Member
Feb 22, 2013
249
1
18
New Jersey
With link aggregation your max speed will be that of a single link. Most of the time people try to use link aggregation to speed up a single computer/client to a single server. In that situation it will not improve throughput; one exception is SMB3. For you, link aggregation will work well, as PigLover said. With multiple clients, the load will be spread across the four links. You will still only get 10Gb from one client, but they will only have 10Gb links anyway.
Thanks for the clarification. This sounds like it will work really well for me.

The only drawbacks I can see are the extra cabling and that you are limited to 4x 10Gb vs. dual 40Gb links to your file server.
On the upside, there should be some failover there in case one link were to die, or am I wrong? I do wish SM had just put a 40GbE uplink on their switch... That would have made things much simpler.

The latter is most likely a non-issue because after you initially kick off a job, not all 10 render nodes will be hitting the storage server, unless your application is really good at breaking a job up equally.
You're exactly right. When an animation job is kicked off all of the slaves are assigned a frame to render for which they then read assets (textures, geometry, particle caches, etc...) from a shared folder. Once they have the assets in RAM they just work in isolation until the frame is done, at which point they write the finished image back to the file server and are assigned another frame to render. So after the initial glut the network traffic is limited to 1 or 2 slaves at a time.

Also, if you can saturate > 40Gbps, you have an amazing system.
If I were to max out my LSI controller with 8 SSD drives it still wouldn't come close to saturating 40GbE... That said, I do plan on connecting my workstation to the file server via 40GbE directly using two NICs, but the slaves wouldn't benefit nearly as much from that.
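For rough numbers - the per-drive and HBA figures below are assumptions, not measurements from this system - eight SATA SSDs behind a single PCIe 2.0 x8 HBA top out well under 40Gb/s:

Code:
# Back-of-envelope storage vs. network check. The ~500 MB/s per SATA SSD and
# ~3.2 GB/s usable PCIe 2.0 x8 ceiling are assumptions, not measured values.
ssd_count = 8
ssd_mb_s = 500            # per-drive sequential throughput (assumed)
hba_limit_mb_s = 3200     # practical PCIe 2.0 x8 limit (assumed)

disk_mb_s = min(ssd_count * ssd_mb_s, hba_limit_mb_s)
print(f"~{disk_mb_s * 8 / 1000:.1f} Gb/s from storage vs 40 Gb/s of network")
# -> ~25.6 Gb/s, so the 40GbE side would not be the bottleneck.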

Thanks again.
 

PigLover

Moderator
Jan 26, 2011
3,188
1,548
113
You've hit on the major advantages of Link Aggregation when used in an application like yours:

- 20 links from separate clients collected together onto an n-way group of links to the server (rather than having 20 separate links on the server)
- resiliency in the face of a single link failure (though you give this up somewhat if you go with the 4-way SM card, since all four links are likely to fail together)

As Cactus points out, the usual misconception about Link Aggregation is that it will speed up transfers between a single client and a single server. It won't - but in your case that isn't what you are trying to do.
 

awedio

Active Member
Feb 24, 2012
776
225
43
With link aggregation your max speed will be that of a single link. Most of the time people try to use link aggregation to speed up a single computer/client to a single server. In that situation it will not improve throughput; one exception is SMB3. For you, link aggregation will work well, as PigLover said. With multiple clients, the load will be spread across the four links. You will still only get 10Gb from one client, but they will only have 10Gb links anyway. The only drawbacks I can see are the extra cabling and that you are limited to 4x 10Gb vs. dual 40Gb links to your file server. The latter is most likely a non-issue because after you initially kick off a job, not all 10 render nodes will be hitting the storage server, unless your application is really good at breaking a job up equally. Also, if you can saturate > 40Gbps, you have an amazing system.
Any possibility of "upgrading" o/s to Win8/Server 2012?

The hardware you have planned would be a "screamer". With SMB3, your 4 x 10Gb becomes a 40Gb link.
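As a toy model of what Multichannel changes (the link count and speeds are just the numbers from this thread, not a benchmark): a pre-SMB3 copy is pinned to one TCP connection on one NIC, while SMB3 Multichannel opens connections across every usable NIC and stripes the same transfer over them.

Code:
# Toy throughput model of SMB3 Multichannel - illustration only. Real SMB3
# discovers the server's interfaces and stripes one session's I/O across
# multiple TCP connections (multiple NICs, or multiple RSS queues on one NIC).
LINK_GBPS = 10
NICS = 4

def smb2_copy_ceiling():
    # Classic SMB: one TCP connection, bound to one 10Gb interface.
    return LINK_GBPS

def smb3_multichannel_ceiling(nics=NICS):
    # One session, one or more connections per NIC, so a single file copy
    # can use the whole 4x10Gb team.
    return nics * LINK_GBPS

print(f"SMB 2.x copy ceiling : {smb2_copy_ceiling()} Gb/s")
print(f"SMB3 Multichannel    : {smb3_multichannel_ceiling()} Gb/s")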
 

renderfarmer

Member
Feb 22, 2013
249
1
18
New Jersey
Any possibility of "upgrading" o/s to Win8/Server 2012?

The hardware you have planned would be a "screamer". With SMB3, your 4 x 10Gb becomes a 40Gb link.
It's currently running Win2012, but I'm getting comparatively terrible networking performance with my InfiniHost III NICs. So I framed my question around Win2008R2 as a fallback in case I have to use that.

I'm hoping that with state-of-the-art ConnectX-3 cards, Win2012 will behave better.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,242
52
48
Just so we're clear - CNA adapters from Brocade and Emulex are awesome performers. Nobody wants them because most people don't get CNAs and don't want FCoE/iSCSI.

But they are stupid fast, and dirt cheap.

$75 to $100 without SFP+ transceivers, $150-200 with SR.

"But they require DCB TOR (top of rack) switches! for FCoE/ISCSI" - yes and they only work with really expensive switches for a few SAN's.

But who the feck cares? You can pipe in a passive HP DAC SFP+ cable and plug it into the 10GbE SFP+ ports on that switch.

What switch?

Dude, there are tons of $200-500 switches with 24 to 48 gigabit ports and 2 or 4 10GbE SFP+ uplink ports on eBay! Duh!

"I just want to hook up two pc's and go fast!" - yeah so what? Get the supported DAC cable or 10GBASE-SR transceivers and fiber and go to town, they all work fine in PC to PC mode.

What about iSCSI/FCoE? Fuggedaboutit - unless you have a Brocade/Nexus FCoE switch that breaks out FC to a SAN (that is supported), just pretend it doesn't exist.

So why are they so cheap?

"Because people are dumb and don't realize like LSI controllers you can cross flash them, and you can very much use them as standard accelerated tcp/ip"

Anything else I can do with these?

"Yes, some have custom APPS/Firmware, to do userland SDK programming, for traders on wall street. The main cool feature is multiple vnic's. A dual port nic will present itself as 8 nic's to your o/s, and you can configure the bandwidth. WITHOUT DCB (most of the time)".

"I heard about this with HP VirtualConnect and IBM Virtual fabric! - duh because both use the same emulex OneConnect nic which has that feature, 4 vnics per physical nic, plus ISCSI or FCOE if you want it (DCB required!)"

"So basically brocade,qlogic, and emulex just have co-branded adapters?" - Yes and unlike raid cards, they even post the cross-reference OEM part to their own part. No shame there.

Why would you want 4 vNICs? Because in virtualization it is much faster to have a VM bound to its own "adapter" with its own LUN.

Think about divvying up the RAID card or network card: there has to be a fair-share method that deals with bandwidth hogging, and it causes world shifts when you use up your "share".
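A toy sketch of that carve-up (the vNIC names and weights below are made up, and real cards enforce the shares in firmware, not in software like this):

Code:
# Toy weighted split of one 10Gb port into four vNICs - illustrates the
# "configure the bandwidth per vNIC" idea, not any vendor's actual scheduler.
PORT_GBPS = 10.0
weights = {"vnic-mgmt": 1, "vnic-vmotion": 2, "vnic-vm": 4, "vnic-iscsi": 3}

total = sum(weights.values())
for name, w in weights.items():
    # Each vNIC gets a guaranteed share under contention and can typically
    # burst above it when the other vNICs are idle.
    print(f"{name}: {PORT_GBPS * w / total:.1f} Gb/s guaranteed")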

So if RAID cards virtualized their storage like these CNA cards, we'd get far superior performance?

Pretty much - anyone who has put multiple RAID cards in a VM host realizes the contention comes from sharing one RAID card. Give each VM its own RAID card and disks and watch each VM run at 95% of native physical speed. Same for Ethernet. Sharing has tremendous overhead that 1:1 can just ignore completely.

So every time you pass up that "fabric adapter" for $100, remember it is a $1000+ card that only a few of us realize works quite fine as a regular NIC if you can disregard the FCoE/iSCSI side. Heck, some cards may not even have those features enabled, because they can be sold on a "feature key" $$$ model.

Brocade 1010/1020
HP NC550/552
QLogic 8152
Emulex OneConnect

What else can a vNIC card do? Well, the newer SR-IOV ones can bypass the vSwitch or run a simple vSwitch in the card. If you are trying to serve iSCSI to others, bypassing the vSwitch may give a latency advantage since you can go directly from the VM (VSA, Nexenta, OpenFiler, FreeNAS) to the network. Much like VT-d to a RAID card.

Is DCB required for vNICs? Not always - most of the time the card just VLAN-tags the packets for you. Assuming you can afford that 24-port gigabit switch with dual SFP+ 10GbE uplinks for $200, it can strip the VLAN tags off and route them to separate ports/port groups, or pass them on.
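For what "VLAN-tag the packets" means on the wire, here is a minimal sketch that inserts an 802.1Q tag (TPID 0x8100) into a raw Ethernet frame; the MAC addresses and VLAN ID are placeholders:

Code:
# Minimal 802.1Q tagging sketch - shows where the VLAN tag sits in a frame.
import struct

def vlan_tag(frame, vlan_id, priority=0):
    """Insert an 802.1Q tag right after the destination and source MACs."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # PCP(3) | DEI(1) | VID(12)
    tag = struct.pack("!HH", 0x8100, tci)         # TPID 0x8100 + TCI
    return frame[:12] + tag + frame[12:]          # dst(6)+src(6) | tag | rest

# Placeholder frame: broadcast dst MAC, made-up src MAC, IPv4 EtherType, padding.
frame = bytes.fromhex("ffffffffffff" "001b21aabbcc" "0800") + b"\x00" * 46
print(vlan_tag(frame, vlan_id=100).hex())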

Why does 10GBASE-T suck? Power. It used to take 25 watts per port to drive it; that LSI 9266 or P420 RAID card uses 14 watts. Go put your hand on the heatsink of the RAID chip and get back to me.
What else makes 10GBASE-T suck? Latency - 8 times (or more) that of DAC or fiber. Latency kills speed once you start adding up the hops, and it keeps you from reaching peak speed.
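Adding up the hops with that rough 8x figure - the absolute per-traversal numbers below are assumptions in the commonly quoted range, not measurements:

Code:
# Rough cumulative PHY latency over a few hops. Per-traversal figures are
# assumptions consistent with the ~8x claim above, not measured values.
HOPS = 3                 # e.g. NIC -> top-of-rack switch -> core -> NIC
BASE_T_US = 2.5          # per traversal, 10GBASE-T PHY pair (assumed)
DAC_US = 0.3             # per traversal, SFP+ DAC/SR (assumed)

print(f"10GBASE-T: ~{HOPS * BASE_T_US:.1f} us of added latency over {HOPS} hops")
print(f"SFP+ DAC : ~{HOPS * DAC_US:.1f} us over the same path")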

"But fiber is so expensive." Bull-shizzle. DAC SFP+ cables are $29+ on eBay; they go in place of the transceiver that all (but 10GBASE-T) 10GbE NICs use. SFP+ transceivers are like $39+ on eBay - two of those plus a little fiber cable ($10) and you can rock out with 10GBASE-SR.

GBICs from the prior generation were $1000; SFP+ transceivers are dirt cheap. DAC cables look much like the single SFF-8470 InfiniBand cables of old, or like an external SAS cable.

Always consider that there is $100 of value in getting two working, compatible transceivers. Most people split the baby and sell the SFP+ transceivers separately.

Contrary to popular belief, most of the popular cards I've tested work with HP DAC SFP+ cables. The only thing I've run into that won't accept non-HP cables, so far, is my HP switches ;)
 

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
Q, well you have my attention :)

I picked up some Brocade 1020 CNAs but haven't yet had a chance to use them. My Dell guy confirmed they can be used with the PowerConnect 8024F on the latest firmware, which supports stacking and DCB, so that should work for me. I also have a couple of PowerConnect 6248s with stacking in one slot and a free slot - that seems like a good enough place to put the 2-port 10GbE SFP module, and then I've got 10GbE to my CNAs, albeit only 4 ports while I have 4 hosts with dual-port cards. Still, it'll work? Is there anything special that switches - whether all-10GbE or just with 10GbE uplinks - must support or do to work with the 1020 CNAs?
 

mrkrad

Well-Known Member
Oct 13, 2012
1,242
52
48
If you are not using FCoE to your FC SAN, or iSCSI, DCB is not necessary.

Just plug it in and enjoy one of the fastest NICs out there. It's amazing how cheap CNAs are - purely because they are misunderstood.

You should be able to spend $100 to $250 at any time to connect two machines at 10Gb (or dual 10Gb), wiring included (two NICs and a cable).

That's Ethernet. Nothing exotic.
 

maxleung

New Member
Jul 20, 2011
11
0
1
If street price on the 8-port switch comes in at or under $1,000 (as suggested in the article) then these are the switches that represent the cracks in the dam. Not the floodgates, yet, but it is a real start.

These are what I had expected to see 3Q/4Q 2012. About 6 months later than expected and entering the market about 20% higher cost than expected (predictions were for sub-$100/port). But still, I am very, very excited to see them!
Here is a (very good, IMHO) review of an 8-port 10GbE switch:

NETGEAR XS708E ProSafe Plus 10GbE Switch Review by Bruce Normann

At the end of the review there is a link to an amazon.com seller - street price is $863!
 

mrkrad

Well-Known Member
Oct 13, 2012
1,242
52
48
I'd be afraid they would be buggy, or fail. Like tying a SAS2308 to CPU #2 only.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,242
52
48
No need for fans on anything but the 10GBASE-T PHY - one PHY can easily be fanless, dual PHYs not so easy. You really don't want 10GBASE-T due to latency and the far higher cost of the NICs.