10-Gigabit Ethernet (10GbE) Networking - NICs, Switches, etc.


Rudde (Member, joined Mar 10, 2011)
Well, since I don't have a switch that takes 10GbE, it doesn't help me much :p

My server is already super loud :p

It's going to be in the garage anyway.
 

PigLover (Moderator, joined Jan 26, 2011)
You can run them back-to-back, same as you would with InfiniBand... works wonderfully as long as you only need to connect two machines...
 

VVendschuh (New Member, joined Jun 28, 2011, Eastern USA)
I'm currently running some CX-4 cards back to back, and was wondering about switches that would let a handful of 10GbE cards connect together.

Would a Dell 6000-series switch with two 2-port 10GbE uplink modules work? I'm seeing them at roughly $600-700. Is there a better option for a few 10GbE machines plus many more 1GbE nodes?
 

maxleung (New Member, joined Jul 20, 2011)
PigLover said: "I have a pair of Dell XR997 (Intel EXPX9501AT) 10GBase-T cards I'd be willing to part with. They work great - but I pulled them due to that screamy little fan. If your servers are put away somewhere that noise doesn't matter, they'd do fine. PM me for price."
Too bad! I found a workaround for the screaming noise - remove the fan, attach a bunch of video RAM heatsinks (I used the blue Zalman heatsinks with the thermal tape and stuck them to the black heatsink that holds the fan - I was too scared to remove the original heatsink entirely), and build the astable circuit from here:

http://www.doctronics.co.uk/555.htm#astable

Choose the resistors so the 555 outputs a tach signal equivalent to 6000 RPM, then attach its output to the pin where the fan's yellow (tach) wire went.

Then I placed this fan right next to the heatsink:

http://www.amazon.com/Antec-Spot-Cool-SpotCool-System/dp/B000I5KSNQ

Done!
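
For anyone sizing the parts, the back-of-the-envelope math works out like this (a rough Python sketch; the 2-pulses-per-revolution tach convention and the example R/C values are my assumptions, not from the post above - most 3-wire PC fans output two tach pulses per revolution, so 6000 RPM corresponds to a 200 Hz square wave):

Code:
# Rough sizing for the 555 astable used as a fan-tach emulator.
PULSES_PER_REV = 2  # assumption: typical 3-wire PC fan tach convention

def tach_freq_hz(rpm: float) -> float:
    """Tach frequency the card should see for a given reported fan speed."""
    return rpm / 60.0 * PULSES_PER_REV

def astable_freq_hz(r1_ohms: float, r2_ohms: float, c_farads: float) -> float:
    """Standard 555 astable output frequency: f = 1.44 / ((R1 + 2*R2) * C)."""
    return 1.44 / ((r1_ohms + 2.0 * r2_ohms) * c_farads)

print(tach_freq_hz(6000))                    # 200.0 Hz target
# Example (assumed) parts: C = 1 uF, R1 = 1k, R2 = 3.1k -> ~200 Hz
print(astable_freq_hz(1_000, 3_100, 1e-6))   # 200.0 Hz

The 555 output then stands in for the yellow tach wire; any R1/R2/C combination that lands near the target frequency should satisfy the card's fan check.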
 

PigLover (Moderator, joined Jan 26, 2011)
Not really "too bad". Actually "pretty damn lucky".

I was considering a mod to get rid of the fan, but I really didn't know just how hot - and how heat-sensitive - those little Intel NIC chips were. Didn't really want to mess with that heatsink mod.

While I was still considering what to do, however, I had a serendipity moment. I was trying to buy one of the newer 2-port fanless Intel NICs on eBay when a reseller shipped me the wrong part (an SFP+ based Intel card instead of another 10Gbase-T card). When I contacted them they realized their mistake, realized they didn't actually have the part they offered at auction, and offered to just let me keep the (much more costly) SFP+ card. Figured I could just sell the more expensive card and get what i wanted later. When I went to work and told a co-worker he helped arrange a long-term loan on a Juniper EX2500 switch. Two bits of good luck, shop around for a couple more cheap SFP+ based NICs, and some real 10GBe love for my home network.

Its all running neat and sweet, but a little bit ghetto. Need to get a rack to put it all together in still. I'll update my "Playing with 10GigE" thread soon with updates.
 

Patrick (Administrator, staff member, joined Dec 21, 2010)
maxleung: we need pictures!!!

Piglover: Which SFP+ card? I have two "screamy little fan" cards I'm not using and one SFP+ card.
 

PigLover (Moderator, joined Jan 26, 2011)
Patrick said: "Piglover: Which SFP+ card? I have two 'screamy little fan' cards I'm not using and one SFP+ card."
I currently have two Intel X520-DA2 cards. One of them was the mis-ship that pushed me over the edge.

I also have one SuperMicro AOC-STGN-i2S, which appears to be an exact duplicate of the Intel card with SuperMicro silk-screened on the PCB.

I believe they are both Intel reference design products.
 

cactus (Moderator, joined Jan 25, 2011, CA)
maxleung said: "Too bad! I found a workaround for the screaming noise..." [the 555 fan-tach fix quoted in full above]
I tried taking the HSF off the cards I got from PigLover, and at least on the two I have, the black heatsink is glued on from what I can tell. I didn't want to go too crazy on it and mess up the PHY, the chip under it. I was going to just put a larger fan in the slot next to it for desktop use, but adding some BGA heatsinks won't hurt - thanks for the idea. From what I have read, it is not the Intel chip (82598) that makes the heat, but the PHY they used; newer cards use a different PHY to lower power usage and heat.

@PigLover You were not kidding, they are LOUD - they add a nice high-pitched scream to the room they are in.
 

Malthe (New Member, joined Jul 28, 2011)
Dell XR997 with Supermicro X9SCM-F

Hi,
Inspired by this thread I bought two Dell XR997s. I have a Xeon-based Supermicro X9SCM-F server running CentOS 6.0, using Intel's newest driver (compiled from source). Unfortunately, after a while the NIC stops (I have tried both cards - the same thing happens). I get a hardware error 15 and there seems to be a huge amount of packet loss. With the newer Intel AT2 version (no fan) everything is fine. The main difference between the cards seems to be power usage, 25 W vs 15 W, so I thought that might be the reason. Do you have any idea how to solve the problem?

By the way, how about a little tutorial on building that fan emulator for those of us with no electronics skills? :) I was thinking about patching the driver to keep running even though the fan is removed, but I haven't tried it yet.
 

cactus (Moderator, joined Jan 25, 2011, CA)
I was able to get the PHY heatsink off my XR997. The IHS of the PHY is huge, so the stock heatsink with the pink putty Intel used stuck well.

The PHY chip is a Teranetics TN1010-B2. I didn't find much on it - PLX Technology now owns Teranetics and doesn't have anything for EOL parts on its site. I did find it interesting that their new silicon will run 10GBase-T over Cat5e for 45 m.

I cut a new heatsink out of an old Socket 370 cooler I had. Not my best work, but it will do. The holes in the board are too small to put a 6-32 screw through, so I got the next smallest metric size, M3 I think. I tried to run the card without the fan connected, but no link is established, which makes sense. I routed the tach signal from an 80mm fan to the card and it linked without trouble. I didn't think to check the RPM of that fan before I cut the wire :confused: - I think it was in the 2k to 2.5k range. Either way, it seems the failed-fan threshold is set low. I should be able to finish my Cat6 wiring tomorrow to test 10GbE; currently I'm only linked at 1GbE to my switch. In the end I'll have point-to-point Windows to ESXi, or Windows to ESXi with passthrough to Ubuntu.

@Malthe I found a discussion of this in the Red Hat Bugzilla. The fact that the AT2, which uses the same Intel chip (82598EB), works for you would lead me to think it is bad hardware. Are you using the same driver for each? Also, try mcelog to see if you can get more info on the error, or put the card into a different box with another OS.
 

Malthe (New Member, joined Jul 28, 2011)
cactus said: "I was able to get the PHY heatsink off my XR997..." [full post quoted above]
Thanks for the reply. It turns out the problem was a faulty patch cable. I bought some new shielded Cat 7 cables and now it works - no errors. It seems the newer AT2 is better at handling noise, at least in my setup.
 

maxleung (New Member, joined Jul 20, 2011)
I'm sorry, I don't have any good pictures of my fan-RPM circuit - it's a mess, as it's still on a breadboard with wires running all over. One day I'll have a PCB solution.

It really is a simple circuit - it uses the same breadboard layout as the 555 link I posted.

In the meantime, I've been trying to test OpenText's NFS Solo v14 client (with the latest service pack), but for the life of me I cannot get it to work - it keeps hanging when it connects to my OpenIndiana box. It is so bad that Explorer.exe freezes, and I am forced to uninstall Solo to get my PC working again. :(

Forcing NFSv3 doesn't help - same freezing problem. I'm out of ideas, and for now I've given up on NFS Solo - thank goodness it was just an evaluation.

So I'm sticking with CIFS (I can get 80-100 MB/s copying from one CIFS share to another over 10GbE) and iSCSI (a little faster than CIFS). It's not optimal, but by combining iSCSI and CIFS transfers I can average more than 3 Gb/s.
 

xnoodle (Active Member, joined Jan 4, 2011)
Anyone played with IBM parts? Specifically the 49Y4202, a dual-port PCIe x8 10GbE card. Tempted to pick up a pair for fun.
 

PigLover (Moderator, joined Jan 26, 2011)
I hadn't noticed that IBM 49Y4202 card before. By appearances it looks similar enough to the Intel X520-DA2 or SM AOC-STGN-i2S that it might be yet another build of the Intel reference design, but it appears to be manufactured by Emulex for IBM, so it's clearly not just a rebrand of the Intel card.

I'd be curious to know whether it's supported by the Intel PRO driver or requires its own specialized driver - that would tell the tale, as it were. I couldn't find any driver information in the searches I did.

If it is a build of the Intel reference design, it should perform well. I've got both the Intel and SM cards in my little 10GbE network and can't tell any difference between them performance-wise.

It's available on eBay for much less than either of those other cards. If you do get one, please post a review. I have a couple of additions I'd be willing to make to my network at $299, but not at the current prices of the other cards.
 

xnoodle (Active Member, joined Jan 4, 2011)
Did some research before I bit the bullet. Unfortunately, it turns out to be a non-standard form-factor card.

http://www.64bit.eu/soubory/2964/ib...3850-x5-x3950-x5-and-bladecenter-hx5.pdf?ms=3

From the PDF: ".. has been customized with a special type of connector called an extended edge connector. The card itself is colored blue instead of green to indicate that it is nonstandard and cannot be installed in a standard x8 PCIe slot."
Pictures of the chassis/card mirrored here: http://imgur.com/a/JnbuW

Blargh. There goes that idea.
 

iieeann (New Member, joined Oct 14, 2011)
Aww, I wish I had seen this thread before I did so many stupid things. There is very little written explaining that link aggregation does NOT increase throughput between two clients - neither the QNAP nor the Synology website states it clearly. I bought a 4-port gigabit NIC hoping to get 4Gb speeds, and in the end it was a complete waste of money.

I just ordered two 10GbE cards to link the PC directly to the NAS; hopefully it works. I'm just a home user with no money for a crazily expensive 10GbE switch. I have no idea about optical fibre connections, so I chose the RJ45 type.
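
For reference, the reason a single transfer can't go faster than one link: 802.3ad link aggregation assigns each flow to exactly one member link, typically by hashing addresses, so one PC-to-NAS copy always rides a single 1Gb port. Here is a toy sketch in Python (the MD5-based layer-3 hash policy is just an illustrative assumption; real switches and NICs hash MACs, IPs, and/or ports in device-specific ways):

Code:
import hashlib

LINKS = 4  # e.g. a 4-port gigabit LAG, as in the post above

def member_link(src_ip: str, dst_ip: str) -> int:
    """Toy hash policy: the same endpoint pair always maps to the same link."""
    key = f"{src_ip}->{dst_ip}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % LINKS

# A single PC-to-NAS copy is one flow, so every packet rides one 1Gb link,
# no matter how many links are in the bundle:
print(member_link("192.168.1.10", "192.168.1.20"))
print(member_link("192.168.1.10", "192.168.1.20"))  # same link again

Aggregation still helps when many clients hit the NAS at once; it just can't speed up a single stream.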
 

billco (New Member, joined Oct 29, 2011)
Quoting an earlier post: "I was playing around with dual-port InfiniBand (copper-based) cards. Performance is pretty good - the Mellanox cards I picked up were awesome under Windows, but not so much under ESXi and OpenSolaris. The dual-port cards were $30-40 each, making the internal hardware very inexpensive."
I know the post is almost ten months old, but I was hoping to pick your brain. These inexpensive MHEA28-XTC cards - have you ever gotten them to work with ESXi? I can't find them anywhere on the vSphere HCL. The only alternative I've found so far is a pair of the newer ConnectX cards at about $250-300 apiece. I just want to play around with 10GbE between my VM and iSCSI filer, but it's really just a "fooling around" rig, so cheaper is better :) Any thoughts?
 

Patrick (Administrator, staff member, joined Dec 21, 2010)
Those fire right up in Windows, but not in Solaris/ESXi. I spent many nights working on the MHEA28 cards in that configuration, to no avail.
 

iieeann (New Member, joined Oct 14, 2011)
Some pictures of the card - not fully fired up yet because the HDD RAID array isn't ready (PC side).
I've heard that InfiniBand is a cheaper solution, but its bandwidth is 12.8% lower than a 10GbE card's.
Moreover, InfiniBand is not listed as a supported card by the NAS, so I'm sticking with what's listed.
 