10-Gigabit Ethernet (10GbE) Networking - NICs, Switches, etc.


XZed

New Member
Feb 3, 2011
Those are the exact cards I am using, although I have another one with 128MB onboard which cost a bit more.
Could you tell me which one is "better": onboard memory or mem-free?

I can't work out the real impact... I'm worried about a hit to bandwidth.

In fact, while I was going to opt for a mem-free card (as that seems to be the "newer" design), I'll end up with a 128MB onboard one (but DDR :) )... I don't know why, but I'm nervous about performance, although it is somewhat reassuring that the onboard-memory cards seem more expensive... does that mean they're better?

By the way, maybe someone can help me with this: I can only find SDR cables. I suppose that for my next DDR card I will need DDR/QDR-rated cables to really get the 20Gb/s per port. What worries me is that the 2-3 DDR/QDR cables I found were very expensive, unlike the classic 10Gb/s ones.

I also saw three connector types: screws / latches / claws...

I suppose the latter two are the right ones for an HCA?

Thank you
 

Patrick

Administrator
Staff member
Dec 21, 2010
Yikes! Let me see what light I can shed:
1. I like latches. Easy to use.
2. On cables: 10Gb/s is a ton of bandwidth. Transferring that much data over one pipe requires two machines with setups fast enough to read and write that fast.
3. For perspective, even DDR3-1066 peaks at over 8GB/s, while PCIe is only ~2GB/s over an x8 1.0 link (see the quick calculation below).
4. OpenSolaris requires onboard memory. Windows -> Windows is super easy; add Linux and it is harder but doable. Solaris/ESXi 4.1 is a rougher installation.
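
To put point 3 in numbers, here is a quick back-of-the-envelope check (just a sketch, assuming a 64-bit DDR3 bus and 250 MB/s per PCIe 1.0 lane; real-world throughput is lower):

```python
# Back-of-the-envelope peak bandwidth comparison (rough sketch; assumes a
# 64-bit DDR3 bus and 250 MB/s per PCIe 1.0 lane - real throughput is lower).
ddr3_1066_gbs = 1066e6 * 8 / 1e9   # 1066 MT/s * 8 bytes/transfer  ~= 8.5 GB/s
pcie10_x8_gbs = 8 * 250e6 / 1e9    # 8 lanes  * 250 MB/s           ~= 2.0 GB/s
tengbe_gbs    = 10e9 / 8 / 1e9     # 10 Gb/s expressed in GB/s     ~= 1.25 GB/s

print(f"DDR3-1066 peak : {ddr3_1066_gbs:.1f} GB/s")
print(f"PCIe 1.0 x8    : {pcie10_x8_gbs:.1f} GB/s")
print(f"10GbE line rate: {tengbe_gbs:.2f} GB/s")
```

Both sit well above the ~1.25 GB/s a single 10Gb/s stream needs, which is why the read/write speed at each end (point 2) is usually the real limit.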

Frankly, I would get the mem-free card now, see how that works, and then upgrade later if needed. It is a lot of performance for ~$100-$150.
 

XZed

New Member
Feb 3, 2011
Thank you, Patrick!

1. What I'm worried about is cable compatibility: I don't know whether latches or claws matter when plugging in. But from your answer, I take it it's mostly a matter of convenience.
2. Obviously I considered the read/write speeds on both ends :) and I admit that even 10Gb/s is already a big enough "pipe" :).
3. I was talking about InfiniBand bandwidth (SDR/DDR/QDR), i.e. 20Gb/s :). But, as said previously, I first have to manage to saturate 10Gb/s :)
4. Just to be sure: I understand your advice about testing and then upgrading if necessary. But do you mean that, unlike what is written on some websites, onboard memory is better than mem-free?

Once again, thank you for helping us :)
 

Patrick

Administrator
Staff member
Dec 21, 2010
Not sure if you guys saw, but I have the Intel E10G41AT2, Mellanox MHEA28-XTC, and Supermicro AOC-STGN-i2S in the DP beast detailed in today's post.

I am at least getting there.
 

PigLover

Moderator
Jan 26, 2011
How does that Supermicro AOC-STGN-i2S compare to the Intel X520-DA2? They look similar and are both based on the same Intel controller (82599ES). You called it a "fiber" card, but really it's a dual SFP+ carrier, right? I need to get a couple of SFP+ NICs, and the best price I can find on the Intel card is $630.

Amusingly, I'm getting a switch next week (Juniper ex2500-24f). It's a long story... The switch is coming with a mix of 10GBASE-SR SFP+ modules, 1Gb SFPs, and a few direct-attach copper SFP+ cables (basically two SFP+ modules with 3m of twinax between them). I ordered fiber, two 1Gb SFPs for my HP 1810G-24 (knockoffs - hope they work), and a couple of LC keystone jacks for the one run that needs to come inside the house, but so far I've had no luck finding reasonably priced NICs. I'll post pictures when I get it all set up in about two weeks.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Sadly I do not have an X520-DA2, but I think they are going to be fairly similar; they are both SFP+ carriers. I still need to buy some SFP+ 10Gb modules/cables for testing. Any recommendations welcome. Would love to see pictures of that setup.
 

XZed

New Member
Feb 3, 2011
Hello,

After considering all of your advice, I finally bought and received my items :) !!!

MHEA28-XTC + HP IB 4X DDR (ref: MHGA28-1TC) + SFF-8470 cable

I flashed each card to the latest firmware version.

Initially, I had set up OpenSM to run as a service on both computers (2 x Win 7), since I had read that even if two OpenSM instances are running, once one is detected as master the other becomes inactive... But in the end I had to keep only one OpenSM running to get the network working.

Once that was done, I ran some benchmarks after learning about the InfiniBand tools (which were new to me).

I admit I was disappointed: file transfers between SMB shares, iperf tests (playing with the TCP window size), etc. gave me an average transfer speed of about 3Gb/s :/ ...

Unless I misunderstood something, I thought I would be safe from the well-known TCP/IP overhead of 10GbE networks... but while testing, it occurred to me that, even with InfiniBand, the problem might persist as long as I keep using TCP/IP (i.e. IPoIB)?

Anyway, I can't find any good explanation for such a performance drop...

(I barely reached 5Gb/s, and only with an unrealistic TCP window size of 1MB.)
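
For anyone who wants to reproduce this kind of test without iperf, a minimal Python socket sketch along these lines works too (the port, buffer size and duration are arbitrary test values, and it only roughly approximates an iperf run with -w 1M):

```python
# Minimal one-way TCP throughput sketch (not iperf): run "server" on one host,
# "client <server_ip>" on the other. Port, buffer and duration are arbitrary
# test values, not anything required by IPoIB.
import socket
import sys
import time

PORT = 5001                # arbitrary test port
BUF = 1 * 1024 * 1024      # 1 MB socket buffer, mirroring an iperf "-w 1M" run
CHUNK = 64 * 1024          # 64 KB per send/recv call
DURATION = 10              # seconds to transmit

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:                         # count bytes until the client closes
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    secs = time.time() - start
    print(f"{total / 1e9:.2f} GB in {secs:.1f} s = {total * 8 / secs / 1e9:.2f} Gb/s")

def client(host):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
    cli.connect((host, PORT))
    payload = b"\0" * CHUNK
    deadline = time.time() + DURATION
    while time.time() < deadline:       # blast zeros for the test duration
        cli.sendall(payload)
    cli.close()

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

Pointing the client at the server's IPoIB address isolates the raw TCP rate of the link from any SMB overhead; changing BUF is the rough equivalent of playing with the iperf -w option.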

Even though I obviously noticed some CPU load, that can't be the bottleneck... (E8400 & E5300).

I found some tools for the RDMA protocol and was very glad to get results like +900MB/s and ~5µs latency :) !!!

But apart from TCP/IP, I don't know any of the protocols in this (for me) new InfiniBand world: RDMA, iSER, SRP, etc...

I imagine these protocols are dedicated to specific usages, but I think (and hope) they can also be set up for the basic use case: file transfers.

Obviously I'm not expecting the usual quick drag-and-drop; just a workable way to transfer files between the two computers on my brand new IB LAN :) !!!

Preferably without using IPoIB, so I can get proper transfer speeds (unless someone knows something that could explain the performance drop).

Thank you very much.

Sincerely,

XZed
 

nilsga

New Member
Mar 8, 2011
Will CAT6 cables ever be able to handle 10GbE? I've read in some places that CAT6 will be OK for shorter distances, but others say you definitely need CAT6A.

Edit: Should have googled first

http://en.wikipedia.org/wiki/Category_6_cable

When used for 10GBASE-T, Cat 6 cable's maximum length is 55 meters (180 ft) in a favourable alien crosstalk environment, but only 37 meters (121 ft) in a hostile alien crosstalk environment such as when many cables are bundled together. 10GBASE-T runs of up to 100 meters (330 ft) are permissible using Cat 6a.
 

XZed

New Member
Feb 3, 2011
Hello,

In the meantime, I have read a lot about RDMA, SDP, WSD, SRP, iSER, etc...

I thought I had found the solution: set up SRP/iSER to transfer files and get the bandwidth back by running iSCSI over RDMA. But it seems impossible to set up a simple iSCSI target/initiator scheme between two Windows computers (to be accurate, an iSCSI target doesn't exist for Windows, only the initiator; the target only exists for Linux, with the SCST project).

So I'm still wondering how some forum posts can claim to reach 8Gb/s (on 10Gb/s cards) with nothing but IPoIB and no hassle at all...

Thank you.

Sincerely,

XZed
 

XZed

New Member
Feb 3, 2011
Actually XZed,

Yer in luck, Microsoft just released an updated version of their iSCSI Target software yesterday, for free, that works on Server 2008 R2 Standard, Enterprise and Datacenter Editions. It couldn't have come at a better time. :)

http://blogs.technet.com/b/virtualization/archive/2011/04/04/free-microsoft-iscsi-target.aspx
Thank you very much for this great information :) !!!

Thanks again !

P.S.: For the iSCSI over RDMA part, I think I'll have to opt for a Linux/Unix server host.
 

techstyled

New Member
Apr 21, 2011
I'm confused. I keep hearing 10GbE and assuming y'all are talking about 10GbE _over copper_. However, I've yet to find a 10GbE _copper_ switch. Or rather, a 10GbE capable switch with transceiver slots that accept a 10GbE _copper_ module.

Maybe y'all are only talking about cross-connecting, but I'm unclear even on what cabling y'all are using there.

Sorry to be such a nub, but if someone could be a bit more specific on cabling, I'd appreciate it.
 

PigLover

Moderator
Jan 26, 2011
The first page of this thread shows three 10GBASE-T (copper, RJ45, Cat6/6a) NICs that are readily available either new or via fleabay. It also discusses the problem that 10GBASE-T switches are generally not cost-effective. They do exist, but they either have a ridiculously small number of ports (e.g., 1 uplink port on the HP ProCurve 6600, or 4 uplinks on the NETGEAR ProSafe XSM7224S) or they are enterprise/datacenter switches that you'd never use at home (e.g., the Summit X650). In all cases these switches cost many thousands of dollars.

The interesting use case for most of us here is a back-to-back 10Gbase-T connection between two machines, either between two servers or between a primary workstation and a server.

I've also been interested in a "poor man's switch" for 10GBASE-T - a CPU-based system with several 10GBASE-T NICs running pfSense or RouterOS or something. This would give you high throughput but crappy latency. Perhaps soon one of us will have the time to see how this plays out for the things we do every day (mostly file/disk sharing apps).

I've got a project going now to use a fiber-based 10GbE switch. There is a partial writeup on it here, but I've run into some time problems the last few days getting it moving forward. I had an interesting twist on the switch I'll be using... I ended up with a better one than I thought I'd get. It's a long story... which someday I might actually get posted.
 

Rudde

Member
Mar 10, 2011
When will we see the 10 GbE standard in desktop computers? And do you think they will keep the RJ45 or will they switch to fiber?
 

PigLover

Moderator
Jan 26, 2011
My best guess is that we are still 3 to 5 years away. It won't show up in "desktop computers" as fiber - the economics of that don't work out for mass deployment. It will be 10GBASE-T (copper wire using Cat6/6a cable and RJ-45 connectors).

The biggest problem right now is that it is not cost-effective to manufacture 10GBASE-T switches. The silicon for the line drivers and network processors is still too expensive (cost per port is just too high for the raw silicon) and consumes too much power. Intel has some new 32nm parts scheduled for release in Q4 this year that will make things better. With these parts you might see 8-port 10GBASE-T switches available for ~$1000-2000, perhaps less, but that is still too high to get the market moving. It's going to take another generation of parts to get there.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Just a thought, if you are going to use a 10GbE backhaul to servers that are nearby each other, Infiniband is not a bad deal, especially with Windows/ Linux.
 

Rudde

Member
Mar 10, 2011
Just a thought, if you are going to use a 10GbE backhaul to servers that are nearby each other, Infiniband is not a bad deal, especially with Windows/ Linux.
Well, I looked into that, and I need at least 10 meters of distance, so a 10-meter cable comes to about $300 xD. So then the point is gone...
 

Rudde

Member
Mar 10, 2011
Ohh, it was like $300 at Amazon.

But I don't really feel like it, since it doesn't say anything about FreeBSD support, and I would like something that can take 10GbE from the server into a gigabit switch for multiple users, since I won't use 10GbE between my two main stations. But I may use 10GbE from the server when I have friends over.
 

PigLover

Moderator
Jan 26, 2011
I have a pair of Dell XR997 (Intel EXPX9501AT) 10GBASE-T cards I'd be willing to part with. They work great - but I pulled them because of that screamy little fan. If your servers are tucked away somewhere the noise doesn't matter, they'd do fine. PM me for price.