10-Gigabit Ethernet (10GbE) Networking - NICs, Switches, etc.

azev

Well-Known Member
Jan 18, 2013
I am also looking for some affordable 10Gb SFP+ switches. I only need around 8 ports, but I will settle for 4. Does anyone have any recommendations on which switch to get?
 

mrkrad

Well-Known Member
Oct 13, 2012
Cut-through? It's in the manual. The 8024 and the Netgear both run the same Broadcom software on VxWorks, so in many ways the two manuals can be used interchangeably.

4 ports? Dude, just get two NICs and run Open vSwitch; there's your low-cost switch :) No joke!
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
I think for a 4-port switch I'd do what mrkrad said, or find something like a 24-port gigabit switch with SFP+ uplink ports.
 

RADCOM

New Member
Aug 27, 2012
Are 10GbE NICs backward compatible with a Brocade 200E 4Gb switch, or will I be limited to using actual 4Gb NICs?
 

mrkrad

Well-Known Member
Oct 13, 2012
If you want Fibre Channel cheap, there are over 10 of these 8-port (12 unactivated/unlicensed) 8Gbps FC switches:

HP StorageWorks 8GB Simple San Switch 8 Ports AK241A 0008835856168 | eBay

Just because DDR InfiniBand, some SAS, some 10GbE Ethernet, and Fibre Channel use the same cable (DAC) doesn't mean the protocol being spoken is compatible. FC and InfiniBand are not compatible with 10GbE Ethernet. Some cards can dual-purpose, doing either 8Gbps Fibre Channel or 10GbE Ethernet, but no switches that I know of can switch on the fly!

4Gb is Fibre Channel; there is no network protocol over it that I know of (IPoFC). Even though it uses the same cabling, it is more akin to SAS.
 

RADCOM

New Member
Aug 27, 2012
Sorry mrkrad, you need to break it down really simply for me. Do you mean my mileage may vary depending on the card and drivers? I'm pretty new to this networking malarkey, and yes, I thought if it's connected then it will talk :) I have a couple of Emulex 4Gb LPE1150 cards; will they be able to chat over the Brocade switch? Will these work: link
 

phroenips

New Member
Jul 14, 2013
FC and infiniband are not compatible with 10gbe ethernet. Some cards can dual purpose like do 8gbps fiber or 10gbe ethernet but the switches none that I know of can switch on the fly!
Minor correction: FCoE is compatible with standard 10Gbit Ethernet. That's still using Fibre Channel Protocol (FCP), though, and you need a Fibre Channel name server, etc. Like you, though, I'm not aware of any protocol to do IP over FC.
 

mrkrad

Well-Known Member
Oct 13, 2012
LightPulse is the driver the 10GbE Emulex NICs use. I wonder if those guys made some sort of IPoFC? Cool.
 

mini-me01

New Member
Aug 6, 2013
It's been a few weeks and I'm just catching up on the forums here... I only used my XSM7224S for about a week before concluding that, while it was neat to play with new HW, it was complete and utter overkill for my use case. I created an Open vSwitch setup using 2 Emulex OCE10102s and it worked great. I used Open vSwitch 1.11 and Oracle Linux 6.4 with the Red Hat compatible kernel, following the directions here if anyone wants to do the same: CentOS 6.4 – Openvswitch installation | n40lab
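For anyone who just wants the gist without the full write-up, here is a rough sketch of the bridge part of that setup, assuming the openvswitch packages are already installed and running, and that the two Emulex ports show up as eth2 and eth3 (the bridge and interface names here are just placeholders for your hardware):

Code:
#!/usr/bin/env python3
# Minimal sketch: turn a host with two 10GbE ports into a small Open vSwitch
# "switch" by putting both ports on one bridge. Interface names are
# placeholders; adjust them for your hardware.
import subprocess

BRIDGE = "br10g"
PORTS = ["eth2", "eth3"]  # hypothetical names for the two OCE10102 ports

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["ovs-vsctl", "--may-exist", "add-br", BRIDGE])              # create the bridge
for port in PORTS:
    run(["ovs-vsctl", "--may-exist", "add-port", BRIDGE, port])  # attach the NICs

run(["ip", "link", "set", BRIDGE, "up"])                         # bring everything up
for port in PORTS:
    run(["ip", "link", "set", port, "up"])

run(["ovs-vsctl", "show"])                                       # sanity check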

I ended up replacing my HP 1810-48G and the XSM7224S with a Netgear GS748TXS, so I have 48 Gigabit ports and 4 10Gb ports, which seems to be a good compromise, and it runs under 50W with the green features turned on.

I have some switches for sale in the classifieds forum if anyone wants a great deal on a Netgear XSM7224S (24-port 10GbE), HP 1810-48G (52-port Gigabit), or Netgear GSM7252S (48-port Gigabit).
 

mini-me01

New Member
Aug 6, 2013
You are not using an N40L for the four 10GbE ports, correct?
No, I did this with a Supermicro box. You can't get 2 (PCIe x8) OCE10102 cards into an HP MicroServer unless you cut down one of the Emulex cards or open up one of the PCIe slots, and in either case you are going to severely limit the bandwidth to the card.
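To put rough numbers on that (back-of-the-envelope only, assuming PCIe 2.0 and roughly 500 MB/s of usable bandwidth per lane; the exact slot widths vary by MicroServer model):

Code:
# Why a cut-down slot strangles a dual-port 10GbE card (rough estimate).
PER_LANE_MB_S = 500            # approx. usable PCIe 2.0 bandwidth per lane
TEN_GBE_MB_S = 10_000 / 8      # one 10Gb/s port ~= 1250 MB/s

for lanes in (1, 4, 8):
    slot = lanes * PER_LANE_MB_S
    verdict = "enough for" if slot >= 2 * TEN_GBE_MB_S else "limits"
    print(f"x{lanes}: ~{slot:.0f} MB/s -> {verdict} a dual-port 10GbE card")
# x1: ~500 MB/s  -> limits a dual-port 10GbE card (can't even fill one port)
# x4: ~2000 MB/s -> limits a dual-port 10GbE card
# x8: ~4000 MB/s -> enough for a dual-port 10GbE card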
 

33_viper_33

Member
Aug 3, 2013
I'm thinking of picking up some infiniband cards to play and experiment with. Just to make sure I understand correctly, QDR is made up of 4 x 10Gb/s links. These links are aggregated similar to Ethernet but through a single connection/wire. This means like Ethernet aggregation, the maximum speed for a single thread is a single link’s max speed. This means, you will never see a single file transfer exceed 10Gb/s. The only way to take advantage of 40Gb/s is multiple I/O tasks over that connection.

I can see this being very useful in a cloud environment with VMs moving from server to server and taking advantage of online storage.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I'm thinking of picking up some infiniband cards to play and experiment with. Just to make sure I understand correctly, QDR is made up of 4 x 10Gb/s links. These links are aggregated similar to Ethernet but through a single connection/wire. This means like Ethernet aggregation, the maximum speed for a single thread is a single link’s max speed. This means, you will never see a single file transfer exceed 10Gb/s. The only way to take advantage of 40Gb/s is multiple I/O tasks over that connection.

I can see this being very useful in a cloud environment with VMs moving from server to server and taking advantage of online storage.
InfiniBand automatically and transparently stripes data over the four lanes way down deep in the protocol. Just think of each port as a single 40Gbit/s link.
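For a rough sense of the numbers (assuming QDR's usual 8b/10b line encoding, which is where the commonly quoted ~32Gb/s usable figure comes from):

Code:
# QDR InfiniBand: 4 lanes striped transparently into one logical link.
lanes = 4
signal_rate_gbps = 10        # per-lane signaling rate for QDR
encoding = 8 / 10            # QDR still uses 8b/10b line encoding

raw = lanes * signal_rate_gbps       # 40 Gb/s on the wire
usable = raw * encoding              # ~32 Gb/s of actual data
print(f"raw: {raw} Gb/s, usable after encoding: {usable:.0f} Gb/s")
# A single transfer can therefore exceed 10Gb/s; the 4x10 split happens
# below the level that applications (or even IPoIB) ever see.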
 

mrkrad

Well-Known Member
Oct 13, 2012
Same as SAS; as a matter of fact, SAS wiring used similar connectors at one point.

It's like CX4 but 4 times faster! Parallel versus serial. Cabling is always cheaper if you can modulate with less wiring. SFP is 4 wires; I'd suspect QSFP is 16 wires. Fiber is one strand (each direction). 10GBASE-T is 8 wires. There is also now IPoTB (Thunderbolt) coming soon to eliminate the costly NIC when doing very short haul (think one Mac server to 4 clients at a very short distance).

Or SMB3, which is multi-connection sometimes as well. Without optics (lasers) it is hard to go faster over any distance, which is why Intel is working on using optics instead of wiring (traces) for communication on a motherboard and even between two chips.

It's likely that if you had wired your house with fiber, you could have gone from gigabit to 2 to 4 to 8 to 10 and on without rewiring, at least at a short-haul distance like between floors. Just change optics! :)

SFP cables get very touchy around 5 meters. I've had issues with passive DAC SFP+ cables only at 5 meters, usually when using two ports. If it's one SFP+ fiber and one passive DAC, no problem. Two fiber runs at 300m, no problemo. 5 meters seems to be reaching the limit for passive 10GbE DAC.

Remember, passive SFP cables are just 4 wires of high-quality copper with a ton of shielding (over each wire and over all 4 wires).

10GBASE-T is a special bird since it uses a scrambler instead of a serializer, so it has to encode a whole chunk (packet-ish) and send it. That is 8 times slower than a serialized fiber stream, and it probably takes 10 times more power to run it over 50 meters of wire. And a ton of cross-talk :)

The real problem is generating data fast enough; some PCs don't have enough RAM bandwidth to sustain 40GbE, let alone two ports of 40GbE yumminess!

How many hard drives would it take to sustain 40GbE at a 100% random read/write pattern? A lot!
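Ballpark math on that, assuming roughly 150 IOPS per 7200rpm drive for small random I/O and ~150 MB/s for large sequential reads (these are assumed round numbers, not benchmarks):

Code:
# How many spinning disks to keep a 40GbE link busy?
LINK_MB_S = 40 * 1000 / 8            # 40Gb/s ~= 5000 MB/s to fill

IOPS_PER_DISK = 150                  # assumed 7200rpm drive, random I/O
IO_SIZE_KB = 8                       # small random I/O
random_mb_s = IOPS_PER_DISK * IO_SIZE_KB / 1024   # ~1.2 MB/s per disk

SEQ_MB_S = 150                       # same drive, large sequential I/O

print(f"random:     ~{LINK_MB_S / random_mb_s:,.0f} disks")
print(f"sequential: ~{LINK_MB_S / SEQ_MB_S:,.0f} disks")
# random:     ~4,267 disks
# sequential: ~33 disks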