10GbE networking question


rentay

New Member
Apr 16, 2013
Hi all,

About to purchase the following items:

i) ZyXEL XGS1910-24 Gigabit switch
ii) Intel EXPX9502CX4 10GbE adapter

What are your opinions on these? Should I be looking at other switches/adapters? I want to run 10GbE if possible!

This is for my home media/file server.



Regards,
 

mrkrad

Well-Known Member
Oct 13, 2012
I'd look for the guy on eBay selling the XSM7224S for cheap. SFP+ cards have roughly eight times lower latency than 10GBASE-T, but DAC cabling or SFP+ optics are very expensive.

You need four SFP+ optics for a dual-port NIC (one at each end of each link), at an MSRP of ~$600 each (plus fiber). A 1 m DAC cable (HP brand) is ~$200 MSRP, and you need two per NIC. Ouch.

10GBASE-T is cheap cabling, Cat5e to Cat6a - for short hauls cheap wire works (Cat6 is a good place to start). But many features do not work, such as priority flow control (flow control per priority class rather than per port). FCoE and iSCSI generally require Data Center Bridging, which will probably never happen on 10GBASE-T because its latency is so high.

Ping time over 10GBASE-T is something like 0.030 ms, where ping time over DAC/SFP+ is something like 0.004 ms, which may not seem like a lot, but imagine many hops!
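
If you want to sanity-check this on your own gear, a rough sketch like the following measures application-level TCP round-trip time between two boxes. The port, probe count, and the idea of running one end by hand on each host are my own assumptions for illustration; kernel and NIC settings will affect the numbers far more than the script does.

```python
# Rough round-trip latency check between two directly connected boxes.
# Run "python rtt.py server" on one host and "python rtt.py client <ip>"
# on the other; the port and probe count are arbitrary placeholders.
import socket, sys, time

PORT = 5201          # arbitrary test port
PROBES = 1000        # number of 1-byte round trips to average

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            while data := conn.recv(1):
                conn.sendall(data)      # echo each byte straight back

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(PROBES):
            conn.sendall(b"x")
            conn.recv(1)
        rtt_ms = (time.perf_counter() - start) / PROBES * 1000
        print(f"average round trip: {rtt_ms:.3f} ms")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```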

CX4 is interesting because it's physically 4 x 2.5 Gbps lanes rather than a single 10 Gbps lane.

But the last point is that most cards won't do 10 gigabit. If you get 2.5 Gbps with SMB2 and no tuning, that is good. If you get 5 Gbps with SMB2, that is stellar. If you use SMB3 (Windows 2012/8) you may get 9 gigabit; if you actually get 10 gigabit, wow, amazing, stupendous.


Rule of thumb: most NICs were never designed to push 10 gigabit to a single destination. Most NICs also send and receive at different rates, with different queue sizes.

Things like VMDq might set up 6 queues of 512 buffers to receive and 6 queues of 1024 buffers to send (one per core).

So if you want to get 10GbE from one PC to another (you can direct-connect without a switch with certain PHYs like 10GBASE-T or DAC/SFP+), you may need to go faster. E.g. 40 Gbps InfiniBand might actually get 20 Gbps between two machines without much tweaking.

But Windows 7 may never get more than 7 Gbps, and there is a reason why 2.5 Gbit x 4 is used by CX4: most streams are about 2.5 Gbps, so if you have one copy operation and it runs at 2.5 Gbit - great. That is also why Windows 8/2012 can do multi-threaded SMB3 and reach 8-9 gigabit.
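
To see the single-stream vs. multi-stream effect for yourself, a toy test along these lines opens N parallel TCP connections and reports the aggregate rate, which is roughly the idea behind SMB3 multichannel. Host, port, stream count, and duration are all placeholder assumptions, and Python itself will bottleneck well before true line rate, but the scaling trend from one stream to several is usually visible.

```python
# Toy multi-stream throughput test: open N parallel TCP connections and
# report the aggregate rate. Port, duration, and chunk size are arbitrary.
import socket, sys, threading, time

PORT, SECONDS, CHUNK = 5202, 10, 64 * 1024

def sink():
    # Receiver: accept any number of streams and discard the data.
    with socket.create_server(("", PORT)) as srv:
        def drain(conn):
            with conn:
                while conn.recv(CHUNK):
                    pass
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=drain, args=(conn,), daemon=True).start()

def blast(host, streams):
    # Sender: each worker thread pushes zero-filled buffers for SECONDS.
    sent = [0] * streams
    def worker(i):
        with socket.create_connection((host, PORT)) as conn:
            buf = b"\0" * CHUNK
            end = time.perf_counter() + SECONDS
            while time.perf_counter() < end:
                conn.sendall(buf)
                sent[i] += len(buf)
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(streams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    gbits = sum(sent) * 8 / SECONDS / 1e9
    print(f"{streams} stream(s): ~{gbits:.2f} Gbit/s aggregate")

if __name__ == "__main__":
    # "python streams.py sink" on one box, "python streams.py <ip> 4" on the other.
    sink() if sys.argv[1] == "sink" else blast(sys.argv[1], int(sys.argv[2]))
```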

ESXi is not aggressive with one machine at all; it doesn't even try to use all cores to send with one VM. So ~2.5 Gbps is about all you get with one operation. When you have 4 VMs, that scales to 9 gigabit fine! But you'll never hit peak potential - as a matter of fact, I was seeing faster speeds with multi-NIC gigabit vMotion (6 x gigabit) versus 1 x 10 gigabit.

I'd honestly look into InfiniBand if it's just Windows SMB; it will be easier to reach a 10 gigabit rate (and then some).

You will probably be disappointed with older cards. Dual-port cards without onboard RAM eat 2 GB of your system RAM for buffers (or more). They are slower since they share CPU RAM and require more help from the CPU to move data. I've seen cards take a Q6600 to 75% to move a ramdisk at 8 gigabit. Another, newer card required 10% CPU on 4 cores to move the same data.
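
If you want to put numbers on that CPU cost, a small sketch like this (using the third-party psutil package; the 10-second window and 1 s interval are arbitrary) averages per-core utilization while a big copy is running, so you can compare an old card against a new one.

```python
# Average per-core CPU % while a large transfer is in progress.
# Requires the third-party psutil package; sampling window is arbitrary.
import psutil

samples = []
for _ in range(10):                                  # watch for ~10 seconds
    samples.append(psutil.cpu_percent(interval=1, percpu=True))

per_core = [sum(core) / len(samples) for core in zip(*samples)]
print("average per-core CPU % during the transfer:")
for i, pct in enumerate(per_core):
    print(f"  core {i}: {pct:5.1f}%")
```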

Intel cards are pretty damn easy to deal with - easier than any other card, but very expensive.

QLogic 8152 and older Emulex BE2 cards are great for cheap SFP+, ~$75-99 each. You can use a DAC cable to cross-connect two machines, or buy cards with matching optics and run fiber. Fiber is really not that expensive any more (well, the optics may be).