10GbE for workstation/server Recommendation question


KenE

New Member
Feb 14, 2012
34
0
0
Ok does anyone have suggestions on how I can speed up my link to my server?
I've got about 10ft to run and I'm currently on a 1GbE link running through a switch. I'd like a direct connection between my file server and my workstation (I'm the only one on this server). When I upgrade my server I'd like to put the card in the new machine.

Server config:
MS Server 2003 32-bit (running file replication to the home office)
Xeon 3060, 4GB RAM
3Ware 9550se 4i 256MB RAID card (PCIe 1.1 card with a 4x link)
4x 500GB SATA drives in RAID 5 (ATTO transfers max out around 265MB/sec)
1 free PCIe 8x slot (PCIe 1.0, I think)

Workstation config:
Windows 7 Pro 64-bit
Xeon E3-1230, 8GB RAM
Mushkin 120GB SSD
1 free PCIe 2.0 8x slot (wired as 4x)

Looking on fleabay my options are:
Intel first-gen 10GBASE-T 10GbE cards with a Cat 6 interconnect (maybe even Patrick's cards)
Myricom CX4 cards with a CX4 cable interconnect
Mellanox InfiniBand cards (not sure which model on these)
Thoughts?

And will I be able to even see any speed increase with a standard TCP/IP connection between the computers, or do I need to do something special to get a speed boost?

I'd like to keep things quiet in my office; my server is only about 6 ft away, my UPS is louder than the server, and I like the passive cooling thing.

Thanks,
Ken

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,805
113
Ken,

The cheap Mellanox dual-port 4x IB cards work great under Windows, but the subnet manager is a real pain in the...

The 1st generation Intel cards are too loud for an office unless you want to play with the fans. You could probably tape another heatsink on there, pull the stock fans, and have a large case fan (or set up a quiet 120mm fan) blowing over them and be fine. Still, stock, I have to say they would not be an option unless you did a fan/heatsink swap.

Myricom might be worth a shot, but the ones that have things like ESXi 5.0 compatibility are much more expensive (like the Mellanox InfiniBand stuff). I just haven't used their gear before.

Hope that gives you a few ideas. If you do want to try the 10GbE Intel cards, let me know (but you will need to work on the cooling).

Regards,
Patrick

KenE

New Member
Feb 14, 2012
34
0
0
Thanks Patrick,
What I'm thinking is that the simplest way to keep communications to the domain working is to pull the 1GbE NIC connection from my box and let all my traffic run through the server (Internet Connection Sharing). The layout would be Workstation <-10GbE-> Server <-1GbE-> switch & router of my company network; that way I don't have to fool with having the IT guys rewrite domain scripts and logins.
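
If I go that route, I'd give the direct link its own little private subnet so nothing on it touches the company network. Something like this, where the connection name, addresses, and share name are just examples:

  (server)      netsh interface ip set address "10G Link" static 10.10.10.1 255.255.255.0
  (workstation) netsh interface ip set address "10G Link" static 10.10.10.2 255.255.255.0
  (workstation) net use Z: \\10.10.10.1\share

No gateway on the fast link, so normal domain traffic still goes out the 1GbE side.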

Would that clear up the subnet stuff?

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
If it's just the two machines, you could also try finding a pair of used Intel (or rebranded Dell/Supermicro) 10GbE SFP+ cards and getting an SFP+ direct-attach (twinax) cable. If you're within 10 feet it should be just fine (the cables come in lengths up to 9 meters, though they are somewhat pricey).

cactus

Moderator
Jan 25, 2011
830
75
28
CA
Are the X520-T2 out of your price range? There have been a few for ~$300 on eBay of late. If you want to stick with RJ45, that is the direction I would go. I also agree with PigLover: look at the SFP+ cards. Twinax is more expensive than even Cat6A, but from what I have seen at work there are going to be more SFP+ surplus switches than RJ45 ones, and SFP+ runs at lower power and heat.

For the first-gen Intel cards, I think I might get some PCBs made to trick the fan-warning circuit. I have one of my 1st gens hooked up using a breadboard, a simple 555 timer circuit, and a low-speed 120mm fan blowing front to back, and the PHY heatsink ran cooler on the modified card than on the stock one.
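
(Presumably the trick is an astable 555 feeding the card's fan tach input, since a tach line is just a pulse train. The standard astable frequency is f = 1.44 / ((R1 + 2*R2) * C); with, say, R1 = R2 = 10k and C = 0.1uF that works out to roughly 480Hz, which reads like a healthy high-rpm fan. Those component values are just an illustration, not what was actually on the breadboard.)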

KenE

New Member
Feb 14, 2012
34
0
0
I don't need switch gear, simply because it's just me and the server at this time. Since this is at work I'm trying to keep things simple and quiet. Not really into modding boards at this stage of my life; too much other stuff to keep up with.

I'm thinking the SFP+ cards are out of the budget right now. We are watching cash flow, so I'm looking to do this for under $300 for everything.

Should I be able to interconnect my machines with these?
Fleabay Mellanox dual-port card
Can I use standard CX4 cables for this, or do I need to track down IB cables?

KenE

New Member
Feb 14, 2012
34
0
0
OK, I ordered two Mellanox cards (MHEA28-XTC) for $25 apiece and found a 15m Belkin IB cable for $79!!! It's way too long, but hey, it gives me options in the future.
All in for $146.
I'll update once I get the cards in.

KenE

New Member
Feb 14, 2012
34
0
0
OK, I've got it up and running, BUTTTTT it still seems to be limited to 1GbE speeds. I don't know what I'm doing wrong. Even with RAM disks on both sides I'm only getting max 12% usage of the 10GbE link. If anyone is thinking about doing this, make sure you flash the cards first; it was stupid slow until I flashed them. This is the email I sent to Mellanox; maybe someone here has had better luck configuring 10GbE in the Windows environment.
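
For anyone else flashing these, the usual route is Mellanox's MFT package; roughly the two steps below, where the device name comes from 'mst status' and the .bin is whatever firmware image Mellanox lists for your board (placeholders, not exact commands for every setup):

  mst status
  flint -d <device_from_mst_status> -i <firmware_image.bin> burn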

I am running a pair of InfiniHost III rev A cards (MHEA28-XTC). I flashed them to firmware 5.3 (they reported no version at all before I flashed). Neither shows the performance tabs in the drivers, but both seem to have the right values in the registry (W7 seems to be missing some registry values compared to Server 08 R2, though).
Both have the firewall turned off and ICS turned off.
Workstation - W7 Pro 64-bit
E3-1230, 8GB RAM, SATA III SSD
PCIe 8x slot wired as 4x
Drivers - OpenFabrics 3.0.0.3376
Running the subnet manager that came with the driver set.

Server - Server 2003 SP2 32-bit
Xeon 3050, 3GB RAM
3Ware RAID 5 (4 disk 500GB SATAs) on a 4x PCIe 1.1a link
Mellanox card in the 8x slot
Drivers - Mellanox 2.1.1.5750

I have the 15m Belkin IB cable between the two. Both machines still have their 1GbE enabled for internet traffic, but my workstation seems to pull all file data directly over the IB card, which is good.

So here are 3 questions and I'll be done:
1. Did I screw anything up?
2. Do I have the right (and current) drivers installed?
3. My workstation seems 'snappier' on lots of small files, but when I transfer a large file (like a 12GB aerial) it starts at 140MB/sec and then steadies at 110MB/sec, while CPU usage on the server jumps to 60-80% (the IB port shows only 8-10% usage). Am I hitting the limits of my old server's RAID array?

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
The PCIe slot in the server might be the issue; could it be maxing out?

Is the server's PCIe slot at least 8x electrical, not just 8x physical?
PCIe 1.x has half the bandwidth of 2.0 (roughly 250MB/s vs 500MB/s per lane, so even a 1.x 4x link tops out around 1GB/s raw).

If you are running RAM drives to test, you should be able to saturate the 10Gb connection.

The RAID5 array will probably only just be able to saturate a 1Gb connection

If you are getting 140MB/s, the connection is running above 1Gb/s. The approximate limit for a 1Gb connection is about 100MB/s including overheads (1Gb/s is 125MB/s raw).

The server's 3GB of RAM isn't helping either; you'll be lucky to have any RAM spare to make a decent RAM drive.
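
It can also be worth taking the disks and SMB out of the picture entirely with a raw TCP tester like iperf. A minimal sketch, where the address is just an example for whatever you've assigned the fast link:

  (server)      iperf -s
  (workstation) iperf -c 10.10.10.1 -P 4 -t 30

If four parallel streams (-P 4) run much faster than a single stream, the cap is per-connection (CPU or TCP window), not the link itself.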

KenE

New Member
Feb 14, 2012
34
0
0
It's 8x electrical. The Mellanox card is PCIe 1.0a compatible, so that's not the problem. Could it be the CPU? It's getting into the high 80s in usage (both cores).
On the RAM drive: the server is only acting as a file server (DFS replication to the home office), so most of the time I'm only using 1GB of RAM. I made a 512MB RAM drive (big enough that I could see what was going on). It's still strange to see the thing max out at 10-12.5% network usage; it's almost like it's capped in the drivers or something.

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Some things to try:

Enable flow control
Enable interrupt moderation
Enable jumbo frames, or even super jumbo frames if available, or set the MTU to a large value, e.g. 9KB (cuts per-packet overhead on larger files)
Make sure RX and TX checksum offload are enabled (stops the CPU from doing it)
Any other offloads should also be enabled

The above should offload as much from the CPU as possible; there are a couple of command-line checks below.
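
On the Windows 7 side, the global TCP offload state can be checked and toggled from an elevated command prompt (stock netsh; chimney only helps if the NIC and driver actually support it):

  netsh int tcp show global
  netsh int tcp set global chimney=enabled

On Server 2003 SP2 the equivalent, if I remember right, is:

  netsh int ip set chimney ENABLED

Jumbo frames are per-adapter: Device Manager -> NIC properties -> Advanced tab.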

KenE

New Member
Feb 14, 2012
34
0
0
Didn't work; broke the link completely. But I did learn some things:
1. Check the throughput of your RAID system first. (The best my 4-drive array of 500GB SATA 3.0Gb/s 7200rpm drives can do is 75MB/sec, and that's with a 3Ware card with 256MB of RAM and a BBU.)
2. Make sure there are drivers that are still supported. I attempted to use the OpenFabrics drivers (2.3) and every time they failed on install. I had to use the Mellanox 2.1.1 drivers; I think this is my bottleneck with Server 2003.
3. Don't cheap out; I should have just gotten a pair of Intel 10GbE cards.
4. I did learn that I can run dual NICs in my domain. Windows 7 and Server are smart enough to send all data through the 1GbE direct connect with jumbo frames while all other traffic runs through the switch.
5. I'm going to save the hardware, and when we get around to upgrading the file server I'll stick the Mellanox cards back in and try this again (with the 3.0 drivers).
Thanks for your help everyone; this has been fun, sort of...
Now the fun part: how to build a Windows array that can actually use 10GbE...
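
Rough numbers for that, assuming nothing else gets in the way: 10GbE is 1250MB/s raw, call it ~1GB/s usable after overhead. A single 7200rpm SATA drive sustains maybe 100-130MB/s sequential, so it would take on the order of 8-10 striped spindles (or a few SSDs), plus a controller and PCIe slot that can keep up, before the network becomes the bottleneck again.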