Playing with 10GigE


PigLover

Moderator
A few months back, I decided it would be fun to start playing with 10GigE. We are probably still 12-24 months away from it becoming generally affordable, probably longer until it gets widespread in SOHO setups, and longer still until we see it at home...maybe sooner in the homes of people who bother to read here. :)

Like all good projects, this one has gotten massively carried away. Thought I'd start a thread to document and discuss the journey.

Background

I run a "business" taking video and stills of youth sports - mostly soccer - and producing videos. Mostly short memory videos for players or teams, some training/education video for coaches, and individual highlight videos for HS players looking to get noticed by college coaches. "Business" is a rather loose term since I do it mostly because I love it and nobody has ever paid me enough money to be viable. In fact, most of my work is given away... The real basis for this "business" is to feed my love of tinkering with the two non-human loves of my life, computers and high-end cameras, in a way that won't disgust my wife and get my funds cut off. Can anybody else relate to that? I knew you could! Along the way both my wife and my youngest daughter have become accomplices in this by becoming quite accomplished photographers and videographers.

Before anyone chimes in with "you don't need 10GigE" - I already know that! But it is both interesting and within my means, so why not?

So what am I trying to achieve? First, I want to get all my storage away from my workspace. It makes noise, generates heat, requires fans and takes up space. That's generally no problem, but for the video work I need high-performance storage or the editing/rendering process is just painful. I've experimented with several SSD/raid-0 configurations, but I can't make them large enough to just store all my active projects on - and copying projects to/from the storage server is unacceptably slow - projects average about 150-300GB of source material. I either need a local, working raid array or a way to move projects between the storage server and the SSDs quickly. Even better would be a network fast enough to work with the files directly on the storage server (more on that later, but I need ~250-400MB/s reads from disk to get satisfactory performance).

Here's where it gets interesting. A few weeks ago, I was describing all this to a colleague at work, whining a bit that point-to-point 10GigE was possible but would be something of a PITA. 10GigE switches - especially 10Gbase-T - are still ridiculously expensive. The next day he told me "your switch is on its way"... So, in the next couple of days I will be the proud owner of a Juniper EX-2500FB switch, a collection of 10G fiber SFP+ plug-ins, a few 10G direct-attach copper cables, and two Intel X520-2 NICs.

So here we go...
 

PigLover

Moderator
Here are the four systems that will be part of this experiment:

BigPig, the video workstation:

Chassis: Chieftec BRAVO BA-01B-B-B full tower (the "big Chieftec")
MB: SuperMicro X8DAH+/-
CPU: 2x Xeon X5550
HSF: I've actually forgotten - some random heatpipe tower cooler.
Memory: 24GB ECC (6x Kingston KVR1333D3D4R9SK3 4GB Registered ECC DIMM)
GPU: Sapphire HD 4350 (no fan, low power, perfect!)
Raid: Areca 1261ML SATA Raid w/4GB cache
System Disk: 4x Vertex 60GB SSD raid-0 on Areca controller
Raid array: 8x Hitachi 7k2000 2TB, Raid-6 on Areca controller
Photo array: 4x Seagate "LP" 2TB drives, raid 5 on Intel ICH10
Other disk: Seagate 7200.11 1.5TB for emergency boot disk and Ghost images of boot array
Other: a 4x 2.5" caddy is installed and pre-wired to the Areca controller for a future raid array.
PSU: Corsair 1000HX
OS: Windows 7 Ultimate x64

BigPig has been a work in progress for over two years. I did quite a lot of work to get this PC reasonably quiet and, to the best of my limited abilities, it worked out OK. But you are never going to "silence" a PC with two relatively hot CPUs, 13 spinning disks, 11 fans, etc. I'm reasonably pleased with it as it looks good, doesn't sound like it has jet engines in it, and handles my editing/rendering jobs with ease.

BigRedBarn, the storage server:

Chassis: Norco 4220 with home-made Lexan 120mm fan bracket
MB: Supermicro X8SIA-f
CPU: Xeon X3460
HSF: Scythe Shuriken
Memory: 32 GB ECC (4x Kingston KVR1333D3Q8R9S/8G 8GB Registered ECC DIMM)
GPU: Onboard IPMI
HBA: IBM M1015 (LSI 9240-8i in OEM clothing)
Expander: HP SAS expander
System Disk: 2x Seagate 2.5" 500GB 7200 RPM laptop drives in ZFS mirror (raid-1)
Storage: 20x Hitachi 5k3000 2TB ZFS pool in 2x10 drive RaidZ2 vdevs
PSU: Corsair 650HX
OS: Solaris 11 express

The BigRedBarn is an exercise in massive overkill. But it's been fun and I've learned a lot with it. So I'd call it a success even if my wife does call it a vacation she didn't get to have. This system started out as a simple replacement for an existing WHS machine I was using to back everything up. The initial plan was just to run WHSv2 (Vail), but over the last 6 months I've tried just about every possible combination of hardware raid/software raid/non-raid storage under different OSes. I've tried about 5 different raid cards and/or HBAs. I've played with WHS v2, Linux, FreeBSD, Server2008 & Solaris. I've tried them bare-metal as well as virtualized with both Hyper-V and ESXi. After all this and dealing with the various performance and reliability problems of each configuration, I've landed on the idea that a single-purpose NAS/SAN on bare-metal Solaris 11 is the "final" configuration here. Final, of course, means nothing more than I'll leave it this way for at least 6 months...
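For anyone curious, the pool layout above translates to a zpool create along these lines (just a sketch - the pool name and device IDs here are illustrative, not my actual ones):

# two 10-drive raidz2 vdevs in one pool
zpool create tank \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
  raidz2 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0 c2t16d0 c2t17d0 c2t18d0 c2t19d0

# sanity-check the layout
zpool status tank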

FarmHouse, the web/other-stuff server:

Chassis: Currently Ghetto...I won't even describe it. Still shopping for a 2u or 4u case
MB: Supermicro X8SIA-f
CPU: Xeon L3426 low-power
HSF: Scythe Shuriken
Memory: 24 GB ECC (6x Crucial CT2KIT51272BB1339 4GB Registered ECC DIMM)
GPU: Onboard IPMI
HBA: None right now
System Disk: Seagate 2.5" 500GB 7200 RPM laptop drive
PSU: MiniBox PicoPSU 150
NIC: Intel "ET" quad GigE NIC
OS: ESXi 4.1 hypervisor hosting Server2008 email & web server, fedora Linux, and a couple of other VMs

The FarmHouse hosts my simple website, a simple email server (hMailserver), and generally serves as a playground. Until recently, I had my storage server running under ESXi and all of these things running as VMs, but I had some recurring problems that led me to decide I needed to separate the NAS/SAN from the VM host (example: a cold-start after a power failure always needed to be "fixed" by hand because the order in which things started was unpredictable). I had the hardware lying around to separate them, so I did...

BTW, I also tried running Trixbox in a VM for my phones. Learned a lesson. Don't do it. Yes, you can do it. It works on both Hyper-V and ESXi. But people expect phones to work...and the VM host was too much of a playground. If you need a phone system, get a dedicated machine for it. I ended up running it on a SheevaPlug using PlugPBX.

I also have a ton of disks still lying around. My old WHS machine was stuffed with Seagate 7200.11 drives - troublesome with hardware raid, though they don't seem to be a problem in software raid or non-raid use. I am considering building a safety backup storage array on FarmHouse.

LittlePig, the "day to day" PC:

Chassis: Antec Sonata proto (the Sonata w/out a PSU)
MB: Gigabyte GA-H57M-USB3
CPU: Intel i5-661
HSF: Venomous-X, ducted to rear case fan
Memory: 2x4GB DIMMS (I've forgotten...)
GPU: On board graphics only
System Disk: 2x Vertex 60GB SSD raid-0 on ICH10R
User disk: 2x Seagate 7200.11 1.5TB raid-1 on ICH10R
Other: Intel "PT" GigE NIC. Had it lying around...
PSU: Corsair 520hx. Overkill for this, but it was on sale.
OS: Windows 7 Ultimate x64

So what do you do when you love tinkering with your computers but, when you have it all torn down, your wife complains that she can't read her e-mail or update her Facebook? Or the kids complain that the shared printer attached to your system isn't accessible? A lesser man would just load up an old laptop with a new OS for her to use and have the complaint go away. But it's so much more fun to just build a new system, isn't it? So right next to BigPig, sharing monitors, sits LittlePig. Designed to be cheap, stable, near-silent, low power & always on. I think I hit the mark pretty well.
 

PigLover

Moderator
The Network, before and after.

When I moved into my current home the first two things I did were install ceiling fans and wire the whole thing for TV & LAN. Every bedroom (except the master, amusingly), the entertainment center, the kitchen and the living room were wired with 2x Cat6 and 1x RG6 drops. "My Desk" was wired with 4x Cat6. The Cat6 was taken to a patch panel inside an old kitchen cabinet that a previous owner mounted in the corner of the garage. In that same cabinet I put my cable modem, a MikroTik RB450G router and an HP 1810-24G switch. It now also has an old D-Link 16-port GigE switch I use for a "management" LAN and the SheevaPlug running my phones. All hidden in my amateur MDF "closet".

My home network is something I've worked to get reasonably right. I am not a "network guy". I know some will complain about the "neatness" seen in the pictures. But it is what it is...highly functional, very flexible, and most of all well documented (every cable is labeled and a key is stored in the closet and on my PC).

Closed, it just looks like a POS used kitchen cabinet hung in the corner of a garage...


Open, and it's my little MDF. A bit of a mess, really, but very functional. Notice the little white box next to the cable modem & router. That's the SheevaPlug running my PBX.


After

So after I confirmed that I really was getting the switch, I started getting ready. I got a keystone panel for my "closet", a bunch of fiber "LC" keystone inserts, and found a source for the fiber cables. I wired 4 pairs of fiber patch cords to a new keystone panel behind my desk (where BigPig and LittlePig both live). Luckily, the desk is on the same floor and just through one wall from my little MDF. Hard to see, but I've also added two SFPs to the HP switch and run fiber patches to uplink to the new Juniper switch. I also moved everything on my "management VLAN" onto the unmanaged D-Link switch to free some ports on the HP. The black duct running out to the right has 6 pairs of fiber jumpers running over to my server's temporary home, which also happens to be where the new switch will have to go until I get a proper rack.


Here are the Cat6 runs into the house, including the new fiber cables snugly inside a split-seam duct. They are basically running into the furnace closet. The other Cat6 running off to the left are shown in the next photo.


The rest of the cables run up into the 2nd story attic via the same chase the builder used for the 2nd zone furnace upstairs:


My desk is on the wall just behind the furnace cabinet. Basically, I ran into the cabinet, down to the floor and to the back. There are keystone blocks just on the other side behind my desk. The two "blue" cat6 were added later - so now I have 6xCat6 and 4x fiber pairs. That should be enough to last me forever...right? And yes - when I am sure everything is stable I plan to fill these holes with fire-blocking foam.
 

Patrick

Administrator
Yea this is exciting. One other idea for you or maybe Nitrobass24 is even showing how to setup things like P2P 10GbE networking. Not everyone has access to a Juniper switch!

Keep up the good work.
 

PigLover

Moderator
Yea this is exciting. One other idea for you or maybe Nitrobass24 is even showing how to setup things like P2P 10GbE networking. Not everyone has access to a Juniper switch!

Keep up the good work.
Thanks Patrick. Let me get done with this one first and I'll see if I have any energy left to write up the point-to-point version.

BTW, I'll give a preliminary conclusion based on what I've done so far: wait for 10Gbase-T switches to become reasonable. I haven't even got the switch yet and I am already convinced that running fiber like this in a SOHO or home situation is just not practical. It's expensive - I've spent over $200 just on fiber cable and I haven't even lit up my first link. Cat6 is so much easier to work with.

One other thing that kind of ticked me off: I love the HP switch. But HP did something that ought to at least be discouraged - they locked the SFP ports so that only HP-branded SFPs will work in them, and then they charge 4-5x what a comparable optic costs from other vendors. SFP is supposed to be a completely open standard. Imagine if somebody figured out how to make only their own Cat5 cables work with their gear, or something like that. The good news is that the Chinese manufacturers are very, very smart and there are sources for "HP compliant" SFPs at reasonable prices. HP would probably call them counterfeit...but since what HP did was despicable I really don't care.

Now I need to see if anybody was home to sign for that Juniper switch today!
 

PigLover

Moderator
Hmmm. A bit of a shipping issue has come up and no switch until next week.

Maybe I'll find some time this weekend to write something up on point-to-point 10GbE as requested by Patrick! No promises...
 

PigLover

Moderator
Sorry for starting a project thread and then going quiet.

Bit of an update. The switch that was being sent got changed...into a monster. I can't really try it yet because I need to get the right 1GbE modules to connect it to the rest of my network. I have a few 10GbE SFP+ SRs and some 3m SFP+ DAC cables, but no 1GbE optical or copper. Actually, I'm not sure I want to keep it. Tested it with a Kill-A-Watt and this beast idles at 250W (ouch!) and it's LOUD! But hey - 48 ports of 1/10GbE plus four 40GbE uplink ports (yes - really - four 40GbE ports - bet I won't find a NIC for that any time soon!). Guess that much horsepower needs to be fed well and cooled.

Should have the parts to tie it all together in a week or so...


 

PigLover

Moderator
Well, that Juniper switch worked perfectly. Except that the fan noise was unbearable. And the power draw was unsustainable. My power company has smart meters, and when I checked online my total daily power usage had jumped by 20%. Ouch. So as wonderful as it was to have access to a modern enterprise-class top-of-rack 10GbE switch - back it goes. My source is looking for something more reasonable for me.

For now, I've got BigPig (the main workstation) connected back-to-back with BigRedBarn (the storage server) and have been doing some benchmarking. It's interesting stuff, actually. Learned quite a lot...
 

PigLover

Moderator
How to configure back-to-back 10GbE

It's not really specific to 10GbE - this applies to any back-to-back Ethernet/IP connection. It's incredibly simple!

Simply connect the machines' NICs together...

First - install the NICs in each machine and connect them directly with a cable. For copper connections (RJ45), assuming they are 1GbE or higher, just use any cable. You do not need a specialized "crossover" cable because the 1000base-T and 10Gbase-T specifications require auto-MDI/MDI-X ("autosense"), so the NICs adapt automatically.

In my case, I am using a fiber connection. This is easy too - the LC-connector is designed so that the transmit from one end will plug into the receive on the other, etc.

Confirm that the two endpoints can see each other. Assuming at least one end is a windows machine, open the device manager, select the right NIC, and open its properties sheet. Many NIC drivers include a "link speed" page that will show what speed the links are connected at.
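If you prefer the command line, a quick way to sanity-check the negotiated speed is via WMI (just one way to look at it; the Speed column is in bits per second):

rem list adapters and their reported link speed
wmic nic get Name,Speed,NetEnabled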



Configure the IP information

Once you have the NIC connected, you need to give it an address. Because you are back-to-back between two machines there is no DHCP server to offer you an address to use. You have to set it up yourself using "static" assignments.

First, you need to select what address you are going to use for each machine. The addresses used for this link need to be in a unique "subnet", different from anything else used on your PCs. Normally it's good to have it different from anything used anywhere else on the Internet too. I won't go into a whole discussion of IP subnets here - it would be too long. For most home users with a normal cable/DSL/FIOS connection, you will be using addresses that look like 192.168.1.xxx, with your router at 192.168.1.1. Just choose a new "subnet" that starts with 192.168 and a different third number for your back-to-back machines, and assign the machines addresses in this range. I use subnet 192.168.100 for my machines, with addresses 192.168.100.2 and 192.168.100.3 (most network admins reserve address ".1" for the subnet's router - in this case there isn't one, but by numbering them this way I won't have to change anything when I do have one after getting a new switch).

To set the static IP address, open the "Network and Sharing Center" from the right-hand side of the taskbar (or from the Control Panel) and select the new adapter's "Local Area Connection" entry. Click "Properties" from the lower-left part of the box. Then double-click on "Internet Protocol Version 4". Click the button for "Use the following IP address" and fill in the IP address you selected for this machine (for me, 192.168.100.3). In the "subnet mask" field enter "255.255.255.0" and for the "default gateway" fill in the address you assigned to the machine on the other end of the link (for me, 192.168.100.2).
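The same thing can be done from an elevated command prompt if you prefer. The connection name "Local Area Connection 2" below is a placeholder - use whatever name Windows gave your 10GbE adapter:

rem static address, netmask and gateway on the 10GbE connection
netsh interface ipv4 set address name="Local Area Connection 2" static 192.168.100.3 255.255.255.0 192.168.100.2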





Repeat this on the machine at the other end, swapping the IP addresses, and you're all done!

Configuring a static IP address on Solaris

For me, the other end of the connection was a Solaris 11 Express server. The interface name on Solaris for an Intel-based 10GbE NIC is "ixgben", where "n" is the unit number. For me, it is "ixgbe0". To do this on Solaris, open a command tool as "root" and type the following commands:

ifconfig ixgbe0 plumb
ifconfig ixgbe0 192.168.100.2 netmask 255.255.255.0 up
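Note that ifconfig settings like these don't survive a reboot. If I remember right, the traditional way to make them stick (assuming the default network/physical profile rather than NWAM/auto) is a hostname file for the interface - treat this as a sketch:

# persist the address across reboots
echo "192.168.100.2 netmask 255.255.255.0 up" > /etc/hostname.ixgbe0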


Open up the Windows Firewall

I'll assume you are using Windows Vista, Windows 7 or Server 2008 on at least one end of the link. If you are, there is one more step - you have to identify your new network connection as a "Home" network. This gives you the most permissive set of firewall rules for this new connection. To do this: open the "Network and Sharing Center" again and click the network type shown under your new network. Then select "Home Network".
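If you'd rather not change the network location (or it won't take), another option is to just punch a narrow hole for ping on that subnet from an elevated prompt. A hedged example, scoped to the back-to-back addresses:

rem allow inbound ICMPv4 echo requests from the 10GbE link only
netsh advfirewall firewall add rule name="Ping on 10GbE link" dir=in action=allow protocol=icmpv4:8,any remoteip=192.168.100.0/24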




Give it a test!

Once you are all done, make sure the two computers can communicate over the new link. In Windows, open a command prompt and "ping" the address of the other machine:
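Beyond a basic ping, it's also worth checking raw TCP throughput before blaming any file-sharing protocol. Something like iperf works well for this (assuming it's installed on both machines; the flags are just a reasonable starting point):

# on the server end
iperf -s

# on the Windows end - four parallel streams for 30 seconds
iperf -c 192.168.100.2 -P 4 -t 30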



A few other adjustments

I also did a few more things (a rough sketch of how follows the list). On both ends, I:

- Enabled jumbo frames at 9014 bytes
- Made sure that "flow control" was set the same way, on or off, on both ends
- Increased both the transmit and receive buffers on both ends
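On the Windows end all three of these live in Device Manager, under the NIC's Properties > Advanced tab (Jumbo Packet, Flow Control, Receive/Transmit Buffers). On the Solaris end, the link properties can be set with dladm - roughly like this, though the link has to be unplumbed first and older ixgbe drivers may want the MTU set in /kernel/drv/ixgbe.conf instead, so treat it as a sketch:

# take the interface down so link properties can be changed
ifconfig ixgbe0 unplumb

# 9000-byte MTU and symmetric flow control
dladm set-linkprop -p mtu=9000 ixgbe0
dladm set-linkprop -p flowctrl=bi ixgbe0

# bring it back up with the static address
ifconfig ixgbe0 plumb 192.168.100.2 netmask 255.255.255.0 up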
 

PigLover

Moderator
So how did it do?

After getting things all set up, I tested performance to a share on the big ZFS array on the Solaris server using Samba/CIFS. Note that this pool benchmarks locally on the server (using Bonnie++) at over 1.2GB/s for both reads and writes, so it should be able to easily saturate the 10GbE link...
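For reference, the local numbers came from a Bonnie++ run along these lines (directory and size are illustrative - the size should be at least twice RAM so the ARC doesn't flatter the results):

# sequential throughput test against the pool, skipping the small-file tests
bonnie++ -d /tank/bench -s 65536 -n 0 -u root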



Hmmm, not so good...

Read speeds were no better than with 1GbE. Write speeds were a bit better, but this was a lot of work to get a 50% bump. Not exactly what I was looking for...
 

PigLover

Moderator
Let's try it again...

Thinking there might be something wrong with the disk pool, I set up a ramdisk on the Solaris machine. Benched it locally - amazingly fast (over 10GB/s reads and writes - no big surprise). Shared it over CIFS and we get this:



Hmmm. Same general speeds, but actually a little worse than the disk-based share. Again - not exactly what I was expecting.

If anybody knows how to tune Samba/CIFS shares on SE11 I'd love to hear it...
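For anyone who wants to repeat the ramdisk test, one way to do it on Solaris is roughly this (name and size are illustrative, and it may not be exactly how I built mine):

# carve out a 4GB ramdisk device and drop a throwaway ZFS pool on it
ramdiskadm -a rdisk0 4g
zpool create ramtank /dev/ramdisk/rdisk0

# share it over the kernel CIFS server
zfs set sharesmb=on ramtank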
 

PigLover

Moderator
So let's give NFS a try...

Not at all satisfied with the performance of CIFS/Samba, I decided to give NFS a try. So I went back to the server, enabled NFS on all my shares, and enabled the "Client for NFS" service on BigPig.
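For completeness, the setup boils down to something like this (dataset name and drive letter are illustrative):

# Solaris end: export the dataset over NFS
zfs set sharenfs=on tank/video

# Windows end (after adding the "Services for NFS" feature): map the export to a drive letter
mount -o nolock \\192.168.100.2\tank\video Z: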

Here's the results:

[image coming...I guess I didn't save this one, gotta re-test]

Better, I guess, but still a long, long way from what I was hoping for.

Hmmm.
 

PigLover

Moderator
OK. Let's try iSCSI

Getting desperate now...really looking for breakthrough performance with these expensive, high-speed NICs. So I set up iSCSI using COMSTAR on the SE11 server. COMSTAR lets you use a "volume" or a "file" as the backing store for the iSCSI target. I tried both, but the "volume" option is far, far faster. It has some downsides (like you can take ZFS 'snaps' of a file-based iSCSI target, but not a volume). But I'm looking for speed, so this is what we'll go with here.
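The COMSTAR side boils down to a handful of commands - a sketch, assuming the storage-server packages are installed, with an illustrative zvol name and size:

# create a 500GB zvol to back the LUN
zfs create -V 500g tank/bigpig-lun0

# enable the STMF framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# register the zvol as a logical unit, then make it visible to initiators
sbdadm create-lu /dev/zvol/rdsk/tank/bigpig-lun0
stmfadm add-view <GUID-printed-by-sbdadm>

# create the iSCSI target itself
itadm create-target

On the Windows side, the built-in iSCSI Initiator just needs the server's address as a target portal; once connected, the LUN shows up in Disk Management as a blank disk to initialize and format.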

Here's how it looks:



Now that's what I'm talking about! This is the kind of performance I wanted to see from this experiment in 10Gbe. With some tuning/testing and network optimization I'm sure I could get this consistently >800MB/s for large reads/writes.

The only problem is that it's using iSCSI. iSCSI is a block-level protocol that lets the client build its own filesystem on the target. It's not like using a file share; it's more like using a disk drive that happens to be remoted over the network. This means I can't easily share this filesystem with anything else.

Still not exactly what I wanted...
 

PigLover

Moderator
A commercial NFS client...

After digging around some more, there appear to be a few commercial NFS clients out there. The claim is that MS's built-in NFS client software is very poorly optimized for high-speed networks and a "good" client can work wonders.

The one that appears to be most highly recommended is the old "Hummingbird" client, which is now owned by a company called OpenText and sold as "NFS Solo". It's not cheap ($245/machine) but it might be worth it. I picked up a 30-day evaluation and...

Here it is on a real (disk-based) share:


And since I had the ramdisk already set up:


Since the ramdisk is effectively the same speed as the "real" disk (even a bit slower), I'll presume that the NFS client is about as optimized as it's going to get. From here I'll have to work a bit on a few more network optimizations and maybe some server-side optimizations too.

But for now it looks like I've found my solution! Now on to setting up the working environment for my video business and comparing 'real' results.