Thoughts on the C6100


Alfrik

New Member
May 4, 2013
So, new here and I've been spending the last week or so lurking and soaking up all I can on the C6100 chassis. It's completely thrown my home lab plans for a loop. As such I'm going to throw out some questions and see what sticks.

This lab will be a mix of VM/Hyper-V training and a testing facility for side projects, along with my "production" systems for home.

I'd originally intended to repurpose my wife's current dual-core Pentium (LGA775) box with a bunch of internal drives (and external ones as the need grew), running FreeNAS for bulk storage and VM images. The home shares would be served out using FreeNAS's SMB implementation, with access rights managed by my domain.

The VM hosts were going to be two or three E3-1220v2 "baby dragon" type white-box servers or Dell T110 II systems. I was happy with the plan and the build-out timeline I had set for myself. Then I came across the C6100, and things went south.

I'm sure it's still more expensive on power, but for approximately the price of a single Dell T110, and just a bit more than the whitebox I'd planned, I get four dual-CPU servers with a boatload of RAM in each. I immediately thought I could dedicate one node to storage, throwing in an HBA attached to an external JBOD array and running FreeNAS on it. One node would be a Hyper-V host, and the other two ESXi. I could virtualize my current pfSense Atom box and retire it, basically consolidating everything into the "one" box.

Then I read that IOPS and bandwidth would be pretty tight running a bunch of VMs on the three nodes. So - InfiniBand to the rescue, linking things that way, except FreeNAS doesn't support InfiniBand. OK, some cheapish 10GbE cards, right? Except that would use the single available PCIe slot on the FreeNAS node - so no HBA.

Some feedback and thoughts would be greatly appreciated. Also, my apologies if this actually belongs in another forum.

EDIT: A thought just came to me: are there any 10GbE cards that fit the mezzanine slot, or HBAs with external connectors that would fit as well? Do I just re-wire all the bays and keep the spinning rust and SSDs in the chassis?

EDIT EDIT: an implied question is, do I just stick with my original build plans?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
There is a Dell dual-port 10Gbit Ethernet mezzanine card available for the C6100.
Also, the Dell InfiniBand mezzanine card is Mellanox VPI, which means it can also operate as a 10Gbit Ethernet card (according to the docs - I have not tried it). Of course, it's still QSFP, so that doesn't help if you have a switch with BaseT ports.
And finally, all of the LSI external-port HBAs, such as the 9200-8e, 9205-8e, 9202-16e, and 9207-8e, work in the C6100.
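As a quick sanity check - just a minimal sketch, assuming the node runs Linux with pciutils installed (not anything stated above) - you can confirm an external LSI HBA is actually visible to the OS before cabling up a JBOD:

import subprocess

def find_lsi_hbas():
    """Return lspci output lines that look like LSI SAS controllers (e.g. SAS2008/SAS2308)."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines()
            if "LSI" in line or "SAS2008" in line or "SAS2308" in line]

if __name__ == "__main__":
    hbas = find_lsi_hbas()
    print("\n".join(hbas) if hbas else "No LSI SAS HBA detected")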
 

Alfrik

New Member
May 4, 2013
There is a Dell dual-port 10Gbit Ethernet mezzanine card available for the C6100.
Also, the Dell InfiniBand mezzanine card is Mellanox VPI, which means it can also operate as a 10Gbit Ethernet card (according to the docs - I have not tried it). Of course, it's still QSFP, so that doesn't help if you have a switch with BaseT ports.
And finally, all of the LSI external-port HBAs, such as the 9200-8e, 9205-8e, 9202-16e, and 9207-8e, work in the C6100.
OK, that's damn cool about the mezzanine 10GbE cards. I'll grab one of the C6100s next pay period and find some 10GbE cards and a switch as I build things out. Now to find one of those super-cheap ProCurve or PowerConnect 48-port 10GbE switches...
 

Alfrik

New Member
May 4, 2013
Yoikes, Shaggy, those 10GbE cards aren't exactly cheap on eBay, and I'd need four of them...
 

RimBlock

Active Member
Sep 18, 2011
Singapore
Yoikes, Shaggy, those 10GbE cards aren't exactly cheap on eBay, and I'd need four of them...
This is why a lot of people, myself included, are going the InfiniBand route. Cheaper second-hand parts and potentially much faster, but it's more for storage sharing (iSCSI, SRP, etc.) than for general network communication (MPI excluded, of course).
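If you do go InfiniBand, here's a rough sketch (assuming a Linux host with the Mellanox driver loaded - my assumption, not anything stated in this thread) for checking that the IB ports actually come up before layering SRP or iSCSI storage traffic on top:

from pathlib import Path

def ib_port_states():
    """Read the link state of every InfiniBand port exposed under /sys/class/infiniband."""
    states = {}
    for dev in Path("/sys/class/infiniband").glob("*"):
        for port in (dev / "ports").glob("*"):
            # The state file reads something like "4: ACTIVE" when the link is up.
            states[f"{dev.name} port {port.name}"] = (port / "state").read_text().strip()
    return states

if __name__ == "__main__":
    states = ib_port_states()
    if not states:
        print("No InfiniBand devices found")
    for name, state in states.items():
        print(name, "->", state)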
 

Alfrik

New Member
May 4, 2013
Yeah, but unless InfiniBand is supported by FreeNAS I'm kinda stuck - ZFS was one of the things I'd hoped to implement. Hmm...
 

PigLover

Moderator
Jan 26, 2011
Yeah, but unless InfiniBand is supported by FreeNAS I'm kinda stuck - ZFS was one of the things I'd hoped to implement. Hmm...
InfiniBand is well supported on current Linux distros, and ZFS on Linux is solid. It just doesn't have the pretty UI of FreeNAS.

You could also go with Solaris 11 instead of OpenIndiana (OI). Solaris has fixed the issue with expanders, and it supports napp-it.
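If you go the ZFS on Linux route, the workflow is basically the zpool/zfs command pair. A minimal sketch - pool name and disk paths below are placeholders, not anything from this thread - that builds a RAID-Z2 pool on the JBOD disks, carves out a dataset for VM images, and exports it over NFS for the ESXi nodes:

import subprocess

# Placeholder device paths - substitute the real /dev/disk/by-id entries for the JBOD drives.
DISKS = ["/dev/disk/by-id/ata-DISK0", "/dev/disk/by-id/ata-DISK1",
         "/dev/disk/by-id/ata-DISK2", "/dev/disk/by-id/ata-DISK3"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_vm_store():
    run(["zpool", "create", "tank", "raidz2"] + DISKS)    # double-parity pool, RAID-6 style
    run(["zfs", "create", "tank/vmstore"])                # dataset for VM images
    run(["zfs", "set", "compression=lz4", "tank/vmstore"])
    run(["zfs", "set", "sharenfs=on", "tank/vmstore"])    # NFS export the ESXi hosts can mount

if __name__ == "__main__":
    build_vm_store()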
 

Alfrik

New Member
May 4, 2013
It was my understanding that ZFS on Linux was experimental and ran in user space rather than at the kernel level. Wouldn't that take a hit on performance?

Maybe I need to learn me some OpenIndiana, which from my rough knowledge is an open-source version of Solaris, no?
 

Alfrik

New Member
May 4, 2013
InfiniBand is well supported on current Linux distros, and ZFS on Linux is solid. It just doesn't have the pretty UI of FreeNAS.

You could also go with Solaris 11 instead of OpenIndiana (OI). Solaris has fixed the issue with expanders, and it supports napp-it.
What are/were the issues with expanders?
 

PigLover

Moderator
Jan 26, 2011
It was my understanding that ZFS on Linux was experimental and ran in user space rather than at the kernel level. Wouldn't that take a hit on performance?

Maybe I need to learn me some OpenIndiana, which from my rough knowledge is an open-source version of Solaris, no?
Those are quite dated thoughts about ZoL. It's a native (kernel-mode) implementation now and quite ready. See here.
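A quick way to convince yourself it really is a kernel module (and not the old zfs-fuse userspace daemon) is to look at /proc/modules on a box with ZoL installed. A rough sketch:

def zfs_kernel_modules():
    """Return the ZFS-related kernel modules (zfs and its spl dependency) that are loaded."""
    with open("/proc/modules") as f:
        return [line.split()[0] for line in f if line.split()[0] in ("zfs", "spl")]

if __name__ == "__main__":
    mods = zfs_kernel_modules()
    print("ZFS kernel modules loaded:", ", ".join(mods) if mods else "none")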