So, new here — I've spent the last week or so lurking and soaking up all I can on the C6100 chassis, and it's completely thrown my home lab plans for a loop. As such, I'm going to throw out some questions and see what sticks.
This lab will be a mix of a VM/Hyper-V training and testing facility for side projects, along with my "production" systems for home.
I'd originally intended to repurpose my wife's current dual-core Pentium (LGA775) with a bunch of internal drives (and external ones as the need grew), running FreeNAS for bulk storage and VM images. The home shares would be shared out using FreeNAS's SMB implementation, with access rights managed by my domain.
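For anyone curious what I mean by domain-managed rights, this is roughly the kind of Samba share definition FreeNAS would generate under the hood — the pool path and group name here are just placeholders for my setup, not anything FreeNAS-specific:

```ini
; Sketch of an SMB share with access controlled by an AD group.
; "tank" and "HOMEDOM\Home Users" are made-up names for illustration.
[home]
    path = /mnt/tank/home
    read only = no
    browseable = yes
    ; Only members of this domain group get in; Samba resolves it
    ; through winbind once the box is joined to the domain.
    valid users = @"HOMEDOM\Home Users"
```

The point being that the NAS itself holds no local user list worth speaking of — group membership lives on the domain controller.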
The VM hosts were going to be 2-3 E3-1220v2 "baby dragon" type white box servers, or Dell T110 II systems. I was happy with the plan and the build-out timeline I had set myself. Then I came across the C6100, and things went south.
I'm sure it's still more expensive on power, but for the approximate price of a single Dell T110, and just a bit more than the white box I planned, I get 4 dual-CPU servers with a boatload of RAM in each. I immediately thought I could dedicate one node to storage — throw in an HBA, attach it to an external JBOD array, and run FreeNAS on that. One node would be a Hyper-V host, and the other two ESXi. I could virtualize my current pfSense Atom box and retire it, and basically consolidate everything into the "one" box.
Then I read that IOPS and bandwidth would be pretty tight running a bunch of VMs on the three nodes. So — InfiniBand to the rescue, linking things that way, except FreeNAS doesn't support InfiniBand. OK, some cheapish 10GbE cards, right? Except that would use my single available PCIe slot on the FreeNAS node — so no HBA.
Some feedback and thoughts would be greatly appreciated. Also, my apologies if this actually belongs in another forum.
EDIT: a thought just came to me — are there any 10GbE cards that fit the mezzanine slot, or HBAs with external connectors that fit as well? Or do I just re-wire all the bays and keep the spinning rust and SSDs in the chassis?
EDIT EDIT: the implied question is, do I just stick with my original build plans?