I'm looking for some advice on what I should do. I currently have one box (check my sig). It's set up as an "all-in-one-box" solution, which you'd never find in a datacenter, but it's great for a home lab for obvious reasons. It's been running solid for a while; I'm just bored and ready to try something new. I also want to try the more advanced features of vSphere using multiple ESX hosts.
My requirements:
- I want to learn vSphere's more advanced features, so I will need 2+ bare-metal ESX hosts
- Shared storage. I will be using Linux + ZFS on Linux (aka "ZoL") now that it is mature enough
- For general storage, I currently have 6x 2TB in RAIDZ2 (ZFS's RAID6 equivalent); I don't think I'll need any more than that
- For fast storage, I currently have 4x 10K 146GB SAS drives in a striped ZFS mirror; I don't think I'll need any more than that (rough pool sketch after this list)
- NO hardware RAID controllers. All data, including VM guests, will reside on ZFS storage
- Rackmountable chassis only
- Cool, quiet, and power-efficient
- If I can't have perfectly cool, quiet, and power-efficient, then I want to come as close to that goal as I possibly can
- Capacity for comfortably running 10-20 guests at very low load (rough math after this list)
- At least 16GB per ESX host
- Nice cable management, no hacked up parts, has to look sexy in my new 25U cabinet
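To sanity-check the RAM figure: assuming a low-load guest averages somewhere around 1-1.5GB, 20 guests works out to roughly 20-30GB, so two 16GB hosts are right at the line and 32GB each leaves comfortable headroom (plus room to evacuate one host for maintenance). For reference, this is roughly how I'd recreate the two existing pools under ZoL; the pool and device names are placeholders, and in practice I'd use /dev/disk/by-id paths:

    # Nearline pool: 6x 2TB SATA in RAIDZ2 (~8TB usable)
    zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf
    # Fast pool: 4x 146GB 10K SAS as striped mirrors
    zpool create -o ashift=12 fast mirror sdg sdh mirror sdi sdj
    # Dataset for VMs, exported over NFS (ESX wants no_root_squash;
    # the subnet is made up)
    zfs create tank/vmstore
    zfs set sharenfs='rw=@10.0.0.0/24,no_root_squash' tank/vmstore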
I can think of four possible ways to go about this:
#1 - Two ESX whiteboxes plus shared storage over 10GbE (3 physical boxes)
Build two physical diskless ESX hosts, each with a 10GbE NIC, and connect them to a storage box. Boot them from USB disks
Repurpose the box in my signature as a file server only; install a dual-port 10GbE NIC and connect it directly to each ESX host (datastore mount sketch below)
Each box will be based on an E3-1230 Xeon, a Supermicro X9SCM, and 16 or 32GB RAM
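Each direct 10GbE link would get its own small subnet, and each host would mount the NFS export as a datastore. A minimal sketch of the ESXi side, with made-up IPs and assuming the vmkernel port vmk1 already exists on the right vSwitch:

    # Static IP on the vmkernel port sitting on the direct 10GbE link
    esxcli network ip interface ipv4 set -i vmk1 -I 10.0.1.2 -N 255.255.255.0 -t static
    # Mount the fileserver's NFS export as a datastore
    esxcli storage nfs add -H 10.0.1.1 -s /tank/vmstore -v nfs-vmstore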
#2 - Dell C6100 with 3 ESX hosts and 1 fileserver for shared storage among them
2U form factor vs. at least 4-6U, maybe even 12U with the above setup
Cheaper than the above setup
I don't mind the older-generation Intel CPU tech
I'm definitely going to need to do PigLover's fan mod
Each node will get a QDR InfiniBand mezzanine card, and I will buy a DDR switch (cheaper than QDR) and the cables to network it all together
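One gotcha with cheap IB gear: many of the inexpensive DDR switches are unmanaged, with no subnet manager, so I'd plan on running opensm on the fileserver node and using IPoIB for the NFS/iSCSI traffic. A rough sketch of the Linux side (the address and interface name are assumptions):

    # Load IPoIB and start a subnet manager
    # (opensm is only needed if the switch is unmanaged)
    modprobe ib_ipoib
    opensm --daemon
    # Address the IPoIB interface for storage traffic
    ip addr add 10.0.2.1/24 dev ib0
    ip link set ib0 up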
#3 - One single Dell C6100 to do it all! Two ESX hosts and two fileservers for shared storage (1 fileserver for slow "nearline" storage, and 1 fileserver for fast SAS storage)
Because the maximum number of disk bays you can assign to a node is 6, and I need to make use of all 12, I would do the following:
Node #1 - Fileserver #1 - Connect 6x 3.5" 2TB SATA disks, create a RAIDZ2 storage pool, and share via NFS/iSCSI as needed
Node #2 - Fileserver #2 - Connect 6x 3.5" 146GB SAS disks, create a pool of three striped ZFS mirror pairs, and share via NFS/iSCSI as needed to the ESX hosts and database servers (iSCSI sketch below this list)
Node #3- ESX host
Node #4- ESX host
Each node will get a QDR InfiniBand mezzanine card, and I will buy a DDR switch (cheaper than QDR) and the cables to network it all together
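NFS covers the ESX side from both fileserver nodes; for the database servers I'd carve zvols out of the fast pool and export them over iSCSI with LIO/targetcli. A minimal sketch with placeholder names and sizes (portal and ACL setup omitted):

    # On node #2: a zvol to serve as an iSCSI LUN
    zfs create -V 200G fast/db-lun0
    # Export it via LIO: block backstore, then a target and LUN
    targetcli /backstores/block create db-lun0 /dev/zvol/fast/db-lun0
    targetcli /iscsi create iqn.2013-05.lab.local:fast-db
    targetcli /iscsi/iqn.2013-05.lab.local:fast-db/tpg1/luns create /backstores/block/db-lun0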
#4 - Diskless Dell C6100 with 3-4 ESX hosts and a separate physical fileserver (repurpose the box in my sig)
C6100 nodes #1-4 will be ESX hosts, or maybe I'll leave one as a pfSense box
C6100 nodes will boot off USB drives
The box in my sig will be repurposed as a fileserver and provide SATA/SAS storage via NFS/iSCSI
Each node will get a QDR InfiniBand mezzanine card, and I will buy a DDR switch (cheaper than QDR) and the cables to network it all together
* I'd also need a QDR InfiniBand PCIe card for the separate fileserver
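Worth noting that the QDR mezz cards will simply negotiate down to DDR (20Gb/s) through the DDR switch, which is still far more than this lab needs. After cabling everything up, I'd sanity-check the fabric from the fileserver with the standard infiniband-diags tools:

    ibstat        # local HCA state and rate (port should show Active)
    ibhosts       # hosts visible on the fabric
    iblinkinfo    # per-port link width/speed through the switch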
What would you do if you were me?