vSphere ESXi – AIO vLAB in a 2-Node Cluster - Advice?

dreamkass

Member
Aug 14, 2012
Build’s Name: vSphere ESXi – AIO vLAB in a 2 Nodes Cluster (Nested ESXi, Shared Storage: ZFS, CacheCade 2.0 Pro or other)
Operating System/ Storage Platform: ESXi 5.0U1 and NAS/SAN (using some sort of VSA)
CPU: 2 x Intel Xeon E5-2620 6-core 2.0GHz
Motherboard: Supermicro X9DRH-7TF [SAS2 from LSI 2208 (1GB cache) and Intel® X540 Dual port 10GBase-T]
Chassis: Supermicro 836BE26-R920B
Drives: 8x Hitachi 7K2000 2TB, 2x SanDisk Extreme 240GB 2.5" SSDs (for SSD cache: ZIL/L2ARC or CacheCade)
RAM: 64GB - 4 x Kingston 16GB 1333MHz DDR3L ECC Reg CL9 DIMM DR x4 1.35V KVR13LR9D4/16I
Add-in Cards: Intel i350 Quad
Power Supply: Redundant 920w from the Supermicro chassis
Other Bits:
  • Tripplite 25U 4-Post SmartRack Open Frame Rack or other
  • UPS (to be determined)
  • 1Gb Switch with 10GbE (CX4 or SFP+)


Usage Profile: Personal Virtual LAB - vLAB, running nested Hypervisor ESXi / Hyper-V / Xen and native VMs

I work in IT consulting, VMware virtualization and Microsoft infrastructure (AD, Exchange, SCCM and more), and since I work from home when I'm not at a client, I want more horsepower for my lab than my 2-year-old workstation (i7 920, 12GB RAM, W7 and VMware Workstation).

I would also run basic network services for my home, media sharing and personal backup.

Basic Idea:

Run 2 identical vLAB nodes, each with a VSA (Virtual Storage Appliance) VM attached to the onboard LSI 2208 SAS chip using VMDirectPath.

Each VSA will replicate to the other VSA on the other node, using block replication or software replication (e.g. Veeam).
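Before committing to the pass-through design, one sanity check is whether the onboard 2208 actually shows up as VMDirectPath-capable on each host. A minimal pyVmomi sketch along these lines could list the candidates (host name and credentials below are placeholders, not part of this build):

```python
#!/usr/bin/env python3
# Rough sketch, assuming pyVmomi: list which PCI devices an ESXi host reports
# as VMDirectPath (passthrough) capable, e.g. to confirm the onboard LSI 2208
# is eligible before handing it to the VSA VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert checks
si = SmartConnect(host="esxi-node1.lab.local",  # placeholder host/credentials
                  user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Map PCI ids to human-readable device names.
        names = {d.id: d.deviceName for d in host.hardware.pciDevice}
        print(host.name)
        for info in host.config.pciPassthruInfo:
            if info.passthruCapable:
                print("  %-14s %s (enabled=%s)" %
                      (info.id, names.get(info.id, "?"), info.passthruEnabled))
finally:
    Disconnect(si)
```

If the 2208 doesn't appear as passthrough-capable, the whole Option 1 / Option 2 split changes, so it's worth checking early.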

I have 2 x 10GBase-T (RJ45) ports onboard on each node:
• They might be connected directly (back-to-back) as a backend network for storage, vMotion and FT (see the VMkernel sketch after these interconnect options)
• Or add a NIC with 2 x 10Gb ports (CX4 or SFP+) connected to a switch (e.g. Dell PowerConnect 6224 + 2 modules with 2 x CX4 each)

Other options for the storage interconnect, direct or switched:
• FC using a QLogic 2562
• FCoE using an Intel X520-DA2
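For the direct-connected backend, each host would need a dedicated vSwitch and VMkernel port on one of the X540 ports. A rough pyVmomi sketch of that step, with the NIC name, addressing and credentials as placeholders:

```python
#!/usr/bin/env python3
# Rough sketch, assuming pyVmomi: carve out a dedicated back-end vSwitch and
# VMkernel port for storage/vMotion traffic on a direct-connected 10GBase-T
# port. All names and addresses are placeholders, not the actual build config.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                 # lab only
si = SmartConnect(host="esxi-node1.lab.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    netsys = host.configManager.networkSystem

    # vSwitch backed by one of the onboard X540 10GbE uplinks (e.g. vmnic2).
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"]))
    netsys.AddVirtualSwitch(vswitchName="vSwitch-backend", spec=vss_spec)

    # Port group to hang the VMkernel interface off.
    pg_spec = vim.host.PortGroup.Specification(
        name="vMotion-Storage", vlanId=0, vswitchName="vSwitch-backend",
        policy=vim.host.NetworkPolicy())
    netsys.AddPortGroup(portgrp=pg_spec)

    # VMkernel NIC with a static address on the point-to-point link.
    vnic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="10.10.10.1",
                             subnetMask="255.255.255.0"),
        mtu=9000)                                      # jumbo frames for storage
    vmk = netsys.AddVirtualNic(portgroup="vMotion-Storage", nic=vnic_spec)

    # Mark the new VMkernel port for vMotion.
    host.configManager.vmotionSystem.SelectVnic(device=vmk)
finally:
    Disconnect(si)
```

The second node would get the mirror-image config (e.g. 10.10.10.2) on its own onboard port.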

Nice-to-have features:
1. VMware VAAI support
2. Block-based replication and VMware SRM compatibility


Option 1 - VSA VM with LSI 2208 and LSI CacheCade 2.0 Pro with 2 SSDs as read/write cache.

Potential VSA OS:
• VMware VSA (NFS)
• HP Lefthand P4000 VSA
• NetApp ONTAP Simulator (waiting on public access…)
• DataCore SANmelody
• Windows Server + StarWind
• EMC VNX Simulator (supports only NFS)
• SvSAN
• Other?

Option 2 - VSA VM with LSI 2208 in IT mode (if supported) and ZFS (ZIL/L2ARC cache on SSDs; see the pool layout sketch after this list)

Potential VSA OS
• Nexenta CE
• Illumian + napp-it
• OpenIndiana + napp-it
• FreeNAS
• Others?
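One way to lay out the pool for Option 2 would be the 8 Hitachi disks as a single RAIDZ2 vdev, with each SSD split into a small mirrored ZIL partition plus a larger L2ARC partition. A rough sketch of that layout (device names are placeholders for whatever the VSA OS enumerates, and RAIDZ2 is just one possible choice):

```python
#!/usr/bin/env python3
# Rough sketch of one possible Option 2 ZFS layout: 8 data disks in RAIDZ2,
# mirrored ZIL (SLOG) on one slice of each SSD, L2ARC on the remaining slices.
# All device names below are placeholders; substitute what the VSA OS reports.
import subprocess

DATA_DISKS = ["c1t0d0", "c1t1d0", "c1t2d0", "c1t3d0",
              "c1t4d0", "c1t5d0", "c1t6d0", "c1t7d0"]
ZIL_SLICES = ["c2t0d0s0", "c2t1d0s0"]      # small partition on each SSD
L2ARC_SLICES = ["c2t0d0s1", "c2t1d0s1"]    # rest of each SSD

def run(args):
    # Print each command before running it so the layout can be reviewed.
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 8 x 2TB in RAIDZ2: roughly 12TB usable, survives two disk failures.
run(["zpool", "create", "tank", "raidz2"] + DATA_DISKS)
# Mirrored SLOG so a single SSD failure doesn't lose in-flight sync writes.
run(["zpool", "add", "tank", "log", "mirror"] + ZIL_SLICES)
# L2ARC is read cache only, so both slices can simply be striped.
run(["zpool", "add", "tank", "cache"] + L2ARC_SLICES)
```

With NFS to ESXi all writes are sync, which is why the SLOG mirror matters more than the L2ARC here.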

Recommendations and advice I'm looking for:
1. How is this build looking? Has anyone built a similar setup?
2. I've heard Supermicro chassis are nice, but they're made for datacenters. How noisy are they in a home environment, and can they be quieted down?
3. To save money, any recommendations for a good tower case with 12 to 16 HDDs (case + 5-in-3 hot-swap HDD cages + expanders + PSU)?
 

Patrick

Administrator
Staff member
Dec 21, 2010
Did you buy the components above already?

Generally, going with a tower case and then adding 5-in-3s doesn't really save you much, since you pay a good amount for those cages.

One thing you could do is look for something like a Supermicro 7047A-73 at 27dB. There are quite a few similar options. You can house the 8x 3.5" drives there and mount the 2.5" SSDs elsewhere. If you wanted to expand, you could then add 5-in-3s, or 4-in-1s for 2.5" drives. A cheaper option may be the Norco RPC-4224 with quiet 120mm fans added, although it is lower quality.

In either case, most motherboards have lower fan speed modes that keep things fairly quiet, so long as the fans are plugged into the motherboard.
 

Jeggs101

Well-Known Member
Dec 29, 2010
Your OP says 2x SanDisk SSDs. Didn't Patrick do some 4- or 6-SSD chassis reviews a few months back on STH?

Do you really need redundant PSUs for this? Get a good PSU, and if it lasts a week and you don't overload the thing, it is going to be fine for years IMNSHO.

A redundant PSU is nice in a home lab, but I would rather have quiet. If you go with another tower, make sure you use a fully modular PSU so all the internal cables come off. If you do that, it takes 2 minutes to swap in a spare.
 

dreamkass

Member
Aug 14, 2012
Bad news

I've contacted Supermicro support regarding IT mode on the LSI 2208 chip, since I couldn't find anything in the manual or FAQ. Their reply:

The Onboard LSI 2208 does not have IT mode. Unfortunately, the onboard 2208 is limited to 16 drives.
No IT mode and maxed at 16 HDDs :mad:
 

dreamkass

Member
Aug 14, 2012
If you want LSI CacheCade and/or FastPath, I've found the SKUs:

LSI FastPath
- AOC-SAS2-FSPT-ESW

LSI CacheCade 2.0 Pro
- AOC-CHCD-PRO2-KEY
 

Patrick

Administrator
Staff member
Dec 21, 2010
I heard about the 16-drive limit. Total bummer actually, as I could see someone wanting to service one of the ultra-dense Supermicro 4U storage servers with these. On the other hand, you do save a bunch with these.
 

RimBlock

Active Member
Sep 18, 2011
Singapore
Hi,

If I were looking to go for a 2-node cluster and had that sort of cash to throw at it then, personally, I would probably go for a 3-server setup, even for a lab.

4U storage, E3, ECC RAM (SAN).
2x 2U, E5 (single), ECC RAM (vSphere hosts).

Turn the 4U into a SAN and the 2Us into ESXi nodes with their own SSD cache, etc.

Reasons:
More likely to simulate real-world environments.
Eggs not all in one virtual host basket.

I also prefer to pay the extra and get standalone cards (RAID / network) where possible, as they are transferable. If my board fails I can just pull them, put them in another backup machine (likely of lower spec) and get it all up and running pretty easily. I suggested a single E5 so you can test requirements before going dual and grow into the units, rather than doing the big bang and finding it lacking. Cheaper to set up, unless you are under budgetary pressure to spend it or lose it.

You could also virtualize the SAN if you wanted and have another nested vSphere host as a hot spare, so if one of your nodes fails (simulated or real) you could fail over to the backup on the SAN and still be clustered.

As for cases, I am building a machine based around the 846A-R1200B, which does not have an expander, and so far I am very impressed. Very good build quality, but the fans spin at 6,000 rpm so it is pretty noisy. The reason for no expander was to enable passthrough of various drives via separate RAID controllers. I also very much like the SC813MTQ short chassis, but they are 1U and so not the best for expandability with add-in cards.

Of course, another option is to get a base unit from HP / IBM / Dell, and if you are not so worried about the warranty you can just add in Kingston RAM, off-the-shelf drives, etc. Where I am there are sales on the HP DL360p G8 & ML350p G8 and the IBM x3500 M4, which may be worth looking at, although you may need to play with the config a bit.

RB