ESXi Cluster + vSAN Home Lab Build Ideas


whitey

Moderator
Nope, the witness can even be a laptop, as long as it meets the specs to run vSphere; just load ESXi on it and ONLY run that one vSAN witness appliance on it. I was gonna look into this avenue until I realized that adding a 3rd 2011 node would only put me roughly 1 amp of concurrent draw over my old 3-node E3 setup, so about another $15 a month at my local kWh pricing.
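
(For the curious, the back-of-the-envelope math on that, assuming ~120V service and roughly $0.17/kWh; both numbers are assumptions to swap for your own:)

```python
# Rough monthly cost of ~1 A of extra continuous draw.
# 120 V and $0.17/kWh are assumptions; plug in your own utility rate.
AMPS = 1.0
VOLTS = 120.0
PRICE_PER_KWH = 0.17
HOURS_PER_MONTH = 24 * 30.4  # average-length month

watts = AMPS * VOLTS                               # ~120 W continuous
kwh_month = watts / 1000 * HOURS_PER_MONTH         # ~87.6 kWh/month
print(f"~${kwh_month * PRICE_PER_KWH:.2f}/month")  # ~ $15/month
```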
 

IamSpartacus

Well-Known Member
Nope, the witness can even be a laptop, as long as it meets the specs to run vSphere; just load ESXi on it and ONLY run that one vSAN witness appliance on it. I was gonna look into this avenue until I realized that adding a 3rd 2011 node would only put me roughly 1 amp of concurrent draw over my old 3-node E3 setup, so about another $15 a month at my local kWh pricing.
What licensing do you use for your 3-node vSAN setup?
 

IamSpartacus

Well-Known Member
How does the following look for a 3-node vSAN cluster?

Host #1

Xeon D-1541
64GB DDR4 RAM
Dual 10Gb SFP+

Host #2

Xeon D-1541
64GB DDR4 RAM
Dual 10Gb SFP+
3 x 400GB Hitachi HUSSL4040ASS600 SSDs

Host #3

Xeon D-1508/1518
16GB DDR4 RAM
Dual 10Gb SFP+
3 x 400GB Hitachi HUSSL4040ASS600 SSDs


Host #1 would be my main compute box running all my media applications along with some Windows VMs (AD, DNS, etc.). Host #2 would be my test box for running all my testing VMs. Host #3 would contribute storage only but would have the potential to run 1-2 low-spec VMs if need be.

An important note here is that I wouldn't provision more than 32GB of RAM in total for the VMs on Hosts #1 and #2. So while they would both be running separate VMs on a daily basis, if need be I could vMotion the VMs on Host #1 over to Host #2 and vice versa.

In this scenario I'm wondering which host would be the best place to run vCSA. I'm also wondering how much usable storage space I'd have in this scenario with a FailuresToTolerate (FTT) setting of 1. I also have 4 x 480GB Intel 730 SSDs lying around that I could throw into this setup if it makes sense.
 

whitey

Moderator
How does the following look for a 3-node vSAN cluster?

Host #1

Xeon D-1541
64GB DDR4 RAM
Dual 10Gb SFP+

Host #2

Xeon D-1541
64GB DDR4 RAM
Dual 10Gb SFP+
3 x 400GB Hitachi HUSSL4040ASS600 SSDs

Host #3

Xeon D-1508/1518
16GB DDR4 RAM
Dual 10Gb SFP+
3 x 400GB Hitachi HUSSL4040ASS600 SSDs


Host #1 would be my main compute box running all my media applications along with some Windows VMs (AD, DNS, etc.). Host #2 would be my test box for running all my testing VMs. Host #3 would contribute storage only but would have the potential to run 1-2 low-spec VMs if need be.

An important note here is that I wouldn't provision more than 32GB of RAM in total for the VMs on Hosts #1 and #2. So while they would both be running separate VMs on a daily basis, if need be I could vMotion the VMs on Host #1 over to Host #2 and vice versa.

In this scenario I'm wondering which host would be the best place to run vCSA. I'm also wondering how much usable storage space I'd have in this scenario with a FailuresToTolerate (FTT) setting of 1. I also have 4 x 480GB Intel 730 SSDs lying around that I could throw into this setup if it makes sense.
That'll do, good sir!

If you're looking to do vSAN w/ a witness appliance, run the witness on your 3rd (smallest) box and choose the 'small' config; run VCSA on either of the other hosts, and HA and DRS will do their jobs :-D

You could make it a legit 3-way vSAN w/ 2 of those drives in each host and nix the vSAN witness appliance idea. Just thinking here: I dunno if the VMUG EVAL plan allows/enables all-flash (AFA) vSAN, so you had better check that, and if not, get some magnetics to pair w/ it...or else set up a home-rolled ZFS stg appliance (napp-it, FreeNAS, etc.) on node 3, put all the disks there, and use it as a pure stg box. With 6 x 400GB disks you could go RAIDZ2 and get decent perf/capacity (roughly 1.5TB of SLC all-flash).

Food for thought.
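
A quick sketch of that RAIDZ2 math; the flat ~6% allowance for ZFS metadata/slop is a guess, real overhead varies:

```python
# RAIDZ2 usable-capacity estimate: two drives' worth of parity.
# The 6% metadata/slop allowance is an assumption, not a ZFS constant.
disks = 6
drive_gb = 400
overhead = 0.06

raw_gb = disks * drive_gb               # 2400 GB raw
after_parity = (disks - 2) * drive_gb   # 1600 GB once 2 parity drives are paid
usable_gb = after_parity * (1 - overhead)
print(f"~{usable_gb / 1000:.1f} TB usable")  # ~1.5 TB, matching the estimate above
```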
 

IamSpartacus

Well-Known Member
That'll do, good sir!

If you're looking to do vSAN w/ a witness appliance, run the witness on your 3rd (smallest) box and choose the 'small' config; run VCSA on either of the other hosts, and HA and DRS will do their jobs :-D

You could make it a legit 3-way vSAN w/ 2 of those drives in each host and nix the vSAN witness appliance idea. Just thinking here: I dunno if the VMUG EVAL plan allows/enables all-flash (AFA) vSAN, so you had better check that, and if not, get some magnetics to pair w/ it...or else set up a home-rolled ZFS stg appliance (napp-it, FreeNAS, etc.) on node 3, put all the disks there, and use it as a pure stg box. With 6 x 400GB disks you could go RAIDZ2 and get decent perf/capacity (roughly 1.5TB of SLC all-flash).

Food for thought.
What is the advantage of splitting the 6 drives amongst 3 hosts instead of 2 if I'm using an FTT of 1 and don't need the ability to re-protect this cluster? If I were to put 2 of the Hitachis in all 3 nodes, how much usable storage space would that give me with an FTT of 1? I used the vSAN Datastore calculator linked on Yellow-Bricks, but you have to know how many VMs you'll run and their average size for it to work, and I'm not sure of that at this point. Also, can you think of any good uses for the 4 Intel 730s I have lying around in this vSAN cluster?

Good call on checking the licensing, because I really can't afford anything more than the VMUG membership for licensing, considering how much I'm putting into my network hardware. So yes, if VMUG doesn't provide vSAN cluster licensing, then I will have to go the dedicated storage appliance route.
 

IamSpartacus

Well-Known Member
@whitey Do you happen to know whether or not the "witness" node can run VMs/storage outside of the vSAN datastore while still acting as a witness? I ask because I'd like to run my bulk media storage array/OS (50+TB) in a VM on that node. However, I don't want that VM or the 50+TB of data in the vSAN datastore, for obvious reasons. I can't seem to find an answer to this question through my Googling.
 

whitey

Moderator
@whitey Do you happen to know whether or not the "witness" node can run VMs/storage outside of the vSAN datastore while still acting as a witness? I ask because I'd like to run my bulk media storage array/OS (50+TB) in a VM on that node. However, I don't want that VM or the 50+TB of data in the vSAN datastore, for obvious reasons. I can't seem to find an answer to this question through my Googling.
Never tried it, but I'd imagine that if the witness is fully functional and your 2-node vSAN is up, the ESXi host running that witness appliance 'could' consume vSAN storage (it may already be in the same cluster, or you could add it) without actively providing disks to vSAN; a host CAN consume vSAN storage without contributing any.

My 2 cents. Test it and let us know, but I bet I'm right.
 

whitey

Moderator
What is the advantage of splitting the 6 drives amongst 3 hosts instead of 2 if I'm using an FTT of 1 and don't need the ability to re-protect this cluster? If I were to put 2 of the Hitachis in all 3 nodes, how much usable storage space would that give me with an FTT of 1? I used the vSAN Datastore calculator linked on Yellow-Bricks, but you have to know how many VMs you'll run and their average size for it to work, and I'm not sure of that at this point. Also, can you think of any good uses for the 4 Intel 730s I have lying around in this vSAN cluster?

Good call on checking the licensing, because I really can't afford anything more than the VMUG membership for licensing, considering how much I'm putting into my network hardware. So yes, if VMUG doesn't provide vSAN cluster licensing, then I will have to go the dedicated storage appliance route.
You'd have roughly 2.4TB of vSAN storage when everything is healthy, dropping to roughly 1.6TB if a host failed or needed maintenance, in a 3-node vSAN config where each host contributes 2 of the HUSSL4040ASS600 drives.

My vote: go w/ the 3-node vSAN config initially. If it meets your needs, keep it; if not, go back to a dedicated stg appliance running in a VM, driving those 6 Hitachi HUSSL devices and serving them up to the vSphere cluster...no sense in COMPLETELY dedicating a storage node IMHO, unless that gives ya a warm/fuzzy feeling vs. a VT-d HBA passthrough to a stg appliance VM...else you can always go back to a 2-node ROBO vSAN w/ witness appliance...you have options, good sir. Test them all out and let us know your findings or end-state utopia.

HELL, you could even go 2-node vSAN w/ witness: just use 4 of those phys SSDs (the witness will take a few-hundred-GB vdisk if memory serves me correctly), and STILL use a virtualized/VT-d stg appliance w/ the last two SSDs as read/write cache, throw a small handful of magnetics in the mix w/ those on OmniOS/FreeNAS/etc., serve it up to the vSphere cluster over your preferred protocol (NFS/iSCSI), and be happy as a pig in sh|t...balls-to-the-wall perf on the AFA vSAN, plus capacity with still-good performance on the hybrid array on that 3rd ESXi host; throw that guy in the same cluster as the 2-node vSAN to consume/provide both stg platforms...

Just sayin'...your only limitation right now is your imagination. You'll have some legit gear when you get those three Xeon-D nodes built out, buddy, with what you already have!

Careful...it's a slippery slope sir...you all know what I mean :-D

BWAHAHAHA
 

IamSpartacus

Well-Known Member
Never tried it, but I'd imagine that if the witness is fully functional and your 2-node vSAN is up, the ESXi host running that witness appliance 'could' consume vSAN storage (it may already be in the same cluster, or you could add it) without actively providing disks to vSAN; a host CAN consume vSAN storage without contributing any.

My 2 cents. Test it and let us know, but I bet I'm right.
Not sure if I'm explaining this right. I wouldn't want the VM on the witness node consuming any vSAN storage. I'd want to run the VM and its local storage outside of the vSAN cluster altogether, but still have the node act as a witness.
 

IamSpartacus

Well-Known Member
You'd have roughly 2.4TB of vSAN storage when everything is healthy, dropping to roughly 1.6TB if a host failed or needed maintenance, in a 3-node vSAN config where each host contributes 2 of the HUSSL4040ASS600 drives.

My vote: go w/ the 3-node vSAN config initially. If it meets your needs, keep it; if not, go back to a dedicated stg appliance running in a VM, driving those 6 Hitachi HUSSL devices and serving them up to the vSphere cluster...no sense in COMPLETELY dedicating a storage node IMHO, unless that gives ya a warm/fuzzy feeling vs. a VT-d HBA passthrough to a stg appliance VM...else you can always go back to a 2-node ROBO vSAN w/ witness appliance...you have options, good sir. Test them all out and let us know your findings or end-state utopia.
And with 2.4TB of vSAN storage, that gives me 1.2TB of usable storage with an FTT of 1, since everything has to be mirrored, right? I'm def. going to have to play around with my options once all my equipment gets here. This smells of a build log post :D.
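
The back-of-the-envelope math, assuming FTT=1 is satisfied by mirroring (two full copies of every object) and ignoring vSAN's own slack-space/metadata overhead:

```python
# vSAN raw vs. usable capacity for 3 nodes x 2 x 400 GB capacity drives.
# FTT=1 with mirroring means every object is stored twice.
nodes, drives_per_node, drive_gb = 3, 2, 400
copies = 1 + 1  # FTT + 1 replicas

raw_gb = nodes * drives_per_node * drive_gb   # 2400 GB healthy
usable_gb = raw_gb / copies                   # ~1200 GB of VM data
raw_one_host_down = (nodes - 1) * drives_per_node * drive_gb  # 1600 GB degraded
print(raw_gb, usable_gb, raw_one_host_down)
```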
 

whitey

Moderator
You can do that; you're limiting yourself in a sense, though, capabilities/feature-wise, compared to what you 'could' do w/ this lab w/ just as much redundancy/robustness (and arguably even more)...to each his own.

So in this scenario you would just leave that last ESXi host outside of the cluster that has the 2-node vSAN setup, but it can still be managed by VCSA...just a hypervisor host connected to vCenter, in its own mgmt-type cluster or no cluster at all, since it sounds like it will be a single host for the long run. That host running the vSAN witness in this config will NOT be able to consume vSAN storage, as I'm sure you know, and maybe that is the idea/intended design.

This will net ya roughly 1.4TB of usable vSAN datastore space while everything is healthy, and 700GB or so when you have a single node failure, keeping in mind that your vSAN witness appliance had BETTER be up on that outside-the-cluster/mgmt ESXi box if that scenario plays out.
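
To illustrate why that witness matters, here's a toy model of the quorum rule; the names and the one-vote-per-component simplification are mine, not vSAN internals:

```python
# Toy quorum model: a vSAN object stays accessible only while more than
# half of its component votes are reachable. With FTT=1 in a 2-node +
# witness layout, each object has a data replica per node plus a tiny
# metadata-only witness component (simplified to one vote each).
def object_accessible(components_up: dict) -> bool:
    return sum(components_up.values()) > len(components_up) / 2

obj = {"replica_node1": True, "replica_node2": True, "witness": True}
print(object_accessible(obj))   # True: healthy

obj["replica_node1"] = False    # one data node fails
print(object_accessible(obj))   # True: 2 of 3 votes still up

obj["witness"] = False          # witness down at the same time
print(object_accessible(obj))   # False: quorum lost, object inaccessible
```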

Sounds like you have a plan, go forth and conquer!
 

whitey

Moderator
And with 2.4TB of vSAN storage, that gives me 1.2TB of usable storage with an FTT of 1, since everything has to be mirrored, right? I'm def. going to have to play around with my options once all my equipment gets here. This smells of a build log post :D.
I was sayin' 2.4TB w/ a 3-node vSAN setup using all 6 of your 400GB HUSSL devs; the 2-node numbers are in my latest post.
 

IamSpartacus

Well-Known Member
I was sayin' 2.4TB w/ a 3-node vSAN setup using all 6 of your 400GB HUSSL devs; the 2-node numbers are in my latest post.
Ahhh, I see now. I was originally thinking that putting 3 disks in 2 nodes, as opposed to 2 disks in 3 nodes, would net me more overall storage. Now I understand that with 3 nodes I'll still have 4 disks available in the vSAN datastore if a single host goes down/is taken down.

So with a 3-node cluster, would I need to place my bulk media array on a 4th node outside the cluster, or could I run it on one of these 3 nodes without adding those disks to the vSAN datastore?
 

whitey

Moderator
Ahhh, I see now. I was originally thinking that putting 3 disks in 2 nodes, as opposed to 2 disks in 3 nodes, would net me more overall storage. Now I understand that with 3 nodes I'll still have 4 disks available in the vSAN datastore if a single host goes down/is taken down.

So with a 3-node cluster, would I need to place my bulk media array on a 4th node outside the cluster, or could I run it on one of these 3 nodes without adding those disks to the vSAN datastore?
Yes, you would just run another stg appliance VM on any of the hosts, with a separate LSI HBA passed thru that of course maps back to the disks on your chassis/backplane, and Bob's your uncle.

I run a 3-node dedicated vSAN w/ a separate stg appliance VM on every one of my nodes, btw...I'm a stg junkie though; I like to have plenty of options to test/tune with. Two LSI 9211-esque HBAs in each node, one dedicated to vSAN duties and the other for VT-d stg appliance use. 6 HBAs total; when they're $50-60 used, the choice/cost/flexibility is an easy one for me.
 

whitey

Moderator
Ahhh, I see now. I was originally thinking that putting 3 disks in 2 nodes, as opposed to 2 disks in 3 nodes, would net me more overall storage. Now I understand that with 3 nodes I'll still have 4 disks available in the vSAN datastore if a single host goes down/is taken down.

So with a 3-node cluster, would I need to place my bulk media array on a 4th node outside the cluster, or could I run it on one of these 3 nodes without adding those disks to the vSAN datastore?
You could do 3 disks in each node in your 2-node ROBO vSAN w/ witness config as well; one would be tagged as flash/cache and the other two as capacity, and you'd get a lil more space that way, of course. Sorry, I misread your orig post in that respect.

Again, options options options...decisions decisions decisions. :-D

GL on the build; let us know how she progresses, or if you do a dedicated build thread.
 

IamSpartacus

Well-Known Member
Yes, you would just run another stg appliance VM on any of the hosts, with a separate LSI HBA passed thru that of course maps back to the disks on your chassis/backplane, and Bob's your uncle.

I run a 3-node dedicated vSAN w/ a separate stg appliance VM on every one of my nodes, btw...I'm a stg junkie though; I like to have plenty of options to test/tune with. Two LSI 9211-esque HBAs in each node, one dedicated to vSAN duties and the other for VT-d stg appliance use. 6 HBAs total; when they're $50-60 used, the choice/cost/flexibility is an easy one for me.
Awesome! That's exactly what I wanted to know; I suspected that was the case but wanted confirmation. Seeing as you clearly have a lot of experience with this, based on how many storage appliances you're running (a separate stg appliance on every node? You are one sick man :D), I'm thankful you cleared that up.

I'm going to keep thinking on the 2-node vs. 3-node scenario. I just want to give myself the most usable space for VMs while still being able to take one node down, but I can see I can probably go either way here.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,516
650
113
If I felt the urge ( ;) ) to pick up 2-3 additional disks to use as cache to go along with the 400GB HUSSLs for capacity, what might be some good options (size- and model-wise) without breaking the bank?
 

whitey

Moderator
Intel DC S3700 100GBs may be a good fit IF you can source them for $60-75 each...else a HUSSL4010/4020 drive may be up your alley. I'll resist shamelessly plugging myself again, but yeah, there's that :-D
 

IamSpartacus

Well-Known Member
Intel DC S3700 100GBs may be a good fit IF you can source them for $60-75 each...else a HUSSL4010/4020 drive may be up your alley. I'll resist shamelessly plugging myself again, but yeah, there's that :-D
What do you think about using 2-3 of the Hitachis (depending on which setup I wind up going with) as the cache layer, and using larger-capacity "prosumer" SSDs (2 x 1TB per storage-contributing node) as the capacity layer?

I'm assuming that with a 2-node ROBO setup my write performance is only going to be as fast as a single cache device in one of the nodes, correct? And reads will depend on how many disk groups I have in a single node, or across all contributing nodes?

With the money I'm dishing out to upgrade my network and all my servers to 10GbE, I'd like to configure my storage in a way that maximizes that throughput as best I can (realizing that some network tweaking will be necessary).
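
For what it's worth, a toy model of how that tends to shake out; the per-device figure below is made up purely for illustration, not a benchmark:

```python
# Toy model of vSAN write scaling. Writes land in a disk group's single
# cache/buffer device, so one VM's write stream is capped by one cache SSD,
# while aggregate throughput grows with the number of disk groups.
# 400 MB/s is a made-up per-device figure, not a measured number.
CACHE_WRITE_MBPS = 400.0

def single_vm_write_mbps() -> float:
    # FTT=1 writes go to both replicas in parallel, each through one
    # cache device, so a lone VM can't exceed one device's speed.
    return CACHE_WRITE_MBPS

def aggregate_write_mbps(total_disk_groups: int, replicas: int = 2) -> float:
    # Every byte is written `replicas` times, so usable aggregate
    # bandwidth is total cache bandwidth divided by the replica count.
    return total_disk_groups * CACHE_WRITE_MBPS / replicas

print(single_vm_write_mbps())    # 400.0: single-stream ceiling
print(aggregate_write_mbps(2))   # 400.0: 2-node ROBO, one disk group each
print(aggregate_write_mbps(4))   # 800.0: two disk groups per node
```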
 