vSAN design considerations for SOHO

Rand__

Well-Known Member
So this started as a question on how best to use my existing disks for a SOHO vSAN setup.
Then I tried to ask more general questions regarding the behavior of vSAN in SOHO setups, and realized I had a bunch of questions that impact the possible scenarios, so I thought I'd ask those first without a ton of text. Feel free to read the stuff below though ;)

So looking for answers/comments on the following:
Expectation: vSAN does not take differences in hardware performance into consideration (Y/N).

Q: Will vSAN prefer to utilize storage on a different host over additional local drives (Y/N)? I.e., will FTT>0 take host failures into account? And what happens if there is space left on a local drive but not on a remote one?

Q: Are stripes per disk or per disk group (i.e. is striping possible on 2 disks in the same disk group)? What happens if I only have a single disk/disk group but have configured stripes=2? Will both stripes be written to the same disk, or will it fail silently (i.e. simply not be striped)?

Q: Is it better to have multiple disk groups per node, or multiple disks per disk group?
(Assume I had 12x S3700: would it be better to have 2x (5+1), i.e. one disk group per node with five capacity disks plus one cache disk, or 2x 2x (2+1), i.e. two disk groups per node with two capacity disks plus one cache disk each?) Will this matter for striping?
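
To pin down what I mean by copies and stripes, here's a little Python sketch of my mental model (made up for illustration, not vSAN internals): FTT=f should mean f+1 full replicas, and stripe width s should split each replica into s data components, plus witness components for quorum.

```python
# My mental model of how a vSAN object decomposes -- a sketch for the
# questions above, not actual vSAN code.
def components(ftt, stripes):
    """FTT=f keeps f+1 full replicas; stripe width s splits each replica
    into s data components. Witness components break quorum ties; their
    exact count varies, so they are only indicated here."""
    replicas = ftt + 1
    return {
        "replicas": replicas,
        "data_components": replicas * stripes,
        "witnesses": ">=1" if ftt > 0 else 0,
    }

print(components(ftt=1, stripes=1))  # 2 replicas, one per host?
print(components(ftt=1, stripes=2))  # 4 data components -- but on which disks?
```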

Thanks :)

----------------------------------
Looking for some clarifications on vSAN disk handling which I have not been able to find anywhere.

So (for various reasons) I am currently reworking my vSAN setup and was looking for a more scientific approach to deciding which drives to use.

So, in more abstract terms: I know that vSAN does not need identical hardware on each node, but I have found no explanation of how different hardware (drives especially) will change the behavior (or the performance, if vSAN is not smart enough to adjust, which I assume it isn't).

So let's assume we have the following scenarios (using a ROBO cluster with witness to keep it simple and make it more SOHO-like):

FTT=0 (i.e. a single copy of objects/blocks), Stripes=1
- 2 nodes with (near) identical hardware and (near) identical performance -> (near) identical behavior
- 1 node with a vastly superior cache (P3700) -> this node will be significantly faster in writes, same read speed
- 1 node with vastly superior cache and drives -> this node will be significantly faster in everything
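
To make that expectation explicit, a toy sketch (node names and speeds are invented; this is my assumption, not vSAN's actual placement logic): with FTT=0 an object has a single replica, so a VM's performance simply follows whichever node the object happens to land on.

```python
# Toy model of my FTT=0 expectation -- invented numbers, not vSAN's
# real placement code.
nodes = {
    "node-a": {"cache_write_mbps": 1900, "free_gb": 800},  # e.g. P3700 cache
    "node-b": {"cache_write_mbps": 450, "free_gb": 800},   # e.g. slower cache
}

def place_ftt0(object_gb):
    """FTT=0 means a single replica; I assume vSAN picks any node with
    enough space and ignores device performance entirely."""
    candidates = [n for n, v in nodes.items() if v["free_gb"] >= object_gb]
    if not candidates:
        raise RuntimeError("no node has enough free capacity")
    node = candidates[0]  # choice ignores cache_write_mbps -- my expectation
    nodes[node]["free_gb"] -= object_gb
    return node, nodes[node]["cache_write_mbps"]

node, mbps = place_ftt0(100)
print(f"object landed on {node}; expected write speed ~{mbps} MB/s")
```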

FTT=1 (i.e. multiple copies of objects/blocks), Stripes=1
Q: Will vSAN prefer to utilize storage on a different host over additional local drives? (See the sketch after this list.)

- 2 nodes with (near) identical hardware and (near) identical performance -> (near) identical behavior, one copy on each node
- 1 node with a vastly superior cache (P3700) -> this node will be significantly faster in writes, same read speed, one copy on each node
- 1 node with vastly superior cache and drives -> this node will be significantly faster in everything, one copy on each node
- 1 node with more capacity drives, single disk group -> if vSAN caters for availability by placing copies on remote hosts, then all objects will be distributed across both nodes, and provisioning should fail once only one node has storage left
- 1 node with multiple disk groups -> a disk group might count as another placement target, so at some point objects might get distributed to two disk groups instead of two nodes if no space is left on the secondary node
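
Here's the same kind of toy sketch for the FTT=1 question (again my own model, numbers invented): replicas have to sit on distinct hosts to survive a host failure, so my expectation is that provisioning fails, rather than silently doubling up on one host, once only one host has space.

```python
# Toy model of the FTT=1 placement question -- hypothetical, not vSAN
# internals. Witness components are ignored to keep it short.
free_gb = {"node-a": 300, "node-b": 100}  # invented free space per node

def place_ftt1(object_gb):
    """Both replicas must land on distinct hosts, otherwise a single host
    failure would take out the object. My expectation: if only one host
    still has space, provisioning fails instead of doubling up."""
    fits = [n for n, free in free_gb.items() if free >= object_gb]
    if len(fits) < 2:
        raise RuntimeError("cannot satisfy FTT=1: fewer than 2 hosts fit")
    a, b = fits[0], fits[1]
    free_gb[a] -= object_gb
    free_gb[b] -= object_gb
    return a, b

print(place_ftt1(80))  # ('node-a', 'node-b')
try:
    print(place_ftt1(80))  # node-b is now full -> should this fail?
except RuntimeError as err:
    print("second object:", err)
```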

Expectation: vSAN does not take differences in hardware performance into consideration.

FTT=1, Stripes=2
Q: Are stripes per disk or per disk group (i.e. is striping possible on 2 disks in the same disk group)?
I would assume identical behavior to the case above, except writes are distributed over disks or disk groups.
Q: What happens if I only have a single disk/disk group? (See the sketch below.)
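
And a sketch of my guess for the striping question (a hypothetical rule, not confirmed): each stripe component needs its own capacity disk somewhere, and from what I've read vSAN refuses to provision a policy it can't satisfy unless force provisioning is set - which is exactly what I'd like confirmed.

```python
# My guess at the stripes=2 admission rule -- an assumption to be
# confirmed, not vSAN's actual logic.
def stripes_satisfiable(stripe_width, capacity_disks_per_host, hosts=2):
    """Assumption: every stripe component needs its own capacity disk,
    but those disks may sit in the same disk group or even span hosts."""
    return stripe_width <= capacity_disks_per_host * hosts

for disks in (1, 2, 5):
    ok = stripes_satisfiable(stripe_width=2, capacity_disks_per_host=disks)
    print(f"{disks} capacity disk(s) per host -> stripes=2 satisfiable: {ok}")
```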



-----------------------------------------
My actual details:
This is a home installation [ridiculously overspent already ;)], so availability of additional drives is limited.
I was originally looking at a stretched cluster with the witness off-site, but that caused trouble since the witness was connected via a Sophos RED link whose endpoint was hosted on the vSAN itself - so no more vSAN when I had both nodes down one day, because without the witness it could not start up any more :eek:

- I have 2x M1215 and a bunch of old LSI-flashed Dell H300s (which caused issues with my installation on 6U2).
- I have 2x P3500 and 1x 750, all 400GB, for cache.
- I have 4x S3700 (400GB) and 2x PM863 (960GB).
- I have 3x HGST SSD400M (400GB).
- I dropped 2x 850 EVO 1TB due to inconsistent load behavior (starts fast but quickly diminishes).
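
Given the drives above, the disk-group question is partly just arithmetic, so here's a quick hypothetical helper comparing the two layouts from my question (six 400GB S3700s per node; the layouts and sizes are mine, the code is only an illustration):

```python
# Comparing the two layouts from my question, per node with six
# 400GB S3700s -- pure arithmetic on my own numbers, nothing vSAN-specific.
CAP_GB = 400  # each S3700 is 400GB

def describe(groups, capacity_disks_per_group):
    cache_gb = groups * CAP_GB  # one cache disk per disk group
    capacity_gb = groups * capacity_disks_per_group * CAP_GB
    return {
        "disk_groups": groups,
        "raw_capacity_gb": capacity_gb,
        "cache_pct_of_raw": round(100 * cache_gb / capacity_gb, 1),
        "capacity_disks_for_striping": groups * capacity_disks_per_group,
    }

print("1x (5+1):", describe(groups=1, capacity_disks_per_group=5))
print("2x (2+1):", describe(groups=2, capacity_disks_per_group=2))
```

So two groups per node give up 400GB of raw capacity and one striping target, but double the cache-to-capacity ratio and add a second failure domain per node - that trade-off is what I'm trying to judge.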
 

Rand__

Well-Known Member
Glad to educate myself if someone can supply a link or more useful google-fu keywords :)
 

Rand__

Well-Known Member
So is that question too basic for an answer?
Wrong environment (enterprise experience only)?
Not clear?
Or did nobody ever wonder about/test this?
:)
 

Rand__

Well-Known Member
OK, not entirely sure what IB has to do with the questions here, but I am looking into it as well ;)
Or are you referring to the HW used?

But seriously, I think on #1 vSAN prefers local.
Hm, that wouldn't make any sense (to me) from a resilience point of view. Not that it's impossible; IIRC I had an issue once where VMs were down when the primary host was down, but I never checked whether that was a sync issue or by design, to be honest.
 

Rand__

Well-Known Member
Ahh, yes, that's true.
But given the different hardware available, the questions still stand. It's not about how to set it up; I had it running already.
I was looking for "why" to set it up in "which (generic) way", and what the inner behavior is that suggests/mandates that particular setup :)

Thanks :)