VSAN design question


eroji

Active Member
Dec 1, 2015
My original plan was to go with FreeNAS+iSCSI-backed storage for a 4-node ESXi cluster, but after taking some feedback I wanted to entertain the idea of vSAN instead. The limitation is that I only have 3 drive slots per node. The server chassis I'm using is an Intel H2312XXKR2 with 4 nodes, each with dual E5-2670s and 64GB of RAM (will upgrade later on). Each node also has an X520-DA2, which will be connected to the LB6M. I've been reading the vSAN documentation, but I want to get some input on the design first so I'm not spinning my wheels. I want to go with a 200/400GB SSD cache (likely an S3710) and 2x 2TB 7200RPM drives per node. Should I be able to expect reasonable performance with this setup?
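A rough back-of-the-envelope sketch of the usable space that layout yields - assuming an FTT=1 mirroring policy and the commonly recommended ~30% slack space, both assumptions on my part:

```python
# Back-of-the-envelope usable capacity for the proposed 4-node cluster.
# Assumptions: FTT=1 mirroring (2 copies of every object) and ~30%
# slack space kept free for rebuilds/rebalancing; cache SSDs don't
# count toward capacity.

NODES = 4
HDDS_PER_NODE = 2        # 2x 2TB 7200RPM capacity drives per node
DRIVE_TB = 2.0
FTT = 1                  # failures to tolerate
SLACK = 0.30             # fraction kept free

raw_tb = NODES * HDDS_PER_NODE * DRIVE_TB
after_ftt = raw_tb / (FTT + 1)      # each object is stored FTT+1 times
practical = after_ftt * (1 - SLACK)

print(f"raw {raw_tb:.0f} TB -> {after_ftt:.0f} TB mirrored -> "
      f"~{practical:.1f} TB practical")
# raw 16 TB -> 8 TB mirrored -> ~5.6 TB practical
```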
 

Rand__

Well-Known Member
Mar 6, 2014
That depends on what 'reasonable' performance means for you.
vSAN funnels a single client's writes through one host's cache device (before the data gets striped/mirrored according to your FTT policy), so with a single client you'll get S3710 write performance at most. Read performance will then come from your 2 capacity drives.
I initially ran with SSD capacity drives only and found it too slow for my single-user requirements; I've switched to NVMe now. Better ;)

But vSAN is not designed for a single user - the performance you get with a single user is about what you get with X users as well; so depending on your use case, yes - or no ;)
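To put ballpark numbers on that single-client ceiling (a toy model; the per-drive figures are rough spec-sheet assumptions, not benchmarks of this exact hardware):

```python
# Toy model of the single-client ceiling described above. Writes land
# on one host's cache SSD; destaged reads come off that disk group's
# two HDDs. Per-drive numbers are rough spec-sheet assumptions.

S3710_WRITE_MBS = 470    # ~sequential write of a 400GB S3710 (assumption)
HDD_READ_MBS = 180       # typical 2TB 7200RPM sequential read (assumption)
HDDS_IN_GROUP = 2
TEN_GBE_MBS = 1250       # 10Gb/s line rate ~= 1250 MB/s

write_ceiling = S3710_WRITE_MBS
read_ceiling = HDD_READ_MBS * HDDS_IN_GROUP

print(f"write ceiling ~{write_ceiling} MB/s "
      f"({write_ceiling / TEN_GBE_MBS:.0%} of 10GbE)")
print(f"read ceiling  ~{read_ceiling} MB/s "
      f"({read_ceiling / TEN_GBE_MBS:.0%} of 10GbE)")
# write ceiling ~470 MB/s (38% of 10GbE)
# read ceiling  ~360 MB/s (29% of 10GbE)
```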
 

alex1002

Member
Apr 9, 2013
For vSAN you also need to keep the licensing costs in mind.

Sent from my Nexus 6P using Tapatalk
 

eroji

Active Member
Dec 1, 2015
So in other words, nothing close to saturating even a single 10Gb link, even for sequential I/O. That's not quite what I had hoped for, to be honest.
 

Rand__

Well-Known Member
Mar 6, 2014
Well, it depends (again). With enough disk groups/hosts you can start striping a VM across multiple disk groups, which multiplies performance.
But I don't think that will work with 8 drives total (see the sketch below). I only have 3 hosts, so I haven't been able to test performance with 4 hosts.
I did try ROBO (2 hosts) with 2 disk groups per node, and it was better, but still not really good, especially given the amount of hardware I threw at it (8 SSDs).
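For reference, a simplified sketch of why 8 capacity drives leaves so little striping headroom - this only counts vSAN's stripe-width components and ignores witnesses and placement constraints:

```python
# Simplified look at vSAN stripe width vs. available capacity disks.
# Each of the FTT+1 replicas is split across stripe_width capacity
# disks, so an object needs (FTT+1) * stripe_width distinct disks
# (witness components and placement constraints ignored).

def capacity_disks_needed(ftt: int, stripe_width: int) -> int:
    return (ftt + 1) * stripe_width

CLUSTER_CAPACITY_DISKS = 4 * 2   # eroji's 4 nodes x 2 HDDs

for sw in (1, 2, 3, 4):
    need = capacity_disks_needed(ftt=1, stripe_width=sw)
    fits = "fits" if need <= CLUSTER_CAPACITY_DISKS else "doesn't fit"
    print(f"stripe width {sw}: needs {need} of "
          f"{CLUSTER_CAPACITY_DISKS} capacity disks -> {fits}")
```

Even the widest policy that fits (stripe width 4 with FTT=1) consumes every capacity disk in the cluster, so there's no headroom left to multiply performance.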

At the moment I run it with 3 hosts and 7 NVMe drives. Multi-user performance is of course very good, and single-user performance is good enough that I haven't felt compelled to measure it yet. But most of the drives are 400GB, so I am space-limited. I'm looking to go to 6 or 7 drives with 1.2TB Intel 750s or P3520s when I manage to get them cheap enough.
 

Evan

Well-Known Member
Jan 6, 2016
A sample config that performs... (for what it's intended for, anyway):

10x servers (E5-2680 v4, 512GB RAM, 2x HBA with 5+1 disks each - see below, 10G networking all around), booting from disk or USB etc.
- 2x 800GB SAS3 fast, write-intensive disks per server (cache tier)
- 10x 960GB SAS3 mixed-use or read-intensive disks per server (capacity tier)

Yes, that's over 100TB of raw SSD and not cheap, but at 10-12 nodes you get good scaling; running the 5+1 config (x2) seems to be a sweet spot for our workload.

OK, so how does this translate to your situation? I assume you are using only 2 copies of data, but with your small number of disks that only really gives 2x2 spinning disks for reads... assuming sequential I/O, that's let's say 400 MB/s, so nowhere near maxing the 10G links, I'm afraid. Writes go to a single cache drive, again nothing at all close to maxing the network. (Rough math below.)
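Spelling that read estimate out (the ~100 MB/s per-disk figure is the assumption implied by the 400 MB/s number, not a measurement):

```python
# Evan's read estimate, spelled out. With 2 copies of the data, reads
# can be balanced across both mirrors, i.e. roughly 2x2 HDDs serve a
# given VM's reads. The ~100 MB/s per-disk figure is an assumption.

HDD_READ_MBS = 100
MIRRORS = 2              # 2 copies of data
HDDS_PER_MIRROR = 2      # 2 HDDs behind each replica
TEN_GBE_MBS = 1250       # 10Gb/s line rate ~= 1250 MB/s

read_mbs = HDD_READ_MBS * MIRRORS * HDDS_PER_MIRROR
print(f"~{read_mbs} MB/s sequential read, "
      f"{read_mbs / TEN_GBE_MBS:.0%} of a 10GbE link")
# ~400 MB/s, 32% of a 10GbE link
```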

It will be interesting to see what effect NVMe has, but SAS3 seems to do a pretty good job for general use now.
 