Recommended SSD config for VMs?


ninja6o4

Member
Jul 2, 2014
I'm planning to build a new-to-me server, with capacity for 10+ light-duty Windows VMs on Hyper-V. It will be dual-CPU E5-2680 v2 (10 cores each) w/ 64GB RAM, so I think I'll have enough provisioning for that.

My question is: what would be a good configuration for the boot drives? I'm contemplating Intel S3500 series SSDs, maybe 4x 300GB or 600GB in RAID10 on an LSI 9271. Would this be a good start? I'm open to suggestions on other drives, or options I might not have considered. I don't have any NVMe slots, but the board has 6x PCIe x8 slots available, and I'm not well versed on AIC cards.

This is the setup I'm getting, for anyone curious: Supermicro | Products | SuperStorage Servers | 4U | 6047R-E1R36L
 

i386

Well-Known Member
Mar 18, 2016
Germany
I think a RAID 10 for the OS drive is overkill.

In my setups I've started to use SATA DOMs for the OS and HW/SW RAID for data/VM storage.
 

Evan

Well-Known Member
Jan 6, 2016
I can’t comment on the Hyper-V aspect.
Nothing wrong with the Intel SATA SSDs, but since you have an HBA, there are heaps of used enterprise SAS disks (HGST, Samsung, etc.) that would be cheaper and perform better, I think.
For 10 normal VMs you will have ample CPU and sufficient memory, as long as the VMs are not too big.
I actually don’t RAID most VMs, I just back up via snapshot often.

I use Samsung PM1633 or SM1635 in 960GB and 1.6TB respectively.
Otherwise, I think the go-to cheap used enterprise SSD in the ~800GB size is the
HGST/Hitachi Ultrastar SSD1600MM (people, please correct me if I am wrong).
 

Evan

Well-Known Member
Jan 6, 2016
I guess the prices of the HGST drives are up a lot these days; they used to easily undercut the Intel SATA drives.

Intel drives are solid for sure, and if they're the right price, just do it. The others I have liked for light workloads on SATA are the Samsung SM863/PM863.
 

Evan

Well-Known Member
Jan 6, 2016
PM953 SSDs?
Sure, absolutely, but those are PCIe.
I was just thinking that since he has a SAS adapter, he could take advantage of cheap used SAS.

Otherwise I absolutely say PCIe/NVMe for performance and SATA for bulk data.

If I were wise enough to do what I just said, I would dump my SAS drives, but since I have drives of a decent size I just use them.
 

Rand__

Well-Known Member
Mar 6, 2014
I don't think you can easily RAID those M.2 cards yet... If you want RAID, you need SATA/SAS. Then any cheaply available enterprise-class drive will do just fine.
 

Evan

Well-Known Member
Jan 6, 2016
Yeah, I think RAID will be the issue. Otherwise they well outperform the SATA SSDs.
 

ninja6o4

Member
Jul 2, 2014
Hmm, two standalone PM953s would run circles around RAID1 S3500s, judging by the read IOPS (240,000 each vs. 2x 75,000).
So, if I opt out of RAID, the M.2 + AIC route is the better choice.
Could I maybe do software redundancy at the hypervisor level..?
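The rough math on that comparison, using the spec-sheet read IOPS figures quoted above and assuming RAID1 reads can at best alternate across both members:

```shell
# Spec-sheet random-read IOPS from the thread (assumed values, not measured):
pm953_each=240000   # Samsung PM953, per drive
s3500_each=75000    # Intel S3500, per drive

# Two standalone PM953s serve reads independently:
echo "PM953 standalone pair: $((2 * pm953_each)) IOPS"   # 480000

# A RAID1 of two S3500s can at best spread reads across both members:
echo "S3500 RAID1 pair: $((2 * s3500_each)) IOPS"        # 150000
```

So even giving the RAID1 full credit for read balancing, the NVMe pair is roughly 3x ahead on paper.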
 

K D

Well-Known Member
Dec 24, 2016
Mirrored Storage Spaces in Hyper-V? If using Proxmox, you can use a ZFS mirror. But with most other hypervisors you will end up going the AIO route.
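For the ZFS route, a minimal sketch of what that mirror would look like; the pool name and device paths here are placeholders, substitute your own NVMe devices:

```shell
# Create a two-way ZFS mirror for VM storage (Proxmox/ZFS host assumed).
# "vmstore", /dev/nvme0n1, and /dev/nvme1n1 are placeholder names.
zpool create vmstore mirror /dev/nvme0n1 /dev/nvme1n1

# Verify both members show ONLINE; the pool survives either drive failing.
zpool status vmstore
```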
 

Rand__

Well-Known Member
Mar 6, 2014
On the other hand, if you only have the OS on the boot drive and keep a note of / back up the config, then that's easily replaced in under 30 minutes (if you've got a spare drive/stick). Datastores are much more important not to lose.

Of course, in an AIO situation, make sure you also have a backup of the config for your AIO storage box (FreeNAS/napp-it, whatever), which will potentially reside on the OS drive.
 

ninja6o4

Member
Jul 2, 2014
The SSDs would only contain the hypervisor and the VM boot volumes, and let's assume that all configs will be backed up to a separate disk/offsite. My goal here is reduced downtime from a single disk failure. Essentially, a RAID1/SW mirror would save me the trouble of reinstalling the OS and restoring configs 10 times over in the event of one disk failure; while that's not difficult, it is time consuming. I could do VM snapshots as well, but in the case of a single failure at 3 AM, I'd still need to actively restore snapshots to a donor disk and get services running again.

How about this config:
- LSI RAID1 2x 100GB-200GB 2.5" Intel S3500, for hypervisor OS itself
- 2x 800GB-1TB M.2 PM953 + 2x M.2 PCIe x4 AIC, configured as mirror in OS, for VM boot volumes

I don't think the mobo in this setup supports PCIe bifurcation, so does it make better sense for each drive to have its own PCIe AIC, rather than a dual-M.2 AIC where both drives share one x4 uplink?

EDIT: thinking about this some more... maybe I can configure Hyper-V to use a regular spinning disk as a failover in the event of disk failure. It'd be significantly slower, but the services wouldn't care; it'd be temporary until I can repair the mirror, and I wouldn't have any downtime.
 

Rand__

Well-Known Member
Mar 6, 2014
Hm, there seems to be a thing like auto-bifurcation on X11 boards, but honestly I wouldn't bet a dime on it working on an X9.
You could ask SM support though :)
 