VM Storage - SAS, SATA or SSD?


LFletcher

New Member
Mar 26, 2013
24
0
1
UK
I'm in the process of moving my VMs to a new server, so I have the opportunity to set up the storage optimally.

Currently I have 7-8 VMs running off 3 SATA drives in RAID5 - I'm aware this is not optimal for IO performance.
One of the VMs might write up to 20-30GB a day, but the rest generally don't have a great deal of IO occurring. This is also a home setup, so although nothing is critical, I'd rather not run RAID0, for example.

For the new server I was thinking of maybe having 4 SSDs in RAID10, but I'm concerned that the daily writes might kill the SSDs prematurely.

Using SAS drives (also in RAID10) was another option which occurred to me the other day.

Alternatively, should I just use SATA drives in RAID10? Will that give me a big enough jump over the current RAID5 performance that the other two options aren't worthwhile?

I also have an LSI card, so using CacheCade with either SAS or SATA drives is also an option.


Thoughts and options please.

Thanks
 

Mike

Member
May 29, 2012
482
16
18
EU
Can't you split the IO-heavy VM from the rest and give it its own regular array?
The rest of the VMs could go on 2 mirrored SSDs and still outrun your old setup by miles.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
I have found that RAID-1 is superior (Samsung 840 Pro/LSI) using small strip sizes (8K), given that the 512GB model writes in smaller pages (4K). Newer SSDs write in 16K pages, like the M500 960GB.

Writing with a 64KB strip would mean 16 pages per SSD per write (STRIP = per drive, STRIPE = per array).

Then span the volumes in ESXi to get more storage, and it will sort of naturally balance itself (not perfectly).

This is stable. My old 830s are still rocking out in this fashion to reduce write amplification.

RAID-10 means 4 SSDs => change 1 byte, write to all 4 SSDs, with the STRIP size on top of that (a 64KB STRIP = 16 pages written even though only 1 byte changed, times the number of drives).

LINEAR performance scales well with RAID-10, but random I/O does not scale at all with LSI and the 840 Pro (!!).

Most VM users are not doing linear I/O but very random I/O.
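The strip-size arithmetic above is easy to sketch. The 4K page size and the 8K/64K strip sizes are the ones mentioned in this post; everything else here is just illustration:

```python
# Pages rewritten per drive for a given strip size and NAND page size.
# STRIP = chunk written per drive; STRIPE = one full row across the array.
def pages_per_strip(strip_kb: int, page_kb: int) -> int:
    return strip_kb // page_kb

# With 4K NAND pages (e.g. the 512GB model mentioned above):
print(pages_per_strip(8, 4))    # 8K strip  -> 2 pages per drive
print(pages_per_strip(64, 4))   # 64K strip -> 16 pages per drive
```

So even a 1-byte change forces 16 page writes per drive at a 64K strip, which is why a small strip size cuts write amplification on small random I/O.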
 

LFletcher

New Member
Mar 26, 2013
24
0
1
UK
Thanks for the responses.

For reference this would be Hyper-V on Server 2012.

I could split the VMs onto different arrays/LUNs. I have 8-12 disks to play with in total, depending on which case I put them in.

So assuming splitting the VMs is a good idea, for the one(s) which write 20GB a day, what would be the best setup? Would it be better just to stick this on a single SSD and forgo RAID1 and resilience?
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
I very highly recommend simple SATA SSD drives for VM storage. In fact, if you place all of your seven current VMs onto a single SSD drive - or on a mirrored pair if you wish - you'll see dramatically better performance than you have now.

If you have a heavier than usual write workload, ease the burden on the SSD drive(s) by buying a larger drive than you need and formatting it to leave part of the drive empty. A 512GB drive formatted to 400GB will last a very long time with only 30GB/day of writes - likely more than ten years.
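That "more than ten years" claim checks out on a back-of-the-envelope basis. The P/E cycle count and write-amplification factor below are assumptions for consumer MLC of that era, not figures from this thread:

```python
# Rough SSD lifetime estimate under a steady host write load.
capacity_gb = 512            # drive size from the example above
pe_cycles = 3000             # assumed P/E cycles for consumer MLC NAND
write_amplification = 5      # assumed internal write overhead (conservative,
                             # helped further by the over-provisioning above)
host_writes_gb_per_day = 30

nand_budget_gb = capacity_gb * pe_cycles
host_budget_gb = nand_budget_gb / write_amplification
lifetime_years = host_budget_gb / host_writes_gb_per_day / 365
print(round(lifetime_years))   # -> 28
```

Even with these rough assumptions the drive outlives any reasonable service life, which is why the daily-write concern in the original question is mostly unfounded at 20-30GB/day.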

If your VMs won't fit onto a single drive (or mirrored pair of drives), consider adding another, separate, drive or drive pair instead of going with RAID. RAID will give you better sequential read/write performance, but most VMs do far more random disk access than sequential, so a separate new disk will in fact give you better performance than RAID.

Lastly, with SSD drives and your average VM workload, the RAID card is not needed. Buy SSD drives with power loss protection (supercaps, etc.) and use a cheap HBA instead. You'll reduce the complexity of your system, reduce power consumption, and even save money (sell the RAID card to buy a larger SSD drive).

I'm running a Dell c6100 as my VM server. Three nodes each have a 512GB SSD drive for VM storage and each runs 8-12 VMs or more. I have room for a second SSD drive, but so far I haven't needed it either for capacity or performance. Just as an FYI, I have stopped mirroring my VM storage drives and instead use VM replication as my disaster recovery strategy. The 4th node in my c6100 is a VM replica storage node.

Caveats: If you have a highly sequential workload, then ignore the above. If you have terabyte-sized VMs then you need a different strategy.
 

LFletcher

New Member
Mar 26, 2013
24
0
1
UK
Thanks for all the info.

The VMs are currently small, < 100GB each.

Are any (consumer) SSDs better than others for running VMs? I have a Samsung 830 and a couple of Crucial M4s lying around which I could use.

Funny you should mention Replica, as I was having a look at that the other day and was planning on implementing it.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
The Intel S3500 is the bee's knees, having reliability, capacity, and power-loss protection. That said, I happen to have tons of Samsung 830 drives, and they have been excellent for VM storage. Of course the 830s don't have power-loss protection, but then again... replication. I set up replication every five minutes and a consistent snapshot replica every four hours. My only complaint is that my VM replica node, which uses big non-SSD drives to store replicas for 40-odd VMs, is short on IOPS and is a bottleneck. I'm hoping that the SSD write-back caching in Windows Server 2012 R2 Storage Spaces will cure that ill.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
The 830 is a most excellent drive and works very well with the LSI MegaRAID 9260-8i (or M5014). I use the M5014 with the 830 and 840 Pro and they work awesome! A cheap $66 OEM controller. But if you buy new, get the 9266/9271, as they are much faster.

I found that if you go half as fast as possible (830 speeds with an 840 Pro) you get 10x more reliability, and 4 SSDs are still faster than my 8-drive 15K SAS RAID-10 combo at less than half the wattage!

Not sure about the Crucials; they have a very aggressive power profile, and that is what kills your stability. Servers need to run at one speed (FAST!) or be shut down until needed. The whole "balanced profile" causes far more damage than good.

I'd bet 99% of 2-socket Westmere/Nehalem servers spend their life in ESXi in the P12 state, only exceeding 20% load when backups are running. So folks will upgrade and then wonder why things are still just as slow. Then complain.

lol.

Use the 830s; they are very stable. Use RAID-1 and a small strip size, and SPAN the volumes: RAID-1 + RAID-1 = a bigger VMFS volume.
 

Andyreas

Member
Jul 26, 2013
50
4
8
Sweden
Great information, guys. dba, may I ask what you are using for the replication? I know Veeam is excellent here, but are there any decent "free" alternatives? I really like the setup you are using.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Unitrends gives out a free edition for backing up the free hypervisor. I think nobody else does; this stuff seems really expensive.

For ESXi you want VSAN (5.5 public beta), or just do what I do and SPAN RAID-1 volumes. Each span gets its own queue depth, so you can push the SSDs deep.

Once you put two VMs on a LUN, you will get severe choking due to the sharing methods in the advanced settings. Queue depth will never get deep without using a lot of RAID-1 spans or separate volumes.

I found that four 9266-4i cards with 4 drives each are far better than one 9266-8i with 8 drives!! More cards are better, like dba says.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Windows Hyper-V 2012 includes very good replication for free, and that's what I use. Even better, Hyper-V Server 2012 "standalone" is itself free, with no RAM restrictions! I use a "full" copy of Windows Server 2012 instead of the standalone version myself, but that's just because my laptop is still on Windows 7, which means I can't easily run the management client for the standalone version.
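For anyone setting this up, the built-in Hyper-V Replica cmdlets look roughly like the sketch below. The server names, VM name, and storage path are placeholders, and the parameters shown are a sketch to check against your own environment, not a verified recipe:

```powershell
# On the replica server: accept incoming replication (placeholder path).
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replicas"

# On the primary: enable replication for one VM, keeping recovery points
# and a VSS-consistent snapshot every 4 hours (the schedule described above).
Enable-VMReplication -VMName "MyVM" `
    -ReplicaServerName "replica-node" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -RecoveryHistory 4 -VSSSnapshotFrequencyHour 4

Start-VMInitialReplication -VMName "MyVM"
```

In Server 2012 the replication interval is fixed at five minutes, which matches the every-five-minutes schedule mentioned earlier in the thread.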
 