best storage for ESXi VM environment


hieu2ueih

New Member
Oct 15, 2013
Hi,

I'm new to the ServeTheHome forums and to building servers at home. I currently have four 1TB drives in a RAID5 on the onboard Intel ICH10R controller. A particular drive keeps dropping out of the array when it's idle; it usually reconnects without issue, but sometimes I do have to rebuild, and the array is horrendously slow while that happens. I wanted to do an upgrade and was able to get my hands on an RS25AB080 RAID controller (1GB cache) for $200. I don't know much about the card other than that it's LSI-based and looks similar to the IBM M5016. I have been contemplating a new ESXi VM environment and wanted to know what people think would be the best storage option. I was considering 6x 2TB drives in RAID10 plus one 500GB Samsung 840 Pro for CacheCade Pro. Is that an optimal setup, or is RAID5 still the better choice? Also, any suggestions on drives? I was thinking either the WD NAS drives or the Seagate Constellations.
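
For a rough comparison of the two layouts, here's a quick back-of-envelope sketch (standard capacity formulas and textbook write-penalty factors; the per-drive IOPS figure is just an assumed number, nothing measured on my hardware):

Code:
# Back-of-envelope comparison of 6x 2TB in RAID10 vs RAID5.
# Capacity formulas and write-penalty factors are the usual rules of
# thumb; the per-drive IOPS figure is an assumption, not a measurement.

DRIVE_TB = 2
N_DRIVES = 6
SINGLE_DRIVE_IOPS = 150  # assumed for a 7200rpm SATA/NL drive

def raid10(n, size_tb):
    usable = (n // 2) * size_tb
    write_penalty = 2        # each write lands on both mirrors
    return usable, write_penalty

def raid5(n, size_tb):
    usable = (n - 1) * size_tb
    write_penalty = 4        # read data + parity, then write data + parity
    return usable, write_penalty

layouts = {"RAID10": raid10(N_DRIVES, DRIVE_TB),
           "RAID5": raid5(N_DRIVES, DRIVE_TB)}

for name, (usable, penalty) in layouts.items():
    write_iops = N_DRIVES * SINGLE_DRIVE_IOPS / penalty
    print(f"{name}: ~{usable} TB usable, ~{write_iops:.0f} random-write IOPS")

On those rough numbers, RAID10 gives up 4TB of space but roughly doubles the random-write headroom, which is usually what VMs feel first.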

Thanks!
 

lpallard

Member
Aug 17, 2013
Hmm, good question; I am in the same boat myself. Right now I am leaning toward a ServeRAID M5016 with 2x 2TB SAS drives in RAID1 (new drives, of course). With your 6x 2TB drives in RAID10, do you really need 6TB of storage space for the VMs?

For me the budget is the barrier to my ambitions... For now I will concentrate the crucial/critical stuff (priority 1) on the RAID1 array and keep the other stuff I want to retain (priority 2) on a RAID5 array, with weekly rsnapshot backups to offline hot-swappable drives.

hieu2ueih said:
I was wondering if this is an optimal solution or is RAID5 still optimal?
I personally have more confidence in RAID10 (or simple RAID1) than in a standalone RAID5 or RAID6 array. I currently use a RAID5 array, but as I'm sure you know as well as or better than I do, with RAID5 or 6 the larger the drives are (especially the one that fails), the longer it takes to reconstruct the array, which means a greater statistical chance of another drive going AWOL and losing the whole array.
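
To put a rough number on that rebuild window (just a sketch: the 1-per-1e14 URE rate is the usual consumer spec-sheet figure and the ~130 MB/s rebuild read rate is an assumption, not something I've measured):

Code:
# Rough estimate of rebuild time and the chance of hitting an
# unrecoverable read error (URE) while rebuilding a degraded RAID5.
# Assumes the commonly quoted consumer spec of 1 URE per 1e14 bits
# and ~130 MB/s sustained reads; both are assumptions, not measurements.

DRIVE_TB = 2
N_DRIVES = 6
URE_PER_BIT = 1e-14       # spec-sheet figure for consumer drives
REBUILD_MB_S = 130        # assumed sustained read rate during rebuild

drive_bytes = DRIVE_TB * 1e12
# A RAID5 rebuild has to read every surviving drive in full.
bits_read = (N_DRIVES - 1) * drive_bytes * 8

p_ure = 1 - (1 - URE_PER_BIT) ** bits_read
rebuild_hours = drive_bytes / (REBUILD_MB_S * 1e6) / 3600

print(f"Rebuild time:          ~{rebuild_hours:.0f} hours (best case)")
print(f"P(URE during rebuild): ~{p_ure:.0%}")

Spec-sheet URE rates are arguably pessimistic in practice, but it shows why big-drive RAID5 rebuilds make people nervous.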

IMO RAID5/6 are only good for very fast, relatively small drives. I'd feel a lot more confident with extra protection on top of plain RAID5/6, either mirroring or splitting into multiple smaller parity groups (nested levels like RAID50/60).

I have used Seagate Barracudas for a while now and have mixed feelings about them. While most of them have run "flawlessly" for several years, others died after weeks or months of operation, and sometimes the RMA replacement lasted even less time than the original retail drive!

IMO the Hitachi Deskstars were the most reliable consumer-level drives on the market. I have 11 of them that have been running 24/7 since 2007. Based on my experience, I avoid WD consumer drives like the plague. I can't speak to their RE line; I haven't tried them and probably won't.

I hope this helps you make up your mind!
 

Mike

Member
May 29, 2012
EU
The number of drives that can fail before the shit hits the fan is the same though... worst case, a RAID10 still only survives one failure (if the second drive in a mirrored pair goes), just like RAID5.
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
We have been running close to 150 3TB WD Red drives for 9 months with no significant failures. All the failures we've seen (5 or so) were DOA; none have failed in regular use yet. We have 250 4TB Seagate NAS drives that we are about to fire up and don't expect problems there either. We use RAID6 with "packs" of 12 drives: 9 data, 2 parity, and 1 global hot spare. We do 3 packs in a chassis, so 3 GHS per chassis.
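
For anyone curious about the overhead of that layout, the quick arithmetic looks like this (nominal capacities only, no filesystem overhead):

Code:
# Nominal capacity of the RAID6 "pack" layout described above:
# 12 drives per pack = 9 data + 2 parity + 1 global hot spare,
# 3 packs per 36-bay chassis. Marketing TB, no filesystem overhead.

DRIVE_TB = 3              # the 3TB WD Reds
DRIVES_PER_PACK = 12
DATA_PER_PACK = 9
PACKS_PER_CHASSIS = 3

usable_per_pack = DATA_PER_PACK * DRIVE_TB
raw_per_chassis = PACKS_PER_CHASSIS * DRIVES_PER_PACK * DRIVE_TB
usable_per_chassis = PACKS_PER_CHASSIS * usable_per_pack

print(f"Usable per pack:    {usable_per_pack} TB")
print(f"Usable per chassis: {usable_per_chassis} TB of {raw_per_chassis} TB raw "
      f"({usable_per_chassis / raw_per_chassis:.0%})")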

Runs solid.
 

Jeggs101

Well-Known Member
Dec 29, 2010
Chuckleb said:
We have been running close to 150 3TB WD Red drives for 9 months with no significant failures... Runs solid.
Bet that stores a LOT of STHbench log files :p:cool:

What chassis are they in? (Is this getting off-topic? Is there a build thread?)
 

lpallard

Member
Aug 17, 2013
Chuckleb said:
We have been running close to 150 3TB WD Red drives for 9 months with no significant failures... Runs solid.
Wow! Impressive to say the least!

Are these drives all assembled into one large 750TB pool, or into separate pools?

What are the power/cooling requirements?

I guess the Reds are not as bad as I thought after all... ;)
 

mrkrad

Well-Known Member
Oct 13, 2012
You can use this for ESXi, but the latency is going to bomb your system out!

Also, use the SAS option when possible instead of nearline SATA: the drives respond far quicker, with SAS expanders you can connect to the same drive up to 16 times versus a single path with SATA, and SAS uses LVD signaling, which is full duplex and runs at higher voltage for a much more stable 6Gbps link.

I'm not sure why you would want to use nearline drives at all. The only SATA drives I've been running with ESXi are Samsung 840 Pros in RAID-1, and that is for balls-out speed, reliably. Everything else runs on 450GB 15K SAS 3.5" drives in RAID-10, 8 drives per server; it is robust and very stable.

I find it easier to separate the servers that need to GO FAST from the servers that can go fast or slow into two silos:

1. Fast: Samsung 840 Pro, RAID-1, extent spanning, 1 VM per host (L5639 servers).
2. Slower VMs that need a ton of space: multiple VMs per host, 8-drive RAID-10 of 450GB 15K SAS in a DL180 G6 (P4300G2) server, L5639 as well.

All in all it works out well: the fast tier goes fast without impediments to slow it down, and the slow tier goes comparatively slow. Keep in mind that 8 15K SAS drives in RAID-10 are still very fast, with no SAS expander to slow things down and a P420 with 1GB FBWC to help offload disk IOPS.

The LATENCY warnings in ESXi are very real. If you start to see them, they are a precursor to failed heartbeats and loss of datastore connectivity, and you should pay close attention whenever they happen out of the ordinary!


Stick to RAID-10 for ESXi with SAS drives (10K/15K/SSD), and avoid technologies that aren't consistent (CacheCade). You'll find that crashes and timeouts to datastores are not much fun when you have to rebuild the entire server from scratch because of a poor choice :(
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
I agree with mrkrad that SAS drives are the good choice; we have a few servers with data that we care about and speed requirements, and those run SAS.

Regarding the SATA systems that I have, they are basically bulk storage for lots of imagery. Speed is not as important: they are connected to compute systems via QDR IB and are also accessed via CIFS over GigE, which becomes the bottleneck long before the SATA does.
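
To give a sense of why GigE is the bottleneck, here's the rough line-rate math (theoretical rates only; the per-drive figure is an assumption, and real CIFS/IB throughput will be lower once protocol overhead is counted):

Code:
# Rough line-rate comparison of the links in the path vs. what one
# RAID6 pack can stream sequentially. Theoretical rates only; the
# per-drive figure is assumed, and real CIFS/IB throughput is lower.

GIGE_MB_S = 1_000 / 8       # 1 Gb/s GigE            -> ~125 MB/s
QDR_IB_MB_S = 32_000 / 8    # QDR IB, ~32 Gb/s data  -> ~4,000 MB/s
DRIVE_SEQ_MB_S = 120        # assumed per-drive sequential rate (7200rpm SATA)
DATA_DRIVES_PER_PACK = 9

pack_stream = DRIVE_SEQ_MB_S * DATA_DRIVES_PER_PACK  # best-case sequential read

print(f"GigE link:      ~{GIGE_MB_S:.0f} MB/s")
print(f"QDR IB link:    ~{QDR_IB_MB_S:.0f} MB/s")
print(f"One RAID6 pack: ~{pack_stream:.0f} MB/s (best-case sequential)")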

The systems are in three stages. We are migrating off of a Gluster system because the DHT model is utterly slow for folders with 20-50k files in them, and we're rebuilding the nodes as a big Lustre array. The nodes themselves are generally 36-bay standalone compute systems, and we build three 25TB arrays per chassis; in Gluster, that would be 3 bricks per node. On our SAS systems, we have a 36-bay controller and 2x 45-bay expansion chassis attached to it with 2TB enterprise SAS drives. We just got our first batch of the 4TB drives in and will get that built over the next month, migrating the 3TB NAS drives to other purposes. After seeing how it runs, we'll migrate in another order of storage nodes and associated 4TB disks. This is all meant as bulk storage where speed doesn't matter as much, though we realize that some speed is important, which is why the Lustre buildout.

Sorry for the thread hijack, too; I should start a new one when I have time. I only meant to chime in about the reliability of NAS drives, and now we're onto desktop drives, whoa diggity.
 

RimBlock

Active Member
Sep 18, 2011
Singapore
I would also echo the suggestion of SAS drives.

I used to run my home setup on 3x 1.5TB Seagate Barracuda SATA drives. I then picked up a lot of ten 146GB 15K dual-port 3.5" SAS drives, which worked out to around $12 each. I put them in an MD1000 disk shelf ($400 off eBay) and dual-connected them to a Dell H200 SAS controller ($100 or so from eBay). The server they are attached to has them configured as a nine-drive RAIDZ2 array (one drive was DOA), and since one drive has since been sold, the array is currently running degraded. Even degraded, the array is still nice and fast and rock solid, although it is a bit power hungry.
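
Rough numbers on what that works out to (nominal capacities and the per-drive price from that lot; ZFS metadata and slop space not included):

Code:
# Rough usable capacity and cost of the nine-drive 15K SAS RAIDZ2
# described above. Nominal capacities only; ZFS overhead is ignored
# and the per-drive price is what the lot worked out to.

DRIVE_GB = 146
PRICE_PER_DRIVE = 12      # approximate, from the lot of ten
VDEV_DRIVES = 9           # nine-drive RAIDZ2 vdev
PARITY_DRIVES = 2

usable_gb = (VDEV_DRIVES - PARITY_DRIVES) * DRIVE_GB
cost = VDEV_DRIVES * PRICE_PER_DRIVE

print(f"Usable space: ~{usable_gb} GB")
print(f"Drive cost:   ${cost} (~${cost / usable_gb:.2f} per usable GB)")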

If I were doing it again, I would probably get 3 or 4 of the 500GB+ SSDs and just make sure I had a decent backup routine running.

The 300GB and 450GB 15K SAS drives are also coming down in price and may be viable. A lot of companies refresh after a couple of years, preferring to bring in new drives rather than run the risk after the initial 2-year period. Most of the drives, in their retail versions, carry a 5-year warranty, so they should be good for another 3 years or so if looked after.

Again, just to repeat: this is for a home setup and with a reasonable backup in place.

RB