Best alternative to striped mirrored RAID for the best random I/O speeds?


gigatexal

I have a predicament. I will have a total of 16 3TB drives, with maybe 4 more on the way, to round out my Norco 4020.

The research I did shows that a RAID-0-style setup over a bunch of RAID-1 volumes (striped mirrors) is best for random I/O, but that cuts my usable space in half, and I don't need that kind of fault tolerance.

I was thinking 4 disks in a RAID-Z vdev, which is basically RAID-5 if I'm thinking about it right, and then taking several of those RAID-Z volumes and striping them together into a single RAID-0-style setup. Is this fatally flawed?

My usage consists of about 4-6 users doing all sorts of things: streaming movies, backing up their albums, accessing said music, and compiling from source (one of my VMs will probably be Gentoo).

Also, I have a 32GB X25-E I'd like to put to use as a cache if possible. I figure that will limit me to 19 disks, since I'll have to use one port for the SSD; I'm using the PCMIG backplane from one of the articles.
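
To be concrete, here is roughly what I have in mind in zpool terms, assuming I'm understanding vdevs correctly: four 4-disk RAID-Z vdevs striped into one pool, with the X25-E as an L2ARC cache device. The pool name and disk names are just placeholders.

    # four 4-disk raidz (single-parity) vdevs striped into one pool, SSD as L2ARC
    zpool create tank \
      raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde \
      raidz /dev/sdf /dev/sdg /dev/sdh /dev/sdi \
      raidz /dev/sdj /dev/sdk /dev/sdl /dev/sdm \
      raidz /dev/sdn /dev/sdo /dev/sdp /dev/sdq \
      cache /dev/sdr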

Thanks,

alex
 

nitrobass24

Moderator

Basically that is the same as a RAID-50. If you do that and any RAID-Z volume fails, you will lose everything, because it is part of a RAID-0. Personally, I'm not a big fan of nested RAIDs. Honestly, though, for 4-6 users I think a 16-20 drive RAID-Z2 would be fine I/O-wise.

Another option is to create a smaller RAID-10 pool for your VMs and put all the data on a RAID-Z2/RAID-6 model. Take your 4 new drives and make a 6TB RAID-10...for VMs and certain backups.
That is how I have my setup at home: I am running 10 VMs on a 4x 640GB RAID-10, and I have all my data plus VM backups on a 9x 2TB RAID-6. I can easily transfer files at full GbE while the TVs are streaming content and my VMs are doing their thing.
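
If you build that on ZFS (which is what you're planning anyway), a rough sketch of the two-pool layout would be something like this; pool and disk names are made up:

    # small striped-mirror (RAID-10 style) pool for VMs: the 4 new 3TB drives, 6TB usable
    zpool create vmpool mirror /dev/sdr /dev/sds mirror /dev/sdt /dev/sdu

    # single RAID-Z2 (RAID-6 style) pool for bulk data on the other 16 drives
    # ({b..q} is just bash shorthand for /dev/sdb through /dev/sdq)
    zpool create tank raidz2 /dev/sd{b..q}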
 

gigatexal

I figured the likelihood of losing a single vdev that is basically a RAID-5 vdev would be low, since it would mean more than one drive in that RAID-Z vdev failing. Hopefully, before that happened, I'd somehow see a drive going bad and replace it.

Is there a way, after ESXi is installed, to add storage to VMFS for the VMs?

I'll have four 1.5TB drives that I could put into a striped RAID-1 setup to give the VMs storage that would live on the ESXi server itself, rather than inside the VM that will be powering the ZFS fileserver.

3TB for the VMs, I figure, is plenty.

I'm just trying to find the balance between the speed of striping and the fault tolerance of the different RAID levels.
 

cactus

Moderator

I would agree with nitrobass24's suggestion: do two pools. Also, a 6TB VM pool is HUGE. You might look at just doing a 20-drive Z2 or Z3 pool and getting a mirrored (or striped-mirror) SSD pool for the VMs.
 

gigatexal

Yeah, I'm thinking 2x 1.5TB, for 3TB of VM space on VMFS.

As for the DAS ZFS pool, things aren't as clear...

The thing is, if I create a 16-drive RAID-Z2 pool with none of the striping business from before, what happens when I want to add the last 4 disks? Can I grow the volume?
 

nitrobass24

Moderator

Can't speak to ZFS, but logically, whether or not you can add them to a 16-drive RAID-Z2 pool, the same should be true for a stripe of RAID-Z vdevs.
 

nitrobass24

Moderator

Statistically speaking, it's not very high.

From a realistic perspective, when I have lost drives it hasn't necessarily been because of an issue with the drive itself. Just last week I had several drives time out because of a backplane/power issue. I am running a pure RAID-6. The volume was in a failed state, so I shut the system down, fixed the issue, booted into the card's BIOS, and verified the array was detected and in a normal state. Even though I lost more than 2 drives, I did not lose any data. Had those been striped, I don't know if I would have had the same outcome.
 

PigLover

Moderator

ZFS permits the addition of vdevs to an existing pool. They become striped additions to the pool (i.e., similar to RAID-0, except that unlike RAID-0 the vdevs participating in the pool do not all need to be the same size).

There are practical limitations to this for what you describe. When you add the additional vdev, the existing data is not "re-striped" to make use of it. What exists remains as is, and new data written to the pool stripes across all of the vdevs. This generally results in a non-uniform distribution of data across the vdevs and unpredictable performance characteristics. While you are OK even if one of the vdevs gets full (ZFS will simply adjust the striping to account for that), you will also get even more unpredictable performance.

For these reasons (and some other related issues), adding different-sized vdevs to an existing pool is discouraged in the ZFS best practices document.

You cannot change the size of a RaidZ vdev (i.e., you can neither add to nor remove devices from a RaidZ vdev).
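
As a rough illustration (pool and device names are placeholders), adding another RAID-Z vdev to an existing pool is a single command, and there is no corresponding command to grow a RAID-Z vdev in place:

    # stripe a new 4-disk raidz vdev alongside the existing vdevs in the pool
    zpool add tank raidz /dev/sds /dev/sdt /dev/sdu /dev/sdv

    # confirm the resulting layout
    zpool status tank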
 

PigLover

Moderator

Yes to both... yes, you can do a pool of RaidZ vdevs, and yes, you are blowing this out of proportion :).
 

sboesch

I believe that if speed is what you want, you should get a RAID controller with gobs of cache. The number of disks and SSDs needed to make RAID-Z blazing fast is cost-prohibitive for the average home server builder compared to putting 8 spinning disks on a RAID controller with cache. A lot of people get speed greedy and space greedy when building storage devices.
 

gigatexal

Space and speed greedy is what I'm becoming, it seems.

I would do a giant 16-way RAID-Z2 setup, but I hear resilvering on something that large takes eons.

I think a striped RAID-Z setup makes the most sense... I dunno.

Say I suffer a power outage and the whole ESXi box, with the VM that controls the ZFS volumes, goes down all at once. If upon reboot all the drives are properly connected and everything is back to normal, will the striped pool fail?
 

PigLover

Moderator

I run 20 drives in a pool of 2x 10-drive RaidZ2. That gives me the equivalent of 16 data drives, reasonable performance, and reasonable protection. Resilvering exposure is limited to a 10-drive vdev; it will take a day but shouldn't be a disaster.

You could easily do 4x 4-drive RaidZ(1). You'll get better random I/O performance, and your resilver time will be defined by a 4-drive vdev. You'd potentially be exposed to a second drive failure in the same vdev, but that is a low probability. That is also why you keep telling yourself over and over again that 'RAID is not backup' and take action accordingly (i.e., no matter how you configure the ZFS pool, you still need backups!).

If you do this, you could add drives 17-20 as another 4-drive RaidZ vdev later. Yes, you'd have imbalance issues between the (by then) 5 vdevs in the pool, but they are probably not that bad for you.
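
For reference, a minimal sketch of how a layout like mine gets built, with made-up device names; the whole 20-drive pool is created in one shot as two 10-drive RaidZ2 vdevs (the {b..k} bits are just bash shorthand for listing the disks):

    # two 10-drive raidz2 vdevs striped into one pool: ~16 drives of usable capacity
    zpool create tank \
      raidz2 /dev/sd{b..k} \
      raidz2 /dev/sd{l..u}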

As for the power outage, that's why I own one of these and am actively shopping for a good deal on one of these to go with it!
 

gigatexal

Hi gigatexal

Give this a look... it might help explain some of the "rules" about ZFS to you and help make things a bit clearer:

http://forums.freenas.org/showthrea...explaining-VDev-zpool-ZIL-and-L2ARC-for-noobs!

ZFS is made to do nested RAIDs... that's how you are able to get the monster pools of storage. Personally, I wouldn't dream of doing single-parity vdevs with 3TB drives... rebuilds would just make me way too nervous.

-Will
Haha, thanks guys. Sorry I'm being so difficult.

Read the presentation. Pretty good stuff there. I think I'm going to do a pool of five 4-drive RAID-Z vdevs in a stripe and take my chances there.
 

dyu

Haha, thanks guys. Sorry I'm being so difficult.

Read the presentation. Pretty good stuff there. I think I'm going to do a pool of five 4-drive RAID-Z vdevs in a stripe and take my chances there.
If your data is not that important, it's a good strategy.
PigLover's stripe of 10-drive RAID-Z2 vdevs is the best choice otherwise.