ZFS SSD configuration options

brianmat

Member
Dec 11, 2013
58
9
8
We're looking at adding some SSDs into our current NFS storage for our SQL Server VMs. I am looking at probably 256GB Samsung 840 Pro drives. We run everything in VMware against a Napp-It NFS server.

We're looking at 5 drives to start and I am torn between a Raid-Z configuration or going with mirrors. We've had great performance on our mirror setup compared to what we used to get on our MD3000i array.

The server is running IBM M1015 controllers flashed to IT firmware.

I like going with mirrors for the performance and ability to extend the array in pairs of drives. On the other hand, with the SSDs that's a lot of space to lose. We're sitting at about 8-9 drive slots free, so we are a little limited on our expansion unless we add an additional chassis. I would love to add one of the 45 drive 2.5" Supermicro JBOD chassis to the mix at the end of the year.

Which way would you go with this if it were up to you? Mirrors or RaidZ?

Pros (Mirror):
- Increased speed
- Cheaper to add bits of storage space (pairs vs. full RaidZ pool)

Cons (Mirror):
- Less space per chassis density

Pros (RaidZ):
- More space available
- Faster(?) reads

Cons (RaidZ):
- More expensive to add to the pool

Decisions, decisions, decisions...
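To put rough numbers on the space trade-off above (a back-of-the-envelope sketch only, assuming the 5 drives mentioned and ignoring ZFS metadata overhead and the usual keep-it-under-70%-full guidance):

```shell
drive_gb=256
ndrives=5

# RAID-Z1: one drive's worth of capacity goes to parity across the vdev
raidz_usable=$(( (ndrives - 1) * drive_gb ))

# Striped mirrors: half the raw space, and with an odd drive count
# one drive is left over as a spare
pairs=$(( ndrives / 2 ))
mirror_usable=$(( pairs * drive_gb ))

echo "RAID-Z1: ${raidz_usable} GB usable; mirrors: ${mirror_usable} GB usable (+1 spare drive)"
```

So with these 5 drives it is roughly 1 TB usable for RAID-Z1 versus 512 GB for two mirror pairs, which is the "lot of space to lose" being weighed here.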
 

dswartz

Active Member
Jul 14, 2011
393
33
28
Isn't sql going to be random I/O mostly? If so, you do NOT want any flavor of raidz...
 

gea

Well-Known Member
Dec 31, 2010
2,485
837
113
DE
For VMs, especially with databases, you need good I/O values. Multiple mirrors are best for I/O, but SSDs deliver several thousand IOPS versus several hundred with spindles. Therefore I see no real problem using RAID-Z with SSDs.

But you can also use a single 2-way or 3-way mirror with SSDs up to 1 TB today - optionally with an extra ZIL to keep small random writes off the data SSDs (e.g. a write-optimized SLC SSD or an Intel S3700). For max performance stay below a 50% fill rate; for good performance stay below say 70%.

This way you can easily add more mirrors with max I/O performance.
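As a sketch of what that pair-wise growth looks like (device names and the pool name "tank" are hypothetical; illustrative only, do not run against a live system):

```shell
# Create a pool striped across two mirror vdevs:
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# Later, grow capacity and I/O by adding one more pair:
zpool add tank mirror c1t4d0 c1t5d0

# Optionally add a dedicated log device (e.g. a write-optimized SSD)
# so small sync writes land there instead of on the data SSDs:
zpool add tank log c1t6d0
```

Each added mirror vdev is striped into the pool, so I/O scales with the number of pairs.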
 

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
Isn't sql going to be random I/O mostly? If so, you do NOT want any flavor of raidz...
Correct, regardless of Drive type.

For VMs, especially with databases, you need good I/O values. Multiple mirrors are best for I/O, but SSDs deliver several thousand IOPS versus several hundred with spindles. Therefore I see no real problem using RAID-Z with SSDs.

But you can also use a single 2-way or 3-way mirror with SSDs up to 1 TB today - optionally with an extra ZIL to keep small random writes off the data SSDs (e.g. a write-optimized SLC SSD or an Intel S3700). For max performance stay below a 50% fill rate; for good performance stay below say 70%.

This way you can easily add more mirrors with max I/O performance.
Sorry but I see this going pear-shaped fast.

The OP is rather confusing and convoluted: there is no mention of ZFS other than RAID-Z. Otherwise, it is all NFS, which can sit on any number of systems.

If you are going to throw SQL at an array, don't use parity; the calculations needed by the ROC (or by the CPU, in the case of ZFS) just add latency and overhead. The networking is bad enough, so don't add any more than needed.

RAID-1 or RAID-10 only

Put the M1015 into IR mode, pass the current drives to whatever you want, and hang the 2 or 4 SSDs directly off it in RAID-1 or 10. Simple, fast, and not tied to an OS.
 

brianmat

Member
Dec 11, 2013
58
9
8
Correct, regardless of Drive type.

Sorry but I see this going pear-shaped fast.

The OP is rather confusing and convoluted: there is no mention of ZFS other than RAID-Z. Otherwise, it is all NFS, which can sit on any number of systems.
I figured the mention of a Napp-It server would have indicated the use of ZFS. Sorry for the confusion on that one, but we are all ZFS right now.

Most of our databases are read-heavy with monthly batch updates to the data. The transactional data we have has a reasonable TPS requirement that even our current spinning drives can handle just fine. We're doing mostly data analytics work, so it's a lot of long-running jobs in the background.

I recognize the overhead of parity calculations, but with the speed of upper-end SSDs, the question becomes just how much of a factor that is in this configuration. With 7200 RPM spinning drives it's a big concern. With solid state - is it really?

Mirrored drives give the best overall performance, but with an M1015 will it make a difference? At what point does the controller become the bottleneck?

Right now I'm leaning towards mirrors, mostly for the ability to extend the array in pairs. It's an easier ongoing cost to build into the monthly IT budget versus waiting longer to buy drives in batches of 5 or 7.
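The incremental-cost argument is easy to see with a hypothetical per-drive price (the $230 figure below is assumed, not from the thread):

```shell
# Assumed street price for a 256GB 840 Pro; illustrative only.
price_per_drive=230

# Growing a mirrored pool: buy two drives at a time
mirror_step_cost=$(( 2 * price_per_drive ))

# Growing a RAID-Z pool: buy a whole new vdev's worth (5 drives here)
raidz_step_cost=$(( 5 * price_per_drive ))

echo "mirror expansion step: \$${mirror_step_cost}; raidz expansion step: \$${raidz_step_cost}"
```

A few hundred dollars per month fits a recurring budget far more easily than a four-figure purchase every expansion cycle.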
 

gea

Well-Known Member
Dec 31, 2010
2,485
837
113
DE
RAID-1 or RAID-10 only

Put the M1015 into IR mode, pass the current drives to whatever you want, and hang the 2 or 4 SSDs directly off it in RAID-1 or 10. Simple, fast, and not tied to an OS.
Using hardware RAID mirrors under ZFS is possible, but there is no advantage over a ZFS mirror, and there are several disadvantages:
- You lose the self-healing feature of ZFS: ZFS still detects every error (which the hardware RAID mostly cannot), but it can no longer fix them.
- You mostly lose SMART features and hot-plug capability (this may depend on the controller).
- ZFS reads from both sides of a mirror in parallel; most hardware RAID controllers do not.
- If a hardware controller does its own caching without a BBU, data may become corrupted despite a sync write.

And with ZFS you are not tied to an OS or a disk driver. Better to use a pure HBA without any RAID layer for ZFS.
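The self-healing point can be checked in practice with a scrub (a sketch; the pool name "tank" is hypothetical):

```shell
# Read every block in the pool, verify checksums, and repair any
# bad copies from the good side of the mirror:
zpool scrub tank

# Show per-device READ/WRITE/CKSUM error counters and list any
# files that could not be repaired:
zpool status -v tank
```

On a hardware RAID volume presented to ZFS as a single device, the scrub can still detect checksum errors but has no redundant copy to repair from.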
 

Lost-Benji

Member
Jan 21, 2013
424
23
18
The arse end of the planet
Now that the OP has made it clear that his load on the system is nothing special (HDDs are doing just fine), a simple mirror is the best option, provided a couple of things are considered: background/foreground garbage collection and TRIM.

I knew there would be a ZFS reasoning, but let's look at it this way: you now have an OS that needs a serious amount of knowledge to make work well, or to dig yourself out of a hole when it goes to shit (please don't argue; simply looking at how many different forums are spattered with ZFS setup/config/issue threads says enough).
I am not saying it is a bad way to go (it is on my cards as a serious option in the very near future), but when it comes to mission-critical systems, KISS wins hands down. Keep It Simple, Stupid.

As the OP seems to know a bit about ZFS, go for it; just keep in mind that if something happens to him or the system and he isn't around, who is going to pick up the pieces?
 

brianmat

Member
Dec 11, 2013
58
9
8
I knew there would be a ZFS reasoning, but let's look at it this way: you now have an OS that needs a serious amount of knowledge to make work well, or to dig yourself out of a hole when it goes to shit (please don't argue; simply looking at how many different forums are spattered with ZFS setup/config/issue threads says enough).
I am not saying it is a bad way to go (it is on my cards as a serious option in the very near future), but when it comes to mission-critical systems, KISS wins hands down. Keep It Simple, Stupid.
For the record, building out our ZFS server has been a fraction of the hassle, learning curve, and price of the MD3000i we started with. We're also not stuck with a $2000 RAID controller replacement cost when one of the pair dies (which it did). Performance is also double that of the MD3000i, so it's a win all around.

I bet if you throw any vendor at an SMB there's going to be a learning curve and a TOG problem (This One Guy), so having a fleet of people to manage the server isn't a luxury we always have. I don't see how teaching someone to use the Napp-It client with some base ZFS knowledge is going to be any harder than bringing someone up to speed on EMC, NetApp, Nexenta, etc. if they have never been exposed to them before.

By your argument you could say the same thing about Cisco configurations. There are many forums spattered with Cisco IOS setup/config/issue threads. Says enough about Cisco, huh? No, not at all.

Our ZFS server, running on quality yet commodity hardware (mostly Supermicro), has been up for over a year without needing a restart and delivers double the performance of our MD3000i - which required one RAID controller replacement under warranty, has been slow, required Dell-branded drives, and is now running on a single controller out of warranty. Sure, I would love to have the spare cash floating around to throw $50-75k at some EMC hardware, but that's not in the cards and it's going to take a while to get to that point. You make do with the budget you have, and you make it a point to buy the highest quality that fits in it.