ZFS tiered storage


brianmat

Member
Dec 11, 2013
I've been doing some cursory research on ZFS tiering and it seems to come down to ARC/L2ARC caching plus a separate log device (SLOG) on top of slower drives. It started me thinking about our current environment and what the best approach to our storage would be.
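For reference, what I understand that tiering to look like in zpool terms is roughly the following (pool and device names are just placeholders, not our actual layout):

Code:
  # mirrored spinners as the main pool
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  # SSDs added as L2ARC (read cache) on top of the spinners
  zpool add tank cache c2t0d0 c2t1d0
  # optionally a mirrored SLOG to absorb sync writes (NFS, databases)
  zpool add tank log mirror c2t2d0 c2t3d0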

We were looking at moving our databases to SSD arrays with the remaining VMs on our spinning rust mirrored pool. Granted, our Napp-It servers are actually giving decent (for us) performance on the spinners, but we want the databases to have a little more throughput. I also don't like the idea of putting the databases on the SSD "just because", since they are not always going to be the performance bottleneck.

Right now our main array is 24TB of mirrored 2TB drives (Toshiba DT01ACA200). The SSD array is a RAID-Z2 of 960GB SanDisk Pros (6.6TB usable). There's actually a third array which is just for running our backup environment (PHD and Veeam); it doesn't factor in here but is included for the sake of completeness.

Based on our current hardware, would it make more sense to run the SSDs as L2ARC on top of the spinners and let ZFS handle the caching, or are we better off running two separate NFS endpoints?
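The L2ARC route is basically just the "zpool add ... cache" from above on the existing pool; the two-endpoint route would look more or less like this (placeholder names again, with a nine-disk RAID-Z2 matching our 6.6TB figure):

Code:
  # keep the SSDs as their own pool and export it separately over NFS
  zpool create ssdpool raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0
  zfs create ssdpool/databases
  zfs set sharenfs=on ssdpool/databases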

Our SSD array has 72GB of RAM (physical) and our hard drive array has 48GB of RAM (virtual).

This post, ZFS L2ARC (Brendan Gregg), really got me thinking about this more and makes me wonder what the best approach would be.

Of course the lazy side of me (devil) says to use the SSDs as L2ARC and let ZFS sort it out. Even the angel side says in this case the devil has a good point.

Our databases are generally very read-heavy outside of specific points in the month when we handle bulk data loads. It's mostly reporting and analysis work using SQL Server.
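One knob I did come across: you can control per dataset what is allowed into ARC/L2ARC, so if we went the cache route we could at least bias the L2ARC toward the databases rather than the VMs. Rough sketch with placeholder dataset names:

Code:
  # let the database dataset cache data + metadata in L2ARC
  zfs set secondarycache=all tank/databases
  # restrict the VM datasets to metadata so they don't crowd out the DBs
  zfs set secondarycache=metadata tank/vms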

I'm still looking for additional SSD-heavy L2ARC comparisons, but hopefully some of the big data guys here can weigh in.
 

gea

Well-Known Member
Dec 31, 2010
DE
From a certain point on, you simply need raw power, which means IOPS.
In the past you achieved that with massive mirror setups of 15k disks.

Now you can use SSDs. Read IOPS are rarely in question, but write IOPS can be much lower than read IOPS, especially on RAID without TRIM. You can reduce this problem on RAID setups with enterprise SSDs or manual overprovisioning.
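Manual overprovisioning can be as simple as partitioning only part of each SSD and building the vdev from the partitions, so the controller always keeps spare area. A rough example on a Linux box (placeholder device names; on OmniOS you can do the same with format or parted):

Code:
  # leave ~20% of a fresh or secure-erased SSD unpartitioned as spare area,
  # repeat for every SSD in the vdev
  parted -s /dev/sdb mklabel gpt
  parted -s /dev/sdb mkpart zfs 1MiB 80%
  # build the pool from the partitions instead of the whole disks
  zpool create ssdpool raidz2 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1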

If you can restrict your problem mainly to reads, the new NVMe disks may be a solution. For example, an Intel 750 or P3610 can give you a read performance of several GB/s and several hundred thousand read IOPS, see Intel® SSD 750 Series: Performance Unleashed. Also add enough RAM if you go with a large L2ARC: for a 480GB L2ARC you should use more than 32GB of RAM, better 64 to 128GB.
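The reason the RAM requirement scales with L2ARC size is that every record cached in L2ARC keeps a header in ARC. The exact per-record overhead depends on the ZFS version; the ~180 bytes used below is an older, often-quoted figure and only an estimate, with an 8K recordsize assumed for a database workload:

Code:
  # back-of-the-envelope L2ARC header overhead (assumed ~180 B per record)
  L2ARC_BYTES=$((480 * 1000 * 1000 * 1000))   # 480 GB L2ARC
  RECORDSIZE=8192                             # 8K records, typical for databases
  HDR_BYTES=180
  echo $(( L2ARC_BYTES / RECORDSIZE * HDR_BYTES / 1024 / 1024 / 1024 ))
  # prints 9, i.e. roughly 10 GB of ARC consumed by headers alone;
  # with 128K records the same L2ARC needs well under 1 GB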
 