Hi all
I am upgrading my Solaris 11.3 home server/NAS. I have 27 x 2TB SATA3 drives, which will be configured as three 9-drive vdevs, either 3 x RAIDZ2 or 3 x RAIDZ3 (I haven't decided yet).
I also plan to add a couple of SSDs for SLOG and read cache, and my question for this thread is about read cache (L2ARC).
When data is in read cache/L2ARC, does ZFS read this data only from the cache, or does it read some from L2ARC and some from the underlying disks?
In other words, when looking to boost performance with L2ARC devices, do I need those devices to always be faster than the underlying disks, or is the performance of L2ARC additional to the performance of the disks?
I want to add read cache primarily to boost my IOPS, where of course SSDs will vastly outperform spinning disks. But my spinners do give me great sequential read/write performance, and that matters for at least some of my workload (very large media files, e.g. MP4 videos of up to 20GB).
In my current benchmarks, using iozone, I am recording sequential speeds of 2.2GB/s writes and 1.7GB/s reads.
My issue is that my planned cache devices likely won't be as fast for sequential reads. I was thinking of getting 2 x Samsung 850 EVO drives, which are rated at 550MB/s sequential reads. Two of these striped would give me about 1.1GB/s sequential, quite a bit slower than the 1.7GB/s sequential reads my spinning disks can sustain.
What would be perfect is if ZFS could use the read cache in addition to the underlying storage, i.e. split reads so that some are served from the L2ARC devices and some from the pool, such that total throughput could approach the sum of cache and pool bandwidth.
Conversely, if cached data is read only from L2ARC, bypassing the pool completely, then it seems to me that in some circumstances my proposed config of 2 x 550MB/s SSDs would actually lower sequential performance.
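To make the concern concrete, here's a back-of-envelope comparison of the two behaviours in plain Python. The numbers come from my benchmarks and the SSD spec sheets above; the "additive" model is the hypothetical behaviour I'm asking about, not something I'm claiming ZFS actually does:

```python
# Back-of-envelope model of the two possible L2ARC read behaviours.
# POOL_SEQ_READ is my measured iozone figure; SSD_SEQ_READ is the rated
# sequential read of one Samsung 850 EVO. The "additive" case is
# hypothetical -- it assumes ZFS splits reads across cache and pool.

POOL_SEQ_READ = 1700   # MB/s, measured on the 27-disk pool
SSD_SEQ_READ = 550     # MB/s, rated per SSD
NUM_SSDS = 2

l2arc_bw = SSD_SEQ_READ * NUM_SSDS   # two cache devices striped

# Behaviour 1: cached reads come only from L2ARC, pool is bypassed.
cache_only = l2arc_bw                # 1100 MB/s: a regression vs the pool

# Behaviour 2 (hypothetical): reads striped across cache and pool.
additive = l2arc_bw + POOL_SEQ_READ  # 2800 MB/s: a clear win

print(f"pool alone:      {POOL_SEQ_READ} MB/s")
print(f"cache-only read: {cache_only} MB/s")
print(f"additive read:   {additive} MB/s")
```

So under the cache-only behaviour my sequential reads of cached data would drop from 1.7GB/s to 1.1GB/s, while under the additive behaviour they could improve substantially. That gap is exactly why I want to understand which behaviour ZFS has before buying.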
I'd be most grateful for any thoughts/comments on how this actually works. It's not something I can easily test until I have the SSDs, but I don't want to order them until I better understand the behaviour, as it may well affect my purchasing decision.
For example, if I found that I really do need L2ARC devices that are at least as fast as, or faster than, my disks for sequential reads, I might consider something exotic like a PCI-E flash drive. I can currently get a 1.2TB LSI Nytro WarpDrive on eBay for about the same price as 3 x 250GB Samsung 850 SSDs. That's rather more than I planned to spend (I was planning to get only 2 x 250GB), but I'll consider it if it's the only way to ensure caching improves my performance in all use cases.
TIA!