ZFS Advice for a New Setup


humbleThC

Member
Nov 7, 2016
Hello~

I'm about to make the shift from Windows Server 2016 with Storage Spaces for CIFS/iSCSI over to Linux with ZFS, hoping to get better disk and network performance out of my hardware. Any advice on how best to set it up, or anything I should consider, would be greatly appreciated.

NAS Server Hardware
SuperMicro 36x Bay SuperChassis w/ Dual Intel X5560 QC @ 2.8GHz (8 cores total)
80GB DDR3 ECC
7x PCIe v2.0 x8 (in x16 Slots)
5x LSI 9211-8i HBAs in IT mode (latest FW)
1x Crucial 250GB SSD (Boot Drive)
4x Samsung 250GB EVO 850 Series (cache only - don't need capacity, need speed)
10x Hitachi 4TB NAS Edition 7.2K 64MB Cache (Pool Storage)
All disks are spread evenly across the SATA3 controllers:
- 2x SSDs per LSI (on separate channels)
- 10x HDDs split across 2x LSI
- 1x boot drive on 1x LSI

My biggest issue/bottleneck is coming up with a design that supports my older-generation InfiniBand NICs and switch, i.e. Mellanox ConnectX-2 40Gb QDR dual-port NICs and a Mellanox 4036-E 36-port 40Gb managed switch. All of my ESX servers and my main Windows desktop have these dual-port NICs, as does the NAS server itself. They require IPoIB drivers, which are spotty at best. If I were dealing with 1GbE or even 10GbE I wouldn't be terribly worried about squeezing every bit of performance out of the 4x SSDs. But since I've got a pair of 40Gb NICs between all my hosts, I'd like the disks to try and push that.

In my current setup, the Windows 2016 server is performing well enough... It has excellent InfiniBand support, and I get SMB 3.1 with RDMA, so from my primary desktop I'm seeing 1.2GB/s sustained read/write. However, when I tried NFS to my ESX farm, it was worse than 10MB/s. I switched over to iSCSI between the NAS and the ESX environment and achieved good performance (600MB/s-ish). But after doubling the disks and SSDs in the pool, I saw no additional performance, which was disappointing.

So my question is... for an OS with best-of-breed ZFS support, do I have to focus on Solaris only, or is something like Ubuntu 16.10 (which apparently has Mellanox-supported drivers) a good place to start?

And when it comes to ZFS design itself, how should I best use my hardware?
I want a single Pool, expandable with "like quantities" of hard drives for scalable capacity & performance.

i.e. I'll likely add disks in increments of 5x 4TB HDDs and 2x 250GB SSDs.
I'll be starting with 10x 4TB HDDs and 4x 250GB SSDs.

I'm thinking of giving up 1 disk in 5 for redundancy, so either a RAID50-style 2x(4+1) or a RAID6-style 1x(8+2).
For the SSDs, I'm curious whether I can/should partition each of them,
and use portions for ZIL, L2ARC, and whatever the other caching mechanism I'm forgetting is.
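Roughly what I have in mind, as a sketch only (pool and device names below are placeholders, not my actual device IDs):

Code:
# Option A - RAID50-style: two RAIDZ1 (4+1) vdevs striped in one pool
zpool create tank \
  raidz1 disk1 disk2 disk3 disk4 disk5 \
  raidz1 disk6 disk7 disk8 disk9 disk10

# Option B - RAID6-style: a single RAIDZ2 (8+2) vdev
# zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 disk10

# SSDs could then be attached as cache (L2ARC) and log (SLOG) devices
zpool add tank cache ssd1 ssd2
zpool add tank log mirror ssd3 ssd4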
 

MiniKnight

Well-Known Member
Mar 30, 2012
I'd skip FreeNAS with Infiniband.

I really like napp-it + OmniOS for performance and ease of management, BUT I also think there's a huge amount of support for ZoL with Ubuntu, and if you ever wanted to do other things the Ubuntu option is good. 16.10 won't be supported for long, though, so I'd go with 16.04.
 

humbleThC

Member
Nov 7, 2016
I'd skip FreeNAS with Infiniband.

I really like napp-it + OmniOS for performance and ease of management, BUT I also think there's a huge amount of support for ZoL with Ubuntu, and if you ever wanted to do other things the Ubuntu option is good. 16.10 won't be supported for long, though, so I'd go with 16.04.
Good feedback, I'm reading up now on napp-it & OmniOS.
So napp-it would be the underlying OS supporting InfiniBand & ZFS, and OmniOS is a virtual hypervisor that sits on top?
 

gea

Well-Known Member
Dec 31, 2010
Good feedback, I'm reading up now on napp-it & OmniOS.
So napp-it would be the underlying OS supporting InfiniBand & ZFS, and OmniOS is a virtual hypervisor that sits on top?
The underlying OS is the commercial Oracle Solaris (free for noncommercial demo and development) or a free open-source Solaris fork like OmniOS or OpenIndiana.

Napp-it is web-managed storage appliance software that runs on top of any of them to make management easier.

I do not use IB in my setups, but there are OmniOS users with the ConnectX-2 generation (ConnectX-3 not supported). It may be different with genuine Solaris; see Using InfiniBand Devices (Overview/Tasks) - Oracle Solaris Administration: Devices and File Systems.
 

cookiesowns

Active Member
Feb 12, 2016
I would definitely skip FreeNAS if you want good IB support. In fact, FreeNAS is quite finicky with uncommon HW configs. I've had really odd issues with both Chelsio & Solarflare cards.

If you don't mind ditching a GUI, ZoL with Ubuntu is fairly stable, but if you want something more "commercialized", napp-it with OmniOS would be my go-to.
 

humbleThC

Member
Nov 7, 2016
You guys are awesome... Having a lot of fun testing OmniOS w/ napp-it.
The initial boot included full driver support for the ConnectX-2s, as recommended.
Initial iperf tests came back faster than anything I've tested prior (win>win or esx>esx or win>esx) = very promising.
Initial ZFS pool tests are in progress (testing different RAIDZ configs w/ different SSD cache & log configs).

Once I'm happy with my final disk benchmarks, I'll move on to testing over the network via CIFS/NFS & iSCSI.

Question on pool design with:
- 10x HDDs @ 4TB
- 4x SSDs @ 250GB

I'm thinking either:
- RAID50 equiv of RAIDZ (4+1)*2 -or-
- RAID6 equiv of RAIDZ2 (8+2)*1

With the thought being I'd like to grow in 4-5 disk increments in the future.

For SSD usage, I'm still testing/learning... but my thought is either:
- 2x SSDs dedicated whole-disk to cache
- 2x SSDs dedicated whole-disk to logs

-or-
Is there a way to partition the disks, such that I'm using all 4x SSDs for both cache & logs?
Where I'm using all 4x SSDs for L2ARC bandwidth (let's say 100GB, to overkill the 80GB of ARC),
and then use the remainder of the 4x SSDs for logs?

(Or is that a no-no, due to not being able to use the individual drive's disk cache when using partitions vs. whole disks?)

Any pool recommendations would be great.
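Something like this is what I'm picturing for the partitioned approach (a sketch only; the partition names are placeholders and assume each SSD has already been split into two GPT partitions):

Code:
# First partition of each SSD (~100GB) striped as L2ARC
zpool add tank cache ssd1p1 ssd2p1 ssd3p1 ssd4p1
# Remaining partitions as mirrored log devices
zpool add tank log mirror ssd1p2 ssd2p2 mirror ssd3p2 ssd4p2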
 

gea

Well-Known Member
Dec 31, 2010
I would avoid raid-Z1/raid-5 with many disks or large capacity.
The interesting pool layouts with 10 disks are

1 x raid-Z2 vdev of 10 disks (4TB) = 40 TB raw, 32 TB usable
or 5 x raid-1 (mirror) vdevs = 20 TB usable (you should add a hotspare)

While sequential performance will be good either way, the first can only offer the IOPS of a single disk (around 100 IOPS), while the second can give 5 x 100 IOPS = 500 IOPS. So the first is good for backup and a simple filer, the second for VMs or databases.

I would not use the quite slow SSDs as L2ARC, as you can give enough RAM to ZFS, especially as the pool will be much faster than the cache SSDs sequentially. For small IO this may be different, but that may be irrelevant with enough RAM. You can check arcstat for cache hits and add an L2ARC SSD or NVMe if needed (if the ARC cache hit rate is not good enough). If you want to use the Samsung EVOs as Slog, simply add them one by one to distribute the load over them.

You may use the 4 SSDs as a RAID-10 pool for a performance-sensitive load.
If you use the pool for databases or VMs where you need secure write behaviour, you may add a good SSD or NVMe as an Slog device. This requires a very good SSD (Intel S3700, P750 or better) with power-loss protection. The Samsung EVO is not good enough for an Slog.
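As a rough sketch of the mirror layout and the ARC check (device names are placeholders; the arcstat utility may be named arcstat.pl depending on the platform):

Code:
# 5 x 2-way-mirror layout, plus a hot spare (placeholder device names)
zpool create tank \
  mirror d1 d2  mirror d3 d4  mirror d5 d6  mirror d7 d8  mirror d9 d10 \
  spare d11

# Watch ARC hit rates for a while before deciding whether L2ARC is worth adding
arcstat 5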
 

humbleThC

Member
Nov 7, 2016
I should also state that this is all for a home lab, where the environment needs SPEED. I realize that 2x RAIDZ1 (4+1) will give me a lower mean time between data loss than RAIDZ2 (8+2). I'm willing to take that trade-off, assuming that the write penalty of RAID5 is lower than RAID6, and that RAID5+0 would benefit from the increased stripe width of 2x disk groups.

I'm currently benchmarking the RAID50 layout w/ 2x SSDs as cache (L2ARC) and 2x SSDs as log, and the initial results are looking amazing.

About half of the array is going to be full of my home media library. The other half will be carved up via iSCSI to ESX for lab-only testing/development, or presented via NFS if I can get enough performance/support.

My original goal was to use SSDs only to "help cover all angles of IO workload" and accelerate the HDD pool. And once I get close to the 40Gb InfiniBand theoretical max (about 2GB/s sustained read/write bandwidth), I would no longer expand SSDs, and just grow HDDs for more capacity, assuming 1 disk in 5 for parity and striping across for capacity & bandwidth, knowing that each 5x pack of disks I add increases the MTBDL.

For the fun of it, I will create a 4x SSD RAID10 pool and benchmark it by itself.
That totally wasn't my plan, but as a fallback option, it's not bad.

Last but not least, I do have dual power supplies in the server, attached to redundant UPSes, on separate 20-amp circuits, with a whole-house generator on auto-switch, and I'm planning to go solar in the spring... so I'm expecting unplanned power outages to be minimal.
 

ttabbal

Active Member
Mar 10, 2016
How is the performance for random I/O on the RAID50? That's going to be the limiting factor. Sequential is fine, and easy. It's random (IOPS) that kills you. You can mitigate a lot of it with SSD cache/log, but eventually you run out and have to hit the disk. Whether that matters for you and your workload is up to you. For 40Gbit/s though, I don't think it's going to cut it for random. And with multiple VMs using it for iSCSI/NFS, your workload will be random.

Write penalty of raidz1/raidz2 is about the same. To write, you must write to all the disks in the set.

I don't know if "each 5x pack of disks I add increases the MTBDL" is correct... If you lose enough disks in any raidz to fail that 5-disk raidz, you lose the entire pool. And having done it, resilvering raidz1/2 (to replace a failed drive) on spinners SUCKS. It takes forever. With 1.5TB drives, it was about 12 hours.

If you are still in testing mode, make sure to test random I/O, with sync enabled. And test with a large RAID10. 2-disk mirrors striped in a pool of 5 pairs is probably the best configuration for IOPS with 10 disks while retaining redundancy.
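One possible way to run that kind of test is with fio, sketched below (assuming fio is available on the chosen OS; the dataset path, sizes, and job counts are placeholders):

Code:
# 4k random writes opened with O_SYNC, so the sync path (ZIL/SLOG) is exercised
fio --name=randwrite-sync --directory=/tank/fiotest \
    --rw=randwrite --bs=4k --size=4g --numjobs=4 --iodepth=8 \
    --sync=1 --runtime=60 --time_based --group_reporting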
 

humbleThC

Member
Nov 7, 2016
How is the performance for random I/O on the RAID50? That's going to be the limiting factor. Sequential is fine, and easy. It's random (IOPS) that kills you. You can mitigate a lot of it with SSD cache/log, but eventually you run out and have to hit the disk. Whether that matters for you and your workload is up to you. For 40Gbit/s though, I don't think it's going to cut it for random. And with multiple VMs using it for iSCSI/NFS, your workload will be random.

Write penalty of raidz1/raidz2 is about the same. To write, you must write to all the disks in the set.

I don't know if "each 5x pack of disks I add increases the MTBDL" is correct... If you lose enough disks in any raidz to fail that 5-disk raidz, you lose the entire pool. And having done it, resilvering raidz1/2 (to replace a failed drive) on spinners SUCKS. It takes forever. With 1.5TB drives, it was about 12 hours.

If you are still in testing mode, make sure to test random I/O, with sync enabled. And test with a large RAID10. 2-disk mirrors striped in a pool of 5 pairs is probably the best configuration for IOPS with 10 disks while retaining redundancy.
I guess since the writes are cached, coalesced, and written as full stripes with the parity already computed before they hit the disks, all of the RAID types will have similar performance on a per-spindle basis.

And you are right, I meant to say each 5x pack of disks I add in the future (and the two I'm starting out with) decreases the MTBDL (not increases). Not to say that I don't care about reliability, but I do have a 2nd NAS and a bunch of USB drives where I keep all my important data backed up, in addition to this primary NAS.

And finally, even in RAID10 5x(1+1), if I was unlucky enough to lose both disks of any mirror, the entire pool is still down. So the difference I see is:
RAIDZ2 1x(8+2) [survives any two disk failures without disruption]
RAIDZ 2x(4+1) [survives two disk failures only if there's no more than 1 per disk group]
Mirror 5x(1+1) [survives up to 5 disk failures if you're lucky; as few as 2 can kill it if you're unlucky]

It's really between RAIDZ and RAIDZ2 for me, however, because I can't afford a 50% capacity penalty, even if mirrors offer somewhat faster random read/write. Which is why I'm trying to settle on a disk strategy, and on how best to cache it for better overall speed & capacity.
 

humbleThC

Member
Nov 7, 2016
I guess my current question is focused on how best to use the 4x Samsung 850 EVOs.
I get that they don't have power-loss protection, and they aren't the fastest SSDs out there.
But compared to spinning rust, they're still A-OK.

My thought of 2x non-mirrored logs and 2x non-mirrored cache would give me additional cache capacity + IOPS on top of the disk pool, and would cover both random reads and bursty writes during busy times.

If it's really a no-no to use such an SSD as L2ARC, and they really are too slow to add any value over the ARC + disk pool, then I guess the recommendation is to throw all 4x SSDs at the ZIL?
 

whitey

Moderator
Jun 30, 2014
I guess my current question is focused on how best to use the 4x Samsung 850 EVOs.
I get that they don't have power-loss protection, and they aren't the fastest SSDs out there.
But compared to spinning rust, they're still A-OK.

My thought of 2x non-mirrored logs and 2x non-mirrored cache would give me additional cache capacity + IOPS on top of the disk pool, and would cover both random reads and bursty writes during busy times.

If it's really a no-no to use such an SSD as L2ARC, and they really are too slow to add any value over the ARC + disk pool, then I guess the recommendation is to throw all 4x SSDs at the ZIL?
Gea has already advised that those are not the best (not even close) for ZIL/L2ARC devices, and I concur, plain and simple. They are not robust enough to deliver 'sustained' low latency / high IOPS and they don't provide PLP, so YMMV there. Take that for what it's worth / with a grain of salt if you can live w/ a non-optimized SSD cache layer. We typically shoot for the Intel DC S3700 / Hitachi HUSSL series on the budget-minded side of the house, or a ZeusRAM or Intel P3700/P3600 if money is no object.

I 'hope' you have stumbled across this. GL w/ build/testing!

Top Hardware Components for FreeNAS NAS Servers
 

ttabbal

Active Member
Mar 10, 2016
If your SSDs don't have power-loss protection, they should NOT be used as log devices. If you do that, you might as well just disable sync writes and be done with it. I would also question those SSDs' uncached sync random write performance anyway. They are probably still better than rust, but they aren't as good as you think. I've tried it; they fall over without much work. For client machines they work pretty well, but they don't hold up to server workloads. You get about 20% of their "rated" performance.

They will work fine for L2ARC. They aren't great for that, but they are acceptable. Keep in mind that managing L2ARC requires RAM and CPU; if you are RAM- or CPU-starved already, L2ARC will not save you. It also only helps with random I/O by default; if you want streaming reads to be cached there, you need to turn that on. In your preferred configuration, you probably should.

Having done both, the difference between 2x raidz and using the same number of drives in a striped mirror configuration is not "slightly faster random I/O". It's much more than that. Test it, seriously. I went from stuttering while feeding half a dozen video streams on 1GbE to feeding more machines while saturating a 10GbE link. And I don't have any SSDs in my system. I'm only pushing on this because you said your priority is speed. Fast, cheap, right. Pick two. :)

Generally speaking...

a raidz vdev performs like a single drive
adding vdevs multiplies that by the number of vdevs

So 2x raidz is about 2 drives' worth of performance, 5x mirrors is about 5 drives' worth.

The highest-performing and most space-efficient config is RAID0. If your backup strategy is good, you might consider it if you really need speed and can't do mirrors.

If you must do raidz for whatever reason, your best bet is probably 2x raidz1, using 2 of those SSDs over-provisioned as L2ARC. Perhaps make a partition on each of about 50% of the space, and add the partitions as cache devices. That will help with the huge fall-off in performance those drives experience under sustained workloads: it gives the controller more empty blocks to work with, so the erase cycles don't kill your performance so quickly. Use the other 2 for something else. If write performance is a problem, disable sync writes and see if that fixes it. If it does, get a good log device like an Intel DC S3500 or S3700 and, again, over-provision it. Most systems don't need more than a few gigs of log space, and having all those empty blocks helps a lot. Or just leave sync off. That's a valid configuration for a lot of workloads. Not great for databases, but for mass media storage it's fine.

It's your system and your data, so do what you want. But that's my advice. I gave it for free, so it might be worth nothing. :)
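A rough sketch of that over-provisioned-cache idea and the sync experiment (pool, partition, and dataset names are placeholders):

Code:
# Add ~50% partitions from two SSDs as L2ARC; leave the rest unallocated for wear leveling
zpool add tank cache ssd1p1 ssd2p1

# See whether sync writes are the bottleneck for the VM/iSCSI dataset
zfs set sync=disabled tank/vmstore
# ...re-run the benchmark, then revert if desired:
zfs set sync=standard tank/vmstore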
 

humbleThC

Member
Nov 7, 2016
If your SSDs don't have power-loss protection, they should NOT be used as log devices. If you do that, you might as well just disable sync writes and be done with it. I would also question those SSDs' uncached sync random write performance anyway. They are probably still better than rust, but they aren't as good as you think. I've tried it; they fall over without much work. For client machines they work pretty well, but they don't hold up to server workloads. You get about 20% of their "rated" performance.

They will work fine for L2ARC. They aren't great for that, but they are acceptable. Keep in mind that managing L2ARC requires RAM and CPU; if you are RAM- or CPU-starved already, L2ARC will not save you. It also only helps with random I/O by default; if you want streaming reads to be cached there, you need to turn that on. In your preferred configuration, you probably should.

Having done both, the difference between 2x raidz and using the same number of drives in a striped mirror configuration is not "slightly faster random I/O". It's much more than that. Test it, seriously. I went from stuttering while feeding half a dozen video streams on 1GbE to feeding more machines while saturating a 10GbE link. And I don't have any SSDs in my system. I'm only pushing on this because you said your priority is speed. Fast, cheap, right. Pick two. :)

Generally speaking...

a raidz vdev performs like a single drive
adding vdevs multiplies that by the number of vdevs

So 2x raidz is about 2 drives' worth of performance, 5x mirrors is about 5 drives' worth.

The highest-performing and most space-efficient config is RAID0. If your backup strategy is good, you might consider it if you really need speed and can't do mirrors.

If you must do raidz for whatever reason, your best bet is probably 2x raidz1, using 2 of those SSDs over-provisioned as L2ARC. Perhaps make a partition on each of about 50% of the space, and add the partitions as cache devices. That will help with the huge fall-off in performance those drives experience under sustained workloads: it gives the controller more empty blocks to work with, so the erase cycles don't kill your performance so quickly. Use the other 2 for something else. If write performance is a problem, disable sync writes and see if that fixes it. If it does, get a good log device like an Intel DC S3500 or S3700 and, again, over-provision it. Most systems don't need more than a few gigs of log space, and having all those empty blocks helps a lot. Or just leave sync off. That's a valid configuration for a lot of workloads. Not great for databases, but for mass media storage it's fine.

It's your system and your data, so do what you want. But that's my advice. I gave it for free, so it might be worth nothing. :)
Thanks much for your words of wisdom.

Does the PLP argument still apply if my server has 99.999% power uptime? And in the event of a power loss, how hard is it to roll back to the 'last known good' state? If all I'm going to lose is the last few minutes of writes, that will work for my environment for the 0.001% of the time I do encounter an issue. Or are we talking pool down, rebuild, and copy back from backups?

I'm testing every combination against all of the built-in benchmarks in napp-it and recording the results just for fun. But in the end, it will come down to "the best overall" config.

I too am noticing a decent improvement with RAID50 over RAID6 in multi-stream random IO workloads, and in my case that's a strong argument that when I grow by another 5x disks as another RAIDZ vdev in the pool, it should get roughly 50% faster again after rebalancing.

The EVOs are pretty slow, at about 500MB/s sustained R/W, 10k IOPS @ QD1 and 97k IOPS @ QD32.
The Hitachi NAS disks are good for 150MB/s each, and I have 10x...
My thought is, once I get 10 or so VMs spun up and give the cache time to warm up, my random workloads should be *mostly* absorbed, capacity-wise, by my SSDs.
Cloning a VM will take 10 minutes the 1st time, but maybe 5-7 the 2nd, sort of thing.

Again, I'm very appreciative of all the great feedback; it's helping me understand how exactly ZFS functions. I'm primarily an EMC storage engineer by trade and have been focused on scale-up storage arrays for the last 15 years... I get how storage works, I'm just learning "how ZFS does it" :)

It seems like the best practice for my config would be either:
RAIDZ2 (8+2) with 4x SSDs as L2ARC with sequential read caching enabled (over-provisioned, let's say only using 50GB per SSD)
-or-
RAIDZ2 (8+2) with no cache or log devices, and just RAID10 the 4x SSDs as a separate pool.
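As a sketch of those two candidates (pool, device, and partition names are placeholders):

Code:
# Candidate 1: RAIDZ2 pool + over-provisioned L2ARC partitions
zpool create tank raidz2 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10
zpool add tank cache ssd1p1 ssd2p1 ssd3p1 ssd4p1

# Candidate 2: RAIDZ2 pool, plus a separate all-SSD mirror pool for VMs
zpool create ssdtank mirror ssd1 ssd2 mirror ssd3 ssd4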
 

gea

Well-Known Member
Dec 31, 2010
In the end you will find:

Every write must go to disk.
If your writes are random, you are strictly limited by the pool IOPS (this scales with the number of vdevs, where each vdev behaves like a single disk). A single disk has around 100 write IOPS, a desktop SSD 5-10k under steady load, an enterprise SSD 80k.

The Slog device is not a write cache. ZFS write caching is done via RAM. The Slog is an additional crash/power-loss-safe log device, which means that every write goes through the RAM write cache (very fast, a few seconds of writes combined) and, when sync is enabled/active, is ADDITIONALLY logged immediately on every commit. A commit returned to the OS/application must mean "yes, it's on disk" - even after a crash, on the next reboot.

Your pool can give around 1000 MB/s read; your SSD only 300-500 MB/s. You need at least 2 SSDs to be on par with the pool. Your pool has around 100 IOPS (single vdev), your SSD around 30k, so this can improve random reads when used as L2ARC. But with enough RAM, nearly all reads come from the ARC. The L2ARC can additionally help with several concurrent sequential read streams (you must enable sequential caching).

This is why I prefer SSD-only pools for random load (VMs, databases) and spindles in a RAID-Z2 for filer or backup use. For a filer, sync write is not needed/requested.
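The sequential-caching knob gea mentions is usually the l2arc_noprefetch tunable; a sketch of how it is typically set (verify the exact name and default on your platform):

Code:
# OmniOS/illumos: append to /etc/system, then reboot
set zfs:l2arc_noprefetch = 0

# ZFS on Linux equivalent (runtime):
echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch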
 

ttabbal

Active Member
Mar 10, 2016
If your power is that stable, you might be OK. But the most you should ever lose from uncommitted sync writes is a few seconds' worth of sync write requests, even with sync writes completely disabled. I've run with sync disabled for years and never had to restore backups. In that case, skip the SLOG/ZIL devices and use them for something else. Even just more L2ARC.

While the pool will be fine, some applications like databases are notoriously annoyed when something like that happens. If your power uptime is really five nines, I doubt you'll ever run into it, but crashing is still a thing. Even for applications where it's an issue, it's pretty rare to have a problem.

Best practice... well... that depends on you. A single raidz2 is alright for mass media storage, but will not handle random I/O well. SSD cache will help some there, but at some point you're always going to hit the rust. For mostly sequential workloads it does OK; I would think of it like a really big USB drive for performance purposes. If you're OK with that, I would do the 4x SSDs in a mirror pool for VMs and other heavily random workloads. If your drives are good for 150MB/s, that's the max you should expect from the array.

You mentioned expansion earlier. With a 10-disk raidz2, you would need to replace all 10 drives to see extra space. You can also add more vdevs to the pool; best practice would be to match the existing setup, so you would add another 10-disk raidz2. There's nothing preventing you from doing something else, even adding a single drive (don't do that).

If you're really going to run 40Gb IB, it's a huge waste on a single raidz2... You could saturate 1GbE with it, with a single client doing sequential reads. With aggressive caching, you could have a few clients on that 1GbE. 40Gb is a whole different ballgame. Keeping the same disk setup, you'd need something like 10 arrays for 10Gb, 40 for 40Gb... Not exactly, I'm sure, but you get the idea. It seems to hold for me, though: I have 10 mirrors in my pool, and with the random pattern a VM server sees, I can saturate the 10Gb link. Local sequential access is faster, but not 4x faster.

And note that when gea talks about ZFS, you should listen. If he disagrees with me, listen to him. Seriously. :)
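For reference, expansion by matching vdevs would look something like this (placeholder device names):

Code:
# Grow the pool later by adding a second, identical raidz2 vdev
zpool add tank raidz2 d11 d12 d13 d14 d15 d16 d17 d18 d19 d20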
 

humbleThC

Member
Nov 7, 2016
I appreciate all the feedback! If you guys ever have questions on EMC VMAX/VNX/Unity or Cisco MDS/Nexus or Brocade FC, I'm your guru :)

The Slog device is not a write cache.
Understood (finally - that was one of my biggest misunderstandings and it was confusing me)... I had thought the 'log' devices were acting as a write-intent log, absorbing the writes on sync and allowing the client to get the ack and move on. I know that's different than a pure write cache, but since sync writes don't accept a RAM write ack, I figured having log devices (a pair of SSDs) to absorb the writes would be hugely beneficial.

For testing, I'm playing with the following layout:

I took the 4x SSDs and partitioned them 60GB / 185GB.
For L2ARC, I'm using the first partition of each SSD = 240GB total.
For logs, to protect against a single SSD failure, I'm using RAID10 across the 185GB partitions for 370GB.

But with my new-found knowledge, I agree you always require PLP on the ZIL.
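To sanity-check the resulting layout while benchmarking, something like the following works (pool name is a placeholder):

Code:
# Show which partitions ended up as cache and log vdevs
zpool status tank
# Watch per-device throughput/IOPS during the benchmark runs
zpool iostat -v tank 5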
 

gea

Well-Known Member
Dec 31, 2010
But with my new-found knowledge, I agree you always require PLP on the ZIL.
There is a world between totally insecure and the state of the art.
Without an Slog, you can lose up to a few seconds of committed writes. A good and fast SSD without PLP can reduce the problem to the small timeframe it needs to write a commit.

For sure, a production machine cannot accept this; at home/in a lab this may be different.
But a typical desktop SSD is not fast enough to give a real improvement, regardless of the security question.
 

humbleThC

Member
Nov 7, 2016
You've got me sold... I did some google-fu and came up with:

ZOTAC Premium Edition 240GB SSD MLC SATA III Internal 2.5” 7mm height (ZTSSD-A5P-240G-PE)
Amazon.com: ZOTAC Premium Edition 240GB SSD MLC SATA III Internal 2.5” 7mm height (ZTSSD-A5P-240G-PE): Computers & Accessories
$110 each (I saw they were as low as $74 each at one time)

From what I read, they are the best bang for the buck, and have PLP, SMART, NCQ, TRIM, etc.
Going to order a pair of them to start off with a mirrored SLOG of these two ZOTACs.
(If it's not fast enough, I'll add 2x more and try again :) )

For my Samsung 850 EVOs, I'm debating what to do... They were purchased with the intent of being cache & log devices only, to accelerate the rust - i.e. I got tiny SSDs. They aren't the right tool for the logs, and maybe not fast enough for L2ARC? Or is it just silly to throw 960GB at L2ARC to begin with? That, or the second RAID10 pool with 480GB minus 10% usable (sooo small).

Hell, I might pull them out of the SuperMicro NAS, toss them directly into the ESX01 and ESX02 HP DL380s, and play with VMware cache and VSAN :) That, or perhaps grab one more and put them in a RAIDZ secondary pool - or is that kind of useless again? My issue is that 480GB of RAID10 just seems too small to be worth it for my VM space, especially if the primary pool with the two new ZILs is just as fast or faster.

I'm torn and wishy-washy... because the same pool will house my wife's photo archive and my work client documents (and about 15TB of media). I'd really like to not lose those... The same pool will also be used as a storage repository for my ESX lab. And although I have a ton of VMs, simulators, VSAs, etc., there are no users ever, and when I'm benchmarking/developing/testing, I'm only using that one piece at a time, and I want it to be as fast as possible.

The engineer in me knows you build the right pool for the right job, and mixing workloads and mixing critical and non-critical data in the same bucket is a no-no. But the child in me still wants the biggest, fastest bucket I can get. And I want everything to share it :)
 

gea

Well-Known Member
Dec 31, 2010
Only some Brits in Europe believe that you can have your cake and eat it.

In the real world you have cost, reliability, and performance,
and can only optimize two of them.

I would skip the idea of an Slog, or buy a used Intel S3700-100.