Hi STH community! Our company is growing rapidly, so I recently ordered our first 'proper' server (we've been running NAS-hosted VMs until now). I settled on the Supermicro AS-1115-SV-WTNRT with 10 x 3.4 Samsung PM9A3 enterprise drives (1.3 DWPD) and a RAID0 M.2 boot drive.
I'm a fairly experienced computer user (years of development, database management, system admin, etc.) but am struggling to find good info about tuning ZFS for all-NVMe storage arrays. Worryingly, I keep coming across posts where people complain they're getting very poor speeds with ZFS, regardless of tuning :-(
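For reference, the tuning advice I keep seeing repeated boils down to something like the following (pool and dataset names are just placeholders, and I can't vouch for any of it myself yet). Is this even the right direction?

```shell
# Create the pool with an explicit 4K sector size; NVMe drives often
# report 512B logical sectors, so ZFS may pick the wrong ashift otherwise.
zpool create -o ashift=12 tank \
    mirror nvme0n1 nvme1n1 \
    mirror nvme2n1 nvme3n1

# Settings commonly recommended for VM storage:
zfs set compression=lz4 tank   # cheap on CPU, reduces physical writes
zfs set atime=off tank         # avoid a metadata write on every read
zfs set xattr=sa tank          # store xattrs in the dnode, fewer IOs

# For zvols backing VMs, volblocksize supposedly matters a lot:
# too small a block is said to drive up write amplification.
zfs create -V 100G -o volblocksize=16k tank/vm-disk1
```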
Another problem, possibly even worse, is the appalling write amplification that ZFS suffers from (see the thorough testing by user Dunuin here): write amplification of over 50x for ZFS RAID10(!). I don't want to wear out my NZ$6,000 storage array 50x faster than necessary!
Yes, I'm a ZFS noob, but I want to climb this hill and am looking for help finding the best path forward. My questions are (please add anything else you think relevant):
1. Should I just use mdadm instead? (The server will run VMs, probably under XCP-ng.)
2. With a 5-year warranty, am I worrying too much about rampant write amplification? Should I just ignore it?
3. Running a journaling filesystem on top of ZFS seems to be a major cause of the amplification. Can I safely disable journaling, given that the SSDs have power-loss capacitors, the server has dual power supplies, and a robust UPS will be in place? I've read conflicting answers on this, so I'm not sure what's true.
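For context on question 3, what I had in mind inside each guest is something like this (assuming ext4 guests; the device path is just an example, and I haven't actually tried this yet):

```shell
# Remove the ext4 journal from an unmounted guest filesystem
# (run from a rescue environment or before the first mount).
tune2fs -O ^has_journal /dev/vda1
e2fsck -f /dev/vda1    # verify the filesystem afterwards

# Confirm the journal feature is gone:
tune2fs -l /dev/vda1 | grep 'Filesystem features'
```

Is this sane, or does ZFS's own copy-on-write behaviour make it pointless or risky?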
Apologies if this has been asked before, but I'm really struggling to find information specific to all-NVMe pools that doesn't just describe severe issues.
Very much appreciate any thoughts.
Thanks,
Carlin