Hi,
For no other purpose than seeing how far I can go, I decided to build a fast NAS, mainly for a sequential write workload (backup, 'archive' file server). I just want to see how fast I can do my backups. This means I am looking for configurations where I can use fast storage as a big write-back cache. I am willing to sacrifice a little protection against power cuts, as my data is not hugely critical and I have an elaborate UPS setup: I can accept bad files after a power cut, just not a bad file system. I am willing to spend some time setting it up, but after that it must be set-and-forget.
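For concreteness, the kind of workload I'm trying to make fast is roughly this fio run (path and sizes are just placeholders):

```
# sequential 1M writes with direct I/O; target path and size are placeholders
fio --name=seqwrite --filename=/mnt/nas/fio.test --rw=write --bs=1M \
    --size=10G --ioengine=libaio --direct=1 --iodepth=16 --numjobs=1
```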
This is what I have: a couple of Xeon D servers running ESXi (6.5U2 atm), an Optane 900P, various Samsung 960 Pro M.2 NVMe drives, a SATA HBA capable of pass-through, some SATA SSDs (Intel S3610), some large spinning drives, and of course a 10G network via some Cisco switches.
The NAS should run as a VM, use the spinning drives as bulk storage, and get the most out of the various flash components I have.
I like the idea of the ZIL/SLOG in ZFS, but the recurring stories of it trading performance for safety put me off a bit, and the benchmarks I ran confirmed that.
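For anyone wondering, what I tried on the ZFS side was roughly this (pool layout and device names are placeholders for my passed-through disks and the Optane):

```
# pool on the spinners, Optane as a separate log (SLOG) device; names are placeholders
zpool create tank raidz2 sda sdb sdc sdd
zpool add tank log nvme0n1
# sync=standard keeps the ZIL in play for sync writes; sync=disabled drops that safety entirely
zfs get sync tank
```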
I then tried various write-back cache strategies on a standard Linux NAS distribution (OpenMediaVault), but they all failed, as OMV does not seem to tolerate other tools touching its array configuration. And every time I look, the write-back caching options seem a bit dated and not always well integrated.
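For reference, the sort of thing I was trying to layer under OMV was lvmcache in writeback mode, along these lines (device names and sizes are made up for illustration):

```
# /dev/md0 = mdadm array on the spinners, /dev/nvme0n1 = fast cache; both placeholders
pvcreate /dev/md0 /dev/nvme0n1
vgcreate nas /dev/md0 /dev/nvme0n1
lvcreate -n bulk -l 100%PVS nas /dev/md0
lvcreate -n cache -L 200G nas /dev/nvme0n1
lvcreate -n cache_meta -L 2G nas /dev/nvme0n1
# turn the fast LVs into a writeback cache pool and attach it to the bulk LV
lvconvert --type cache-pool --cachemode writeback \
          --poolmetadata nas/cache_meta nas/cache
lvconvert --type cache --cachepool nas/cache nas/bulk
```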
And I hear good stories about Win2k16. Haven't tried that yet.
Optane does fly as a local drive on a recent Linux kernel (4.15+), once you get the right I/O scheduler; older kernels lag a bit.
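The scheduler bit is simple to check and override (device name is an example):

```
# blk-mq on 4.15+ defaults NVMe to 'none'; verify, and force it if needed
cat /sys/block/nvme0n1/queue/scheduler
echo none > /sys/block/nvme0n1/queue/scheduler
```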
My question: any recommendations on how to get a write-back cache working on Linux on a recent kernel, one that is stable and can actually profit from the Optane? Or should I try other directions?