> Napp-it cs is a port of a Solaris ZFS web-gui.
> Alert currently reports only ZFS pool failures, but Windows Storage Spaces health is on the todo list.
I wonder if it can view OWC SoftRaid volumes lol.
Get-PhysicalDisk | Select-Object FriendlyName, SerialNumber,
    @{N='HealthStatus'; E={$_.HealthStatus}},
    @{N='HealthStatusNumeric'; E={$_.CimInstanceProperties['HealthStatus'].Value}},
    @{N='OperationalStatus'; E={$_.OperationalStatus -join ', '}},
    @{N='OperationalStatusNumeric'; E={$_.CimInstanceProperties['OperationalStatus'].Value -join ', '}}
> What hardware are you running and how large an array are you contemplating?
> Would VROC work with your hardware?
Intel raid? I'm currently on Ryzen. I would like to remain Intel/AMD agnostic, as I will likely end up transitioning this array to a different machine.
> I have found that you can almost "throw together" a RAID5 or RAID6 system using an LSI 9361 and just about any newer 3.5" hard disk and get 400MB/sec sequential reads and writes. If you jump to more spindles, 600MB/sec is easy. This is with absolutely no tuning, no SSD cache, no RAM cache other than what Windows does, etc.
Sorry, not agreeing. ZFS is dog slow if your pool is north of 50% full. A hardware RAID volume has the same performance throughout.
> No modern hardware RAID card uses an actual battery...they use a very large capacitor (supercap) that will last far longer than a rechargeable battery.
1) I presume you mean Battery Backup?
> SLOG does not change the speed of async writes at all...they are all cached in RAM.
I'm looking that up but not seeing what you're talking about. I've heard of deploying a SLOG to speed up writes, which I might do.
> I have found that you can almost "throw together" a RAID5 or RAID6 system using an LSI 9361 and just about any newer 3.5" hard disk and get 400MB/sec sequential reads and writes. If you jump to more spindles, 600MB/sec is easy. This is with absolutely no tuning, no SSD cache, no RAM cache other than what Windows does, etc.
> Add an SSD cache that the RAID controller manages, or a delayed write cache stored in RAM, and you can see 30-second bursts of 2-3GB/sec.
> All for about $65 (including cache card and supercap "battery"). You will more than make up that money in time not spent tuning ZFS.
I use a RAID60 setup in each of my SAN nodes (12-wide RAID6 x2 in each 24-port expander) and 8x SSDs in RAID10 for maxCache r/w cache. 3GB/s is easy; it'll saturate the PCIe 3.0 x8 bus.
> I use a RAID60 setup in each of my SAN nodes (12-wide RAID6 x2 in each 24-port expander) and 8x SSDs in RAID10 for maxCache r/w cache. 3GB/s is easy; it'll saturate the PCIe 3.0 x8 bus.
That is a lot more spindles than I use, and a lot more SSD than I use.
> Do you use the 8x SSDs for total cache volume?
Yup. Gives me a superfast ~4TB SSD cache for the entire array and is completely transparent to any filesystem on top of it. I can't use NVMe for this because the SSDs have to be on the Adaptec card; otherwise I'd have used NVMe.
> Assuming the "hot" data fits in a single RAID10 pair of SSDs, how large a performance drop would you expect?
So, there's no good answer to this. The primary motivation for the SSD cache is fast writes (I'm ingesting the entire NBBO stock quote feed from all US exchanges). The way the application is architected, the quotes need to be persisted as such (and quickly), but they're not read by the worker nodes in real time (the proxy multicasts the quotes to the worker nodes and to the DB write queue simultaneously).
> almost 1TB per week
Whoops. Correction. That's ~1TB per day, not per week. Sorry, was in a rush.
> I use a RAID60 setup in each of my SAN nodes (12-wide RAID6 x2 in each 24-port expander) and 8x SSDs in RAID10 for maxCache r/w cache. 3GB/s is easy; it'll saturate the PCIe 3.0 x8 bus.
ZFS compression is nice, and even realtime dedup can now be an option with the new fast dedup, as long as your data allows higher dedup rates. But the real killer features of ZFS are copy-on-write and checksums on all data blocks and metadata, on a per-disk basis. Copy-on-write means you never modify data blocks already on disk; every modified data block is written anew. On success the new data block becomes valid and the former block can be kept by snaps; otherwise the former block remains active. On a crash during a write there is no "half-finished" write. Atomic operations like writing a data block and updating metadata, or writing a raid stripe sequentially over several disks, are either done completely or discarded. Say goodbye to offline fsck or chkdsk; no need for these tools, as ZFS remains intact. You need a real disaster to damage ZFS (software bug or bad hardware).
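The copy-on-write behaviour described above can be illustrated with a toy sketch. This is not OpenZFS code, just a minimal model of the idea: an update never overwrites the live block; it writes a new block elsewhere and only then atomically flips the pointer, so a crash mid-write leaves the old, intact block as the valid one.

```python
# Toy copy-on-write sketch (a simplification, not real ZFS internals).
class CowStore:
    def __init__(self):
        self.blocks = {}   # block_id -> immutable payload ("on disk")
        self.live = {}     # name -> block_id of the currently valid block
        self.next_id = 0

    def write(self, name, payload, crash_before_commit=False):
        # 1) write the new block somewhere else; the old block is untouched
        new_id = self.next_id
        self.next_id += 1
        self.blocks[new_id] = payload
        if crash_before_commit:
            return         # simulated power loss: pointer never flipped
        # 2) atomic commit: flip the pointer to the new block
        self.live[name] = new_id

    def read(self, name):
        return self.blocks[self.live[name]]

store = CowStore()
store.write("data", b"v1")
store.write("data", b"v2", crash_before_commit=True)
print(store.read("data"))   # still b"v1": no half-finished write is visible
```

A crash between steps 1 and 2 simply wastes one block; the reader never sees a torn write, which is why no fsck-style repair pass is needed.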
Edit: BUT...I'm (right now) playing with ZFS on top of these arrays, because I need compression. I don't care much about other ZFS features (although snapshots and send/receive are really handy). My datasets are growing quite rapidly (almost 1TB per week) and they compress really well with ZFS (almost a factor of 5x). More testing needed...
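Back-of-envelope math on those figures, using the ~1TB/day ingest rate from the correction posted later in the thread and the reported ~5x compression ratio (both assumptions taken from the posts, not measured):

```python
# Rough capacity math: raw ingest vs. on-disk footprint with compression.
ingest_tb_per_day = 1.0    # assumed from the "~1TB per day" correction
compress_ratio = 5.0       # assumed from the "almost a factor of 5x" figure

stored_tb_per_day = ingest_tb_per_day / compress_ratio
stored_tb_per_year = stored_tb_per_day * 365
print(f"{stored_tb_per_day:.1f} TB/day on disk, "
      f"~{stored_tb_per_year:.0f} TB/year")   # 0.2 TB/day, ~73 TB/year
```

At 5x, compression turns a ~365TB/year raw growth problem into roughly 73TB/year of actual disk, which explains the interest in ZFS despite the hardware-RAID preference.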
> I have found that you can almost "throw together" a RAID5 or RAID6 system using an LSI 9361 and just about any newer 3.5" hard disk and get 400MB/sec sequential reads and writes. If you jump to more spindles, 600MB/sec is easy. This is with absolutely no tuning, no SSD cache, no RAM cache other than what Windows does, etc.
> Add an SSD cache that the RAID controller manages, or a delayed write cache stored in RAM, and you can see 30-second bursts of 2-3GB/sec.
> All for about $65 (including cache card and supercap "battery"). You will more than make up that money in time not spent tuning ZFS.
Just curious, but what are you using to measure those read/writes? My numbers differ.
> On a crash during a write there is no "half-finished" write. Atomic operations like writing a data block and updating metadata, or writing a raid stripe sequentially over several disks, are either done completely or discarded. Say goodbye to offline fsck or chkdsk; no need for these tools, as ZFS remains intact. You need a real disaster to damage ZFS (software bug or bad hardware).
This is both a feature and a problem. The way ZFS works, with write coalescing (in RAM) and an optional ZIL, still leaves me with potential data loss in case of a power failure. Whatever device ZFS is given for a ZIL can be no better protected than the DRAM cache on a RAID controller (with supercap power-loss protection). And a RAID card does similar things: if it was unable to write out the full stripe from the DRAM cache to the underlying disks (due to a power event), the array will come up dirty and will need to be rebuilt. But the full stripe (to be written) is alive and well in the DRAM cache.
> This is a very important detail. If you pull the AC plug during writes, there is an ultra-low chance of corrupted data, a corrupted ZFS filesystem or a damaged ZFS raid. If you do the same with a RAID 5/6, even with ZFS on top, there is a quite high chance of corrupted data or a corrupted raid. A hardware raid with BBU (supercap/flash) protection can reduce the risk but not avoid it the way ZFS can.
I'm not entirely convinced. Any data in flight before a file system gets it is certainly at risk in a power event. The only advantage with copy-on-write is that the original blocks remain unchanged in such a case, hence no (potential) corruption. The problem of course is that there's data that did need to get written, and I can't lose it. (Well, I can per se...but fixing that hole in the dataset is...painful).
> ZFS with sync enabled
I'm leaning in that direction as well. (Need to run more testing).
> The DRAM controller cache and the ZFS write cache have different write behaviour, size and commit behaviour (no CoW), so better than nothing but far from perfect.
I understand that.
> Sync enabled always logs all committed writes to ZIL or Slog.
What happens if you enable sync with no ZIL or SLOG? It'll try to write to the disk(s) and RAM concurrently, right?
> You can use sync=disabled and hope the best for your dram cache
sync=disabled doesn't work for this use case. I can't have ZFS holding data in RAM only.
> Mirror does not increase size, only offers redundancy, with the same write performance as a single "disk" and twice the read performance.
I may not have phrased my statement correctly. Mirroring implies I need more raw storage to store the same amount of data, right? Yes, it's redundant now, but it needs twice the amount of space.
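The space trade-off both posters are circling can be made concrete. A quick sketch (simplified: it ignores metadata, slop space and padding, and the 12x10TB figures are illustrative, not from the thread):

```python
# Usable-capacity comparison: 2-way mirrors vs. RAIDZ2 (roughly RAID6-like).
def mirror_usable(disks, disk_tb, ways=2):
    # each group of `ways` disks stores one copy's worth of usable data
    return disks // ways * disk_tb

def raidz_usable(disks, disk_tb, parity=2):
    # RAIDZ loses only `parity` disks' worth of space to redundancy
    return (disks - parity) * disk_tb

print(mirror_usable(12, 10))   # 12x 10TB in 2-way mirrors -> 60 TB usable
print(raidz_usable(12, 10))    # 12x 10TB in RAIDZ2        -> 100 TB usable
```

So the reply is right that mirrors don't shrink anything, and the objection is also right: for the same usable capacity, mirroring needs twice the raw space, while parity schemes pay a much smaller space tax in exchange for slower writes and rebuilds.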
> Maybe you can use ZFS software raid with sync disabled and a UPS to be at least protected against power outages.
The infrastructure is in a data center with redundant power already. But that doesn't mean a DC can't die, so you plan for a blast radius around the DC. I have a second line of thought going as well: get a second DC presence > 50 miles from the first one and use it for DR, including synchronous replication between the two. The costs of course go up due to the need for high-speed network links between the two DCs (most likely 100G).
> If the system crashes, ZFS remains intact, with a few seconds of last writes lost that are otherwise protected by an Slog (or a special vdev NVMe mirror in the newest OpenZFS).
That's what I was saying. Why use a SLOG if the controller cache (power protected) is much faster than any other device? And that device will need to be power protected anyway.
> What happens if you enable sync with no ZIL or SLOG? It'll try to write to the disk(s) and RAM concurrently, right?
ZIL is a special, faster part of a ZFS pool and is always there. If you enable sync, ZFS logs committed writes to this ZIL area. You can only use a faster Slog instead for logging. If you do not want this logging, sync=disabled is the setting.
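The sync-write path just described can be sketched as a toy model (an assumption-laden simplification, not OpenZFS code): with sync enabled, each committed write is logged to the ZIL (in-pool) or a separate SLOG device before it is acknowledged, while the data also sits in the in-RAM write cache until the next transaction-group flush; with sync=disabled the log step is skipped and only RAM holds the data until the flush.

```python
# Toy model of ZFS sync writes: the ZIL/SLOG is a persistent log that
# survives a crash; the RAM write cache does not.
def zfs_write(payload, sync_enabled, zil, ram_cache):
    if sync_enabled:
        zil.append(payload)    # persisted log entry, replayed after a crash
    ram_cache.append(payload)  # coalesced and flushed to the pool later
    return "ack"               # ack only after the log write when sync is on

zil, ram = [], []
zfs_write(b"row1", True, zil, ram)    # sync write: logged + cached
zfs_write(b"row2", False, zil, ram)   # async write: cached only

ram.clear()                 # simulated power loss before the txg flush
recoverable = list(zil)     # only ZIL-logged writes can be replayed
print(recoverable)          # [b'row1']
```

This is why the thread keeps returning to the same point: the sync setting decides whether the last few seconds of writes live only in RAM or also in a replayable log.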
> ZIL is a special, faster part of a ZFS pool and is always there. If you enable sync, ZFS logs committed writes to this ZIL area. You can only use a faster Slog instead for logging. If you do not want this logging, sync=disabled is the setting.
I know that. I edited my post above to say "No SLOG"; sorry, was writing too fast.