RAID for Windows (That's Not Storage Spaces)?


nexox

Well-Known Member
May 3, 2023
1,870
918
113
I don't want to weigh in too much on a ZFS thread, but Postgres (and other ACID DBMSes) should already be writing your data twice, assuming you have it configured for reasonably durable transactions: its own write-ahead log lets it recover cleanly from a crash or power loss. Putting that on a COW filesystem results in quite a few extra writes for little gain. The Postgres docs even suggest you don't need full data journaling on something like ext4: 28.3. Write-Ahead Logging (WAL)
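For reference, the durability behavior in question is controlled by the usual postgresql.conf knobs; the values below are the stock defaults, shown for context rather than as a recommendation. The ext4 point follows from these: since the WAL already protects data contents, metadata-only journaling (e.g. mounting with data=writeback) can be enough for the data directory.

```ini
# postgresql.conf -- durability-related settings (stock defaults shown)
fsync = on                  # flush WAL to stable storage on commit
synchronous_commit = on     # wait for the WAL flush before reporting commit success
full_page_writes = on       # write full pages to WAL after each checkpoint
wal_sync_method = fdatasync # how WAL flushes are issued (default on Linux)
```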
 

nabsltd

Well-Known Member
Jan 26, 2022
763
558
93
So, there's really not much "hot" data so to speak.
OK, so much more than I would be doing.

Basically, I'm thinking of a pair of 3.84TB SAS3 SSDs in RAID1 fronting 10x 8TB drives in RAID10. I'd lose your very fast writes across the 4x striped SSDs, but I should be able to handle close to 10Gbps ingest for at least a few minutes if necessary. Since this is for VM storage (not much data... mostly just boot and application, with data on a distributed storage system), eventually the most-used VMs would percolate to the SSD cache.

This should allow me fast storage vMotion and fast writes for things like OS updates, but still give me plenty of space for read cache.
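As a sanity check on the "close to 10Gbps for at least a few minutes" claim, here's a rough back-of-the-envelope sketch. The ~900 MB/s sustained mirror write speed is my assumption, not a number from the thread, and the calculation ignores the RAID10 backend draining the cache, so it's a lower bound on the real window:

```python
# Back-of-the-envelope: how long can a 3.84TB RAID1 SSD tier absorb a
# 10Gbps ingest stream before it fills?

SSD_CAPACITY_BYTES = 3.84e12   # RAID1 mirror = usable capacity of one drive
INGEST_BYTES_PER_S = 10e9 / 8  # 10Gbps line rate = 1.25 GB/s

# Assumed sustained sequential write of a single SAS3 SSD (hypothetical figure);
# a mirror's write speed is capped at one drive's speed.
SSD_WRITE_BYTES_PER_S = 900e6

effective_rate = min(INGEST_BYTES_PER_S, SSD_WRITE_BYTES_PER_S)
seconds_to_fill = SSD_CAPACITY_BYTES / effective_rate
print(f"Cache absorbs ingest for ~{seconds_to_fill / 60:.0f} minutes")
```

Even with a drive that can't quite keep up with full line rate, the tier holds far more than "a few minutes" of ingest before filling.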
 

nabsltd

Well-Known Member
Jan 26, 2022
763
558
93
Just curious, but what are you using to measure those reads/writes? My numbers differ.
Those are all local to the system (no network I/O) using whatever benchmark is handiest for that OS. I tend to use CrystalDiskMark in "real world" mode on Windows and fio on Linux.

I know my uncached numbers on my workstation (6x 4TB spinners in RAID6) are close to 400MB/sec sequential. With a write-back RAM cache using system memory, the numbers become positively ludicrous (6GB/sec or more), as long as the total amount of data written isn't much larger than the cache size. Ultimately, the actual write speed is the same, since you're limited by the underlying disk speed, but the calling app thinks the data has been written. As long as you have a reliable UPS (I have one for each power supply in my workstation, and either supply will run the system by itself), there's no real danger in a write-back cache.
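For the fio side, a job file like the following would bypass the page cache and give "uncached" sequential numbers roughly comparable to CrystalDiskMark's sequential test (the filename, size, and queue depth here are hypothetical placeholders, not the actual job used):

```ini
; seqread.fio -- direct (uncached) sequential read test; run with: fio seqread.fio
[seqread]
; hypothetical path to a test file on the RAID volume
filename=/mnt/array/fio-testfile
rw=read
bs=1M
; O_DIRECT bypasses the RAM cache, so this measures the disks themselves
direct=1
ioengine=libaio
iodepth=32
size=8g
runtime=60
```

Swapping rw=read for rw=write would show the difference a write-back cache makes, since direct=1 only bypasses the OS page cache, not a cache implemented below the block layer.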
 

kapone

Well-Known Member
May 23, 2015
1,799
1,189
113
OK, so much more than I would be doing.

Basically, I'm thinking of a pair of 3.84TB SAS3 SSDs in RAID1 fronting 10x 8TB drives in RAID10. I'd lose your very fast writes across the 4x striped SSDs, but I should be able to handle close to 10Gbps ingest for at least a few minutes if necessary. Since this is for VM storage (not much data... mostly just boot and application, with data on a distributed storage system), eventually the most-used VMs would percolate to the SSD cache.

This should allow me fast storage vMotion and fast writes for things like OS updates, but still give me plenty of space for read cache.
This is not for VM storage. There's a separate all-flash pool for that. This is only for PG data storage (the database and its replicas are pretty huge).