I guess my account is suspect. I tried bidding 125 each for 3 and got denied, with eBay saying the seller wasn't taking bids from my account.

> Random Read 4KiB (Q=1, T=1): 64.048 MB/s [15636.7 IOPS]
> Random Write 4KiB (Q=1, T=1): 179.490 MB/s [43820.8 IOPS]

Thanks for this - I assume that's directly on a drive?

> I think they have a DRAM-based cache. Writes go to that layer first, but reads are from "slower" NAND.

There's zero DRAM used on an ioDrive, and only one type of MLC NAND.

> So why the triple write vs read speed then, @acquacow?

Are you writing with direct I/O, or is your OS buffering? Bench it with a good benchmarking app like fio.

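A minimal fio job along those lines would rule out page-cache effects; /dev/fioa here is just a placeholder for whatever block device the ioDrive shows up as:

# 4KiB random read at QD1, single thread, bypassing the page cache
fio --name=randread-q1t1 --filename=/dev/fioa --rw=randread \
    --bs=4k --direct=1 --ioengine=libaio --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based
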
> I've been looking at one of the VSL release notes and have a question about system memory usage. It says that devices formatted with 4K blocks, less than 2TB in size, require 300MB + (2.67MB per GB of device size) in the worst case. So for these devices that's a worst case of around 3.5GB.
> I'm just wondering what the average memory usage might be under a moderate load. Is it much lower normally, or does it stay closer to the worst case?

Memory usage is 100% tied to the size of your writes, not your formatting. The formatting just controls the worst-case scenario.

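For reference, the quoted 3.5GB figure works out if these are roughly 1.2TB devices (my back-calculation from the formula, not something stated in the release notes):

300 MB + (2.67 MB/GB x 1200 GB) = 300 MB + 3204 MB ≈ 3.5 GB
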
> Noob question of the day: what are the advantages of this type of drive over an NVMe M.2?

For a single user, not much, but for multi-threaded use cases like VDI and other virtualized loads, tons.

> For a single user, not much, but for multi-threaded use cases like VDI and other virtualized loads, tons.

A multi-user unraid configuration may see some performance gains if these were used for cache?

Steady-state write performance is substantially better, especially since there's no buffer other than the 20% over-provisioning on the drive. The wear life is also substantially better than any NVMe drive, and so is the MTBF.

> Are you writing with direct I/O, or is your OS buffering? Bench it with a good benchmarking app like fio.
> Also, your QD and threads are 1. Bump that up to ~16 threads and 8-16 QD (assuming you have the core count to hit 16 threads).

Not my bench but @Marsh's; I just picked his Q1T1 values since I am interested in those.

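For anyone wanting to reproduce the higher-parallelism numbers, acquacow's suggested settings translate to something like this fio run (device path again a placeholder, and I've picked QD16 from the suggested 8-16 range):

# 4KiB random read, 16 workers at QD16 each, direct I/O
fio --name=randread-q16t16 --filename=/dev/fioa --rw=randread \
    --bs=4k --direct=1 --ioengine=libaio --iodepth=16 --numjobs=16 \
    --group_reporting --runtime=60 --time_based
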
> A multi-user unraid configuration may see some performance gains if these were used for cache?

I don't believe these work with unraid.

> I don't believe these work with unraid.

That's on top of Debian? You could probably use the provided drivers or rebuild them from source on a box with the same kernel.

> Do these offer power-loss write protection? If so, how would they fare as a ZFS SLOG?

Doing updates is risky if you're using it as an SLOG. If the driver gets unloaded because it's no longer compatible with the kernel, the pools disappear. Thankfully, after rebuilding the drivers and rebooting, the pools came back for me, but it was very scary. In the future, if I ever do any updates on my PXE machine, I'll be removing the ioDrive from the pool beforehand, just in case.

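That precaution is just the standard zpool dance (pool and device names here are made up):

# pull the ioDrive SLOG out before a kernel/driver update
zpool remove tank /dev/fioa
# ...update, rebuild the driver, reboot, confirm the card is back...
# then re-add it as a log device
zpool add tank log /dev/fioa
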
> Updates aside... is it an acceptable SLOG? I'm a little confused as to whether it has power loss protection.

All ioDrive IIs and newer have power-loss protection.

> > I don't believe these work with unraid.
> That's on top of Debian? You could probably use the provided drivers or rebuild them from source on a box with the same kernel.

So unraid is Slackware-based, and as such is using a 4.x kernel right now. You'd have to go into the driver download section for Fedora/etc. distros that feature a 4.x kernel and grab the iomemory-vsl-3.2.15.1699-1.0.src.rpm that is available there.

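Assuming the stock src.rpm workflow applies, the rebuild against your running kernel is roughly:

# rebuild the VSL driver package against the installed kernel headers
rpmbuild --rebuild iomemory-vsl-3.2.15.1699-1.0.src.rpm
# the binary RPM ends up under ~/rpmbuild/RPMS/<arch>/
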