Fusion-io ioDrive II - 1.2TB+ drives, 0.09 or 0.08/GB


Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Random Read 4KiB (Q= 1,T= 1) : 64.048 MB/s [ 15636.7 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 179.490 MB/s [ 43820.8 IOPS]
Thanks for this - I assume that's directly on a drive?

Just makes me wonder why writes are almost triple the reads?
 

i386

Well-Known Member
Mar 18, 2016
4,218
1,540
113
34
Germany
I think they have a DRAM-based cache. Writes go to that layer first, but reads come from the "slower" NAND.
 

Jordan

New Member
Jan 26, 2016
17
3
3
39
I've been looking at one of the VSL release notes and have a question about system memory usage. It says that devices formatted with 4K blocks and less than 2TB in size require 300MB + (2.67MB per GB of device size) in the worst case. So for these devices that's a worst case of around 3.5GB.

I'm just wondering what the average memory usage might be under a moderate load. Is it much lower normally, or does it stay closer to the worst case?
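For a 1.2TB drive, here's roughly how I got that number (just back-of-the-envelope arithmetic using the 300MB + 2.67MB/GB figures from the release notes):

```python
# Worst-case VSL host RAM estimate for a device formatted with 4K blocks,
# using the 300MB base + 2.67MB/GB figures quoted above.
base_mb = 300        # fixed base, MB
per_gb_mb = 2.67     # worst-case MB of host RAM per GB of device capacity
device_gb = 1200     # a 1.2TB ioDrive II

worst_case_mb = base_mb + per_gb_mb * device_gb
print(f"{worst_case_mb:.0f} MB (~{worst_case_mb / 1024:.1f} GB)")
# -> 3504 MB (~3.4 GB), i.e. the ~3.5GB worst case
```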
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
So why the triple write vs read speed then @acquacow? :)
Are you writing with direct I/O, or is your OS buffering? Bench it with a proper benchmarking tool like fio.

Also, your QD and thread count are both 1. Bump that up to ~16 threads and a QD of 8-16 (assuming you have the core count to hit 16 threads).
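Something like this is where I'd start (just a sketch -- /dev/fioa is a placeholder for whatever device node your drive shows up as, and a raw-device write test will destroy whatever is on it):

```python
# Sketch: 4K random writes with direct I/O, 16 jobs at QD16, via fio.
# Swap --rw=randwrite for randread to compare the read side the same way.
import subprocess

subprocess.run([
    "fio",
    "--name=iodrive-randwrite",
    "--filename=/dev/fioa",   # placeholder device node -- WARNING: destructive
    "--ioengine=libaio",
    "--direct=1",             # bypass the OS page cache
    "--rw=randwrite",
    "--bs=4k",
    "--numjobs=16",           # ~16 threads
    "--iodepth=16",           # QD 8-16
    "--runtime=120",
    "--time_based",
    "--group_reporting",
], check=True)
```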

I've been looking at one of the VSL release notes and have a question about system memory usage. It says that devices formatted with 4K blocks and less than 2TB in size require 300MB + (2.67MB per GB of device size) in the worst case. So for these devices that's a worst case of around 3.5GB.

I'm just wondering what the average memory usage might be under a moderate load. Is it much lower normally, or does it stay closer to the worst case?
Memory usage is 100% tied to the size of your writes, not your formatting. The formatting just controls the worst-case scenario.
DRAM is used to hold a copy of the memory pointers that define the virtual block layer that the driver presents to your OS. The smaller your writes, the more pointers will exist, and the more DRAM will be consumed.

noob question of the day: what are the advantages of this type of drive over an NVMe M.2?
For a single user, not much, but for multi-threaded use cases like VDI and other virtualized loads, tons.

Steady-state write performance is substantially better, especially since there's no buffer other than the 20% over-provisioning on the drive. The wear life is also substantially better than any NVMe drive, and so is the MTBF.
 

Erlipton

Member
Jul 1, 2016
93
23
8
36
For a single user, not much, but for multi-threaded use cases like VDI and other virtualized loads, tons.

Steady-state write performance is substantially better, especially since there's no buffer other than the 20% over-provisioning on the drive. The wear life is also substantially better than any NVMe drive, and so is the MTBF.
So a multi-user unraid configuration may see some performance gains if these were used for cache?
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Are you writing with direct I/O, or is your OS buffering? Bench it with a proper benchmarking tool like fio.

Also, your QD and thread count are both 1. Bump that up to ~16 threads and a QD of 8-16 (assuming you have the core count to hit 16 threads).
Not my bench but @Marsh's - I just picked his Q1T1 values since those are the ones I'm interested in.
OS buffering could be in play, of course, but I think the test file size was 32GB, and that would be a lot of buffering.
 

lowfat

Active Member
Nov 25, 2016
131
91
28
40
Do these offer power-loss write protection? If so, how would they fare as a ZFS SLOG?
Doing updates is risky if you're using it as an SLOG. If the driver gets unloaded because it's no longer compatible with the kernel, the pools disappear. Thankfully, after rebuilding the drivers and rebooting, the pools came back for me. It was very scary though. In the future, if I ever do any updates on my PXE machine, I'll be removing the ioDrive from the pool beforehand, just in case.
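Something along these lines is what I have in mind before the next update (a rough sketch -- "tank" and /dev/fioa are just placeholders for your pool name and ioDrive device):

```python
# Sketch: pull the ioDrive SLOG out of the pool before a driver/kernel update,
# then add it back afterwards. Pool name and device path are placeholders.
import subprocess

pool, slog = "tank", "/dev/fioa"

# Before updating: remove the log vdev so the pool doesn't depend on the driver.
subprocess.run(["zpool", "remove", pool, slog], check=True)

# ... do the update, rebuild the driver, reboot ...

# Afterwards: re-add the device as a log vdev.
subprocess.run(["zpool", "add", pool, "log", slog], check=True)
```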
 

Oddworld

Member
Jan 16, 2018
64
32
18
124
Updates aside... is it an acceptable SLOG? I'm a little confused as to whether it has power loss protection.
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
Updates aside... is it an acceptable SLOG? I'm a little confused as to whether it has power loss protection.
All ioDrive IIs and newer have power loss protection.

Most older ioDrive 1s have power-loss protection; the cap is visible on those models.
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
don't believe these work with unraid
That's on top of Debian? You could probably use the provided drivers or rebuild from source on a box with the same kernel.
So unraid is Slackware-based, and as such is running a 4.x kernel right now. You'd have to go into the driver download section for the fedora/etc. releases that feature a 4.x kernel and grab the iomemory-vsl-3.2.15.1699-1.0.src.rpm that's available there.

I'd probably stand up a development Slackware VM with the kernel headers/build environment set up and use that to build your kernel module for the ioDrives.

As someone has already stated, if you update unraid, that ioDrive kernel module won't load and you'll have to build a new one for your newer kernel before the drives will come back online.

You can set stuff up with dkms to auto-rebuild on new kernel updates, but that can sometimes be a bit of a learning curve...
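Roughly, once the VSL source tree is sitting under /usr/src with a dkms.conf in it, the one-time registration looks something like this (the module name/version here are placeholders -- match whatever your source tree is actually called):

```python
# Sketch: register the VSL source with dkms once. With AUTOINSTALL="yes" in its
# dkms.conf, dkms will rebuild the module when a new kernel gets installed.
# Module name/version are placeholders -- match your tree under /usr/src.
import subprocess

module, version = "iomemory-vsl", "3.2.15.1699"

for step in ("add", "build", "install"):
    subprocess.run(["dkms", step, "-m", module, "-v", version], check=True)
```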

-- Dave