Fusion-io ioDrive 2 1.2TB Reference Page


Oddworld

Member
Jan 16, 2018
Has anyone tried installing a Fusion-io ioDrive2 on Debian Buster (Debian 10)?

I had no issues with Debian 8, but am running into a few hiccups with Buster (10).

Quick edit - drivers do not appear to be available from the WD portal page. Support seems to stop at Debian 9.
 
Last edited:

Oddworld

Member
Jan 16, 2018
On a related note, has anyone had luck installing on Arch Linux? Arch tends to get every kernel update once available. There's no official support for Arch, although source drivers seem to be available. Curious whether that means you need to recompile the driver from source every kernel upgrade.

Any thoughts?
 

acquacow

Well-Known Member
Feb 15, 2017
Curious whether that means you need to recompile the driver from source every kernel upgrade.
Standard ioDrive procedure is to recompile the driver from source every kernel update, either by hand, or by dkms.

This includes minor updates, the module won't load otherwise.
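For reference, the dkms route looks roughly like this; this is only a sketch and assumes the VSL 3.2.16 source is already unpacked under /usr/src/iomemory-vsl-3.2.16 with a dkms.conf (the vendor source package normally provides one):

Code:
# register, build and install the module for the running kernel
dkms add -m iomemory-vsl -v 3.2.16
dkms build -m iomemory-vsl -v 3.2.16 -k "$(uname -r)"
dkms install -m iomemory-vsl -v 3.2.16 -k "$(uname -r)"
modprobe iomemory-vsl
Once registered with dkms, the module gets rebuilt automatically whenever a new kernel's headers are installed; building by hand means repeating the build/install steps after every kernel update.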
 
  • Like
Reactions: Oddworld

rootpeer

Member
Oct 19, 2019
On a related note, has anyone had luck installing on Arch Linux? Arch tends to get every kernel update once available. There's no official support for Arch, although source drivers seem to be available. Curious whether that means you need to recompile the driver from source every kernel upgrade.

Any thoughts?
This guy seems to have done it using a VSL driver source from GitHub:
[REQUEST] iomemory-vsl-3.2.16.1731-1.0 / AUR Issues, Discussion & PKGBUILD Requests / Arch Linux Forums
 

TedB

Active Member
Dec 2, 2016
Does anybody know if these SX350/PX600 cards work with MS Windows Server 2019?

On the Dell webpage there is a driver version 4.3.5 from March 2019, and according to the PDF it works with MS Windows Server up to and including 2016, but there is no mention of 2019.
 

rootpeer

Member
Oct 19, 2019
It's windows, so it /should/ work...
Hey acquacow! Since you seem to know a lot about the devices, can you tell me how TRIM is handled? (on Linux)

I read about issuing discards, so do you enable continuous TRIM with the discard option in fstab instead of using fstrim or other periodic TRIM options?

Is there any way to check if it is working or not?
 

Oddworld

Member
Jan 16, 2018
Standard ioDrive procedure is to recompile the driver from source every kernel update, either by hand, or by dkms.

This includes minor updates, the module won't load otherwise.

Thanks. I really appreciate the support and information you bring. I cannot thank you enough for your assistance getting these working. I purchased six of these at home for use on ESXi, Windows and Linux - the hardware is great, and you've been critical to helping me understand driver support for my non-enterprise use case. Just wanted to say thanks.
 
  • Like
Reactions: acquacow

Freebsd1976

Active Member
Feb 23, 2018
Does anybody know if these SX350/PX600 cards work with MS Windows Server 2019?

On the Dell webpage there is a driver version 4.3.5 from March 2019, and according to the PDF it works with MS Windows Server up to and including 2016, but there is no mention of 2019.
It works on Windows Server 2019; at least my HP SX350 runs smoothly on it.
 
  • Like
Reactions: TedB

acquacow

Well-Known Member
Feb 15, 2017
Hey acquacow! Since you seem to know a lot about the devices, can you tell me how TRIM is handled? (on Linux)

I read about issuing discards, so do you enable continuous TRIM with the discard option in fstab instead of using fstrim or other periodic TRIM options?

Is there any way to check if it is working or not?
TRIM works the way it works with any other drive, but honestly, it isn't going to be an issue whether you TRIM or not. You really aren't going to notice any difference, and it turned out to be much less of an issue in the industry than people thought it was going to be.

I think there are still a lot of issues with various storage layers not passing trim through as well. Your filesystem may support it, but then if you have layers of LVM or MD underneath, those may not pass it through/etc.

I think most of that has been worked out in the last few years, but again, it's not really an issue unless you are hitting the device at peak write workload while it's 100% full.
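If you do want to verify that discards are actually making it down the stack on Linux, a quick check is something like this (a sketch; the device name and mount point are just examples):

Code:
# non-zero DISC-GRAN / DISC-MAX values mean the block device advertises discard support
lsblk --discard /dev/fioa
# one-off TRIM of a mounted filesystem; -v reports how many bytes were trimmed
fstrim -v /mnt/fusion
If fstrim reports that the discard operation is not supported, one of the layers in between (filesystem, LVM, MD) isn't passing it through.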
 
  • Like
Reactions: rootpeer

nerdalertdk

Fleet Admiral
Mar 9, 2017
Hmm, I've never tried pushing the HP stuff onto an ioDrive. I've always gone the other direction.

There were bios updates for the HP servers that fixed the fan issues with the Fusion-io/SanDisk firmware.

I suppose I could tear into the HP firmware and look and see what stands out.
Did you ever look into this? :)

Trying to figure out how to turn my Cisco PX600 1TB into an HPE 1.0TB HH/HL Light Endurance (LE) PCIe Workload Accelerator (775666-B21).
 

nerdalertdk

Fleet Admiral
Mar 9, 2017
Also there is a new firmware out

4.3.6 Change Log

In addition to various improvements, the following changes are made to the Fusion ioMemory VSL software since version 4.3.5.

General Changes

General Improvements and Features
  • Updated supported operating systems.
    Newly Supported Operating Systems:
      • Linux: Added support for Ubuntu 16.04.6 LTS. (CRT-1114)
      • Solaris: Added support for Solaris 11.4. (CRT-1107)
  • Fixed an issue which causes attach failures when there are not enough reserves. This could prevent users from getting errors when trying to recover their data. (FH-24074)

Windows Changes

IMPORTANT:
  • The VSL 4.3.6 version has two dedicated installers - a Legacy installer for Windows Server 2008 R2, and a Storport installer for Windows 2012, 2012 R2, and 2016. (FH-24419)
  • The VSL 4.3.6 Legacy installer can only be used on Windows Server 2008 R2. If you try to use the VSL 4.3.6 Legacy installer on Windows 2012 or a newer Windows OS, it will fail and will display an error message: "Fusion ioMemory VSL4 requires Windows Server 2008 R2". (DO-1861)

Windows Fixed Issues
  • Errors when multi-queue is disabled and FIO_AFFINITY is set. (FH-24399)
  • Memory leaks in fio-pci-check could cause failures. (FH-24230)
  • Issues occurred during scheduled SQL defrag process - BSOD with IRQL_NOT_LESS_OR_EQUAL on iomemory_vsl4_mc.sys. (CRT-1044)
 

acquacow

Well-Known Member
Feb 15, 2017
Just because I seem to deal with this a lot in PMs, here's my current list of Fusion-io driver/firmware mapping:

OP may want to link to it from the first post.

Code:
VSL         Firmware      Date
1.2.7.2     3.0.0.36867
1.2.8.4     3.0.3.43246

2.0.0       4.0.1.41356
2.0.2-pre2  5.0.1.42685
2.1.0       5.0.1.42895
2.1.0       5.0.3.43247
2.2.0       5.0.5.43674
2.2.3       5.0.6.101583
2.3.0       5.0.7.101971
2.3.1       5.0.7.101971
2.3.10.110  5.0.7.107053

3.1.1.172   6.0.0.107004
3.1.1       7.0.2.108609
3.1.5       7.0.2.108609 buggy firmware
3.1.5       7.0.0.107322 fixed firmware
3.2.1     
3.2.2       7.1.13.109322 20121025
3.2.3       7.1.13.109322 20130605
3.2.4       7.1.15.110356 20130604
3.2.6       7.1.15.110356 20131003
3.2.8       7.1.17.116786 20140508
3.2.10      7.1.17.116786 20150212
3.2.11      7.1.17.116786 20150618
3.2.16      7.1.17.116786 20180821

4.0.2       8.5.28.116474 20140410
4.2.1       8.9.1.118126  20160309
4.2.5       8.9.5.118177  20160412
4.3.0       8.9.8.118189  20161119
4.3.1       8.9.9.118194  20170222
4.3.3       8.9.9.118194  20180423
4.3.4       8.9.9.118194  20180621
4.3.5       8.9.9.118194  20190313
These are the driver versions that you must run with each firmware version. They are dependent on each other, and mixing them up won't yield good results. In later versions, the VSL will refuse to load the driver for a mismatched card.

Also, for all of the later 3.x and 4.x versions, I have these mirrored for safe-keeping should they no longer be available in the future.
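To see what a card is currently running, fio-status already reports both halves; something like this is enough (a sketch, the grep is just a convenience):

Code:
# shows the loaded VSL driver version and the firmware revision of each module
fio-status -a | grep -iE 'driver version|firmware'
The firmware side gets flashed with the fio-update-iodrive utility that ships with the matching VSL release, pointed at that release's .fff firmware archive.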
 
  • Like
Reactions: OrionPax

rootpeer

Member
Oct 19, 2019
Just because I seem to deal with this a lot in PMs, here's my current list of Fusion-io driver/firmware mapping:

OP may want to link to it from the first post.

These are the driver versions that you must run with each firmware version. They are dependent on each other and mixing them up won't yield good results. In later versions, the VSL will refuse to load the driver for a mis-matched card.

Also, for all of the later 3.x and 4.x versions, I have these mirrored for safe-keeping should they no longer be available in the future.
Where are you getting VSLs higher than 3.2.16? And some performance-related questions:

I am testing with my 2.4TB ioDrive2 Duo and I am not sure that I am getting full performance.

I set it up on a Windows 10 VM with 6 cores (two threads each) and PCIe passthrough. The CPU is an AMD Threadripper 2920X.
The card is plugged into a slot attached directly to the NUMA node assigned to the VM.

I split the card into 4 virtual drives, as that supposedly improves performance, and formatted with 4K sectors both with fio-format and in Windows. I set the drives up as a 4-way stripe.

Here is the crystaldiskmark output:

Code:
-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (C) 2007-2018 hiyohiyo
                          Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :  2136.859 MB/s
  Sequential Write (Q= 32,T= 1) :  2321.404 MB/s
  Random Read 4KiB (Q=  8,T= 8) :   749.313 MB/s [ 182937.7 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :   650.057 MB/s [ 158705.3 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :   156.448 MB/s [  38195.3 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :   234.817 MB/s [  57328.4 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :    47.736 MB/s [  11654.3 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :    91.100 MB/s [  22241.2 IOPS]

  Test : 1024 MiB [I: 0.1% (2.9/2146.8 GiB)] (x1)  [Interval=5 sec]
  Date : 2019/11/02 22:50:22
    OS : Windows 10 Professional [10.0 Build 18362] (x64)
Here is the fio-status -a output:

Code:
C:\WINDOWS\system32>fio-status -a

Found 4 ioMemory devices in this system with 1 ioDrive Duo
Driver version: 3.2.15 build 1699

Adapter: Dual Controller Adapter
        Dell ioDrive2 Duo 2410GB MLC, Product Number:7F6JV, SN:US07F6JV7605128D0008
        ioDrive2 Adapter Controller, PN:F4K5G
        SMP(AVR) Versions: App Version: 1.0.35.0, Boot Version: 0.0.8.1
        External Power Override: ON
        External Power: NOT connected
        PCIe Bus voltage: avg 11.63V
        PCIe Bus current: avg 2.75A
        PCIe Bus power: avg 31.02W
        PCIe Power limit threshold: 49.75W
        PCIe slot available power: unavailable
        Connected ioMemory modules:
          fct1: Product Number:7F6JV, SN:1231D1459-1111
          fct3: Product Number:7F6JV, SN:1231D1459-1111P1
          fct4: Product Number:7F6JV, SN:1231D1459-1121
          fct5: Product Number:7F6JV, SN:1231D1459-1121P1

fct1    Attached
        SN:1231D1459-1111
        SMP(AVR) Versions: App Version: 1.0.29.0, Boot Version: 0.0.9.1
        Located in slot 0 Upper of ioDrive2 Adapter Controller SN:1231D1459
        Powerloss protection: protected
        Last Power Monitor Incident: 26 sec
        PCI:0d:00.0
        Vendor:1aed, Device:2001, Sub vendor:1028, Sub device:1f71
        Firmware v7.1.17, rev 116786 Public
        576.30 GBytes device size
        Format: v500, 140699218 sectors of 4096 bytes
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 55.61 degC, max 56.11 degC
        Internal voltage: avg 1.01V, max 1.02V
        Aux voltage: avg 2.49V, max 2.50V
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Active media: 100.00%
        Rated PBW: 8.13 PB, 96.73% remaining
        Lifetime data volumes:
           Physical bytes written: 265,507,035,945,724
           Physical bytes read   : 185,626,851,430,832
        RAM usage:
           Current: 39,674,816 bytes
           Peak   : 40,290,496 bytes
        Contained VSUs:
          fct1: ID:0, UUID:9fb9db0d-acf3-4368-812d-946cfa5a56de

fct1    State: Online, Type: block device
        ID:0, UUID:9fb9db0d-acf3-4368-812d-946cfa5a56de
        576.30 GBytes device size
        Format: 140699218 sectors of 4096 bytes

fct3    Attached
        SN:1231D1459-1111P1
        SMP(AVR) Versions: App Version: 1.0.29.0, Boot Version: 0.0.9.1
        Located in slot 0 Upper of ioDrive2 Adapter Controller SN:1231D1459
        Powerloss protection: protected
        PCI:0d:00.0
        Vendor:1aed, Device:2001, Sub vendor:1028, Sub device:1f71
        Firmware v7.1.17, rev 116786 Public
        576.30 GBytes device size
        Format: v500, 140699218 sectors of 4096 bytes
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 55.61 degC, max 56.11 degC
        Internal voltage: avg 1.01V, max 1.02V
        Aux voltage: avg 2.49V, max 2.50V
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Active media: 100.00%
        Rated PBW: 8.13 PB, 96.73% remaining
        Lifetime data volumes:
           Physical bytes written: 265,506,978,157,300
           Physical bytes read   : 185,626,777,918,328
        RAM usage:
           Current: 39,670,656 bytes
           Peak   : 40,273,856 bytes
        Contained VSUs:
          fct3: ID:0, UUID:54d333ed-2ec4-4964-97c3-19c23e328454

fct3    State: Online, Type: block device
        ID:0, UUID:54d333ed-2ec4-4964-97c3-19c23e328454
        576.30 GBytes device size
        Format: 140699218 sectors of 4096 bytes

fct4    Attached
        SN:1231D1459-1121
        SMP(AVR) Versions: App Version: 1.0.29.0, Boot Version: 0.0.9.1
        Located in slot 1 Lower of ioDrive2 Adapter Controller SN:1231D1459
        Powerloss protection: protected
        Last Power Monitor Incident: 26 sec
        PCI:07:00.0
        Vendor:1aed, Device:2001, Sub vendor:1028, Sub device:1f71
        Firmware v7.1.17, rev 116786 Public
        576.30 GBytes device size
        Format: v500, 140699218 sectors of 4096 bytes
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 59.06 degC, max 60.04 degC
        Internal voltage: avg 1.01V, max 1.02V
        Aux voltage: avg 2.49V, max 2.50V
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Active media: 100.00%
        Rated PBW: 8.13 PB, 96.74% remaining
        Lifetime data volumes:
           Physical bytes written: 265,088,491,556,636
           Physical bytes read   : 185,154,759,318,888
        RAM usage:
           Current: 39,670,656 bytes
           Peak   : 40,282,176 bytes
        Contained VSUs:
          fct4: ID:0, UUID:c34c4337-7694-4cbd-b12d-7cee43786d7a

fct4    State: Online, Type: block device
        ID:0, UUID:c34c4337-7694-4cbd-b12d-7cee43786d7a
        576.30 GBytes device size
        Format: 140699218 sectors of 4096 bytes

fct5    Attached
        SN:1231D1459-1121P1
        SMP(AVR) Versions: App Version: 1.0.29.0, Boot Version: 0.0.9.1
        Located in slot 1 Lower of ioDrive2 Adapter Controller SN:1231D1459
        Powerloss protection: protected
        PCI:07:00.0
        Vendor:1aed, Device:2001, Sub vendor:1028, Sub device:1f71
        Firmware v7.1.17, rev 116786 Public
        576.30 GBytes device size
        Format: v500, 140699218 sectors of 4096 bytes
        PCIe slot available power: 25.00W
        PCIe negotiated link: 4 lanes at 5.0 Gt/sec each, 2000.00 MBytes/sec total
        Internal temperature: 59.06 degC, max 60.04 degC
        Internal voltage: avg 1.01V, max 1.02V
        Aux voltage: avg 2.49V, max 2.50V
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Active media: 100.00%
        Rated PBW: 8.13 PB, 96.74% remaining
        Lifetime data volumes:
           Physical bytes written: 265,088,451,953,036
           Physical bytes read   : 185,154,802,474,104
        RAM usage:
           Current: 39,670,656 bytes
           Peak   : 40,282,176 bytes
        Contained VSUs:
          fct5: ID:0, UUID:8d554735-3233-4685-8b3c-9b5061d563cd

fct5    State: Online, Type: block device
        ID:0, UUID:8d554735-3233-4685-8b3c-9b5061d563cd
        576.30 GBytes device size
        Format: 140699218 sectors of 4096 bytes
I am also attaching ATTO images:

atto2.jpg

atto1.jpg

Where are the ~500,000 4K IOPS?
 

acquacow

Well-Known Member
Feb 15, 2017
Where are you getting VSLs higher than 3.2.16?
VSL4 is only for ioDrive 3 (SX/PX 300/350/600/etc...)


I am testing with my 2.4TB iodrive II Duo and I am not sure that I am getting full performance.
Datasheet numbers are for raw access of the device without a filesystem/etc. Once you add a filesystem/software raid/etc, you will see lower numbers.

I set it up on a Windows 10 VM with 6 cores and two threads and PCIe passthrough. The CPU is an AMD Threadripper 2920x.
The card is plugged in a slot directly to the numa node assigned to the VM.
I split the card to 4 virtual drives as it supposedly improves performance and formatted with 4K sectors both with fio-format and in Windows. I set the drives as a 4 way stripe.
This only improves performance for 512B workloads; it won't help you at 4K. A raid0 of both memory modules should perform better.

Where are the ~500,000 4K IOPS?
Our datasheets were only ever at 512B and 1MB, so the 500K IOPS you see are for 512B workloads; 4K would be a lot less. You should be more concerned with whether you can hit 2-2.5GB/sec read/write at 1MB block sizes.

If you want to benchmark the config, I recommend using a Linux VM and the fio app (written by Jens Axboe, the Linux I/O stack maintainer): axboe/fio
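Something like this against the raw VSL block device is a reasonable starting point (a sketch; the device name and job parameters are examples):

Code:
# 4K random read, direct I/O, straight at the raw VSL block device
fio --name=randread-4k --filename=/dev/fioa --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=8 --runtime=60 --time_based \
    --group_reporting
Swap in --rw=randwrite (on a device you don't mind dirtying) for the write side.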

Here's what I had for a single ioDrive2 on an older v2 xeon passed through to windows 10:


You should be able to get 2x those numbers with a windows raid0 of both halves of that duo.

I can't seem to find my screenshot of 4 drives in a raid0, but here's 4 modules in a 2-way ReFS mirror on the same box:


Reads get right up to 6GB/sec. Writes are half because this is a 2-way mirror.

Hmm, this isn't the raid0 dynamic disk graph I'm looking for, but it is basically a raid0 in storage spaces with 4 modules.


You should be able to get half those numbers with a single duo.

This is on ESXi 5.5 on an e5-2648Lv2 CPU in a supermicro X9SRL board.


Also, you need to make sure any BIOS power saving options are disabled: C-states, P-states, any kind of CPU frequency scaling, etc.

Windows and VMWare power savings options need to be disabled also.
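On the Linux side, a quick sanity check of the frequency scaling setup looks something like this (a sketch; cpupower comes from the distro's linux-tools/cpupower package):

Code:
# governor per core -- you want "performance" across the board while benchmarking
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c
# switch all cores to the performance governor
cpupower frequency-set -g performance
On Windows, the High performance power plan is the rough equivalent.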

-- Dave
 
Last edited:

rootpeer

Member
Oct 19, 2019
VSL4 is only for ioDrive 3 (SX/PX 300/350/600/etc...)
Oh OK! I thought people were using VSL4 with iodrive 2 drives.

Datasheet numbers are for raw access of the device without a filesystem/etc. Once you add a filesystem/software raid/etc, you will see lower numbers.


This only improves performance for 512B workloads, won't help you at 4k. A raid0 of both memory modules should perform better.
I did it because the documentation states an 80% improvement at 4K and a 100% improvement at 512B.


Our datasheets were only ever at 512B and 1MB, so the 500k IOPS you see are for 512B workloads, 4k would be a lot less. You should be more concerned on if you can hit 2-2.5GB/sec read/write at 1MB block sizes.
It is just that I was reading reviews and specs such as these:
Fusion-io ioDrive2 Duo MLC Application Accelerator Review | StorageReview.com - Storage Reviews
SanDisk Fusion ioMemory ioDrive 2 Duo - solid state drive - 2.4 TB - PCI Express 2.0 x8 Specs

that quote 490K IOPS write and 480K IOPS read at 4K on the ioDrive2 Duo MLC. 512B IOPS are advertised as 540K read and 1100K write.
If you look at the 4K 100% Read/Write section on StorageReview, they were getting 431K read IOPS at 16T/16Q on Windows, while I am getting 150K in CrystalDiskMark at those settings. I was really looking forward to those kinds of IOPS, as I intend to use the drive for VM image storage rather than needing the sequential performance for big data.

Also, you need to make sure any BIOS power savings options are disabled like c-states, p-states, any kind of cpu frequency scaling/etc...

Windows and VMWare power savings options need to be disabled also.
I will move the drive to a Dell R720 tomorrow and try both Windows and Linux, as well as disabling power settings and such, and report back. I guess my VM might be problematic at the moment, as I did have some GPU passthrough slowdowns recently that may be related to CPU frequency scaling.

Edit: Forgot to say thanks! Thank you so much for your contribution and assistance here!
 

acquacow

Well-Known Member
Feb 15, 2017
I did it because the documentation states an 80% improvement at 4K and a 100% improvement at 512B.
Again, that's for raw application access to bare flash, not with a filesystem in the way.

If you look at the storagereview testing notes, you'll see they use the fio benchmark for both windows and linux tests. These are done with no filesystem on the devices.

storagereview said:
All PCIe Application Accelerators are benchmarked on our second-generation enterprise testing platform based on a Lenovo ThinkServer RD630. For synthetic benchmarks, we utilize FIO version 2.0.10 for Linux and version 2.0.12.2 for Windows. In our synthetic testing environment, we use a mainstream server configuration with a clock speed of 2.0GHz, although server configurations with more powerful processors could yield even greater performance.
If you want to compare performance, give the drive a fresh fio-format, and run the fio benchmark tool against it and then try to identify any bottlenecks you may have. I recommend starting with a job that just utilizes one of the memory modules, and then once you get a number you're happy with, run against both modules at the same time and see if your perf scales linearly.
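As a sketch of that sequence (device names are examples, and fio-format wipes everything on the module):

Code:
# re-initialize one module, then bring it back online
fio-detach /dev/fct1
fio-format /dev/fct1
fio-attach /dev/fct1

# 1MB sequential read against that single module's block device
fio --name=seq-1m --filename=/dev/fioa --ioengine=libaio --direct=1 \
    --rw=read --bs=1M --iodepth=32 --runtime=60 --time_based --group_reporting

# same job across both modules at once -- throughput should scale roughly linearly
fio --name=seq-1m-both --filename=/dev/fioa:/dev/fiob --ioengine=libaio --direct=1 \
    --rw=read --bs=1M --iodepth=32 --runtime=60 --time_based --group_reporting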

...that quote 490K IOPS write and 480K IOPS read at 4K on the ioDrive2 Duo MLC. 512B IOPS are advertised as 540K read and 1100K write.
If you look at the 4K 100% Read/Write section on StorageReview, they were getting 431K read IOPS at 16T/16Q on Windows, while I am getting 150K in CrystalDiskMark at those settings.
Again, they are testing bare device performance via fio, not the performance of a filesystem on top of it. CrystalDiskMark isn't a great tool for filesystem benchmarking anyway... especially when you're really looking at enterprise workloads.

I will move the drive to a Dell R720 tomorrow and try both Windows and Linux as well as disabling of power settings and such and report back. I guess my VM might be problematic at the moment as I did have some GPU passthrough slowdowns recently that may be related to CPU frequency scaling.

Edit: Forgot to say thanks! Thank you so much for your contribution and assistance here!
No prob. I'm not always responsive, and I always have a good handful of PMs to deal with on this site, so just bump the thread every now and then if I don't get back to you. I check the site usually a few times a day, but some weeks are just too busy to gather all the info for folks.

-- Dave
 
  • Like
Reactions: rootpeer

Oddworld

Member
Jan 16, 2018
Here's a script that should install the drivers on Ubuntu 18.04.

I am unable to get the driver installed on Ubuntu 18.04. Could you help me out? I ran into two issues:

(1) The patch script fails

Code:
patching file fio-driver.spec
Hunk #1 FAILED at 328.
1 out of 1 hunk FAILED -- saving rejects to file fio-driver.spec.rej
patching file ./debian/iomemory-vsl-source.install
After reviewing fio-driver.spec, it seems that the following references cannot be found. For me, they reference cc53 (not cc63):

> -/usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc63_libkfio.o.cmd
> -/usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc63_libkfio.o_shipped

(2) I manually opened these files and made the edit. Time to build with "dpkg-buildpackage -b -uc -us"... it failed.

Attempt #1 without root privilege

Code:
  CC [M]  /home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source/khotplug.o
make[3]: *** No rule to make target '/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source/kfio/x86_64_cc74_libkfio.o', needed by '/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source/iomemory-vsl.o'.  Stop.
make[3]: *** Waiting for unfinished jobs....
  CC [M]  /home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source/kcsr.o
Makefile:1577: recipe for target '_module_/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source' failed
make[2]: *** [_module_/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source] Error 2
make[2]: Leaving directory '/usr/src/linux-headers-4.15.0-66-generic'
Makefile:82: recipe for target 'modules' failed
make[1]: *** [modules] Error 2
make[1]: Leaving directory '/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source'
ERROR:
debian/rules:98: recipe for target 'build-arch-stamp' failed
make: *** [build-arch-stamp] Error 1
dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
oddworld@e5labuntu:~/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731$
Attempt #2 as root

Code:
make[3]: *** No rule to make target '/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source/kfio/x86_64_cc74_libkfio.o', needed by '/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source/iomemory-vsl.o'.  Stop.
make[3]: *** Waiting for unfinished jobs....
  CC [M]  /home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source/kcsr.o
Makefile:1577: recipe for target '_module_/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source' failed
make[2]: *** [_module_/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source] Error 2
make[2]: Leaving directory '/usr/src/linux-headers-4.15.0-66-generic'
Makefile:82: recipe for target 'modules' failed
make[1]: *** [modules] Error 2
make[1]: Leaving directory '/home/oddworld/ioDriveFW/SoftwareSource/iomemory-vsl-3.2.16.1731/driver_source'
ERROR:
debian/rules:98: recipe for target 'build-arch-stamp' failed
make: *** [build-arch-stamp] Error 1
dpkg-buildpackage: error: debian/rules build subprocess returned exit status 2
Any guidance on how to proceed? Below is my system info:

Code:
oddworld@e5labuntu:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.3 LTS
Release:        18.04
Codename:       bionic
oddworld@e5labuntu:~$ uname -r
4.15.0-66-generic
oddworld@e5labuntu:~$ sha256sum ioDriveFW/SoftwareSource/iomemory-vsl_3.2.16.1731-1.0.tar.gz
0c40b3b863dda4ea15462e2efe5173c82459390e9a0831991e6740704558a6c8  ioDriveFW/SoftwareSource/iomemory-vsl_3.2.16.1731-1.0.tar.gz
 

rootpeer

Member
Oct 19, 2019
I have done some testing on CentOS 8 with ZFS 0.8.2:

Device: FusionIO Iodrive II Duo 2.4TB MLC
Dell R720XD
2x Intel E5-2650 v2
32GB (4x 8GB) RAM
Driver: snuf/iomemory-vsl branch 5.1.28 (DKMS)

I used the Phoronix Test Suite to benchmark the drive's performance (I ran the fio benchmark).
Code:
phoronix-test-suite run pts/fio
Tests:
  1. 100% Random Read / 100% Random Write, Engine: Linux AIO, Buffered=No, Direct=No, Block Size=4K
  2. Sequential Read / Sequential Write, Engine: Linux AIO, Buffered=No, Direct=No, Block Size=2M

Iodrive 2 Duo configurations:
  1. 2 controllers, 4K sectors
  2. 2 controllers, 512 sectors
  3. 4 controllers (=2 controllers after firmware split), 4K sectors
  4. 4 controllers (=2 controllers after firmware split) 512 sectors
In all cases, I configured the controllers as a ZFS stripe (=no redundancy). The following results are of the performance of 2.4TB stripes, not individual controllers.
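For reference, a stripe like that boils down to something along these lines (a sketch; pool name and device paths are examples, ashift=12 matches the 4K-sector runs and ashift=9 the 512-byte ones):

Code:
# two-device ZFS stripe, no redundancy, 4K-sector variant
zpool create -o ashift=12 fiopool /dev/fioa /dev/fiob
zpool status fiopool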

Results:

2 controllers, 4K:
  • Random Read = 34,000 IOPS
  • Random Write = 19,400 IOPS
  • Sequential Read = 1,738 MB/s
  • Sequential Write = 1,653 MB/s

2 controllers, 512:
  • Random Read = 41,200 IOPS
  • Random Write = 20,200 IOPS
  • Sequential Read = 1,663 MB/s
  • Sequential Write = 1,577 MB/s

4 controllers, 4K:
  • Random Read = 36,000 IOPS
  • Random Write = 21,000 IOPS
  • Sequential Read = 1,733 MB/s
  • Sequential Write = 1,759 MB/s

4 controllers, 512:
  • Random Read = 43,500 IOPS
  • Random Write = 19,800 IOPS
  • Sequential Read = 2,076 MB/s
  • Sequential Write = 1,654 MB/s
Here is how a Corsair Force MP600 2TB M.2 PCIe Gen 4 NVMe SSD compares to the ioDrive with both ZFS and EXT4 tests:
A Quick Look At EXT4 vs. ZFS Performance On Ubuntu 19.10 With An NVMe SSD - Phoronix
 
Last edited:

TRACKER

Active Member
Jan 14, 2019
Hi guys, @acquacow
not sure if this is the right thread for my question.
I've bought an ioScale 3.2TB drive, but I cannot make it work.
I've updated the firmware to the latest from WD's site (v7.1.17); the original firmware was 7.1.15.
In the attachment you can see some details.
Basically, the error I get is:
2020-01-08T16:09:23.957152900Z - ERROR - FusionEventDriver - fct0 MINIMAL MODE DRIVER: NAND Module Mask (0x7) doesn't match NAND Present Mask (0x6)

Do you know how this could be fixed (if it's possible at all)?

Many thanks in advance!
 

Attachments

Last edited: