Fusion-io ioDrive 2 1.2TB Reference Page


sunfire

New Member
Mar 1, 2024
Hi guys, I have two IBM-branded ioDrive II cards (1.2TB) which I updated to 3.2.10 (the latest Lenovo firmware version).
After that I got the 3.2.16 SanDisk firmware from @acquacow 's repo and changed the INFO file accordingly, but when I tried to update the two cards the fio-update-iodrive utility said there was no firmware update available (because the controller firmware was already at 7.1.17), so I forced the update on both cards. The update went fine, but how can I check the optrom version? fio-status shows no difference after the update compared to before.

For reference, the two sections I added to the INFO file in the firmware .fff archive (one section with a hyphen, one without) have the correct PA#; the only entries that differ in the duplicated sections are optromversion and optromfile, both 3.2.16.1731 (the Lenovo .fff archive had 3.2.10.1509 for the optromversion and optromfile entries).

Does anyone know a way to check the optromversion and optromfile versions installed on the cards?

BIG thank you!
 

acquacow

Well-Known Member
Feb 15, 2017
I had a 60mph storm come through the other night so all my hardware is still powered down... but there is a flag you can pass to fio-status to display the value of a specific variable, and I think you can query the optrom stuff that way. It's -k or -K or -F for "field", maybe? I think either capital or lowercase L will get you a list of the variables you can query.
 

sunfire

New Member
Mar 1, 2024
Figured it out right after I posted. "fio-status -l" gives the entire list of fields that can be queried on the card(s), and there are a few "optrom" fields. However, with the exception of "iom.optrom_base_supported", which shows the value "1" for both cards, all the other "optrom" fields are unavailable:

[root@truenas ~]# fio-status -l | grep -i opt
iom.optrom_base_supported
iom.optrom_current_major_ver
iom.optrom_current_micro_ver
iom.optrom_current_minor_ver
iom.optrom_current_revision
iom.optrom_current_version
iom.optrom_enabled
[root@truenas ~]# fio-status -l | grep -i opt | xargs -n1 fio-status /dev/fioa -F
1
Unavailable: optrom_current_major_ver is not available.
Unavailable: optrom_current_micro_ver is not available.
Unavailable: optrom_current_minor_ver is not available.
Unavailable: optrom_current_revision is not available.
Unavailable: optrom_current_version is not available.
Unavailable: optrom_enabled is not available.
[root@truenas ~]# fio-status -l | grep -i opt | xargs -n1 fio-status /dev/fiob -F
1
Unavailable: optrom_current_major_ver is not available.
Unavailable: optrom_current_micro_ver is not available.
Unavailable: optrom_current_minor_ver is not available.
Unavailable: optrom_current_revision is not available.
Unavailable: optrom_current_version is not available.
Unavailable: optrom_enabled is not available.
[root@truenas ~]#

Could this happen because the system (IBM x3650 M3) is booted in legacy mode? I can't think of any other explanation. I can't boot the system in UEFI mode because I'm using an LSI 9211-8i adapter in it, and the latest P20 firmware for it doesn't allow the system to boot unless I flash the 9211-8i with an older BIOS (but still the latest UEFI firmware) and boot the system in legacy mode.

For reference, here are the sections I've added to the INFO file in the 3.2.16 .fff firmware archive which I used to flash both IBM cards:

[PA004149006]
version = 7.1.17.116786
file = gen2_49_salmon_fusion_7.1.17.116786.bin
format = bin
ecc = 49b
dpfile = gen2_49_salmon_dual_fusion_7.1.17.116786.bin
dpformat = bin
dpecc = 49b
cntrpdiversion = 1.0.35
cntrpdifile = carre_1.0.35.pdi
cntrpdiformat = pdi
optrom = 1e00000
optromversion = 3.2.16.1731
optromfile = uefi-3.2.16.1731.rom

[PA004149-006]
version = 7.1.17.116786
file = gen2_49_salmon_fusion_7.1.17.116786.bin
format = bin
ecc = 49b
dpfile = gen2_49_salmon_dual_fusion_7.1.17.116786.bin
dpformat = bin
dpecc = 49b
cntrpdiversion = 1.0.35
cntrpdifile = carre_1.0.35.pdi
cntrpdiformat = pdi
optrom = 1e00000
optromversion = 3.2.16.1731
optromfile = uefi-3.2.16.1731.rom

Both sections specify the 3.2.16.1731 option ROM, and the (forced) firmware update worked perfectly on both cards.
If we can't find a way to query the UEFI ROM version installed on the cards, I'll assume the latest 3.2.16 is installed and keep using the cards as they are.
My next step is to install TrueNAS SCALE on the box and use it as an iSCSI server for two ESXi hosts. Both ioDrive II cards will be configured as a striped pool hosting the VM OS disks. The same box has 12x 400GB SAS SSDs, which will also be configured as a striped pool hosting the data disks for the VMs.

I know, I know, no redundancy for either pool, but I'm not worried about losing everything; for me disk space is more important than storage redundancy. If I lose either pool I can restore everything from backups (stored on a different x3650 M3 with proper storage redundancy configured).
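For anyone curious, the pool layout I have in mind is just plain striping; something like this (pool names and device paths are placeholders, use whatever fio-status and lsblk actually report):

```shell
# A striped (no-redundancy) zpool is just a plain device list: no mirror or
# raidz keyword between the vdevs. Pool names and device paths below are
# placeholders for illustration only.
os_pool_cmd="zpool create fio-pool /dev/fioa /dev/fiob"
data_pool_cmd="zpool create sas-pool /dev/sda /dev/sdb /dev/sdc /dev/sdd"
echo "$os_pool_cmd"
echo "$data_pool_cmd"
```

Since there's no redundancy keyword, ZFS stripes across everything listed; adding a mirror or raidz keyword later is how you'd buy redundancy back.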
 

Volanar

New Member
Sep 15, 2020
In the output of "fio-status -a" there should be a line right under "Firmware..." that gives the UEFI Option ROM version. If there isn't, the update did not actually apply the UEFI ROM.

Did you use the hidden --enable-uefi flag when you ran fio-update-iodrive? It also mentions that it applied the UEFI ROM when the update completes.

RE: "force": I found that using "--bypass-uptodate" was sufficient, without -force, when you're already on the same version.
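Putting that together, the re-flash would look roughly like this. The .fff path is a placeholder and the flags are as described above, so double-check them against your VSL version before running anything:

```shell
# Re-apply the firmware so the UEFI option ROM gets written, then confirm.
# Flag names per the notes above; the firmware path is a placeholder.
if command -v fio-update-iodrive >/dev/null 2>&1; then
    fio-update-iodrive --enable-uefi --bypass-uptodate /root/fusion_3.2.16.fff
    fio-status -a | grep -i -A2 firmware   # the UEFI Option ROM line should show here
else
    echo "fio-update-iodrive not installed on this box"
fi
```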
 

cdoublejj

New Member
Jan 5, 2018
[attached screenshot: 3.png]

I ran this on Pop!_OS but I'm not seeing the drive; I'm missing a step where I run what was compiled, or something like that.
 

pyite

New Member
May 15, 2013
Check whether it built the .ko file under /var/lib/dkms/, and if so, run insmod on the .ko file and see what happens.

DKMS should put the module in the appropriate place so it shows up after a reboot. Also verify that the card appears in lspci, and look for messages in dmesg.
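In other words, something like this. The "iomemory" name pattern is a guess based on what the VSL driver usually calls itself, so adjust it to whatever the build actually produced:

```shell
# Find whatever .ko DKMS built for the VSL driver, load it, and check logs.
ko=$(find /var/lib/dkms -name '*.ko' 2>/dev/null | grep -i iomemory | head -n1)
if [ -n "$ko" ]; then
    insmod "$ko"                           # load the freshly built module
    lspci | grep -i -e fusion -e sandisk   # the card should be on the bus either way
    dmesg | tail -n 20                     # driver attach/error messages land here
else
    echo "no iomemory .ko under /var/lib/dkms; the DKMS build step likely failed"
fi
```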
 

acquacow

Well-Known Member
Feb 15, 2017
Just an FYI, the drivers I'm hosting on my site are self-hosted at home, and I'm tired of updating my DNS manually since GoDaddy ended their free API access. I'm moving to Porkbun, and access may be spotty till I get it all squared away. If the link isn't working, just ping me and I'll give you whatever the working DNS is at that time.

Thanks!

Edit: I think we're all good now, though Nextcloud is being strange and making me re-create folders from scratch to sync data to them from the Nextcloud Android client. Not sure if that's an issue from updating Nextcloud, Android, or TrueNAS...
 

homs

New Member
Jul 30, 2025
Hi everyone,
I’m looking for a compatible driver for the Fusion-IO ioDrive2 1.2TB (Dell-branded) card to use with VMware ESXi 6.7.
Unfortunately, it seems like the official sources for these drivers are no longer available, and I haven’t had any luck finding a working link or package.
Interestingly, I have the same model card but HP-branded, and it works just fine with ESXi 6.7 using the old driver I found some time ago. However, the Dell-branded version isn’t recognized, and I believe it needs a different or modified driver.
If anyone has a copy of the ESXi 6.7 driver for the Dell version of the ioDrive2, or knows where to find it, I’d really appreciate it if you could share it or point me in the right direction.
 

acquacow

Well-Known Member
Feb 15, 2017
Between the two cards, there isn't a single firmware archive that has the part number for each. Check my posts in this thread; I have a howto showing how to pull the part numbers from your drives, unzip the firmware file, add the entry to the INFO file inside the firmware, re-zip it, and put the same firmware on both drives. Then your driver will load them both the same.

Driver and firmware have to match version-wise. If you end up with the same firmware version on each card, it doesn't matter whether they're the HP or Dell ones; they can be mixed, they just can't be on different versions.
 
Last edited:

Patrick_M

New Member
Mar 30, 2017
Hi there, I was wondering if someone knew how smoothly I could migrate my IBM Fusion-io ioDrive Duo 320GB SLC from a Windows 10 system to a Windows 11 one?

Is it as simple as installing the card into the new system, installing the software & drivers (probably in a different order), and voila?

Or is something more complicated and destructive required lol. Which would be fine since the data is obviously backed up.

Also, I've been using driver 2.3.10/firmware 5.0.7.107053 forever with zero issues, is there any reason to mess around and update that?

Thank you in advance for any assistance.
 

acquacow

Well-Known Member
Feb 15, 2017
Windows-wise, the latest would be Fusion_ioMemory_VSL_3.2.15.1699_x64.exe, and you'd have to update the firmware as well, to the 3.2.14 firmware. You'd need to do a little unzipping of the firmware file, modding the INFO file inside, and adding info from the latest 3.2.8 IBM-released firmware if your card isn't in the 3.2.14 firmware.

You'd gain adaptive flashback as a feature which can protect from nand page failures and erase block failures. There is an upgrade path to follow though (for other people reading this post that might be on older 2.x versions): 1.2.4 -> 1.2.7 -> 1.2.8 -> 2.1.0 -> 2.3.1 -> 3.2.15

-- Dave
 

Patrick_M

New Member
Mar 30, 2017
You'd gain adaptive flashback as a feature which can protect from nand page failures and erase block failures.
Oh, that does seem pretty interesting.

Now regarding the migration, do you have any ideas? I have moved Windows RAID arrays from one system to another without issue, but this seems potentially more complicated given the additional software and drivers.
 

acquacow

Well-Known Member
Feb 15, 2017
No migration; just copy all your data off to another device and start working on upgrading the ioDrive. You will have to format the card, etc., so all data on it will be lost.
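Once your data is off, the reformat is the usual VSL dance. /dev/fct0 is typically the first card's control device, but that's an assumption; check fio-status for yours:

```shell
# Detach, format (destructive), re-attach. The device name is an assumption;
# confirm it with fio-status before running any of this.
if command -v fio-format >/dev/null 2>&1; then
    fio-detach /dev/fct0
    fio-format /dev/fct0     # wipes everything on the card
    fio-attach /dev/fct0
else
    echo "VSL utilities not installed here"
fi
```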
 

ShadowChaser

New Member
Jun 22, 2022
I recently suffered a LEB map error on an ioScale 3.2TB, and after reading through some related posts I determined that the next course of action before declaring a hardware failure is to run fio-sure-erase on the device and see if a format resolves the issue afterwards. How long does a full device erase take? Is the limiting factor the card itself or the hardware of the platform it's installed in? The documentation mentions potentially waiting hours, but mine has been stuck at 0% for half a day.
 

acquacow

Well-Known Member
Feb 15, 2017
826
475
63
44
If it's stuck at zero, it's stuck, not gonna complete.

You can try loading the driver in minimal mode and then erasing it and see if that helps (I think it's still possible to sure-erase in minimal mode...)
 

ShadowChaser

New Member
Jun 22, 2022
Thanks for the info. I got in way over my head with these cards, so I don't know what you mean by this. The documentation for 4.1.2 suggests that I can set a flag to 1 in the iomemory.conf file, but I don't have the user guide for 3.1.16 and I'm not sure if it's the same for the older driver.
 

ShadowChaser

New Member
Jun 22, 2022
Re: the above. I was able to set the device to minimal mode, but fio-format and fio-sure-erase are not applicable in minimal mode, so I guess I'm out of luck there.
 

ShadowChaser

New Member
Jun 22, 2022
After mucking around a little more and checking logs, the detailed errors I'm getting are:
fio-attach: "failed in reading eb headers at EB 826, corrupt eb header (read failed)." and "Failed to create device: Invalid argument (-22)."
fio-format: "failed in reading eb headers at EB 826, corrupt eb header (read failed)." and "cannot recover existing LEB records."

fio-sure-erase remained stuck at 0%, presumably because it can't get past these errors either. If it's dead hardware, that sucks, since this drive only had a few TB written to it.