Fusion-io ioDrive 2 1.2TB Reference Page

zxv

The more I C, the less I see.
Sep 10, 2017
153
51
28
Great,
thanks acquacow, got it now.

In the future I may be using the card in Ubuntu also, so I would be interested to know how to get it working in 18.04 Ubuntu. So if you can share the patches needed, that would be great.
Here's a script that should install the drivers on Ubuntu 18.04.

Note: This only installs the driver for the currently running kernel version, and two versions of the driver can't be installed side by side because some of the files conflict. So after installing a new kernel, you have to rebuild the driver for the new kernel version, remove the previous driver package (due to the conflicts), and install the newly built one.

Code:
sudo apt-get install -y gcc fakeroot build-essential debhelper rsync
sudo apt-get install linux-headers-generic # ubuntu
sudo apt-get install -y libelf-dev
tar xzf iomemory-vsl_3.2.16.1731-1.0.tar.gz
cd iomemory-vsl-3.2.16.1731
cp ./root/usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc63_libkfio.o.cmd ./root/usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc73_libkfio.o.cmd
cp ./root/usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc63_libkfio.o_shipped ./root/usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc73_libkfio.o_shipped
patch -p0 <<EOF
--- fio-driver.spec.~1~ 2018-08-15 16:07:19.000000000 -0500
+++ fio-driver.spec     2019-03-03 08:00:54.949738642 -0600
@@ -328,8 +328,8 @@
 /usr/src/iomemory-vsl-3.2.16/include/fio/port/linux/ktypes.h
 /usr/src/iomemory-vsl-3.2.16/include/fio/port/linux/utypes.h
 /usr/src/iomemory-vsl-3.2.16/include/fio/port/gcc/align.h
-/usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc63_libkfio.o.cmd
-/usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc63_libkfio.o_shipped
+/usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc73_libkfio.o.cmd
+/usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc73_libkfio.o_shipped


 %changelog
--- ./debian/iomemory-vsl-source.install.~1~    2018-08-15 16:54:59.000000000 -0500
+++ ./debian/iomemory-vsl-source.install        2019-03-03 08:46:38.258317811 -0600
@@ -119,6 +119,7 @@
 usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc54_libkfio.o.cmd
 usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc53_libkfio.o.cmd
 usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc63_libkfio.o.cmd
+usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc73_libkfio.o.cmd
 usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc41_libkfio.o.cmd
 usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc44_libkfio.o.cmd
 usr/src/iomemory-vsl-3.2.16/kfio/.x86_64_cc48_libkfio.o.cmd
@@ -128,5 +129,6 @@
 usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc41_libkfio.o_shipped
 usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc48_libkfio.o_shipped
 usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc63_libkfio.o_shipped
+usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc73_libkfio.o_shipped
 usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc44_libkfio.o_shipped
 usr/src/iomemory-vsl-3.2.16/kfio/x86_64_cc49_libkfio.o_shipped
EOF
dpkg-buildpackage -b -uc -us
cd ..
apt remove -y iomemory-vsl-*-generic
dpkg -i iomemory-vsl-$(uname -r)_3.2.16.1731-1.0_amd64.deb
lsmod|grep iomem
sed '/^#/d;/^$/d' /etc/sysconfig/iomemory-vsl
sed -i 's/^#*ENABLED=1/ENABLED=1/' /etc/sysconfig/iomemory-vsl
systemctl restart iomemory-vsl
systemctl status -l iomemory-vsl
fio-status
 

zxv

The more I C, the less I see.
Sep 10, 2017
153
51
28
Thanks!

Any chance you've dealt with compiling this module on 5.x kernels?
I did start researching options for newer kernels, and I vaguely recall reading about changes in kernel interfaces that affect the vendor release. That suggested it would require more significant code changes, which I'm not ready to take on without much more experience with the card and some baseline 'known good' benchmarks for comparison. That's the main reason I haven't explored it.

There is a github repo that says it's for later kernels: snuf/iomemory-vsl
I haven't tried it.

Note that the github repo is branched from an older vendor version and has done significant code refactoring, which could make merging in newer vendor changes error prone. That makes it a hard decision as to where to invest time.
 

AskFor

New Member
Mar 23, 2019
9
0
1
Alright, people keep asking me, so...example time.

I have two bundles and two cards; getting them to 3.2.11 was easy, as both Fusion-io and Dell supported up to that version.

# fio-status -F adapter.part_number_pa /dev/fct0
PA004151-017_5
Hi @acquacow!
My question: after cross-flashing, will my Cisco card get the original SanDisk part number (in fio-status output), or will it keep the older (branded) part number?

It's important for ESXi (I think). In the ESXi compatibility list I see SanDisk supported up to 6.7, but Cisco only up to version 5.5.

Thank you!
 

acquacow

Well-Known Member
Feb 15, 2017
564
293
63
38
The support is only tied to the driver version.

If you mod the firmware file appropriately, you can run the latest firmware on the cisco card as well as the latest supported driver for ESXi.

Everything will work.
 

I_D

Member
Aug 3, 2017
81
20
8
109
Does anyone know where to order a few LP brackets for iodrive 2's? (or other brackets that match the holes)

also: thanks acquacow for the awesome info here!
 

PhytochromeFr

New Member
Apr 8, 2019
1
0
1
CAUTION!
UEFI boot support (the EFI option ROM) is removed in firmware version 4.3.0 and above.
The latest firmware that supports UEFI boot is 4.2.5.
 

nerdalertdk

Fleet Admiral
Mar 9, 2017
151
63
28
::1
Hmm, I've never tried pushing the HP stuff onto an ioDrive. I've always gone the other direction.

There were bios updates for the HP servers that fixed the fan issues with the Fusion-io/SanDisk firmware.

I suppose I could tear into the HP firmware and look and see what stands out.
Ohh please do :) but for the PX600 series :D
 

Oddworld

Member
Jan 16, 2018
56
28
18
120
Any thoughts or feedback regarding active vs. passive cooling?

I am considering installing an ioDrive II in a workstation (rather than a traditional server). The workstation has case airflow, but nothing like the high volume of air screaming through a server. The drive would have very low I/O requirements, primarily WORM (Write-Once-Read-Many; for example: gaming, movies, media, etc.). Obviously this doesn't fully utilize the abilities of the drive, but its $/GB is fantastic compared to other SSD storage.

Question:

(1) Would you recommend installing a dedicated fan near the ioDrive II to keep it cool, or would the regular case fans be sufficient?

(2) At what temperature should I begin to worry? It's around 50-55 C without a fan.
 

Marsh

Moderator
May 12, 2013
2,273
1,084
113
@Oddworld

An ioDrive works fine in a normal tower case with a slow front 120mm fan + a 92mm exhaust fan, in my experience.

There was a discussion regarding temperature and fans on a previous page.
 

acquacow

Well-Known Member
Feb 15, 2017
564
293
63
38
(2) At what temperature should I begin to worry? It's around 50-55 C without a fan.
The spec is ~300LFM of airflow, but the regular ioDrive IIs use an industrial FPGA that is good for 100C.

The drive will start to throttle writes past 80C-ish and will eventually take itself offline around 95C or so.

A fan somewhere near the drive isn't a bad idea.
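If you want to keep an eye on the temperature, fio-status can be polled from a script. This is just a sketch: the exact "Internal temperature" line format varies between driver versions, so check your own fio-status output and adjust the pattern before relying on it:

```shell
#!/bin/sh
# Sketch: warn when the ioDrive runs hot. Assumes `fio-status -a`
# prints a line like "Internal temperature: 52.17 degC, max 55.12 degC"
# (format varies by driver version -- verify against your own output).
THRESHOLD=75   # write throttling reportedly starts around 80C

# Split on runs of colons/spaces; field 3 is the current temperature
temp=$(fio-status -a | awk -F'[: ]+' '/Internal temperature/ {print int($3); exit}')

if [ "${temp:-0}" -ge "$THRESHOLD" ]; then
    echo "WARNING: ioDrive at ${temp}C (threshold ${THRESHOLD}C)"
fi
```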
 

tx12

New Member
May 17, 2019
18
21
3
Hello all!

For the first time, I've bought a Cisco-branded gen3 ioMemory card.
And it has some strange behavior: the driver loads VERY slowly. There are tens of seconds of silence even with auto-attach disabled on the command line (auto_attach=0). The drive's LED flashes the whole time and stays on once loading completes. It looks like some kind of drive rescan is performed every time the driver loads.
Could it be some specific feature of Cisco-branded devices?

Also, after upgrading from 4.2.5 to 4.3.5 I've got a bunch of "Pad 3 shows trim mismatch at reg 0x4b expected 0x20 actual 0x00." messages in syslog.
This situation is explained here:
HPE Support document - HPE Support Center

But in my case the 4.2.5 driver wasn't complaining at all, and the newer drivers don't show "Invalid mixed NAND configuration." All messages are limited to the trim mismatch infos. After that (and after the long driver load), the drive attaches and behaves normally.
What's the root cause of this problem? Is it some kind of non-fatal NAND chip issue?
 

tx12

New Member
May 17, 2019
18
21
3
I am considering installing a IOdriveII into a workstation (rather than a traditional server).
Please note that ioMemory doesn't support advanced power-saving features like suspend. If your machine goes into suspend, the drive won't be able to wake up properly. I'm not sure if the ioMemory driver can block suspend on Windows, but a Linux machine will go into suspend without any warning, and your ioMemory will be inaccessible after wakeup. Suspend/resume is like a power loss for an ioDrive, so the next boot will require a drive rescan, which can take anywhere from tens of seconds to tens of minutes.
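If you're on Linux and worried about accidental suspends, one possible mitigation (sketched below, untested) is a systemd sleep hook that detaches the card before suspend and re-attaches it on resume, using the vendor's fio-detach/fio-attach utilities. Whether a clean detach actually makes suspend survivable on your kernel would need verification, and /dev/fct0 assumes a single card:

```shell
#!/bin/sh
# Hypothetical sleep hook, e.g. /lib/systemd/system-sleep/iomemory.
# systemd calls it with "pre" before suspending and "post" on resume.
# Assumes the fio-detach/fio-attach utilities from the vendor package
# and a single card at /dev/fct0 -- untested, verify before relying on it.
case "$1" in
    pre)
        fio-detach /dev/fct0    # cleanly detach before suspend
        ;;
    post)
        fio-attach /dev/fct0    # re-attach on resume
        ;;
esac
```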
 

acquacow

Well-Known Member
Feb 15, 2017
564
293
63
38
I have never had any issues using windows suspend with any of my ioDrives.
 

tx12

New Member
May 17, 2019
18
21
3
Maybe it's Linux-specific, but from gen1 to gen3, every accidental suspend is a small disaster. It may be possible to handle a proper detach and re-enumeration, but it definitely doesn't support suspend on Linux out of the box.
 

tx12

New Member
May 17, 2019
18
21
3
BTW, @acquacow, maybe you know how to restore a gen3 ioDrive with lost lebmap information? Is it possible without internal tools?