Fusion-io ioDrive 2 1.2TB Reference Page


Bert

Well-Known Member
Mar 31, 2018
With v5.6.0 checked out, 'make dpkg' fails with a build error:

from /home/temp/Git/iomemory-vsl/driver_source/port-internal.h:62,
from /home/temp/Git/iomemory-vsl/driver_source/kblock.c:32:
/usr/src/linux-headers-4.19.0-16-common/include/linux/spinlock.h:377:57: note: expected ‘spinlock_t *’ {aka ‘struct spinlock *’} but argument is of type ‘spinlock_t **’ {aka ‘struct spinlock **’}
static __always_inline void spin_unlock_irq(spinlock_t *lock)
~~~~~~~~~~~~^~~~
/home/temp/Git/iomemory-vsl/driver_source/kblock.c: In function ‘kfio_disk_stat_write_update’:
/home/temp/Git/iomemory-vsl/driver_source/kblock.c:510:41: error: macro "part_stat_inc" requires 3 arguments, but only 2 given
part_stat_inc(&gd->part0, ios[1]);
^
/home/temp/Git/iomemory-vsl/driver_source/kblock.c:510:9: error: ‘part_stat_inc’ undeclared (first use in this function); did you mean ‘part_stat_show’?
part_stat_inc(&gd->part0, ios[1]);
^~~~~~~~~~~~~




And yes, build-essential is already installed:

build-essential is already the newest version (12.6).
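
For what it's worth, the errors point to a kernel API mismatch rather than a broken toolchain: on 4.19, part_stat_inc() still takes a leading cpu argument, while the driver source here uses the newer two-argument form, so the v5.6.0 tag appears to target newer kernels. A quick sanity check (paths assume the checkout from this thread):

Code:
uname -r                                  # the kernel you are building against, e.g. 4.19.0-16-amd64
git -C /home/temp/Git/iomemory-vsl tag    # list driver release tags; pick one matching your kernel era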
 

Bert

Well-Known Member
Mar 31, 2018
I don't understand why this is happening. Is there something wrong with my sources, or with my Debian installation? Can someone else build the sources on a Debian machine?
 

gb00s

Well-Known Member
Jul 25, 2018
@Bert

OK, got it working now with 4.19.0-16.

1. cd /home/temp/Git/ and delete the old iomemory-vsl directory
2. git clone https://github.com/snuf/iomemory-vsl
3. cd iomemory-vsl
4. git checkout v4.20.1
5. cd root/usr/src/iomemory-vsl-3.2.16
6. make (takes a few seconds)
7. insmod iomemory-vsl.ko (make sure you stay in /home/temp/Git/iomemory-vsl/root/usr/src/iomemory-vsl-3.2.16; see the persistence note after this list)
8. cd ../../../../
9. mkdir deb && cd deb (for the fio-utils etc.)

10. wget -O fio-common_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/pd2ohfaufhwqc34/fio-common_3.2.16.1731-1.0_amd64.deb?dl=1
11. wget -O fio-firmware-fusion_3.2.16.20180821-1_all.deb https://www.dropbox.com/s/kcn5agi6lyikicf/fio-firmware-fusion_3.2.16.20180821-1_all.deb?dl=1
12. wget -O fio-sysvinit_3.2.16.1731-1.0_all.deb https://www.dropbox.com/s/g39l6lg9of6eqze/fio-sysvinit_3.2.16.1731-1.0_all.deb?dl=1
13. wget -O fio-util_3.2.16.1731-1.0_amd64.deb https://www.dropbox.com/s/57huby17mteg6wp/fio-util_3.2.16.1731-1.0_amd64.deb?dl=1
14. dpkg -i *.deb (should install all 4 packages from above)
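
A note on step 7: insmod only loads the module for the current boot. A minimal sketch for making it persistent across reboots, assuming the build tree above and a standard Debian module layout:

Code:
sudo mkdir -p /lib/modules/$(uname -r)/extra
sudo cp iomemory-vsl.ko /lib/modules/$(uname -r)/extra/
sudo depmod -a
echo iomemory-vsl | sudo tee /etc/modules-load.d/iomemory-vsl.conf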

A 'fio-status -a' should give you this ...

[Screenshot: ioDrive2_install_debian_419016.png (fio-status -a output)]
Let me know if this works for you.
 

Bert

Well-Known Member
Mar 31, 2018
Ta-da!


Disk /dev/fioa: 768.3 GiB, 825000000000 bytes, 1611328125 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 32768 bytes
Disklabel type: gpt
Disk identifier: D4C83D6C-E9BF-4ADD-9263-A1304554D74A

Device Start End Sectors Size Type
/dev/fioa1 2048 1611327487 1611325440 768.3G Microsoft basic data


Thank you very much gb00s!
 

gm0n3y

New Member
May 22, 2021
@gb00s I followed your instructions on Ubuntu 20.04 LTS, but using the latest version from snuf. I have an ioDrive 2 as well. I can see the card in the Disks utility, but creating a partition fails. I'm a bit of a Linux noob, so I followed some tutorials to create and format a partition in the CLI, but still no go. Do you think it would be easier to use ESXi and an Ubuntu VM?
 

Marsh

Moderator
May 12, 2013
Here you go (a non-interactive alternative follows below):

fio-status -a
fdisk -l /dev/fioa
fdisk /dev/fioa   (create a partition: press n, then hit Enter to accept the defaults)

sudo mkfs.ext4 -m 0 -T largefile4 -L fio1 /dev/fioa1
or XFS:
mkfs.xfs /dev/fioa1
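
If the interactive fdisk prompts are awkward, here is a scriptable sketch that does the same thing with parted (device name /dev/fioa as above):

Code:
sudo parted -s /dev/fioa mklabel gpt mkpart primary 0% 100%
sudo mkfs.ext4 -m 0 -T largefile4 -L fio1 /dev/fioa1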
 

gb00s

Well-Known Member
Jul 25, 2018
1,175
586
113
Poland
Just a tip from my side: these drives get a boost with checksums disabled under XFS. If you are running them in a server with ECC RAM this should be no issue. You would do it with mkfs.xfs -m crc=0 -f /dev/fioa1. Maybe it's just my setup, but they do not perform as well with ext4. I also try to overprovision them, even with their high 11 PBW endurance rating (a sketch follows below) ... just my subjective opinion.

Also, perform a full erase/low-level format before using them:
fio-detach /dev/fct0
fio-format /dev/fct0
fio-attach /dev/fct0
then follow @Marsh's steps to partition them ...
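
On the overprovisioning point: fio-format accepts a size argument, so you can format the card below its full capacity and leave the rest as reserve. A hedged sketch (the -s flag and percent syntax are from the VSL utility docs; double-check with fio-format -h on your install):

Code:
fio-detach /dev/fct0
fio-format -s 80% /dev/fct0    # format to 80% of capacity; the remainder becomes reserve
fio-attach /dev/fct0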
 

gm0n3y

New Member
May 22, 2021
Thanks for the help, guys. I did the low-level format to start fresh, and I'm able to see the drive. I did a chown to get permissions. It's still acting kinda funny: I set a mount point at /media/user/fioa and I'm able to create a folder there, but when I tried to run a benchmark the utility said no permissions. I'm going to look into that further and let y'all know my results. Thanks for the tip on XFS; I'll benchmark both and see what I get.
 

gb00s

Well-Known Member
Jul 25, 2018
I would never mount under /media/user/..., but OK, up to you. I never have problems using:
mount /dev/fioa1 /mnt/io2
and then
chown gb00s:gb00s /mnt/io2
Of course, you can mount it via /etc/fstab as well:
/dev/fioa1 /mnt/io2 xfs defaults,noatime 0 0
I know the noatime option may not be necessary with today's XFS, but ... 100 users, 99 opinions ...
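
A small sketch for testing an fstab entry like the one above without rebooting (mount point /mnt/io2 as in the post):

Code:
sudo mkdir -p /mnt/io2
sudo mount -a          # mounts everything in fstab; syntax errors show up here
df -h /mnt/io2         # confirm the filesystem is mounted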
 

gm0n3y

New Member
May 22, 2021
Well, as I said, I'm still very green with Linux, so I'm open to opinions on best practices.
 

bl300

New Member
Jun 6, 2021
I just got this Cisco 1.2TB card and want to use it in my Windows 10 PC.
Could someone please provide step-by-step instructions on how to get it to work?
The drive is brand new with zero writes; I don't want to waste it. Thank you very much.

====================================
C:\Program Files\Common Files\VSL Utils>fio-status.exe /dev/fct0 -F adapter.part_number_legacy
5491-72339-1250G

C:\Program Files\Common Files\VSL Utils>fio-status.exe -a

Found 1 VSL driver package:
4.3.3 build 957 Driver: loaded

Found 1 ioMemory device in this system

Adapter: ioMono (driver 4.3.3)
Cisco UCS 1300GB SanDisk ioMemory SX350, Product Number:PFIOS13002, SN:FIO2002P008
ioMemory Adapter Controller, PN:5491-72339-1250G
Product UUID:205c3db0-ac71-5fb9-99c1-e3953c56fb95
PCIe Power limit threshold: 74.75W
PCIe slot available power: unavailable
Connected ioMemory modules:
fct0: 02:00.0, Product Number:PFIOS13002, SN:FIO2002P008

fct0 Status unknown: Driver is in MINIMAL MODE:
The firmware on this device is not compatible with the currently installed version of the driver
ioMemory Adapter Controller, Product Number:PFIOS13002, SN:1550D0BBD
!! ---> There are active errors or warnings on this device! Read below for details.
ioMemory Adapter Controller, PN:5491-72339-1250G
Microcode Versions: App:0.0.14.0
Powerloss protection: not available
PCI:02:00.0
Vendor:1aed, Device:3002, Sub vendor:1137, Sub device:19a
Firmware v8.9.1, rev 20150611 Public
Geometry and capacity information not available.
Format: not low-level formatted
PCIe slot available power: 75.00W
PCIe negotiated link: 8 lanes at 5.0 Gt/sec each, 4000.00 MBytes/sec total
Internal temperature: 69.40 degC, max 71.86 degC
Internal voltage: avg 1.00V, max 1.01V
Aux voltage: avg 1.80V, max 1.80V
Rated PBW: 4.00 PB
Lifetime data volumes:
Physical bytes written: 0
Physical bytes read : 0
RAM usage:
Current: 0 bytes
Peak : 0 bytes

ACTIVE WARNINGS:
The ioMemory is currently running in a minimal state.


C:\Program Files\Common Files\VSL Utils>
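
The "MINIMAL MODE" line above means the firmware (v8.9.1) doesn't match the installed 4.3.3 driver, so the drive won't attach until one of them is updated. The usual fix is flashing matching firmware with the fio-update-iodrive utility from the VSL package; a hedged sketch, assuming it installed alongside the other utilities (the firmware filename is a placeholder, use the .fff file from the bundle that matches your card):

Code:
cd "C:\Program Files\Common Files\VSL Utils"
fio-update-iodrive.exe <firmware-for-SX350.fff>

Then reboot and re-run fio-status.exe -a; the drive should leave minimal mode once firmware and driver versions match.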
 

dnj

New Member
Jun 8, 2021
Hello, I have some questions about the IBM ioDrive 2 1.2TB:
1. Two of my cards run at 40-50 deg.; the other two heat up to 65-72 deg., and on those the white LED blinks (see the photo in the attachment). The cards otherwise work normally. What does this LED mean? I couldn't find it documented anywhere on the Internet.

2. Another card is in status unknown: driver is in MINIMAL MODE, General channel initialization failure. I tried different drivers and installed both the IBM and SanDisk firmware (correcting the info file). Nothing helps. Maybe someone has encountered this? Thank you in advance for any information.
I apologize for my English.
 

[Attachment: photo of the blinking white LED]

Magister

New Member
Jun 28, 2021
2
1
3
Quoting an earlier exchange:

"BTW, @acquacow, maybe you know how to restore a gen3 ioDrive with lost lebmap information? Is it possible without internal tools?"

@acquacow had replied:
"Well, if you can't get sure-erase to complete (it wipes the leb-map), there might be something else wrong. There might be some more hidden flags for sure-erase that might get it to complete. It doesn't sound like anyone used --purge, which would also wipe the FPGA and brick the card... In the past, I have had to load the drivers into minimal mode and alternate sure-erase and format until one eventually completed. I probably went back and forth ~50 times before I had success."
Well, I know it's 2021 now, but anyway:
I had a very similar problem with an SX350 and managed to recover it.
What I did:
1) Downgraded to the minimum possible firmware and driver, which was 4.2.0. Not sure it's necessary, but that's what I did.
2) Added the iodrive_load_eb_map=0 parameter to the driver and rebooted. That looks like the key: the leb map is not loaded, so it's not preserved on format.
3) Ran fio-sure-erase -p, which now finished without errors.
4) Ran fio-format, which also finished OK.
5) Removed the driver parameter and rebooted.
But fio-read-lebmap still showed an empty map, so I tried to fill the drive with data. That failed, so I repeated fio-sure-erase (without -p) and fio-format several times.
Then, reading the driver logs made me think it might be a driver bug, so I updated to the latest fw/driver 4.3.4, again ran fio-sure-erase (now without -p) and fio-format, and everything is good now.
The only thing is that on load the driver tells me there is no factory bad-block map, but since it works fine now, I don't think it's a problem.
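
A consolidated sketch of that sequence (Linux, VSL4 driver; /dev/fct0 is an assumed device name, and the module option is flagged for use only under Customer Support direction, so this is strictly at-your-own-risk):

Code:
echo 'options iomemory-vsl4 iodrive_load_eb_map=0' | sudo tee /etc/modprobe.d/iomemory-vsl4.conf
sudo reboot
# after the reboot, as in steps 3-5 above:
fio-sure-erase -p /dev/fct0
fio-format /dev/fct0
sudo rm /etc/modprobe.d/iomemory-vsl4.conf   # remove the override again
sudo reboot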
 

Magister

New Member
Jun 28, 2021
Replying to this question:

"I have a card with the same problem. How do you add the iodrive_load_eb_map=0 param to the driver? Could you show an example? Thanks."
I'm using Linux, so I added the following line to /etc/modprobe.d/iomemory-vsl4.conf:
Code:
options iomemory-vsl4 iodrive_load_eb_map=0
Not sure if that's possible on Windows or OS X.

P.S. This option is flagged as "For use only under the direction of Customer Support.", and I'm not Customer Support, so use it at your own risk.
Also, I didn't do extensive testing, just filled it with some random data.
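
For a one-off test you can also pass the option on the modprobe command line instead of editing the conf file; a small sketch (detach the device with fio-detach first if the module refuses to unload):

Code:
sudo modprobe -r iomemory-vsl4
sudo modprobe iomemory-vsl4 iodrive_load_eb_map=0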
 

Marsh

Moderator
May 12, 2013
Thank you.
I am using Linux as well.
One of my SX350 1.6TB cards went bad all by itself while stored in a cardboard box.
Now I have hope of reviving it.
 

Indecided

Active Member
Sep 5, 2015
I have a pair of Cisco-branded 1TB ioMemory PX600s in each of two Cisco C240 M3 servers. They have been doing burn-in for some time now, but I just ran some basic benchmarks and discovered that in each server, one card's read rate is half the other's.
The write rates are pretty similar across all 4 drives at ~1.6GB/s; only the read rates seem to be affected.

Card A : 1.2GB/s
Card B : 2.8GB/s

I've tried fio-format with 512B vs 4096B sectors; 512B gives the slower card a 20% bump to ~1.5GB/s, but that's still short of the ~2.8GB/s the other cards reach.

The only difference I can see is that one card is in an x8 slot and the other in an x16. (TBH, I don't know which is in which.)

However, fio-status reports both are linked at x8 (the cards are x8 after all). Cisco officially accommodates up to 3 ioMemory cards per C240 M3, so that shouldn't be an issue.

I've attached the brief output from fio-status. If there is any other information from the verbose status output that I could provide to help troubleshoot (remotely), that would be great. The servers have been languishing in a DC for a while now, and due to various lockdown efforts, remote is the first approach I need to take.

Any ideas on how to get both drives up to par?


Adapter: ioMono (driver 4.3.7)
Cisco UCS 1000GB MLC Fusion ioMemory PX, Product Number:PFIO1000MP, SN:aaabbbccc
PCIe Power limit threshold: 24.75W
Connected ioMemory modules:
fct0: 05:00.0, Product Number:PFIO1000MP, SN:aaabbbccc

fct0 Attached
ioMemory Adapter Controller, Product Number:PFIO1000MP, SN:aaabbbccc
PCI:05:00.0
Firmware v8.9.9, rev 20200113 Public
1000.00 GBytes device size
Internal temperature: 43.31 degC, max 52.66 degC
Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
Contained Virtual Partitions:
fct0: ID:0, UUID:93d62e59-fc51-4c5e-acc0-a219d8391a02

fct0 State: Online, Type: block device, Device: \\?\PhysicalDrive4
ID:0, UUID:93d62e59-fc51-4c5e-acc0-a219d8391a02
1000.00 GBytes device size

Adapter: ioMono (driver 4.3.7)
Cisco UCS 1000GB MLC Fusion ioMemory PX, Product Number:PFIO1000MP, SN:dddeeefff
PCIe Power limit threshold: 24.75W
Connected ioMemory modules:
fct1: 04:00.0, Product Number:PFIO1000MP, SN:dddeeefff

fct1 Attached
ioMemory Adapter Controller, Product Number:PFIO1000MP, SN:dddeeefff
PCI:04:00.0
Firmware v8.9.9, rev 20200113 Public
1000.00 GBytes device size
Internal temperature: 40.36 degC, max 46.26 degC
Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
Contained Virtual Partitions:
fct1: ID:0, UUID:cd139c56-129d-4aea-9107-4efd6408f2b5

fct1 State: Online, Type: block device, Device: \\?\PhysicalDrive3
ID:0, UUID:cd139c56-129d-4aea-9107-4efd6408f2b5
1000.00 GBytes device size
 

acquacow

Well-Known Member
Feb 15, 2017
(quoting Indecided's post above in full)
Is that the full status? Dump "fio-status -a" as well as "fio-pci-check".

fio-pci-check will dump the PCIe tree and show you the link of each slot and all the switches between it and the CPU. You may have the second card behind the PCH, in a slot that is sharing bandwidth with networking etc...
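
As a cross-check, lspci can confirm each card's negotiated link independently of the Fusion-io tools; a sketch assuming the PCI addresses from the status dump above (05:00.0 and 04:00.0):

Code:
sudo lspci -s 05:00.0 -vv | grep -E 'LnkCap|LnkSta'   # slot capability vs. negotiated link for fct0
sudo lspci -s 04:00.0 -vv | grep -E 'LnkCap|LnkSta'   # same for fct1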