Fusion-io ioDrive II - 1.2TB+ drives, 0.09 or 0.08/GB

acquacow

Well-Known Member
Feb 15, 2017
Set your thread count to the number of cores you have in your PC. You aren't going to max that thing out with a single thread and 4k writes.

Gotta use a 1M block size and a thread or two if you want to max out the benchmark.

Also, make sure your power plan in Windows is set to max performance.
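As a rough sketch of those settings in an fio job file (the fio benchmarking tool, no relation to Fusion-io; the device path and numjobs value are placeholders you'd adjust to your own box):

```ini
; Hypothetical fio job illustrating the advice above:
; large blocks and multiple workers, not a single 4K thread.
[global]
ioengine=libaio        ; use windowsaio on Windows
direct=1
bs=1M                  ; 1M block size
iodepth=32
runtime=60
time_based=1

[seq-write]
rw=write
numjobs=8              ; roughly one job per CPU core
filename=/dev/fioa     ; placeholder device -- destructive, double-check!
```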
 

Jake77

New Member
Mar 16, 2019
Hi acquacow,

Any updates on UEFI booting? Have you had any time to check, or do you have a card available you could try it on?
It would be really great to see that working :)

I mean, the uefi.rom file is in the firmware file, and there are definitions in the INFO file that show how to apply it, but it was kinda experimental for some OEM workstation stuff. It was set up for our ioXtreme cards so that they could be bootable in workstations, but the FPGAs should all accept that ROM.

I'm not sure if there was a separate OEM-side uefi image that the workstations would have shipped with/etc... someone would have to find that, extract it, then cram it into their own bios/etc...

When updating the cards, you can run fio-update-iodrive with --enable-uefi or --disable-uefi, as well as --optrom-clear to clean out any extra stuff. I just haven't played with it much, and I'm not sure what state the cards end up in when you mess up.
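A hedged sketch of what that sequence might look like, using only the flags mentioned above. The firmware filename is a placeholder, the argument order is an assumption, and per the warning above this could leave a card in a bad state:

```shell
# Sketch only -- unverified, run at your own risk.
# Firmware path is a placeholder; use the file for your card.

# Clear out any leftover option-ROM data first:
fio-update-iodrive --optrom-clear /path/to/fusion_firmware.fff

# Flash with the UEFI option ROM enabled:
fio-update-iodrive --enable-uefi /path/to/fusion_firmware.fff

# To back it out later:
fio-update-iodrive --disable-uefi /path/to/fusion_firmware.fff
```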

I need someone to send me a bricked card so that I can try to recover it and then play with options =P My cards all have data I care about currently. Maybe I can find a cheap 320GB ioDrive II and figure it all out.
Anyway, thanks for all the very informative posts you have done on this forum regarding this card.
 

acquacow

Well-Known Member
Feb 15, 2017
I still don't have a card that I can risk bricking, nor do I want to break any of my BIOS images on my motherboards trying to push the UEFI driver into them.
 
Sep 6, 2017
NY/NJ/PA Tri-state
So Unraid is Slackware-based, and as such is using a 4.x kernel right now. You'd have to go into the driver download section for Fedora or another distro that ships a 4.x kernel and grab the iomemory-vsl-3.2.15.1699-1.0.src.rpm that is available there.

I'd probably stand up a development Slackware VM with the kernel headers/build environment set up and use that to build your kernel module for the ioDrives.
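A rough sketch of that build flow on the VM (assumes rpm/rpmbuild tooling and headers for the target kernel are installed; the exact output paths vary by distro):

```shell
# Sketch: build the VSL kernel module from the source RPM on the build VM.
# Needs rpmbuild plus headers/build env for the kernel you're targeting.
uname -r    # note the kernel you are building for

# Rebuild the source RPM into a binary kernel-module package:
rpmbuild --rebuild iomemory-vsl-3.2.15.1699-1.0.src.rpm

# The binary RPM typically lands under ~/rpmbuild/RPMS/x86_64/;
# install or extract it on the Unraid box, then load the module:
modprobe iomemory-vsl
```

This is hardware- and kernel-specific, so treat it as an outline rather than a recipe.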

As someone has already stated, if you update Unraid, that ioDrive kernel module won't load, and you'll have to build a new one for your newer kernel before the drives come back online.

You can set stuff up with dkms to auto-rebuild on new kernel updates, but that can sometimes be a bit of a learning curve...
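For the dkms route, a hypothetical dkms.conf sketch (module name taken from the packages above; the MAKE line and install location are assumptions that would need adjusting to the real source tree):

```shell
# Hypothetical dkms.conf for the VSL module -- illustrative only.
PACKAGE_NAME="iomemory-vsl"
PACKAGE_VERSION="3.2.16.1731"
BUILT_MODULE_NAME[0]="iomemory-vsl"
DEST_MODULE_LOCATION[0]="/kernel/drivers/block"
MAKE[0]="make KERNEL_SRC=/lib/modules/${kernelver}/build"
CLEAN="make clean"
AUTOINSTALL="yes"   # rebuild automatically on new kernel installs
```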

-- Dave
I know I'm resurrecting an old thread and topic here, but I'm adding a feature request to Unraid and talking with the admin/dev about compiling Fusion-io drivers for gen2/gen3 Fusion-io cards.

Unraid admin Tom:
"...We use slackware packages but we keep up with kernel development. For example just released Unraid 6.8.0-rc7 is using latest Linux stable release 5.3.12. Upcoming Unraid 6.9 will no doubt use kernel 5.4.x.

It would be nice if these drivers were merged into mainline - ask Dave if that would be possible. Otherwise a vanilla set of driver source and Makefile is all we need if he can get it to compile against latest Linux kernels."


I assumed SUSE would be the closest match and the latest would be most stable, but I notice that the version you suggested is different from what I suggested to the Limetech op (see below). I'm not familiar with the Fusion-io VSL drivers and their dependencies, so any direction you can give me would be greatly appreciated.

SX300/SX350/PX600 Linux_sles-12 driver v4.3.6 20191116 (current)
- SRC -> iomemory-vsl4-4.3.6.1173-1.src.rpm
- BIN -> iomemory-vsl4-3.12.49-11-default-4.3.6.1173-1.x86_64.rpm
-> iomemory-vsl4-4.4.21-69-default-4.3.6.1173-1.x86_64.rpm
-> iomemory-vsl4-4.4.73-5-default-4.3.6.1173-1.x86_64.rpm
- Utility -> fio-preinstall-4.3.6.1173-1.x86_64.rpm
-> fio-sysvinit-4.3.6.1173-1.x86_64.rpm
-> fio-util-4.3.6.1173-1.x86_64.rpm

ioDrive/ioDrive2/ioDrive2Duo/ioScale Linux_sles-12 driver v3.2.16 20180912 (current)
- SRC -> iomemory-vsl-3.2.16.1731-1.0.src.rpm
- BIN -> iomemory-vsl-4.4.21-69-default-3.2.16.1731-1.0.x86_64.rpm
-> iomemory-vsl-4.4.73-5-default-3.2.16.1731-1.0.x86_64.rpm
- Utility -> fio-common-3.2.16.1731-1.0.x86_64.rpm
-> fio-preinstall-3.2.16.1731-1.0.x86_64.rpm
-> fio-sysvinit-3.2.16.1731-1.0.x86_64.rpm
-> fio-util-3.2.16.1731-1.0.x86_64.rpm
 

josh

Active Member
Oct 21, 2013
Is a single 3.8TB drive of this better than multiple 400GB HGST SAS SSDs?
My current Ceph cluster is 5 nodes with 3x 400GB each; I'm thinking of just replacing it with 3 nodes with 1x 3.8TB each?

Doing this would mean I have to give up SAS drives entirely as I only have 1 PCIe slot and I would have to swap out the HBA for this.
 

acquacow

Well-Known Member
Feb 15, 2017
Honestly, at this point, I'm not really sure I recommend them for any new installs due to the complete lack of future support. There's no guarantee that they will compile with any newer kernels, and they are all going to be at the end of their 5-year enterprise support windows for any customers that last purchased these, etc.

If you're fine with running CentOS/RHEL 7.x forever at home as a storage box, fine, do it, but if you think you'll want newer features/etc, I'd probably stick with something that is a tad more future-proof.
 