Fusion-io ioDrive 2 1.2TB Reference Page


acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
Everything is up on Home. Create an account, read the user guide, and download the driver/firmware.

Keep in mind the driver must match the firmware version, so you'll have to update the firmware to match whatever driver you are using.

-- Dave
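Since the driver and firmware must be at matching versions, it's worth comparing the two before loading the driver. On real hardware both values come from `fio-status -a`; the sketch below just hard-codes hypothetical version strings in place of that output.

```shell
#!/bin/sh
# Hypothetical version strings; on a real card these would come from
# parsing `fio-status -a` output rather than being hard-coded.
driver_ver="3.2.15.1699"
firmware_ver="3.2.15.1699"

if [ "$driver_ver" = "$firmware_ver" ]; then
    echo "driver/firmware match: $driver_ver"
else
    echo "MISMATCH: driver $driver_ver vs firmware $firmware_ver" >&2
fi
```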
 

benwis

New Member
May 3, 2018
2
0
1
29
Everything is up on Home. Create an account, read the user guide, and download the driver/firmware.

Keep in mind the driver must match the firmware version, so you'll have to update the firmware to match whatever driver you are using.

-- Dave
Great. Looks pretty straightforward. The only trouble I have is that the supported Linux versions are really outdated. Except for RHEL, the Ubuntu and Fedora releases are well behind. Does anyone know how it works on more modern operating systems?
 

JustinH

Active Member
Jan 21, 2015
124
76
28
48
Singapore
You rebuild the driver from source for your kernel version.
There are also a few projects on GitHub that port the drivers to newer kernel releases. I can dig up the link if compiling doesn't help. (A plain rebuild doesn't work for the newest kernel versions. Too many changes on the kernel side...)


Sent from my iPhone using Tapatalk
 
  • Like
Reactions: lowfat

lowfat

Active Member
Nov 25, 2016
131
91
28
40
Rebuilding the drivers from source isn't difficult. SanDisk actually has fantastic instructions on their site: the 'ioMemory VSL 3.2.15 User Guide for Linux' has step-by-step instructions on how to rebuild the drivers.

I know you can get the cards running on Ubuntu 16.04 by rebuilding the drivers. I tried Proxmox and Ubuntu 18.04 but wasn't able to get them to work.

Hopefully today I'll try these on Proxmox.
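For Debian-based distros, the guide's rebuild essentially comes down to generating binary packages from the source tarball. The sketch below echoes each command instead of executing it (the tarball and package file names are assumptions based on the 3.2.15.1699 release, not taken from the guide verbatim); swap `run` for direct execution on a real box.

```shell
#!/bin/sh
# Dry-run sketch: prints each step instead of executing it.
# Tarball/package names are assumptions for the 3.2.15.1699 release.
run() { echo "+ $*"; }

run apt-get install -y gcc make dpkg-dev "linux-headers-$(uname -r)"
run tar xzf iomemory-vsl_3.2.15.1699.tar.gz
run cd iomemory-vsl-3.2.15.1699
run dpkg-buildpackage -b -uc             # build the binary .deb packages
run dpkg -i ../iomemory-vsl_3.2.15.1699_amd64.deb
```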
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
I had some issues this last week trying to rebuild the 3.2.15 source RPM on RHEL 7.5. No love there due to some pointer changes that seem to be no longer compatible.

I'm back on the previous kernel, but have to wait for SanDisk to update the source package. 1699 is the current source package version, so keep a lookout for a newer one...
 
  • Like
Reactions: BLinux

lowfat

Active Member
Nov 25, 2016
131
91
28
40
Was able to get an ioDrive Duo working under Proxmox. Used the above drivers. Also didn't realize that you have to manually download the pve headers found here, which is likely why I couldn't get it to work previously.
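On Proxmox the kernel headers ship as `pve-headers-*` packages rather than the usual `linux-headers-*`, so a driver build fails until they're installed. A dry-run sketch of the likely steps (package names assumed for a stock Proxmox VE install, echoed rather than executed):

```shell
#!/bin/sh
# Dry-run sketch: prints commands instead of executing them.
run() { echo "+ $*"; }

# Proxmox kernels need pve-headers, not the stock linux-headers package.
run apt-get install -y "pve-headers-$(uname -r)" build-essential
# Then rebuild the iomemory-vsl driver against those headers.
run dpkg-buildpackage -b -uc
```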

 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,669
1,081
113
artofserver.com
I had some issues this last week trying to rebuild the 3.2.15 source RPM on RHEL 7.5. No love there due to some pointer changes that seem to be no longer compatible.

I'm back on the previous kernel, but have to wait for SanDisk to update the source package. 1699 is the current source package version, so keep a lookout for a newer one...
yup. i had the same problem and had to boot the previous kernel. will WD/SanDisk continue to support this going forward?
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
No idea. WD seems to have EOL'd all the legacy fusion-io/sandisk stuff once they were allowed to fully merge with HGST.

I'd bet RHEL 7 stuff keeps getting updated for a while, but that might be the last major version. They still have to maintain/honor all 5 year support agreements with major customers.
 
  • Like
Reactions: cactus

warlockedyou

Member
Sep 4, 2016
217
18
18
I just purchased a Fusion-io ioDrive2 2.4TB and I forgot how to make the two separate drives show up as one. Is this an OS-level (LVM) thing? Or is this a Fusion-io software-level thing?
In the past, with a different 1.2TB Fusion ioDrive, I had done this on CentOS and I thought I used
Code:
[root@esxi01:~] vgcreate
-sh: vgcreate: not found
and
Code:
[root@esxi01:~] pvcreate
-sh: pvcreate: not found
to create one volume consisting of both drives, but it seems that isn't working with my ESXi 6.0 install. I also tried
Code:
[root@esxi01:~] fio-update-iodrive --merge
fio-update-iodrive: unrecognized option '--merge'
Fusion fio-update-iodrive utility (3.2.15.1699 pinnacles@f0f84521e1b1)
  Copyright (c) 2006-2017 Western Digital Corporation or its affiliates.
Summary: fio-update-iodrive is a tool for updating ioDrive firmware
   Note: fio-update-iodrive MUST NOT be used while the ioDrive is attached

Usage: fio-update-iodrive [OPTION] ffffile
   The default action is to upgrade all Fusion devices with firmware
   contained in the ffffile.
   -h, --help                      help message (this screen)
   -f, --force                     force upgrade (bypass all validation)
                                     (may result in data loss)
   --bypass-ecc                    bypasses ECC compatibility validation
   --bypass-barrier                bypasses barrier version validation
   --bypass-uptodate               bypasses already up to date validation
   -y, --all-yes                   confirm all warning messages
   -d, --device                    specify a device to update (ex: /dev/fct0)
   -p, --pretend                   show what updates would be done
                                     (firmware will not be modified)
   -l, --list                      list the firmware available in the archive
   -c, --clear-lock                clears locks placed on a device
   -q, --quiet                     quiet mode - do not show update progress
   -v, --version                   display version information
that I found on the internet but that's not a valid parameter.

At this point, I am thinking of taking all these pesky Fusion ioDrives, putting them into a separate Linux (Ubuntu or CentOS) box, and exposing them to ESXi as iSCSI over fiber.

In total, I have ~5TB
  1. 1.2TB Fusion ioDrive
  2. 1.2TB Fusion ioDrive
  3. 2.4TB Fusion ioDrive2

What do you guys think about this approach? Is it better to have this installed on the ESXi host? Or is it better to have it running separately on a "flash storage" server?
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
With the 2.4TB Duo, it is two physical 1.2TB drives; there's no way to combine them other than software RAID 0.

The firmware does have --split and --merge functions, but they operate on a single device. You could split your drives into four 640GB drives, or use them as two 1.2TB drives.

This is not supported in ESX.

pvcreate and lvcreate are features of LVM in Linux, which ESX is not. I'm not aware of any software RAID methods in ESX; the only thing you can do is make them both extents in a single volume, but that's probably not recommended.


-- Dave
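On a Linux host (unlike ESX), joining the two halves of a Duo with LVM would look roughly like this. The device names (/dev/fioa, /dev/fiob) are the typical Fusion-io block devices but should be verified with fio-status first; the volume names are placeholders, and the sketch only echoes the commands so it can be reviewed before running anything destructive.

```shell
#!/bin/sh
# Dry-run sketch of the LVM steps; prints each command instead of running it.
# /dev/fioa and /dev/fiob are assumed device names - check fio-status first.
run() { echo "+ $*"; }

run pvcreate /dev/fioa /dev/fiob            # mark both halves as physical volumes
run vgcreate vg_fio /dev/fioa /dev/fiob     # one volume group spanning both
run lvcreate -l 100%FREE -n lv_fio vg_fio   # single logical volume across them
run mkfs.xfs /dev/vg_fio/lv_fio
```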
 

warlockedyou

Member
Sep 4, 2016
217
18
18
With the 2.4TB Duo, it is two physical 1.2TB drives; there's no way to combine them other than software RAID 0.

The firmware does have --split and --merge functions, but they operate on a single device. You could split your drives into four 640GB drives, or use them as two 1.2TB drives.

This is not supported in ESX.

pvcreate and lvcreate are features of LVM in Linux, which ESX is not. I'm not aware of any software RAID methods in ESX; the only thing you can do is make them both extents in a single volume, but that's probably not recommended.


-- Dave
Ahhh that makes more sense. The past two drives I had were regular ioDrives, not the Duo versions.

What do you think about moving all these Fusion-io drives to an Ubuntu box and then using LVM?
Ubuntu --> iSCSI --> ESXi
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
You can do it if you have some good 10GbE adapters.

I honestly wouldn't bother and would keep them as separate datastores.
 

warlockedyou

Member
Sep 4, 2016
217
18
18
You can do it if you have some good 10GbE adapters.

I honestly wouldn't bother and would keep them as separate datastores.
I do have 10Gig NICs installed in it. Why would you recommend keeping the cards as individual 600GB drives and not combining them into a 1.2TB volume? The only reason I was doing it is so that I have an easier time allocating space to VMs from a single datastore instead of one datastore per drive.
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
Because if you stripe them and one fails for some reason, you're going to lose all of your data instead of half of it.

There's no perf benefit to putting them in VMFS extents either; it just makes future changes uglier, so no reason to do that.

If you're going to be diligent with your backups, fine: drop them in a Linux box, stripe them together with mdadm, share it out with targetcli/tgt, and have at it.
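The stripe-and-export path described above would look roughly like this. It's a dry-run sketch: the device names and the IQN are placeholders (not from the thread), and each command is echoed for review rather than executed.

```shell
#!/bin/sh
# Dry-run sketch: mdadm RAID 0 stripe plus iSCSI export via targetcli.
# Device names and the IQN below are placeholder assumptions.
run() { echo "+ $*"; }

# Stripe the two Fusion-io devices together.
run mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/fioa /dev/fiob

# Export the stripe as an iSCSI LUN with targetcli.
run targetcli /backstores/block create name=fio0 dev=/dev/md0
run targetcli /iscsi create iqn.2018-08.local.lab:fio0
run targetcli /iscsi/iqn.2018-08.local.lab:fio0/tpg1/luns create /backstores/block/fio0
```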
 
  • Like
Reactions: warlockedyou

warlockedyou

Member
Sep 4, 2016
217
18
18
Because if you stripe them and one fails for some reason, you're going to lose all of your data instead of half of it.

There's no perf benefit to putting them in VMFS extents either; it just makes future changes uglier, so no reason to do that.

If you're going to be diligent with your backups, fine: drop them in a Linux box, stripe them together with mdadm, share it out with targetcli/tgt, and have at it.
Thank you for the clarification, I understand now.
I will use the 1.2TB Fusion ioDrive as one big drive, which will hold media temporarily while it's being processed but won't hold any VMs.
On the other hand, I will use the 2.4TB ioDrive2 Duo as two separate drives and expose them to ESXi as two separate 1.2TB disks.
 

ArcturusSix

New Member
Nov 18, 2015
3
6
3
43
Regarding the rebuild of 3.2.15 source RPM on RHEL/CentOS 7.5, patching one line of the source code fixes the build error. Details here: CentOS 7 + FusionIO users: do not upgrade to kernel-3.10.0-862.2.3.el7 yet! I've been running it stable for the past few days on CentOS 7.5.

I had some issues this last week trying to rebuild the 3.2.15 source RPM on RHEL 7.5. No love there due to some pointer changes that seem to be no longer compatible.

I'm back on the previous kernel, but have to wait for SanDisk to update the source package. 1699 is the current source package version, so keep a lookout for a newer one...
 

ThomasDDX

New Member
Aug 15, 2018
1
1
3
Great info in this thread. I got a 2.4TB ioDrive2 Duo. Plugged it in, created an account with SanDisk, downloaded the latest software, installed it, found the drive was already on the current firmware, issued the power limit command to give it 50 watts, and initialized the drives in Windows 10 as a stripe. Off to the races. Took about half an hour total.
 
  • Like
Reactions: Tha_14