Fusion-io ioDrive 2 1.2TB Reference Page


Patrick

Administrator
Staff member
Dec 21, 2010
Starting a thread on the Fusion-io ioDrive 2 1.2TB cards as they seem to be on the market fairly often now at under $0.50/GB. Hopefully others can chime in with experiences.

Key Specs
Capacity: 1.2TB
Formatted capacity in Windows: ~1.1TB
Random 4k read IOPS (tested via Iometer): 330K - 340K
128KB Sequential Read: 1.5GB/s
Write endurance: 16.26PB
Interface: PCIe 2.0 x8 (works in older servers)

Notes
Actual 4K Random Read IOPS using a Windows 10 Pro test machine were over 330K. That figure is higher than the 2013 rated specs.
[Screenshot: Iometer 4K random read run]
For those wondering why that is an impressive screenshot: it shows one of these cards running over its rated IOPS. It is also pushing almost its entire 1.5GB/s rating while doing a 4K IOPS test (330K IOPS x 4KiB works out to roughly 1.3GB/s) at QD 64, a much lower queue depth than most NVMe drives need to hit their peak.

Installation of the HP/ HPE card worked in Windows 10 Pro without issue by simply installing the driver packs from the WD/ SanDisk/ Fusion-io support site. One item to be careful about with these drives is that they are not boot drives and they do not have in-box drivers in Windows or Linux. As a result, unlike NVMe drives, you will have to install the driver yourself. You can simply register on the WD/ SanDisk/ Fusion-io site to get access to the downloads.
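For a quick sanity check after installing the driver pack, the Fusion-io command-line utilities that come with it include fio-status. A minimal sketch, assuming the utilities ended up on your PATH:
Code:
# Lists each ioDrive the driver sees, plus attach state and firmware
fio-status

# Full detail: temperature, reserve space, endurance used, etc.
fio-status -a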

The performance of these drives falls off much less under heavy writes than the Intel DC P3600's does. For read-optimized workloads, an Intel 750 or P3600 may be a better option. If you are doing heavy data crunching or need a fast read/ write drive for video editing, the ioDrive 2 is still fairly good. It is unlikely that used drives will have anywhere near the 16.26PB of rated write endurance used up.

Pricing
As of Sept 2016 these are now selling regularly for sub $500 for the 1.2TB card. Good pricing is in the $400 to $450 range.

eBay search: Fusion-io ioDrive II 1.2TB
 

Patrick

Administrator
Staff member
Dec 21, 2010
What is the latency like?

E.g. compared to S3500 or S3700?
It is still doing a 4K random read test but showing about 0.5-0.6ms even at 330K IOPS. Generally these latencies are going to be much lower than what SATA drives show.
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
I'm taking away from this that these basically saturate the PCIe 2.0 bus even with 4K. Nice.
 

i386

Well-Known Member
Mar 18, 2016
Germany
Can the ioDrive be used like a normal SATA (or SAS) drive in Windows Storage Spaces?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
During that 'peak 4K test', any idea what the power usage was vs. idle on the card?
 

lowfat

Active Member
Nov 25, 2016
Will these work as a datastore in ESXi? I have an older ioXtreme that doesn't, but as far as I know most of the other drives should.

EDIT: Found the article saying it does.

EDIT2: Ordered a 1.2TB ioScale on eBay. I'm excited.
 

lowfat

Active Member
Nov 25, 2016
A couple of benches of the 1.2TB ioScale I bought. I believe it is nearly the same as the ioDrive 2 but with higher write endurance. Paid $362 USD ($486 CAD after taxes/shipping).

Had to override the PCIe slot power draw as it would otherwise only allow for 25W. Took me forever to find the command, so here it is in case someone else needs it:
Code:
fio-config -p FIO_EXTERNAL_POWER_OVERRIDE [driveserialnumber]:75000
The ioScale





And a 256GB Samsung 950 Pro to compare against.





So sequential reads are slower on the ioScale. But it actually pulls ahead in the QD1 4K reads, which isn't something I expected.

EDIT: Some low queue depth IOmeter comparisons.


This is a 512B QD1 100% read 100% random. The 950 Pro is actually slightly quicker here. 1.3% more IOPS.

ioScale


950 Pro



This is a 1K QD1 100% read 100% random. Here the ioScale pulls ahead. 12.2% more IOPS.

ioScale


950 Pro



This is a 4K QD1 100% read 100% random. ioScale is only 3.3% quicker here.

ioScale


950 Pro
 

j.battermann

Member
Aug 22, 2016
I am currently thinking about getting one of these for my workstation to store and run all my VMs on, but was wondering how 'future-proof' these are - as in, are drivers for these still being maintained/updated? I had some bad experiences with drivers for other 'older' devices after a fresh install of Windows 10 Anniversary Update, with its requirement for signed drivers and all that...

While certainly no one can predict the future, does anyone have an idea or an educated opinion on whether buying one of these for a Win10 workstation would be advisable?

I am currently going back and forth between one of these 1.2TB Fusion-io drives and an 800GB Intel DC P3600 - the latter is still getting firmware/driver updates and overall appears to be more of a normal plug'n'play kind of scenario, but both would be about the same price (used) and I wouldn't mind having an extra few hundred GB of storage either.

Any advice / opinions?
 

Patrick

Administrator
Staff member
Dec 21, 2010
NVMe is much more friendly driver-wise. Most modern OSes have NVMe drivers built in. While they may not be the specialized Intel drivers, NVMe drives do work out of the box, which is significantly different from Fusion-io.
 

workingnonstop

Active Member
Feb 24, 2016
Speaking of special drivers... any secret to getting these to run in Windows 10? It looks like SanDisk has a page with Windows drivers on it, but wondering if you're running those or a set from somewhere else. [edit - re-read your post and saw you said "Installation of the HP/ HPE card worked in Windows 10 Pro without issue by simply installing the driver packs from the WD/ SanDisk/ Fusion-io support site." ...so guess it was that easy!]

What is your go-to for testing these drives? I can identify Anvil, CrystalDisk, and IOMeter in the post above. Have seen comments that some of the more popular benchmarking programs are not super accurate for these faster drives.

I had my eye on the 1.2TB versions and have seen them listed as low as $300. Missed a $300 OBO listing last night, so not sure if it sold for list price or if someone beat my offer. Picked up 2x 2.4TB versions this afternoon, though, and specs on that guy look even more nuts. Will post some test results once I get them (will make a new thread then... to not completely hijack this one!).

Usable Capacity: 2410GB
Technology: NAND Flash
MLC Bandwidth (1MB): 2.5GB/s Writes, 3.0GB/s Reads
Access Latency: 15 microseconds for Writes, 68 microseconds for Reads
IOPS: Read 892,000 IOPS (sequential 512B), Write 935,000 IOPS (sequential 512B)
Read 285,000 IOPS (random 512B), Write 725,000 IOPS (random 512B)

Form factor: Full Height/Half length PCI Express x 8 slot (spec 2.0)
Bus Interface: PCI Express 2.0 x 8
Specs also list that this one may need an aux power cable, so need to figure out 1) if that's true and 2) if not included with the cards I bought, where to find them... :p
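If the aux cable turns out to be needed and isn't in the box, I may fall back on the slot power override lowfat posted earlier in the thread. Roughly (the serial number is a placeholder, and I'm assuming the value is in milliwatts as in that post):
Code:
# Raise the allowed slot power draw (per lowfat's earlier post);
# presumably needs a driver reload or reboot to take effect
fio-config -p FIO_EXTERNAL_POWER_OVERRIDE [driveserialnumber]:75000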
 

walked

New Member
Jan 19, 2017
Washington DC
I'm seriously considering grabbing one of these to use as a Starwind VSAN cache disk. Wondering how it'd perform on a 10GbE network for small-scale virtualization clustering... I'm thinking I'm going to try it. Worst case is it ends up as my VMware workstation scratch disk.
 

walked

New Member
Jan 19, 2017
Washington DC
Annnd pulled the trigger.

We'll see how this performs used as an L2 cache disk for Starwind Virtual SAN on a 10GbE network in short order.
 

walked

New Member
Jan 19, 2017
Washington DC
So I picked this up; local performance is just as expected - GREAT.

However, no matter what, if I present the Fusion-io over the network, performance is abysmal.

I've tried presenting via 10GbE copper and fiber.
I've tried presenting via SMB and iSCSI.

No matter what, once the network is in the access path, performance drops from ~1.2GB/s to ~0.2GB/s. It's ugly.

It's not a bus limitation, as I can run local Iometer benchmarks against the disk full bore, and then run a different set of network benchmarks, also full bore, and get full performance out of both.

I've tried verifying jumbo frames, multiple host systems, the works. No matter what, as soon as I access it over the network I see terrible performance.

Guess I'll just use it as a local cache for performance-sensitive workloads on my workstation, and move my workstation's 850 EVO over. Sad.
 

walked

New Member
Jan 19, 2017
Washington DC
What OS on both systems?
Server 2016; fully updated.

NICs I've tried are Mellanox ConnectX-2 and Intel X540-T2. Both are tied into a 10GbE backbone that performs at full speed for all systems outside of this disk.

Local to the bare-metal Server 2016 host, everything is up to par.
If I present the Fusion-io as an SMB share, performance from any other system tanks from ~1.2GB/s to ~200MB/s (and it's definitely not a network issue, because I can present my H700 array via SMB and hit ~500MB/s, which is what it should be).

If I present the drive as iSCSI from Starwind VSAN, I get the same performance from remote systems.

I can run Iometer or any disk benchmark locally, maxed out, and then run full network throughput tests and have neither hit subpar numbers, which (at least to me) indicates it isn't a PCIe bottleneck.

Kinda pulling my hair out.
 

Maritime

New Member
Nov 20, 2016
...
I had my eye on the 1.2TB versions and have seen them listed as low as $300. Missed a $300 OBO listing last night, so not sure if it sold for list price or if someone beat my offer. Picked up 2x 2.4TB versions this afternoon, though, and specs on that guy look even more nuts. Will post some test results once I get them (will make a new thread then... to not completely hijack this one!).
...
Is there no way to trick the OS into seeing two of these as one big disk? Something like RAID for 'normal' disks? Not for speed, but because big projects need to be in one place.
 

acquacow

Well-Known Member
Feb 15, 2017
I should note that the write endurance in your first post is only the original warrantied write endurance. The cards can/will last a lot longer than 16PBW.

If you have any questions or issues with the drives, let me know. I worked at Fusion-io for 6 years and have seen just about everything you might come across.

Can the ioDrive be used like a normal SATA (or SAS) drive in Windows Storage Spaces?
Yes, absolutely. I have 3 in a storage space right now in Windows 10. They are great on their own, but I tried setting up SSD and HDD tiers in a pool with them and couldn't get more than spinning-disk speeds if the ioDrives were in the pool. I settled on 4 HDDs in a 2-column stripe and kept the ioDrives in a separate pool for faster stuff.

Here's the command line I used to build my mirrored "ioPool" of 3 ioDrive 2s in Windows 10 Storage Spaces:
Code:
#Making 3 ioDrives into mirrored storage space:
Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS C:\WINDOWS\system32> Get-PhysicalDisk -CanPool $True | ft FriendlyName,OperationalStatus,Size,MediaType

FriendlyName          OperationalStatus          Size MediaType
------------          -----------------          ---- ---------
Fusion ioCache 1200GB OK                1205000000000 SSD
Fusion ioCache 1200GB OK                1205000000000 SSD
Fusion ioCache 1200GB OK                1205000000000 SSD


PS C:\WINDOWS\system32> $pd = (Get-PhysicalDisk -CanPool $True | Where FriendlyName -EQ "Fusion ioCache 1200GB")
PS C:\WINDOWS\system32> New-StoragePool -PhysicalDisks $pd -StorageSubSystemFriendlyName "Windows Storage*" -FriendlyName "ioPool"

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly
------------ ----------------- ------------ ------------ ----------
ioPool       OK                Healthy      False        False


PS C:\WINDOWS\system32> New-VirtualDisk -StoragePoolFriendlyName "ioPool" -FriendlyName ioStorage -ResiliencySettingName Mirror -UseMaximumSize

FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach    Size
------------ --------------------- ----------------- ------------ --------------    ----
ioStorage    Mirror                OK                Healthy      False          1.64 TB


PS C:\WINDOWS\system32> Get-VirtualDisk ioStorage | Get-Disk | Initialize-Disk -PartitionStyle GPT
PS C:\WINDOWS\system32> Get-VirtualDisk ioStorage | Get-Disk | New-Partition -DriveLetter "I" -UseMaximumSize


   DiskPath: \\?\storage#disk#{b254657a-f750-49c0-9a4a-79050cf3a153}#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}

PartitionNumber  DriveLetter Offset                                                              Size Type
---------------  ----------- ------                                                              ---- ----
2                I           135266304                                                        1.64 TB Basic


PS C:\WINDOWS\system32> Initialize-Volume -DriveLetter "I" -FileSystem REFS -Confirm:$false

DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining    Size
----------- --------------- ---------- --------- ------------ ----------------- -------------    ----
I                           ReFS       Fixed     Healthy      OK                      1.63 TB 1.64 TB


Here's how it performs:


Is there no way to trick the OS into seeing two of these as one big disk? Something like RAID for 'normal' disks? Not for speed, but because big projects need to be in one place.
We always relied on mdraid in Linux and Windows dynamic disks for striping ioDrives together. It's a much faster solution than anything you could do with a slow hardware RAID ASIC.
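If you would rather stay inside Storage Spaces, a Simple (striped, no redundancy) space built the same way as the mirrored pool above should also give you one big volume. A rough sketch only - the pool name, disk name, and drive letter are placeholders:
Code:
# Pool the Fusion-io cards and carve a striped (Simple) virtual disk out of them
$pd = Get-PhysicalDisk -CanPool $True | Where FriendlyName -Like "Fusion*"
New-StoragePool -PhysicalDisks $pd -StorageSubSystemFriendlyName "Windows Storage*" -FriendlyName "ioStripe"
New-VirtualDisk -StoragePoolFriendlyName "ioStripe" -FriendlyName "ioBig" -ResiliencySettingName Simple -NumberOfColumns 2 -UseMaximumSize
Get-VirtualDisk "ioBig" | Get-Disk | Initialize-Disk -PartitionStyle GPT
Get-VirtualDisk "ioBig" | Get-Disk | New-Partition -DriveLetter "J" -UseMaximumSize
Format-Volume -DriveLetter "J" -FileSystem NTFS -Confirm:$false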

I have 3 in an array attached to a VM right now:


Server 2016; fully updated.

NICs I've tried are Mellanox ConnectX-2 and Intel X540-T2. ...performance from any other system tanks from ~1.2GB/s to ~200MB/s (and it's definitely not a network issue, because I can present my H700 array via SMB and hit ~500MB/s, which is what it should be)
Kinda pulling my hair out.
I've got the 3 above in an array in a Windows 10 VM, Intel X540-T2, single link to my desktop, which also has an X540-T2.

I'm just using normal Windows file sharing; jumbo frames are set to 9000 at both ends. I can nearly max out the single link just fine:



Your drive may be power-throttling if it can't detect the supported wattage from the slot. You may want to look into the power override options to set it to 25W so that you can get full performance.
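If it still tanks after that, a few quick client-side checks can help narrow it down. Nothing Fusion-io specific, just the standard Windows cmdlets (the adapter name "10GbE" is a placeholder for whatever yours is called):
Code:
# Confirm jumbo frames actually took on the 10GbE adapter (check both ends)
Get-NetAdapterAdvancedProperty -Name "10GbE" | Where DisplayName -Like "Jumbo*"

# RSS spreads SMB/iSCSI traffic across CPU cores; worth confirming it's enabled
Get-NetAdapterRss -Name "10GbE"

# Run on the client during a copy; the 10GbE link should show up here
Get-SmbMultichannelConnection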
 