Fusion-io ioDrive 2 1.2TB Reference Page

acquacow

Well-Known Member
Feb 15, 2017
Oh, another thing to check: are the cards all in slots with dedicated PCIe lanes? Some boards route slots through a PCIe switch and split bandwidth between them. Benchmarking any single card will yield ideal results, but when you push all three, two of them will be sharing bandwidth.
 

Marsh

Moderator
May 12, 2013
Also THANK YOU for your help keeping these cards relevant for broke ass cheapskates like myself!

+1
 

thorondorwest

New Member
Feb 13, 2019
(benchmark screenshot attached)

I am on an EPYC 7601, so all PCIe lanes hang directly off the CPU dies; however, there can be issues similar to a PCIe switch when running very high-bandwidth devices. I am wondering if I should try pinning the fio driver to some idle cores in software. Part of me wonders if the driver just isn't well optimized for odd numbers of cards. If I pull a card from the array and run two of them, I lose about 1 GB/s of write speed, but read speed stays roughly the same.
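Before pinning anything, it may help to confirm which NUMA node each card actually hangs off. A minimal sketch, assuming sysfs paths on a stock Linux kernel (the PCI addresses are placeholders; take yours from fio-status or lspci):

```shell
#!/bin/sh
# Look up the NUMA node a PCIe device is attached to, so benchmark
# threads (and the card's IRQs) can be steered to cores on the same die.
# Prints "unknown" if the device path does not exist.
pci_node() {
    cat "/sys/bus/pci/devices/0000:$1/numa_node" 2>/dev/null || echo "unknown"
}

# Example PCI addresses; substitute your cards' addresses
for addr in 04:00.0 05:00.0 06:00.0; do
    echo "$addr -> node $(pci_node "$addr")"
done
```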
 

acquacow

Well-Known Member
Feb 15, 2017
Hmm, we obviously never had EPYC to develop the drivers against. No clue how interrupt handling is different vs the prior platforms.

There are ways in the driver to set flags for different types of interrupt scenarios, but there's no public facing docs on it. I'll have to comb my email history and see if there's anything in there.

You can use the fio utils to get/set a bunch of parameters that might alter performance one way or another.

I'd also bump your read threads in the test from 1 to 4 or more.
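For what it's worth, if the benchmark in question is the `fio` I/O tester (the tool, not the Fusion-io utilities), bumping the read threads might look like the jobfile below. The device path `/dev/fioa` and the sizes are assumptions; adjust to your setup.

```ini
[global]
ioengine=libaio
direct=1
runtime=30
time_based
group_reporting

[seqread]
rw=read
bs=1M
numjobs=4        ; was 1; 4+ threads keep the card's channels busy
filename=/dev/fioa
```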
 
  • Like
Reactions: thorondorwest

thorondorwest

New Member
Feb 13, 2019
Hmm, we obviously never had EPYC to develop the drivers against. No clue how interrupt handling is different vs the prior platforms.

There are ways in the driver to set flags for different types of interrupt scenarios, but there's no public facing docs on it. I'll have to comb my email history and see if there's anything in there.

You can use the fio utils to get/set a bunch of parameters that might alter performance one way or another.

I'd also bump your read threads in the test from 1 to 4 or more.
These are for high bandwidth NICs but might contain useful information:

 

acquacow

Well-Known Member
Feb 15, 2017
Yeah, you might be maxing out what you can do with a single thread in terms of I/O. Upping the threads clearly shows that.
 

thorondorwest

New Member
Feb 13, 2019
Just for S&Gs, here are 64 threads (for all tests) on a 64-thread-capable CPU, striped across 3 ioDrive2 cards. A pretty good example of how Fusion-io devices scale with CPU power.

(benchmark screenshots attached)
 

illamint

New Member
Dec 25, 2015
Looking for a gut check before I go down the road of flashing my cards from eBay. I'd love to make them appear to my DL380p Gen8 as HP-branded cards so that the server can slow down its fans. Can I do this by mucking about with the .fff flash file and reflashing the card? I've got the latest drivers installed and working just fine under Ubuntu 16.04, and here's the output of fio-status:

Code:
Found 1 ioMemory device in this system
Driver version: 3.2.16 build 1731

Adapter: Single Controller Adapter
        Fusion-io ioDrive2 1.205TB, Product Number:F00-001-1T20-CS-0001, SN:1213D1754, FIO SN:1213D1754
        External Power: NOT connected
        PCIe Power limit threshold: 24.75W
        Connected ioMemory modules:
          fct0: Product Number:F00-001-1T20-CS-0001, SN:1213D1754

fct0    Attached
        ioDrive2 Adapter Controller, Product Number:F00-001-1T20-CS-0001, SN:1213D1754
        Located in slot 0 Center of ioDrive2 Adapter Controller SN:1213D1754
        PCI:04:00.0
        Firmware v7.1.17, rev 116786 Public
        1205.00 GBytes device size
        Internal temperature: 41.83 degC, max 48.23 degC
        Reserve space status: Healthy; Reserves: 100.00%, warn at 10.00%
        Contained VSUs:
          fioa: ID:0, UUID:80d5226e-14be-48cd-9adf-aa1ed6672176

fioa    State: Online, Type: block device
        ID:0, UUID:80d5226e-14be-48cd-9adf-aa1ed6672176
        1205.00 GBytes device size
 

acquacow

Well-Known Member
Feb 15, 2017
Hmm, I've never tried pushing the HP stuff onto an ioDrive. I've always gone the other direction.

There were bios updates for the HP servers that fixed the fan issues with the Fusion-io/SanDisk firmware.

I suppose I could tear into the HP firmware and look and see what stands out.
 
  • Like
Reactions: nerdalertdk and zxv

zxv

The more I C, the less I see.
Sep 10, 2017

I am on an EPYC 7601, so all PCIe lanes hang directly off the CPU dies; however, there can be issues similar to a PCIe switch when running very high-bandwidth devices. I am wondering if I should try pinning the fio driver to some idle cores in software. Part of me wonders if the driver just isn't well optimized for odd numbers of cards. If I pull a card from the array and run two of them, I lose about 1 GB/s of write speed, but read speed stays roughly the same.
Does the driver do IRQ steering?

If not, one option is to steer interrupts statically rather than letting the IRQ balancer change the steering dynamically.
The script below distributes the interrupts from the iodrive evenly across all the cores.
If you know which cores are preferred, you can specify them in the 'cores' list.
Code:
# IRQ numbers registered by the iodrive driver
IRQS=$(sed -n '/iodrive/s/:.*//p' /proc/interrupts)
# 1-indexed list of cores to round-robin the IRQs across;
# trim this list to restrict steering to preferred cores
cores=($(seq 1 $(grep -c processor /proc/cpuinfo)))
i=0
for IRQ in $IRQS
do
    core=${cores[$i]}
    let "mask=2**(core-1)"            # one-hot CPU mask for this core
    printf "%x" $mask > /proc/irq/$IRQ/smp_affinity
    let "i+=1"
    if [[ $i == ${#cores[@]} ]]; then # wrap around the core list
        i=0
    fi
done
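As a sanity check on the mask arithmetic in the script above, the core-to-smp_affinity mapping can be exercised on its own (this assumes the same 1-indexed core numbering the script uses):

```shell
#!/bin/sh
# Reproduces the mask=2**(core-1) step: core N (1-indexed) maps to a
# one-hot hex mask selecting CPU N-1 for /proc/irq/*/smp_affinity.
core_to_mask() {
    printf "%x\n" $(( 1 << ($1 - 1) ))
}

core_to_mask 1   # CPU0 -> 1
core_to_mask 5   # CPU4 -> 10
core_to_mask 9   # CPU8 -> 100
```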
 

illamint

New Member
Dec 25, 2015
Hmm, I've never tried pushing the HP stuff onto an ioDrive. I've always gone the other direction.

There were bios updates for the HP servers that fixed the fan issues with the Fusion-io/SanDisk firmware.

I suppose I could tear into the HP firmware and look and see what stands out.
I think this might just be a sign that I should rip the E5-2697v2s out of this box and build a quieter Supermicro-backed desktop system. Thanks!
 

zxv

The more I C, the less I see.
Sep 10, 2017
If anyone has any info about the male end of the power connector that plugs into an ioDrive 2 1.2TB, I'd appreciate it.

(photos of the connector and its power pins attached)

I plan to make a cable to connect it to a DL380 Gen8 PCI riser.
(photo of the DL380 Gen8 riser attached)

The hardware installation guide says the card draws up to 55W at 12V through this connector.
 
Last edited:

acquacow

Well-Known Member
Feb 15, 2017
That connector doesn't need to be plugged in; it's only for PCIe slots that can't provide 25W of power.

That HP riser will provide 25W per slot no problem (assuming you plug in the HP power connector for it).

The 1.2TB cards won't use more than 25W on peak write.

Either way, if you want to build one, here's all the info you need: pins 1 and 2 on that 4-pin connector are 12V, the other two are ground.

Part numbers are included, and FrozenCPU is still in business. =)

 
Last edited:
  • Like
Reactions: zxv

zxv

The more I C, the less I see.
Sep 10, 2017
Thanks so much @acquacow for the advice and details.

fio-status is reporting the 25W power limit. If that's sufficient, maybe I should run benchmarks first and see if there are any issues before building power cables.

I'm not familiar with any optional extra cable to supply more power to the riser. I thought the white power connector on the riser was there to supply additional power to a GPU, so I figured I'd draw power from that connector to power the ioDrive.

(photo of the riser's white power connector attached)
 

acquacow

Well-Known Member
Feb 15, 2017
Aah, you're correct. Yeah, that's the HP GPGPU connector. There's power provided through the slot the riser plugs into...that's all you need for ioDrive IIs.

The 3TB and the ioDrive 3 can get up to 55W, so those are recommended for either 75W slots, or power connectors in older machines.
 
  • Like
Reactions: zxv

zxv

The more I C, the less I see.
Sep 10, 2017
I have some trivial patches to install the iomemory-vsl-3.2.16.1731 release on Ubuntu 18.04. If anyone has a need for this, just let me know.
 

Jake77

New Member
Mar 16, 2019
Hi all,

I just ordered this 1.2TB card myself, but it hasn't arrived yet.
I was just wondering where to get the drivers for Windows 7 or Windows 10?
The card ID is
F00-001-1T20-CS-0001
but does that tell whether the card is branded for Dell/HP etc.?

I tried checking the SanDisk/WD support site but did not find the driver.

Thanks
 

Jake77

New Member
Mar 16, 2019
Great,
thanks acquacow, got it now.

I have some trivial patches to install the iomemory-vsl-3.2.16.1731 release on Ubuntu 18.04. If anyone has a need for this, just let me know.
In the future I may use the card under Ubuntu as well, so I would be interested in getting it working on Ubuntu 18.04. If you can share the needed patches, that would be great.