Sun Oracle F80 800GB PCIe Flash Accelerator 7069200 LSI WarpDrive 6203 $195 + FS


czl

New Member
I had a similar issue setting up a striped ZFS pool in Linux across the four SSD modules on an F40 card. What eventually solved it was leaving one of the four SSDs out. Writes to that single SSD module were unstable and would often time out -- reads, however, did not seem to have a problem. I tried re-formatting it using both ddcli and the BIOS, but nothing would fix it. To save on shipping and hassle, I asked the vendor to issue me a 25% refund, since the other three SSD modules on the card were OK.
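For anyone doing the same workaround, the three-module stripe is just a zpool create that lists only the good devices. A minimal sketch (device names here are made up -- check lsblk/lsscsi for the actual module names on your system):

Code:
# identify the four SSD modules the card exposes (names below are examples only)
lsblk -d -o NAME,MODEL,SIZE

# stripe across the three healthy modules, leaving the flaky one out
zpool create -o ashift=12 f40stripe /dev/sdb /dev/sdc /dev/sdd
zpool status f40stripe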

While I wait for the full-height LSI bracket I ordered from China, I figured I would put this drive in one of my small-form-factor systems that accepts half-height cards. Windows 10 Pro recognized the card automatically, but every time I try to set up a striped volume across all four drives, the Disk Management MMC freezes and the format never completes. I even ran DDCLI.EXE and did a format, but that did not help.

Has anyone else run into this? The only thing I can think of is that I might need to specify a particular sector size when formatting... I left it at "Default" when I attempted this last night.

I also tried to find compatible firmware on Seagate's WarpDrive site. The 800GB firmware available there throws a "NAND type mismatch" error when I attempt to flash it.
 

whitey

Moderator
Got mine in today, VERY well packed.

Initial fio stats: card passed through (VT-d) to an Ubuntu 16.04 LTS VM, a single device out of the 4, partitioned w/ fdisk and formatted ext4 - 4530 READ / 1510 WRITE IOPS.

root@fio:/sda# fio --randrepeat=1 --ioengine=libaio --direct=1 --name=testingtons --filename=5GBtestfile --bssplit=512/10:4k/60:8k/20:64k/10 --iodepth=64 --size=5G --readwrite=randrw --rwmixread=75
testingtons: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
testingtons: Laying out IO file(s) (1 file(s) / 5120MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [24322KB/7457KB/0KB /s] [6105/1966/0 iops] [eta 00m:00s]
testingtons: (groupid=0, jobs=1): err= 0: pid=1394: Thu Sep 15 15:16:19 2016
read : io=3840.5MB, bw=25990KB/s, iops=4530, runt=151310msec
slat (usec): min=1, max=830, avg= 5.23, stdev= 4.23
clat (usec): min=117, max=69858, avg=9871.13, stdev=6955.98
lat (usec): min=123, max=69862, avg=9876.52, stdev=6956.00
clat percentiles (usec):
| 1.00th=[ 644], 5.00th=[ 940], 10.00th=[ 1192], 20.00th=[ 1768],
| 30.00th=[ 8032], 40.00th=[ 9152], 50.00th=[ 9792], 60.00th=[10176],
| 70.00th=[10688], 80.00th=[16768], 90.00th=[19584], 95.00th=[20864],
| 99.00th=[30080], 99.50th=[31872], 99.90th=[41216], 99.95th=[46336],
| 99.99th=[51968]
bw (KB /s): min=18268, max=49651, per=100.00%, avg=26002.09, stdev=3881.47
write: io=1279.6MB, bw=8659.6KB/s, iops=1510, runt=151310msec
slat (usec): min=2, max=21199, avg= 6.57, stdev=55.19
clat (usec): min=195, max=79887, avg=12734.90, stdev=7056.81
lat (usec): min=203, max=79890, avg=12741.64, stdev=7056.81
clat percentiles (usec):
| 1.00th=[ 908], 5.00th=[ 1608], 10.00th=[ 3408], 20.00th=[ 8896],
| 30.00th=[ 9536], 40.00th=[10048], 50.00th=[10304], 60.00th=[11072],
| 70.00th=[17280], 80.00th=[19072], 90.00th=[20352], 95.00th=[26752],
| 99.00th=[33024], 99.50th=[39680], 99.90th=[49408], 99.95th=[52480],
| 99.99th=[63744]
bw (KB /s): min= 6924, max=13944, per=100.00%, avg=8663.11, stdev=1199.93
lat (usec) : 250=0.01%, 500=0.21%, 750=1.41%, 1000=3.34%
lat (msec) : 2=13.43%, 4=3.85%, 10=30.00%, 20=39.00%, 50=8.72%
lat (msec) : 100=0.04%
cpu : usr=2.38%, sys=6.28%, ctx=396358, majf=0, minf=11
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=685481/w=228538/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: io=3840.5MB, aggrb=25990KB/s, minb=25990KB/s, maxb=25990KB/s, mint=151310msec, maxt=151310msec
WRITE: io=1279.6MB, aggrb=8659KB/s, minb=8659KB/s, maxb=8659KB/s, mint=151310msec, maxt=151310msec

Disk stats (read/write):
sda: ios=678701/227455, merge=5401/734, ticks=6694184/2896740, in_queue=9595028, util=100.00%
 

whitey

Moderator
Moved the WarpDrive over to the ZoL box, VT-d passthrough again, created a zpool as a raid-0 stripe w/ all 4 devices, same fio test.

root@zol:/warpdrive-r0# fio --randrepeat=1 --ioengine=libaio --direct=1 --name=testingtons --filename=5GBtestfile --bssplit=512/10:4k/60:8k/20:64k/10 --iodepth=64 --size=5G --readwrite=randrw --rwmixread=75
testingtons: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.3
Starting 1 process
testingtons: Laying out IO file(s) (1 file(s) / 5120MB)
fio: looks like your file system does not support direct=1/buffered=0
fio: pid=2681, err=22/file:filesetup.c:575, func=open(5GBtestfile), error=Invalid argument


Whoopsie, looks like --direct=1 doesn't like ZFS. Any ideas?
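For what it's worth, ZFS on Linux in that era didn't implement O_DIRECT at all, so fio's open() with O_DIRECT fails with EINVAL -- which is exactly the err=22 above. The usual workarounds are to run buffered (--direct=0) with a working set much larger than the ARC, or to cap the ARC first so the cache can't soak up the whole test. Rough sketch, with the 2GB ARC cap being just an example value:

Code:
# cap ARC at ~2GB so buffered numbers aren't mostly cache hits (example value)
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

# rerun the same mixed workload buffered, with a file well past the ARC cap
fio --randrepeat=1 --ioengine=libaio --direct=0 --name=testingtons \
    --filename=20GBtestfile --bssplit=512/10:4k/60:8k/20:64k/10 \
    --iodepth=64 --size=20G --readwrite=randrw --rwmixread=75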
 

whitey

Moderator
Doesn't seem to be a speed demon by any stretch of the imagination, but I DO like its simplicity w/ a proven LSI chipset (no drivers needed in damn near anything) and storage bolted right onto that SAS-to-PCIe bridge.

EDIT: OK, with '--direct=0' it gave me 1896 READ, 632 WRITE IOPS on the ZFS raid-0 setup.

SMH
 

whitey

Moderator
I need to figure out what in THE HELL is wrong w/ my fio testing methodology; results are all over the place.

I typically use these three...maybe the top one needs some luv.

Code:
fio --randrepeat=1 --ioengine=libaio --direct=1 --name=testingtons --filename=5GBtestfile --bssplit=512/10:4k/60:8k/20:64k/10 --iodepth=64 --size=5G --readwrite=randrw --rwmixread=75
Code:
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=8 --runtime=240 --group_reporting
Code:
fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=8 --runtime=240 --group_reporting
Anything wrong w/ these tests, or are they flawed somehow? Is there a set of universally accepted fio tests that hit the 'sweet spot'?
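Not claiming there's one blessed set, but one thing that commonly skews results with recipes like these is never reaching steady state: a short buffered (--direct=0) run mostly measures cache behavior. A time-based variant with a ramp-up period is one way to make runs more repeatable (values below are just examples):

Code:
# 4k random read, still buffered, but time-based with a 30s warm-up before measuring
fio --name=randread-big --ioengine=libaio --iodepth=16 --rw=randread --bs=4k \
    --direct=0 --size=4G --numjobs=8 --time_based --runtime=300 --ramp_time=30 \
    --group_reporting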
 

whitey

Moderator
Last two seem to yield better numbers:
Code:
root@zol:/warpdrive-r0# fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=512M --numjobs=8 --runtime=240 --group_reporting
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
randwrite: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.1.3
Starting 8 processes
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
randwrite: Laying out IO file(s) (1 file(s) / 512MB)
Jobs: 8 (f=8): [wwwwwwww] [100.0% done] [0KB/8084KB/0KB /s] [0/2021/0 iops] [eta 00m:00s]
randwrite: (groupid=0, jobs=8): err= 0: pid=2702: Thu Sep 15 16:20:30 2016
  write: io=2714.2MB, bw=11580KB/s, iops=2895, runt=240005msec
    slat (usec): min=8, max=67642, avg=2759.66, stdev=2962.27
    clat (usec): min=0, max=1525, avg= 0.84, stdev= 3.06
     lat (usec): min=8, max=67644, avg=2761.03, stdev=2962.30
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    1], 40.00th=[    1], 50.00th=[    1], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    2], 99.50th=[    5], 99.90th=[   22], 99.95th=[   26],
     | 99.99th=[   46]
    bw (KB  /s): min=  589, max= 5064, per=12.52%, avg=1449.76, stdev=525.99
    lat (usec) : 2=96.31%, 4=3.17%, 10=0.06%, 20=0.33%, 50=0.12%
    lat (usec) : 100=0.01%, 250=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%
  cpu          : usr=0.16%, sys=1.59%, ctx=716279, majf=0, minf=47
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=694825/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=2714.2MB, aggrb=11580KB/s, minb=11580KB/s, maxb=11580KB/s, mint=240005msec, maxt=240005msec
Code:
root@zol:/warpdrive-r0# fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=0 --size=512M --numjobs=8 --runtime=240 --group_reporting
randread: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
...
randread: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.1.3
Starting 8 processes
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
randread: Laying out IO file(s) (1 file(s) / 512MB)
Jobs: 1 (f=1): [r_______] [7.6% done] [25516KB/0KB/0KB /s] [6379/0/0 iops] [eta 01m:13s]
randread: (groupid=0, jobs=8): err= 0: pid=2717: Thu Sep 15 16:24:35 2016
  read : io=4096.0MB, bw=723655KB/s, iops=180913, runt=  5796msec
    slat (usec): min=2, max=31059, avg=19.30, stdev=400.60
    clat (usec): min=1, max=282041, avg=311.11, stdev=3084.70
     lat (usec): min=4, max=299604, avg=330.58, stdev=3274.76
    clat percentiles (usec):
     |  1.00th=[   65],  5.00th=[   69], 10.00th=[   71], 20.00th=[   75],
     | 30.00th=[   78], 40.00th=[   80], 50.00th=[   82], 60.00th=[   85],
     | 70.00th=[   88], 80.00th=[   93], 90.00th=[  106], 95.00th=[  133],
     | 99.00th=[ 6624], 99.50th=[11456], 99.90th=[21120], 99.95th=[41216],
     | 99.99th=[154624]
    bw (KB  /s): min=  143, max=735712, per=24.11%, avg=174478.45, stdev=178722.94
    lat (usec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=86.44%
    lat (usec) : 250=9.99%, 500=0.38%, 750=0.48%, 1000=0.17%
    lat (msec) : 2=0.54%, 4=0.49%, 10=0.85%, 20=0.51%, 50=0.09%
    lat (msec) : 100=0.02%, 250=0.03%, 500=0.01%
  cpu          : usr=4.57%, sys=22.60%, ctx=8372, majf=0, minf=172
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=4096.0MB, aggrb=723654KB/s, minb=723654KB/s, maxb=723654KB/s, mint=5796msec, maxt=5796msec
NO WAY that read test is right; that must be coming out of memory, which is why I think my fio variables need some love. Maybe increase the file sizes -- they seem too low.
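The arithmetic backs that up: with --numjobs=8 and --size=512M the whole working set is only 4GB, which fits easily inside the VM's RAM/ARC, so those 180k read IOPS are mostly memory reads. Quick back-of-envelope check (assuming an 8GB VM like the one described later):

Code:
# total test data vs. system memory: 8 jobs x 512MB files = 4096MB, well under 8GB RAM
echo "$((8 * 512)) MB of test data"
free -m | awk '/^Mem:/ {print $2 " MB RAM"}'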
 

fractal

Active Member
Got mine in today, VERY well packed.
Mine came today, wrapped in bubble wrap and stuffed in a reinforced padded envelope. Not the world's best packing, but nothing looks broken.

Linux does not see it at all. Nothing shows up in "lspci". None of the LEDs come on. The chipset heatsink gets very hot, burn your fingers hot, then the machine shuts down.

Are these things picky about their hosts? I tried two different x8 slots in a Supermicro X9SCL-F.
 

whitey

Moderator
Using a PCIe 2.0 x8 slot in my X9SRL-F board. Linux sees it just fine so far for me. About to let FreeNAS play with it for a bit, though.

EDIT: I do see, however, a steady RED light on LED 2 on mine, which according to this guide

https://docs.oracle.com/cd/E41278_01/pdf/E41251.pdf

means one of the following:

On, steady: One of the following conditions applies:
■ One or more of the SSDs has failed.
■ At least one of the SSDs has reported critical temperature.
■ Backup power rail monitor failure detected.
■ Other component issues: Run the -list and -health commands in the ddcli utility to determine which component has an issue.
Caution: System Damage. If the critical temperature warning persists, you can damage your card. Increase cooling or shut down your system to prevent damage.

Lovely, looks like I am probably cooking it. Will shut it down and see what I can do.
 

whitey

Moderator
Perplexing question o' the day from me: where in the hell do I get ddcli for Linux? Can someone hook me up? My Oracle account is jacked up.

NM: Found it on IBM's site. Oracle, you greedy bastards!
 

whitey

Moderator
YIKES, I still see all the available zpool space; wonder what the temp threshold is?

Raid Volume status = unconfigured.
--------------------------------
WarpDrive ELP-4x200-4d-n Health
--------------------------------

Backup Rail Monitor : GOOD


------------------------------------------------------------
SSD Drive SMART Data SSD Slot #: 4 Cage : 1 Location : Upper
------------------------------------------------------------
Warranty Remaining : 100 %

Temperature : 79 degree C

LSI WarpDrive Management Utility: Component at "Cage = 1, Location = Upper" is in FAILED state.
------------------------------------------------------------
SSD Drive SMART Data SSD Slot #: 5 Cage : 1 Location : Lower
------------------------------------------------------------
Warranty Remaining : 100 %

Temperature : 79 degree C

LSI WarpDrive Management Utility: Component at "Cage = 1, Location = Lower" is in FAILED state.
------------------------------------------------------------
SSD Drive SMART Data SSD Slot #: 6 Cage : 2 Location : Upper
------------------------------------------------------------
Warranty Remaining : 100 %

Temperature : 75 degree C

LSI WarpDrive Management Utility: Component at "Cage = 2, Location = Upper" is in WARNING state.
------------------------------------------------------------
SSD Drive SMART Data SSD Slot #: 7 Cage : 2 Location : Lower
------------------------------------------------------------
Warranty Remaining : 100 %

Temperature : 75 degree C

LSI WarpDrive Management Utility: Component at "Cage = 2, Location = Lower" is in WARNING state.

Overall Health : ERROR
 

whitey

Moderator
UPDATE: chassis fans on full blast seem to keep it happier LOL

Select Operation [1-7 or 0:Quit]: 3
Raid Volume status = unconfigured.
--------------------------------
WarpDrive ELP-4x200-4d-n Health
--------------------------------

Backup Rail Monitor : GOOD


------------------------------------------------------------
SSD Drive SMART Data SSD Slot #: 4 Cage : 1 Location : Upper
------------------------------------------------------------
Warranty Remaining : 100 %

Temperature : 64 degree C

------------------------------------------------------------
SSD Drive SMART Data SSD Slot #: 5 Cage : 1 Location : Lower
------------------------------------------------------------
Warranty Remaining : 100 %

Temperature : 65 degree C

------------------------------------------------------------
SSD Drive SMART Data SSD Slot #: 6 Cage : 2 Location : Upper
------------------------------------------------------------
Warranty Remaining : 100 %

Temperature : 68 degree C

------------------------------------------------------------
SSD Drive SMART Data SSD Slot #: 7 Cage : 2 Location : Lower
------------------------------------------------------------
Warranty Remaining : 100 %

Temperature : 69 degree C


Overall Health : GOOD
 

whitey

Moderator
Seems these two options, with the following VT-d VM setup, yield accurate-ish results.

Ubuntu 14.04 LTS ZoL vSphere VM, 2 vCPU, 8 GB memory, WarpDrive passed through to the VM via VT-d, zpool raid-0 stripe created out of the 4 200GB modules, fio runs below taking care to increase --size to 4096M, which creates eight 4GB files (32 GB total, 4x the VM's memory, to get well outside the memory/cache bounds).

Code:
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=4096M --numjobs=8 --runtime=240 --group_reporting
write: io=6115.8MB, bw=26094KB/s, iops=6523, runt=240003msec

Code:
fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=0 --size=4096M --numjobs=8 --runtime=240 --group_reporting
read : io=10170MB, bw=43393KB/s, iops=10848, runt=240001msec

Seems more accurate/believable, I'm satisfied.
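Side note: to watch what the stripe is doing while fio hammers it, a zpool iostat loop in a second shell is enough (pool name guessed from the mountpoint above):

Code:
# per-vdev bandwidth and IOPS for the stripe, refreshed every second
zpool iostat -v warpdrive-r0 1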
 

whitey

Moderator
Images of zpool iostat while the write/read fio tests are running. What's odd to me is that the write test is clearly pounding data down, but the read test shows about 1/4 of that, with peaks of 1-1.5GB/s of throughput while the eight 4GB files are being laid out, even though the read IOPS are typically higher once the test completes.

1st image = fio write
2nd image = fio read creating eight 4GB files
3rd image = fio read once actual test kicks off after files are created.

EDIT: As I typed this I looked back at the zpool iostat output once fio's actual crunching kicked in; the third image is the real read test, I believe, while the 2nd image is the slower 'writing out the files' phase -- that's all I can make of it.

Pretty damned impressive. This would make a badass ZIL device for four pools, with each 200GB SSD 'slice' serving as the ZIL for a pool of magnetic drives (rough zpool sketch below). Where were you about 5 years ago in my ZFS journey, WarpDrive... oh yeah, probably $3-5K at that time.

fio-write-test-warpdrive-ZoL-raid-0-zpool-iostat-output.png
fio-read-test-warpdrive-ZoL-raid-0-zpool-iostat-output.png
fio-read-test2-warpdrive-ZoL-raid-0-zpool-iostat-output.png
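If anyone does go the ZIL route mentioned above, handing a module (or a partition of one) to a spinner pool is a one-liner per pool; a rough sketch with hypothetical pool/device names:

Code:
# add one 200GB WarpDrive module as a dedicated log (SLOG) device for an HDD pool
zpool add tank log /dev/disk/by-id/scsi-EXAMPLE_warpdrive_module1

# or mirror the log if you want sync writes protected against a module failure
# zpool add tank log mirror /dev/disk/by-id/... /dev/disk/by-id/...

zpool status tank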
 

dwright1542

Active Member
Be SUPER careful about heat. I cooked some of the 1.2TB Nytros a while back. The only servers I've been able to run them in are rackmount Dells where I can boost the fan speed.

The FusionIO cards run way cooler.
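For reference, the way people usually pin fans on those Dells is iDRAC's raw IPMI fan control. These raw commands are widely used but vendor- and firmware-specific, so treat them as an example rather than a supported interface:

Code:
# take fan control away from iDRAC's automatic profile (Dell-specific raw command)
ipmitool raw 0x30 0x30 0x01 0x00

# pin all fans to ~60% duty cycle (0x3c hex = 60); give control back with ... 0x01 0x01
ipmitool raw 0x30 0x30 0x02 0xff 0x3c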
 

whitey

Moderator
Be SUPER careful about heat. I cooked some of the 1.2TB Nytros a while back. The only servers I've been able to run them in are rackmount Dells where I can boost the fan speed.

The FusionIO cards run way cooler.
Cooked as in toasted them to failure? I had mine running at about 76C for 8 hours or so before I noticed it, but they still say 100% warranty (life remaining, I assume). Yep, I have to set my Norco 2U fans to full speed to drop them to the high 50s / low 60s C.
 

Flintstone

Member
Thank you guys for testing and sharing your experiences. I will look at this thread every time I get tempted to buy one - I know how hot my normal LSI cards get, and this one has several controllers (?) and 4 SSDs in a small volume. It is a nice device on paper, but it seems like it is not suited for every use case.
 

whitey

Moderator
Swapped the F80 out of my Norco 2212 and into my Norco 4224... MUCH happier w/ temps in the 4U: able to set fans back to optimal (in the 2U it was full blast just to keep it cool) and temps are 45-50 degrees C, down from 60-65 w/ the 2U blowing full speed.

Had to lose the LP bracket, but I will live (as will the F80, a lot longer now) :-D