1.2TB LSI Nytro Warp Drive PCIe SSDs - $240


whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Did you see Intel's new PCIe SSD release today? This Nytro PCIe SSD (which I bought one of) is still faster than the 1.2TB PCIe drive that Intel announced today...

Intel unveils two new lineups of datacenter SSDs

And I'm pretty sure these won't be $200 USD :)

Granted, those will use less power, but bang for buck this is fantastic.
Someone mount that up to a vSphere cluster over NFS backed by a ZFS-based box, or hell, even native vSphere if it's supported with VMFS formatting, and sVMotion some VMs to/from that Nytro device to datastores that have the JUICE. If you would be so kind, post esxtop/cacti/FreeNAS graphs, zpool iostat output, etc.; it sounded like some of you had those intentions. A 10G-backed network would be even better, since as we all know this drive is going to saturate a 1GbE connection... sVMotions across hypervisors/a physical switch would obviously be the most accurate test.

Looking forward to 'real-world' results.
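
For reference, the kind of monitoring I mean is nothing fancy (the pool name below is a placeholder):

# on the ESXi host, watch adapter/device latency and throughput while the sVMotion runs
esxtop          # press 'd' for the disk adapter view, 'u' for the disk device view

# on the ZFS box backing the NFS datastore, watch per-vdev throughput
zpool iostat -v tank 5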
 

wardtj

Member
Jan 23, 2015
91
28
18
47
Someone mount that up to a vSphere cluster over NFS backed by a ZFS-based box, or hell, even native vSphere if it's supported with VMFS formatting, and sVMotion some VMs to/from that Nytro device to datastores that have the JUICE. If you would be so kind, post esxtop/cacti/FreeNAS graphs, zpool iostat output, etc.; it sounded like some of you had those intentions. A 10G-backed network would be even better, since as we all know this drive is going to saturate a 1GbE connection... sVMotions across hypervisors/a physical switch would obviously be the most accurate test.

Looking forward to 'real-world' results.
No problem, I'll post some before and after results this weekend. I'm planning on using this one drive to replace a 9260 with 6 older SandForce SSDs connected to it in a couple of RAID 0s, in a vSphere 6.0U1 box. I'm more interested in the 4K random write I/O myself. I have some SIEM applications that do a lot of logging, so write performance is much more in demand. I find read-focused SSDs to be like a DSL connection: great download, but don't ever try to upload anything. The Nytro is supposed to be a great MU (mixed-use) drive, so we will see how it fares with a real workload on it :)

I have played with a few PCIe SSDs, and the older Sun F20 drives, for example, are good for general-purpose VMFS, but when it comes to svMotion and the like they are VERY slow. The specs on these are totally different, and it's actually a modern HBA (circa 2012), so it should not be as bad as those old things.

NOTE: For anyone that has one of these Nytros: they NEED airflow. All 23xx LSI chips run very hot. They will get to over 100C in no time just sitting idle on the bench; I've seen as high as 125C before. So make sure to put a fan blowing front to back near the card. The heat will throttle the card, as it does on the other 23xx-series LSIs. I have an 8K RPM fan blowing onto one of my 9271s, and it keeps it at 52C.
 

wardtj

Member
Jan 23, 2015
91
28
18
47
Ok, as promised here are the results...

If you are using this with VMware, you need to tune the card. I'm not going to go into all of the specifics, but this link,

Nytro-XV_NWD_VM_Performance_Acceleration

will be quite helpful to you. Make sure to set the queue depth, and ensure you make the changes LSI recommends. I have not tried this with Windows; this is with vSphere 6.0. This server is not ideal for this card: the card is PCI Express 3.0 x8, and this is a PCI Express 2.0 x8 slot. The bandwidth is more than enough, but some of the features, including slightly better latency, will be missing.
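
To give a rough idea of the kind of change that document walks you through, here is a sketch only; the device identifier and value below are placeholders, so use the numbers LSI actually recommends:

# raise the number of outstanding I/Os ESXi will queue to the Nytro datastore device
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx --sched-num-req-outstanding=128

# verify the current setting on that device
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx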

Here is my baseline. This is what one of my 24-drive NL-SAS arrays does over a 10GbE network with a Windows 2012 R2 VM,

Baseline - 24x1.2TB NL SAS 3xRAID 6 (8+2) 10Gbe iSCSI.PNG

All tests were done using a Windows 2012 R2 VM with direct access to the datastore, no expanders, etc. Here are the 'before' results. This is with the slow 4x60GB RAID 0 SATA II SSDs...

Before - 4x60GB RAID 0 SATA II - Windows 2012R2 Direct w VMWare 6.0U1.PNG

And with the slightly faster 2x111GB RAID 0 SATA III SSDs,

Before - 2x111GB RAID 0 SATA III - Windows 2012R2 - VMware 6.0U1.PNG

Now, here is the Nytro with the old firmware, and tuning as per LSI,

After - Nytro 11.00 Firmware - Windows 2012 R2 - VMware 6.0.PNG

And with the newest "13" firmware from Seagate and tuning as per LSI,

After - Nytro 13.00 Firmware - Windows 2012 R2 - VMware 6.0.PNG

Here is the test from ATTO,

After - Nytro 13.00 Firmware - Windows 2012 R2 - VMware 6.0 - Atto.JPG

And when ATTO is running, from ESXi,

After - Nytro 13.00 Firmware - Windows 2012 R2 - VMware 6.0 - ESXTOP.JPG

** UPDATE: Updated results with the latest v20.00 ESXi driver,

After - Nytro 13.00 Firmware - Windows 2012 R2 - VMware 6.0 - V20 Driver.JPG

Overall, there is a performance improvement over the other drives. The random I/O is consistent, and day to day it feels similar to the other drives, though you do notice the improvement in VM boot times. All in all, not a bad investment for $200.

I have a 400GB Intel NVMe in my workstation. That drive is in an HP Z440 workstation with Windows 10 and all the proper PCI Express setup, UEFI, etc.

Baseline - Intel 400GB NVMe AIC.JPG

That drive costs nearly 2x what the Nytro does and has 1/3 the space. Yes, it's much faster (even setting aside the PCI Express advantage), but it is modern, and you get what you pay for.

Overall, I'm happy with the performance. I did put a small 80mm fan in the case to blow on the card, and it stays at a comfy 39C. Obligatory ddcli output:

WarpDrive Selected is NWD-BFH6-1200
------------------------------------------------------------------------
WarpDrive Information
------------------------------------------------------------------------
WarpDrive ID : 1
PCI Address : 00:02:00:00
PCI Slot Number : 0x11
PCI SubSystem DeviceId : 0x100D
PCI SubSystem VendorId : 0x1000
SAS Address : 500605B 0080422F0
Package Version : 13.00.08.00
Firmware Version : 113.00.00.00
Legacy BIOS Version : 110.00.01.00
UEFI BSD Version : 07.18.06.00
Chip Name : Nytro WarpDrive
Board Name : NWD-BFH6-1200
Board Assembly Number : 03-25614-00F
Board Tracer Number : SP43013788
NUMA : Enabled
RAID Support : YES

--------------------------------
Nytro WarpDrive NWD-BFH6-1200 Health
--------------------------------

Backup Rail Monitor : GOOD




SSD Drive SMART Data Slot #: 2: Drive Serial Number 11000409511

-------------- Current (since last Power Cycle) ----------------------
Bytes Read 69158255616
Soft Read Error Rate 4.972048e-03
Wear Range Delta 0 (%)
Uncorrectable RAISE Errors 0
Current Temperature 39 (degree C)
Uncorrectable ECC Errors 0
SATA R-Errors (CRC) Error Count 0

-------------- Cumulative --------------------------------------------
Retired Block Count 0
Power-On Hours 1579.1
Device Power Cycle Count 73
Gigabytes Erased 7912 (Gigabytes)
Reserved (over-provisioned) Blocks 35264
Program Fail Count 0
Erase Fail Count 1 0
Unexpected Power Loss Count 79
I/O Error Detection Code Rate 0
Uncorrectable RAISE Errors 0
Maximum Lifetime Temperature 46 (degree C)
Cached SMART Data Age 00:00:00 (Hours:Minutes:Seconds)
SSD Life Left (PE Cycles) 100 (%)
Total Writes From Host 6069
Total Reads To Host 1965
Write Amplification 1.18
Reserved Blocks Remaining 100 (%)
Trim Count 0


SSD Drive SMART Data Slot #: 3: Drive Serial Number 11000409488

-------------- Current (since last Power Cycle) ----------------------
Bytes Read 69116353536
Soft Read Error Rate 4.817765e-03
Wear Range Delta 0 (%)
Uncorrectable RAISE Errors 0
Current Temperature 39 (degree C)
Uncorrectable ECC Errors 0
SATA R-Errors (CRC) Error Count 0

-------------- Cumulative --------------------------------------------
Retired Block Count 0
Power-On Hours 1579.1
Device Power Cycle Count 73
Gigabytes Erased 7910 (Gigabytes)
Reserved (over-provisioned) Blocks 35264
Program Fail Count 0
Erase Fail Count 1 0
Unexpected Power Loss Count 79
I/O Error Detection Code Rate 0
Uncorrectable RAISE Errors 0
Maximum Lifetime Temperature 46 (degree C)
Cached SMART Data Age 00:00:00 (Hours:Minutes:Seconds)
SSD Life Left (PE Cycles) 100 (%)
Total Writes From Host 6069
Total Reads To Host 1965
Write Amplification 1.17
Reserved Blocks Remaining 100 (%)
Trim Count 0


SSD Drive SMART Data Slot #: 4: Drive Serial Number 11000409503

-------------- Current (since last Power Cycle) ----------------------
Bytes Read 69108238336
Soft Read Error Rate 5.190631e-03
Wear Range Delta 0 (%)
Uncorrectable RAISE Errors 0
Current Temperature 39 (degree C)
Uncorrectable ECC Errors 0
SATA R-Errors (CRC) Error Count 0

-------------- Cumulative --------------------------------------------
Retired Block Count 0
Power-On Hours 1579.0
Device Power Cycle Count 73
Gigabytes Erased 7955 (Gigabytes)
Reserved (over-provisioned) Blocks 34752
Program Fail Count 0
Erase Fail Count 1 0
Unexpected Power Loss Count 79
I/O Error Detection Code Rate 0
Uncorrectable RAISE Errors 0
Maximum Lifetime Temperature 46 (degree C)
Cached SMART Data Age 00:00:00 (Hours:Minutes:Seconds)
SSD Life Left (PE Cycles) 100 (%)
Total Writes From Host 6069
Total Reads To Host 1965
Write Amplification 1.18
Reserved Blocks Remaining 100 (%)
Trim Count 0


SSD Drive SMART Data Slot #: 5: Drive Serial Number 11000409615

-------------- Current (since last Power Cycle) ----------------------
Bytes Read 69104876032
Soft Read Error Rate 4.586122e-03
Wear Range Delta 0 (%)
Uncorrectable RAISE Errors 0
Current Temperature 38 (degree C)
Uncorrectable ECC Errors 0
SATA R-Errors (CRC) Error Count 0

-------------- Cumulative --------------------------------------------
Retired Block Count 0
Power-On Hours 1579.1
Device Power Cycle Count 73
Gigabytes Erased 7951 (Gigabytes)
Reserved (over-provisioned) Blocks 34432
Program Fail Count 0
Erase Fail Count 1 0
Unexpected Power Loss Count 79
I/O Error Detection Code Rate 0
Uncorrectable RAISE Errors 0
Maximum Lifetime Temperature 45 (degree C)
Cached SMART Data Age 00:00:00 (Hours:Minutes:Seconds)
SSD Life Left (PE Cycles) 100 (%)
Total Writes From Host 6068
Total Reads To Host 1964
Write Amplification 1.18
Reserved Blocks Remaining 100 (%)
Trim Count 0


SSD Drive SMART Data Slot #: 6: Drive Serial Number 11000409944

-------------- Current (since last Power Cycle) ----------------------
Bytes Read 69056888320
Soft Read Error Rate 4.151250e-03
Wear Range Delta 0 (%)
Uncorrectable RAISE Errors 0
Current Temperature 38 (degree C)
Uncorrectable ECC Errors 0
SATA R-Errors (CRC) Error Count 0

-------------- Cumulative --------------------------------------------
Retired Block Count 0
Power-On Hours 1579.1
Device Power Cycle Count 73
Gigabytes Erased 7949 (Gigabytes)
Reserved (over-provisioned) Blocks 34688
Program Fail Count 0
Erase Fail Count 1 0
Unexpected Power Loss Count 79
I/O Error Detection Code Rate 0
Uncorrectable RAISE Errors 0
Maximum Lifetime Temperature 46 (degree C)
Cached SMART Data Age 00:00:00 (Hours:Minutes:Seconds)
SSD Life Left (PE Cycles) 100 (%)
Total Writes From Host 6069
Total Reads To Host 1965
Write Amplification 1.18
Reserved Blocks Remaining 100 (%)
Trim Count 0


SSD Drive SMART Data Slot #: 7: Drive Serial Number 11000409538

-------------- Current (since last Power Cycle) ----------------------
Bytes Read 69105214464
Soft Read Error Rate 4.681335e-03
Wear Range Delta 0 (%)
Uncorrectable RAISE Errors 0
Current Temperature 38 (degree C)
Uncorrectable ECC Errors 0
SATA R-Errors (CRC) Error Count 0

-------------- Cumulative --------------------------------------------
Retired Block Count 0
Power-On Hours 1579.1
Device Power Cycle Count 73
Gigabytes Erased 7925 (Gigabytes)
Reserved (over-provisioned) Blocks 35264
Program Fail Count 0
Erase Fail Count 1 0
Unexpected Power Loss Count 79
I/O Error Detection Code Rate 0
Uncorrectable RAISE Errors 0
Maximum Lifetime Temperature 46 (degree C)
Cached SMART Data Age 00:00:00 (Hours:Minutes:Seconds)
SSD Life Left (PE Cycles) 100 (%)
Total Writes From Host 6069
Total Reads To Host 1965
Write Amplification 1.18
Reserved Blocks Remaining 100 (%)
Trim Count 0

Warranty Remaining : 100 %
Temperature : 39 degree C

Overall Health : GOOD

Hope this helps someone :)
 

dwright1542

Active Member
Dec 26, 2015
377
73
28
50
Ok, here we go. T320, PCIe Gen 3 slots:

IOMETER IOPS 4 Workers, QD 128:

Single:

270,000 Read 4K Random
150,000 Write 4k Random

Triple:

355,000 Read 4k Random
320,000 Write 4k Random


Single:
1nytro.png

Triple:
3nytro.png
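
If anyone wants to reproduce something comparable without IOMeter, a rough fio equivalent under Linux would be along these lines. This is only a sketch: the device path is a placeholder, the write pass destroys whatever is on that device, and fio and IOMeter numbers are not directly comparable.

# 4K random read, 4 jobs at queue depth 128
fio --name=randread --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=128 --numjobs=4 --runtime=60 --time_based --group_reporting

# 4K random write, same shape (destroys data on /dev/sdX)
fio --name=randwrite --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=128 --numjobs=4 --runtime=60 --time_based --group_reporting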
 
  • Like
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Yeah, those 4K #s are nice for the $/value, and still solid overall for 2016!
 

Roman2179

Member
Sep 23, 2013
49
14
8
So the good news is that the card is recognized by FreeNAS without issue. It does give me a driver warning, but I am not too worried about it since all the data will be backed up to the main pool anyway.

The bad news is that even over 10GbE, I am getting 700MB/s reads and 300MB/s writes, which is the same speed I am getting with my 12-disk (2x6 RAID-Z2) array. This tells me that the spinning array is probably also capable of faster speeds. So there is definitely some tuning that needs to happen; now I just need to figure out where to start.
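
A sensible place to start is separating the network from the pool. A rough sketch (host and pool names are placeholders; use iperf3 or iperf, whichever your FreeNAS build includes):

# 1. raw network throughput, no storage involved
iperf3 -s                        # on the FreeNAS box
iperf3 -c freenas.local -P 4     # on the client; should be near line rate on 10GbE

# 2. local pool throughput while copying data on the box itself, no network involved
zpool iostat -v tank 5

If both look fine on their own, the bottleneck is probably the sharing protocol or client-side settings rather than the WarpDrive.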
 

dwright1542

Active Member
Dec 26, 2015
377
73
28
50
How are you guys cooling these suckers? I don't have any servers (C2100, DL380) that have even remotely enough air flow. Anything slot mounted you can recommend?
 

wardtj

Member
Jan 23, 2015
91
28
18
47
How are you guys cooling these suckers? I don't have any servers (C2100, DL380) that have even remotely enough air flow. Anything slot mounted you can recommend?
In my chassis, it's 3U, so I took an old 80mm fan and mounted it behind the card so it pushes air over it.

On my other LSI 9271s I have small 8K RPM 40mm fans mounted behind the cards. I just did some MacGyvering to attach them.

Anything LSI needs airflow...
 

TheBloke

Active Member
Feb 23, 2017
200
40
28
44
Brighton, UK
Hi all

Hope it's OK to resurrect this old thread - I can't find a newer one on this forum and I know this one contains several owners of these devices.

I am strongly considering getting a 1.2TB Warp drive, the same one featured in this thread. It's rather more than you guys paid, but I'm in the UK so this is to be expected! And it's still a pretty good price.

I wanted to verify that it's possible to manipulate the individual SSDs that make up the RAID-0 array? It's discussed in this thread that the device is basically just a RAID-0 of 6 x 200GB SSDs. But I didn't see confirmation of whether anyone has used it as individual SSDs, rather than a hardware RAID-0 array?

What I'm hoping to do is flash the card to LSI's latest IT firmware and then be able to individually access 6 x SSDs to use as I please in ZFS? I will probably still end up RAID-0 striping most of them, but at least one I would like to keep separate to use for ZIL synchronous write-caching (where low latency is of paramount importance.)

I see in the manual one oblique reference to IT firmware - specifically, when discussing the LEDs it mentions a pattern for "One or more drives failed (IT mode)". And given it's a standard LSI 2308, flashing to IT should be possible.

But I can't find 100% confirmation that this does work, and it would be ideal to know that before I decide to blow my budget on this thing!

Thanks in advance.
 

zeynel

Dream Large, Live Larger
Nov 4, 2015
505
116
43
48
Hi,
i Got "some" of those drives. but i will not "risk" them , by flashing P20 IT firmware.

if you consider about ZIL log, get some 100GB S3700 or ZeusRAMs.
 
  • Like
Reactions: TheBloke

TheBloke

Active Member
Feb 23, 2017
200
40
28
44
Brighton, UK
Hi,
i Got "some" of those drives. but i will not "risk" them , by flashing P20 IT firmware.

if you consider about ZIL log, get some 100GB S3700 or ZeusRAMs.
Thanks for the response. Can you tell me why you think it is a risk to flash IT firmware?

I know from this thread that people have flashed new LSI firmware to the cards, upgrading to v20. But IR version I guess.

Do you have any reason to think that changing to IT mode could brick the card or anything?

And yeah I'd like to get a dedicated SSD for ZIL, but an S3700 would be at least another £120 and I really, really, can't afford to spend any more money than I already have :)

I may not even bother with ZIL, depending on benchmark results. I am not sure I do enough sync writes to make it worthwhile - I don't plan to put much through NFS, and I am not going to be running any databases any time soon.

But I would still like the option of accessing the SSDs individually, if it's possible.
 

zeynel

Dream Large, Live Larger
Nov 4, 2015
505
116
43
48
On the Seagate site the most recent firmware you can get is "13", but you can use the v20 driver in VMware or Windows.

I think there is a risk of bricking the device.

Do you have a link to anyone who has flashed the device to the P20 firmware in IR mode?
 
  • Like
Reactions: TheBloke

TheBloke

Active Member
Feb 23, 2017
200
40
28
44
Brighton, UK
Yeah, and actually I misread the earlier post - I thought someone gave benchmarks with v20 firmware, but he was actually talking about the v20 driver.

OK, so maybe a firmware upgrade is not possible if it's using custom firmware, in which case standard LSI firmware most likely wouldn't even install; it would fail the card model check.

I will hopefully receive my card tomorrow so I will have a play about then.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Concur, I would not flash them w/ regular IT mode FW unless there is an explicit update for that card provided by the vendor. As @zeynel said, an S3700/ZeusRAM for ZIL is a great option, although you can probably get away w/ using one of the 200GB devices off the WarpDrive as a ZIL device.
 
  • Like
Reactions: TheBloke

TheBloke

Active Member
Feb 23, 2017
200
40
28
44
Brighton, UK
Concur, I would not flash them w/ regular IT mode FW unless there is an explicit update for that card provided by the vendor. As @zeynel said, an S3700/ZeusRAM for ZIL is a great option, although you can probably get away w/ using one of the 200GB devices off the WarpDrive as a ZIL device.
OK thanks. So you're confirming that I can definitely access each of the 6x 200GB devices from the OS?

Logically I certainly should be able to given it's an LSI.. I guess out of the box it comes as a RAID-0 but presumably if I use the LSI BIOS to delete that the OS will then just see the individual drives?

I've only briefly used LSI cards in IR mode, but when I first got my first LSI card it was in IR and my OS saw all the connected devices without problems. The Warp will be the same?

To be honest I'm really not sure I want to use a whole 200GB disk for ZIL, it feels like an awful waste given I'll probably max out at about 2GB ZIL usage, if even that.

However I do hope to use the Warp for multiple purposes - L2ARC cache for my main 27 drive pool, and also a separate SSD-only pool - and I'm currently concerned that if I get it as a single disk and then partition it, Solaris will disable write caching and my performance will tank.

Then again, I don't want to divide it as 2 x 3-drive pools or something like that, as I'd only get half performance on each and I don't expect either purpose to be active at all times. So I'd rather sub-divide a 6-drive stripe than have multiple stripes, assuming write caching isn't a problem.

Anyway it comes down to the fact that I need to do a whole bunch of benchmarks in varying configs once I get it. But in all scenarios, having access to the individual devices will definitely help, if for no other reason than that I prefer to give 6 x individual devices to ZFS than a single HW-RAID device :)
 

wardtj

Member
Jan 23, 2015
91
28
18
47
OK thanks. So you're confirming that I can definitely access each of the 6x 200GB devices from the OS?

Logically I certainly should be able to given it's an LSI.. I guess out of the box it comes as a RAID-0 but presumably if I use the LSI BIOS to delete that the OS will then just see the individual drives?

I've only briefly used LSI cards in IR mode, but when I first got my first LSI card it was in IR and my OS saw all the connected devices without problems. The Warp will be the same?

To be honest I'm really not sure I want to use a whole 200GB disk for ZIL, it feels like an awful waste given I'll probably max out at about 2GB ZIL usage, if even that.

However I do hope to use the Warp for multiple purposes - L2ARC cache for my main 27 drive pool, and also a separate SSD-only pool - and I'm currently concerned that if I get it as a single disk and then partition it, Solaris will disable write caching and my performance will tank.

Then again, I don't want to divide it as 2 x 3-drive pools or something like that, as I'd only get half performance on each and I don't expect either purpose to be active at all times. So I'd rather sub-divide a 6-drive stripe than have multiple stripes, assuming write caching isn't a problem.

Anyway it comes down to the fact that I need to do a whole bunch of benchmarks in varying configs once I get it. But in all scenarios, having access to the individual devices will definitely help, if for no other reason than that I prefer to give 6 x individual devices to ZFS than a single HW-RAID device :)
Why not just partition the 1.2TB? There's no performance penalty, and in fact you'll likely get better performance as the activity is spread over multiple drives. If the idea is to use this for ZFS, you could create the cache device and the ZIL/SLOG/whatever by just partitioning the device and adding only the partitions. I have done this before with ZFS, and it works. ZFS just wants a block device, and a partition will do. I have even used LVM volumes for testing.
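
As a rough sketch (pool and device names below are placeholders, and device naming differs between FreeBSD/FreeNAS and Solaris; partition the WarpDrive volume with your OS's tool first):

# small partition as a dedicated log (SLOG) device for sync writes
zpool add tank log /dev/da1p1

# larger partition as an L2ARC read cache
zpool add tank cache /dev/da1p2

# confirm the resulting layout
zpool status tank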
 
  • Like
Reactions: TheBloke