1.6TB Intel NVMe SSD (P3605) (Sun branded) ($280 USD)


Oddworld

Member
Jan 16, 2018
Anyone know if these can be flashed to the stock Intel P3600 firmware? If not, any limitations to be concerned about?

I have the Intel P3600 1.6TB and am curious whether these would be compatible, or whether firmware differences would prevent using them together in RAID.
 

james23

Active Member
Nov 18, 2014
T_Minus / Oddworld: what kind of RAID, and how are you using it with these NVMe drives? (I assume some software RAID or a NAS OS.) If so, how does the RAID performance look?

I'm going to get 1 or 2x. FYI, the seller currently has 42 left. Thanks for the post!

EDIT: Why can't I find any specs/info on the Intel P3605? I can only find the P3600 or P3610. Is the P3605 an OEM-only model, maybe? Thanks.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I haven't benchmarked these specific drives in years... I'm using them in ZFS, though.

When I test these I'll see if I can get 4x and 8x into a test system at once; I need to order some new AOCs to connect them all, so it'll be a couple of weeks. They'll be paired with Optane.
 

james23

Active Member
Nov 18, 2014
Thanks, T_Minus. Are you currently using the NVMe drives in RAID or not? (If not, how are you using them? Just curious.)

I'm thinking I'll use the 4x 2.5" NVMe drives I have as host-local ESXi datastores (one in each host).
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
In a ZFS pool of mirrors. Also in a workstation PC as the 'work' drive for video/photo work.
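
For anyone wanting to replicate that layout, here's a minimal sketch of a pool striped across two mirrored pairs (device names are hypothetical placeholders; in practice you'd use /dev/disk/by-id paths so the pool survives device renumbering):

Code:
# stripe across two mirrored pairs of NVMe drives (placeholder device names)
zpool create tank \
    mirror /dev/nvme0n1 /dev/nvme1n1 \
    mirror /dev/nvme2n1 /dev/nvme3n1
zpool status tank    # verify the vdev layout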
 

james23

Active Member
Nov 18, 2014
Thanks, weighted cube.

BTW, I got 2x from this eBay seller at $200 each (he accepted my first offer; I maybe could have gone lower).

T_Minus, now you've got me going: I'm guessing you have 4x NVMe drives in a pool of mirrors? Is that a NAS box, or a box you work on directly? (I ask because, if it's a NAS box, are you getting full 10G speeds/file transfers out of it to a client device?)
I know this is all a bit off topic, but I've been pretty heavy into FreeNAS/ZFS these past few months (and going forward).
Thanks
 

james23

Active Member
Nov 18, 2014
I got my 2x in; the SMART data is just about the same on both, about 2% used up (not bad for $200!). As expected, you can NOT update these drives with normal Intel firmware (so you probably can't update the firmware at all).

I'll update later with nvme-cli data; this is from smartctl:

Code:
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-29-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       INTEL SSDPE2ME016T4S
Serial Number:                      CVMD505500EL1P6LGN
Firmware Version:                   8DV1RA13
PCI Vendor ID:                      0x8086
PCI Vendor Subsystem ID:            0x108e
IEEE OUI Identifier:                0x5cd2e4
Controller ID:                      0
Number of Namespaces:               1
Namespace 1 Size/Capacity:          1,600,321,314,816 [1.60 TB]
Namespace 1 Formatted LBA Size:     512
Local Time is:                      Sat Feb 16 02:54:54 2019 UTC
Firmware Updates (0x02):            1 Slot
Optional Admin Commands (0x0006):   Format Frmw_DL
Optional NVM Commands (0x0006):     Wr_Unc DS_Mngmt
Maximum Data Transfer Size:         32 Pages

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +    25.00W       -        -    0  0  0  0        0       0

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         2
 1 -     512       8         2
 2 -     512      16         2
 3 -    4096       0         0
 4 -    4096       8         0
 5 -    4096      64         0
 6 -    4096     128         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02, NSID 0x1)
Critical Warning:                   0x00
Temperature:                        22 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    2%
Data Units Read:                    3,142,263 [1.60 TB]
Data Units Written:                 316,682 [162 GB]
Host Read Commands:                 44,451,731,018
Host Write Commands:                15,363,551,223
Controller Busy Time:               1,164
Power Cycles:                       81
Power On Hours:                     28,281
Unsafe Shutdowns:                   73
Media and Data Integrity Errors:    0
Error Information Log Entries:      0

Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged

Code:
root@ubuntu:/home/ubuntu/intl# isdct show -intelssd

- Intel SSD DC P3600 Series CVMD505500EL1P6LGN -

Bootloader : 8B1B012E
DevicePath : /dev/nvme0n1
DeviceStatus : Healthy
Firmware : 8DV1RA13
FirmwareUpdateAvailable : Please contact your Intel representative about firmware update for this drive.
Index : 0
ModelNumber : INTEL SSDPE2ME016T4S
ProductFamily : Intel SSD DC P3600 Series
SerialNumber : CVMD505500EL1P6LGN
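
For reference, a normal firmware update attempt with isdct would look roughly like the below; on these Sun-branded units it only returns the "contact your Intel representative" message above instead of flashing anything:

Code:
# standard Intel SSD Data Center Tool firmware update attempt
# (index 0 = the first drive listed by 'isdct show -intelssd')
isdct load -intelssd 0
# on a retail P3600 this applies the firmware bundled with the tool;
# on these Sun OEM SSDPE2ME016T4S units no update is offered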
edit: nvme-cli smart output for the same drive is below. It also appears to support 4K sector sizes (and other formats with metadata); I have not tested reformatting yet (see the sketch after the output), but here is the output:

Code:
# nvme-cli smart log (presumably via: nvme smart-log /dev/nvme0)
Smart Log for NVME device:nvme0 namespace-id:ffffffff
critical_warning                    : 0
temperature                         : 22 C
available_spare                     : 100%
available_spare_threshold           : 10%
percentage_used                     : 2%
data_units_read                     : 3142263
data_units_written                  : 316682
host_read_commands                  : 44451731018
host_write_commands                 : 15363551223
controller_busy_time                : 1164
power_cycles                        : 81
power_on_hours                      : 28281
unsafe_shutdowns                    : 73
media_errors                        : 0
num_err_log_entries                 : 0
Warning Temperature Time            : 0
Critical Composite Temperature Time : 0
Thermal Management T1 Trans Count   : 0
Thermal Management T2 Trans Count   : 0
Thermal Management T1 Total Time    : 0
Thermal Management T2 Total Time    : 0


and the supported sector sizes (nvme id-ns output):

NVME Identify Namespace 1:
nsze    : 0xba4d4ab0
ncap    : 0xba4d4ab0
nuse    : 0xba4d4ab0
nsfeat  : 0
nlbaf   : 6
flbas   : 0
mc      : 0x1
dpc     : 0x11
dps     : 0
nmic    : 0
rescap  : 0
fpi     : 0
nawun   : 0
nawupf  : 0
nacwu   : 0
nabsn   : 0
nabo    : 0
nabspf  : 0
noiob   : 0
nvmcap  : 0
nguid   : 00000000000000000000000000000000
eui64   : 0000000000000000
lbaf  0 : ms:0   lbads:9  rp:0x2 (in use)
lbaf  1 : ms:8   lbads:9  rp:0x2
lbaf  2 : ms:16  lbads:9  rp:0x2
lbaf  3 : ms:0   lbads:12 rp:0
lbaf  4 : ms:8   lbads:12 rp:0
lbaf  5 : ms:64  lbads:12 rp:0
lbaf  6 : ms:128 lbads:12 rp:0
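
Since lbaf 3 above is 4096-byte sectors with no metadata, reformatting the namespace to 4K should in principle be possible with nvme-cli. A rough sketch, untested on these drives, and destructive (it wipes the namespace):

Code:
# switch namespace 1 to LBA format 3 (4K data, 0 metadata) -- DESTROYS ALL DATA
nvme format /dev/nvme0n1 --lbaf=3
# confirm which format is now in use
nvme id-ns /dev/nvme0n1 | grep "in use"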
 

james23

Active Member
Nov 18, 2014
For anyone interested: very strong performance on these quick tests I often run on NVMe drives. Just as someone said in that other, older thread on these drive types, the Oracle specs do seem to be very conservative/under-spec'd! If I had fewer empty/unused 2.5" NVMe drives, I would definitely buy more.

Note: the drive/tests are NOT steady-stated, and most are run on top of an ext4 format/mount (only some go direct to /dev/nvme..., as you can see), all under Ubuntu 18 live. (I have found in the past that I often get somewhat better fio numbers on NVMe drives from a full Ubuntu 18 install to disk versus the live USB boot, but these are under 18 live.)

(I'm seeing performance only a bit behind a 960GB HGST NVMe, HUSMR7696BDP3Y1, which is good at this price too, as I just bought that 960GB HGST used on eBay at $189!)
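
For anyone who wants steady-state numbers instead, the usual approach is to precondition the drive first: fill it sequentially, then run random writes until throughput levels off, and only then measure. A rough sketch of that (destructive; it writes the raw device):

Code:
# sequential fill pass (two full-device passes is a common convention)
fio --name=fill --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=128k --iodepth=32 --numjobs=1 --loops=2
# random-write preconditioning until performance flattens out
fio --name=precond --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --time_based --runtime=1800 --norandommap --group_reporting

The numbers below were taken without any of that: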

Code:
root@ubuntu:~# fio --output=INTEL_fio_run_result.txt --name=myjob --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=300 --blocksize=4K --rw=randread --iodepth=32 --numjobs=4 --group_reporting --eta-newline=30s
Jobs: 4 (f=4): [r(4)][10.7%][r=2726MiB/s,w=0KiB/s][r=698k,w=0 IOPS][eta 04m:28s]
Jobs: 4 (f=4): [r(4)][21.0%][r=2722MiB/s,w=0KiB/s][r=697k,w=0 IOPS][eta 03m:57s]
Jobs: 4 (f=4): [r(4)][31.3%][r=2724MiB/s,w=0KiB/s][r=697k,w=0 IOPS][eta 03m:26s]
Jobs: 4 (f=4): [r(4)][41.7%][r=2723MiB/s,w=0KiB/s][r=697k,w=0 IOPS][eta 02m:55s]
Jobs: 4 (f=4): [r(4)][52.0%][r=2721MiB/s,w=0KiB/s][r=697k,w=0 IOPS][eta 02m:24s]
Jobs: 4 (f=4): [r(4)][62.3%][r=2719MiB/s,w=0KiB/s][r=696k,w=0 IOPS][eta 01m:53s]
Jobs: 4 (f=4): [r(4)][72.7%][r=2723MiB/s,w=0KiB/s][r=697k,w=0 IOPS][eta 01m:22s]
Jobs: 4 (f=4): [r(4)][83.0%][r=2726MiB/s,w=0KiB/s][r=698k,w=0 IOPS][eta 00m:51s]
Jobs: 4 (f=4): [r(4)][93.3%][r=2729MiB/s,w=0KiB/s][r=699k,w=0 IOPS][eta 00m:20s]
root@ubuntu:~# [r(4)][100.0%][r=2724MiB/s,w=0KiB/s][r=697k,w=0 IOPS][eta 00m:00s]




root@ubuntu:~# fio --name=myjob --filename=/dev/nvme0n1 --direct=1 --norandommap --randrepeat=0 --runtime=60 --blocksize=4K --rw=randwrite --iodepth=32 --numjobs=2 --group_reporting --eta-newline=10s --name=mybs --size=4g --ioengine=libaio
myjob: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=32
...
mybs: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.1
Starting 3 processes
mybs: Laying out IO file (1 file / 4096MiB)
Jobs: 2 (f=2): [w(2),_(1)][19.4%][r=0KiB/s,w=516MiB/s][r=0,w=132k IOPS][eta 00m:50s]
Jobs: 2 (f=2): [w(2),_(1)][37.1%][r=0KiB/s,w=476MiB/s][r=0,w=122k IOPS][eta 00m:39s]
Jobs: 2 (f=2): [w(2),_(1)][54.8%][r=0KiB/s,w=515MiB/s][r=0,w=132k IOPS][eta 00m:28s]
Jobs: 2 (f=2): [w(2),_(1)][72.6%][r=0KiB/s,w=496MiB/s][r=0,w=127k IOPS][eta 00m:17s]
Jobs: 2 (f=2): [w(2),_(1)][90.3%][r=0KiB/s,w=487MiB/s][r=0,w=125k IOPS][eta 00m:06s]
Jobs: 2 (f=2): [w(2),_(1)][96.8%][r=0KiB/s,w=517MiB/s][r=0,w=132k IOPS][eta 00m:02s]


root@ubuntu:/media/ubuntu/p3605# fio --filename=out.fiod --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=300 --blocksize=4K --rw=randread --iodepth=32 --numjobs=4 --group_reporting --eta-newline=30s --name=mybs --size=40g
mybs: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [r(4)][38.1%][r=1958MiB/s,w=0KiB/s][r=501k,w=0 IOPS][eta 00m:52s]
Jobs: 4 (f=4): [r(4)][75.9%][r=1924MiB/s,w=0KiB/s][r=492k,w=0 IOPS][eta 00m:20s]
Jobs: 4 (f=4): [r(4)][100.0%][r=1962MiB/s,w=0KiB/s][r=502k,w=0 IOPS][eta 00m:00s]
mybs: (groupid=0, jobs=4): err= 0: pid=21002: Sat Feb 16 10:30:21 2019
   read: IOPS=501k, BW=1958MiB/s (2053MB/s)(160GiB/83673msec)
    slat (nsec): min=1522, max=12448k, avg=5493.56, stdev=6607.76
    clat (nsec): min=493, max=20346k, avg=249011.17, stdev=117496.18
     lat (usec): min=2, max=20350, avg=254.64, stdev=117.83


root@ubuntu:/media/ubuntu/p3605# fio --filename=out.fiod --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=300 --blocksize=4K --rw=randwrite --iodepth=32 --numjobs=4 --group_reporting --eta-newline=30s --name=mybs --size=40g

mybs: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.1
Starting 4 processes
mybs: Laying out IO file (1 file / 40960MiB)
Jobs: 4 (f=4): [w(4)][11.0%][r=0KiB/s,w=230MiB/s][r=0,w=58.8k IOPS][eta 04m:28s]
Jobs: 4 (f=4): [w(4)][21.3%][r=0KiB/s,w=264MiB/s][r=0,w=67.5k IOPS][eta 03m:57s]
Jobs: 4 (f=4): [w(4)][31.6%][r=0KiB/s,w=283MiB/s][r=0,w=72.4k IOPS][eta 03m:26s]
Jobs: 4 (f=4): [w(4)][41.9%][r=0KiB/s,w=323MiB/s][r=0,w=82.8k IOPS][eta 02m:55s]
Jobs: 4 (f=4): [w(4)][52.2%][r=0KiB/s,w=364MiB/s][r=0,w=93.3k IOPS][eta 02m:24s]
Jobs: 4 (f=4): [w(4)][62.5%][r=0KiB/s,w=388MiB/s][r=0,w=99.3k IOPS][eta 01m:53s]
Jobs: 4 (f=4): [w(4)][72.8%][r=0KiB/s,w=468MiB/s][r=0,w=120k IOPS][eta 01m:22s]
Jobs: 4 (f=4): [w(4)][83.1%][r=0KiB/s,w=621MiB/s][r=0,w=159k IOPS][eta 00m:51s]
Jobs: 4 (f=4): [w(4)][93.4%][r=0KiB/s,w=702MiB/s][r=0,w=180k IOPS][eta 00m:20s]
Jobs: 4 (f=4): [w(4)][100.0%][r=0KiB/s,w=740MiB/s][r=0,w=189k IOPS][eta 00m:00s]
mybs: (groupid=0, jobs=4): err= 0: pid=20895: Sat Feb 16 10:23:07 2019
  write: IOPS=101k, BW=396MiB/s (415MB/s)(116GiB/300001msec)
    slat (usec): min=2, max=105688, avg=36.86, stdev=235.87
    clat (nsec): min=769, max=110568k, avg=1225235.14, stdev=1502863.56
     lat (usec): min=11, max=110601, avg=1262.31, stdev=1528.76
 


root@ubuntu:/media/ubuntu/p3605# fio --filename=out.fiod --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=300 --blocksize=128K --rw=randwrite --iodepth=32 --numjobs=4 --group_reporting --eta-newline=30s --name=mybs --size=40g
mybs: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
...
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [w(4)][30.8%][r=0KiB/s,w=1594MiB/s][r=0,w=12.7k IOPS][eta 01m:12s]
Jobs: 4 (f=4): [w(4)][61.2%][r=0KiB/s,w=1585MiB/s][r=0,w=12.7k IOPS][eta 00m:40s]
Jobs: 4 (f=4): [w(4)][91.3%][r=0KiB/s,w=1565MiB/s][r=0,w=12.5k IOPS][eta 00m:09s]
Jobs: 4 (f=4): [w(4)][100.0%][r=0KiB/s,w=1574MiB/s][r=0,w=12.6k IOPS][eta 00m:00s]
mybs: (groupid=0, jobs=4): err= 0: pid=21169: Sat Feb 16 10:43:07 2019
  write: IOPS=12.7k, BW=1586MiB/s (1663MB/s)(160GiB/103306msec)
    slat (usec): min=9, max=35945, avg=40.40, stdev=140.87
    clat (usec): min=36, max=64328, avg=10040.61, stdev=5819.81
     lat (usec): min=64, max=64387, avg=10081.51, stdev=5820.22


root@ubuntu:/media/ubuntu/p3605# fio --filename=out.fiod --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=300 --blocksize=4K --rw=randrw --rwmixread=75 --iodepth=32 --numjobs=4 --group_reporting --eta-newline=30s --name=mybs --size=40g
mybs: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [m(4)][20.1%][r=776MiB/s,w=259MiB/s][r=199k,w=66.2k IOPS][eta 02m:07s]
Jobs: 4 (f=4): [m(4)][39.9%][r=785MiB/s,w=260MiB/s][r=201k,w=66.5k IOPS][eta 01m:35s]
Jobs: 4 (f=4): [m(4)][59.9%][r=803MiB/s,w=270MiB/s][r=205k,w=69.1k IOPS][eta 01m:03s]
Jobs: 4 (f=4): [m(4)][80.1%][r=808MiB/s,w=270MiB/s][r=207k,w=69.1k IOPS][eta 00m:31s]
Jobs: 4 (f=4): [m(4)][99.4%][r=812MiB/s,w=273MiB/s][r=208k,w=69.9k IOPS][eta 00m:01s]
mybs: (groupid=0, jobs=4): err= 0: pid=21072: Sat Feb 16 10:37:08 2019
   read: IOPS=202k, BW=790MiB/s (828MB/s)(120GiB/155604msec)
    slat (nsec): min=1617, max=25132k, avg=11651.19, stdev=19195.76
    clat (nsec): min=484, max=31301k, avg=524183.11, stdev=545004.32




root@ubuntu:/media/ubuntu/p3605# fio --filename=out.fiod --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=300 --blocksize=128K --rw=randrw --rwmixread=75 --iodepth=32 --numjobs=4 --group_reporting --eta-newline=30s --name=mybs --size=40g
mybs: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
...
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [m(4)][32.7%][r=1316MiB/s,w=418MiB/s][r=10.5k,w=3343 IOPS][eta 01m:06s]
Jobs: 4 (f=4): [m(4)][65.6%][r=1318MiB/s,w=436MiB/s][r=10.5k,w=3489 IOPS][eta 00m:33s]
Jobs: 4 (f=4): [m(4)][98.9%][r=1358MiB/s,w=438MiB/s][r=10.9k,w=3503 IOPS][eta 00m:01s]
Jobs: 1 (f=0): [f(1),_(3)][100.0%][r=1334MiB/s,w=447MiB/s][r=10.7k,w=3575 IOPS][eta 00m:00s]
mybs: (groupid=0, jobs=4): err= 0: pid=21131: Sat Feb 16 10:40:00 2019
   read: IOPS=10.4k, BW=1296MiB/s (1359MB/s)(120GiB/94858msec)
    slat (usec): min=8, max=27157, avg=37.16, stdev=117.13
    clat (usec): min=182, max=76508, avg=9748.98, stdev=6240.33
     lat (usec): min=231, max=76526, avg=9786.65, stdev=6240.66



root@ubuntu:/media/ubuntu/p3605# fio --filename=out.fiod --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=300 --blocksize=128K --rw=randread --iodepth=32 --numjobs=4 --group_reporting --eta-newline=30s --name=mybs --size=80g
mybs: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
...
fio-3.1
Starting 4 processes
mybs: Laying out IO file (1 file / 81920MiB)
Jobs: 4 (f=4): [r(4)][19.3%][r=2567MiB/s,w=0KiB/s][r=20.5k,w=0 IOPS][eta 02m:14s]
Jobs: 4 (f=4): [r(4)][43.4%][r=2547MiB/s,w=0KiB/s][r=20.4k,w=0 IOPS][eta 01m:22s]
Jobs: 4 (f=4): [r(4)][67.6%][r=2551MiB/s,w=0KiB/s][r=20.4k,w=0 IOPS][eta 00m:45s]
Jobs: 4 (f=4): [r(4)][91.9%][r=2555MiB/s,w=0KiB/s][r=20.4k,w=0 IOPS][eta 00m:11s]
Jobs: 4 (f=4): [r(4)][100.0%][r=2554MiB/s,w=0KiB/s][r=20.4k,w=0 IOPS][eta 00m:00s]
mybs: (groupid=0, jobs=4): err= 0: pid=21251: Sat Feb 16 10:51:05 2019
   read: IOPS=19.4k, BW=2423MiB/s (2541MB/s)(320GiB/135240msec)
    slat (usec): min=7, max=3253, avg=26.96, stdev= 8.04
    clat (usec): min=219, max=22917, avg=6236.45, stdev=3332.59


root@ubuntu:/media/ubuntu/p3605# fio --filename=out.fiod --ioengine=libaio --direct=1 --norandommap --randrepeat=0 --runtime=300 --blocksize=128K --rw=randwrite --iodepth=32 --numjobs=4 --group_reporting --eta-newline=30s --name=mybs --size=80g
mybs: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
...
fio-3.1
Starting 4 processes
Jobs: 4 (f=4): [w(4)][15.5%][r=0KiB/s,w=1624MiB/s][r=0,w=12.0k IOPS][eta 02m:54s]
Jobs: 4 (f=4): [w(4)][30.6%][r=0KiB/s,w=1599MiB/s][r=0,w=12.8k IOPS][eta 02m:23s]
Jobs: 4 (f=4): [w(4)][45.6%][r=0KiB/s,w=1580MiB/s][r=0,w=12.6k IOPS][eta 01m:52s]
Jobs: 4 (f=4): [w(4)][60.7%][r=0KiB/s,w=1601MiB/s][r=0,w=12.8k IOPS][eta 01m:21s]
Jobs: 4 (f=4): [w(4)][75.7%][r=0KiB/s,w=1600MiB/s][r=0,w=12.8k IOPS][eta 00m:50s]
Jobs: 4 (f=4): [w(4)][90.8%][r=0KiB/s,w=1596MiB/s][r=0,w=12.8k IOPS][eta 00m:19s]
Jobs: 4 (f=4): [w(4)][100.0%][r=0KiB/s,w=1594MiB/s][r=0,w=12.8k IOPS][eta 00m:00s]
mybs: (groupid=0, jobs=4): err= 0: pid=21263: Sat Feb 16 10:55:26 2019
  write: IOPS=12.7k, BW=1587MiB/s (1664MB/s)(320GiB/206495msec)
    slat (usec): min=9, max=14945, avg=36.60, stdev=63.32
    clat (usec): min=2, max=37195, avg=10040.04, stdev=5912.15
     lat (usec): min=61, max=37222, avg=10077.11, stdev=5911.92
 

james23

Active Member
Nov 18, 2014
Good. I bought 23x from the seller at $135 each.
So wow, we're now at ~$0.084/GB ($135 / 1,600GB) for enterprise NVMe (and pretty high write-endurance NAND). Amazing price! That's going to be a killer setup.

If you don't mind, how do you plan on using them? SAN/NAS (which OS / disk layout?), or are they going to be used individually?

Thanks
 

metril

New Member
Oct 1, 2015
They're going into a Supermicro NVMe storage server to serve me some fast storage for fun and games. I haven't decided on the OS or the storage layout yet.
 

mimino

Active Member
Nov 2, 2018
Looks like a group buy would be a good solution here. I tried to get 4 at $250 each and got turned down.
 

Marsh

Moderator
May 12, 2013
Is there a pattern?

Offers accepted:
2 x $200
4 x $200
23 x $135

Offers rejected:
8 x $135
4 x $250

Counter-offer:
8 x $225