Samsung PM1733 NVMe Drive - Very Slow

RCS

New Member
Jul 27, 2018
17
0
1
I purchased a Samsung PM1733 7.68TB NVMe drive for a build I'm working on.

This morning I installed it and quickly threw Ubuntu 18.04 onto it to do some testing/benchmarking for the whole system.

Speeds are WELL below the stated 7000MB/s read that Samsung advertises.

hdparm returns the following:

Code:
/dev/nvme0n1:
Timing buffered disk reads: 4284 MB in 3.00 seconds = 1427.57 MB/sec
I found this thread about the PM1735 and the poor performance. Speeds seem similar to what I'm getting.

Is there anything I can do or should I not even bother wasting my time with this drive and just get something from Intel?

I had a similar issue with another Samsung enterprise drive I bought last year, but thought it was just a bad drive. Regretting buying this one now :(

Edit: Oddly enough, the write speed seems to be normal at around 4.4GB/s (actually it's higher than the rated 3.8GB/s!)

Screenshot (79).png
 

acquacow

Well-Known Member
Feb 15, 2017
589
300
63
39
I'd recommend benchmarking multi-threaded workloads on it using the "fio" benchmark tool vs hdparm/etc.
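As a sketch of what a multi-threaded fio run could look like (job name, device path, and sizes here are just examples, not from the thread; reading a raw device needs root, and this job is read-only so it won't touch data), save something like this as `seqread.fio` and run `fio seqread.fio`:

```ini
; Hypothetical fio job file: 8 parallel sequential readers, direct I/O.
[global]
filename=/dev/nvme0n1
direct=1
ioengine=libaio
runtime=30
time_based
group_reporting

[seqread]
rw=read
bs=128k
iodepth=32
numjobs=8
```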
 

UhClem

Member
Jun 26, 2012
54
22
8
NH, USA
Or, for a "quickie" proof of concept on the multi-threaded workload, do:
Code:
for i in 0 50 100 150 200 250
do
hdparm -t --offset $i /dev/nvme0n1 &
done
(and add up the speeds).
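If you don't want to add the numbers up by hand, the loop's output can be piped through awk (a sketch; assumes the stock hdparm output format, root access, and the same example device path):

```shell
# Run the six offset reads in parallel and sum the reported MB/sec figures.
# Hypothetical helper; awk picks the number out of hdparm's "= NNN.NN MB/sec" lines.
(
    for i in 0 50 100 150 200 250; do
        hdparm -t --offset "$i" /dev/nvme0n1 &
    done
    wait
) | awk '/MB\/sec/ { total += $(NF-1) } END { printf "aggregate: %.2f MB/sec\n", total }'
```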
 

RCS

New Member
Jul 27, 2018
17
0
1
Or, for a "quickie" proof of concept on the multi-threaded workload, do:
Code:
for i in 0 50 100 150 200 250
do
hdparm -t --offset $i /dev/nvme0n1 &
done
(and add up the speeds).
Thanks for that. Results are around the same.

Code:
1024 MB in  3.00 seconds = 341.26 MB/sec
1024 MB in  3.00 seconds = 341.12 MB/sec
1022 MB in  3.00 seconds = 340.45 MB/sec
1026 MB in  3.00 seconds = 341.73 MB/sec
1024 MB in  3.00 seconds = 341.06 MB/sec
So 1704MB/s. Still underperforming by a huge margin.

To complicate things even further, when I use the GUI benchmark test and increase the sample size from the default 10MB to 100MB or 1000MB, I get a massive increase in read speed. With 1000MB sample it's pretty much exactly where it should be.

100MB:

100mb.png


1000MB

1000mb.png

So, the drive is capable of achieving its rated speeds... but only on specific benchmarks? The 1400MB/s with hdparm is still concerning.
 

UhClem

Member
Jun 26, 2012
54
22
8
NH, USA
Try adding the --direct option to both the original/single-thread and for-loop/multi-thread hdparm tests.

[Going through system buffers (when testing a device) is often counter-productive.]
 

RCS

New Member
Jul 27, 2018
17
0
1
Try adding the --direct option to both the original/single-thread and for-loop/multi-thread hdparm tests.

[Going through system buffers (when testing a device) is often counter-productive.]
Timing O_DIRECT disk reads: 4614 MB in 3.00 seconds = 1537.71 MB/sec

Possibly a slight bump in performance, but around the same.
 

RCS

New Member
Jul 27, 2018
17
0
1
So I did some benchmarking with "fio" using the PM1733 drive as well as another 970 EVO I had laying around. Here are the results:

As info, this is an EPYC 7002 system, so the PM1733 is running at PCIe 4.0.

speed results.png

The 970 EVO does very well with 64K blocks, and (surprisingly to me) poorly with 4K blocks, especially with only one job. It's able to max out both its read and write perf benchmarks with 64K blocks and 16 jobs. Also interesting is the massive perf difference between read and randread results.

The PM1733 is the opposite, doing very well (still below perf benchmarks from Samsung) with 4K blocks, and poorly with 64K blocks. The best read result was 4126MB/s with 4K blocks and 8 jobs, still pretty far from the stated 7000MB/s max. Write speeds seem normal, as they did in my other tests.

I'm not entirely sure what to make of this yet, but thought I would post my findings anyway.
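One way to make runs like this reproducible is to generate the whole block-size × job-count matrix in a loop. A sketch (shown as a dry run that just prints the commands; drop the echo to actually execute, and note the device path and parameter ranges are just examples):

```shell
# Print one fio sequential-read command per (block size, job count) combination.
# Dry run only; remove "echo" to execute (requires fio and root).
for bs in 4k 64k 128k 1M; do
    for jobs in 1 4 8 16; do
        echo fio --name="read-bs${bs}-j${jobs}" --filename=/dev/nvme0n1 \
            --rw=read --bs="$bs" --direct=1 --ioengine=libaio \
            --numjobs="$jobs" --runtime=20 --time_based --group_reporting
    done
done
```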
 

bilbo1337

New Member
Sep 18, 2020
4
3
3
Maybe it's something with power settings? I know a lot of enterprise drives can have their power states configured.
 

acquacow

Well-Known Member
Feb 15, 2017
589
300
63
39
Oh, yeah... make sure C-states and P-states are disabled in your BIOS... those will have some effect on peak performance.
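The C-state/P-state options themselves live in the BIOS, but from Linux you can at least sanity-check the CPU frequency scaling governor through sysfs (standard paths; the performance-governor line is left commented out since it needs root and an available governor):

```shell
# Show the active scaling governor per CPU, if cpufreq is exposed.
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null | sort | uniq -c

# Pin all CPUs to the "performance" governor (uncomment; needs root).
# echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```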
 

RCS

New Member
Jul 27, 2018
17
0
1
I messed around with block size and number of jobs more afterwards, and got totally different numbers.

(I didn't finish all the randread and write tests as I was less interested in those)
Screenshot (104).png

As you can see, I started getting over 7GB/s (slightly over the rated perf) on jobs with larger block sizes. With 1 or 2 jobs, the drive would throttle down to 2GB/s after 15-20 seconds; this didn't happen with 4+ jobs, where the read speed was sustained for the entire test.

So the drive is definitely capable of the rated speeds, but doesn't always perform the same. For example, with 64K blocks and 8 jobs I only got 2.2GB/s in the initial run, but the second run of the same test got 7.4GB/s... The only thing that changed was a reboot of the system.

I'll check the BIOS out again. I don't think I've ever seen P-states or C-states listed anywhere.