Samsung 983 DCT M.2 Read Speed


aidenpryde

New Member
Apr 30, 2020
Hello,

I purchased four Samsung 983 DCT M.2 NVMe drives, and their performance leaves much to be desired. Samsung rates them at 3000MB/s read and 1400MB/s write, but I don't reach those speeds.

I'm mostly concerned with read speed, as these are meant to hold my VHDs on my hypervisor.

fio on Linux gives me 1400MB/s, as do the Ubuntu disk benchmark utility and the Unraid DiskSpeed docker.

CrystalDiskMark in Windows gives me a better result at 2200MB/s.

beforefirmareupdate.png

I downloaded the DC Toolkit for Windows and looked around at some of the other threads on these forums, but I'm having trouble getting the toolkit to do anything with the drives.

I saw a suggestion on these forums to do a factory erase, and I can't even do that.

Screenshot 2021-01-21 191740.png

When I run "--disk 0 --erase" it asks me to confirm the disk erase and then says it can't wipe the OS disk (which is expected), so I know the command syntax is correct, but when I run it against "0:c" it just loops back to the usage text.

I'd also like to look into upgrading the firmware if erasing doesn't help, but I'm not entirely sure how that works; the instructions are a bit unclear on how to download the firmware for the flash.

P.S. I also read that the DRAM cache might be disabled. I'm not sure how to turn that back on.
 

amp88

Member
Jul 9, 2020
The test conditions for the officially rated specs from the product page state:

1) Random performance measured using FIO 2.7 in CentOS6.6 (kernel 3.14.29) with 4KB (4,096 bytes) of data transfer size in queue depth 32 by 4 workers and Sequential performance with 128KB (131,072 bytes) of data transfer size in queue depth 32 by 1 worker.
2) 1 MB/sec = 1,000,000 bytes/sec was used in sequential performance.
(my emphasis)

Your CrystalDiskMark benchmark screenshot appears to show sequential performance with a transfer size of 1MB, queue depth of 8, and 1 thread/worker (SEQ1M Q8T1). Try changing the parameters of the benchmark in CrystalDiskMark's settings to match those above and post your results to see if they're any closer to the rated figures.
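
If you'd rather reproduce the rated test conditions under Linux with fio directly, something along these lines should be close (just a sketch; I'm assuming the drive shows up as /dev/nvme0n1, and both commands are read-only so they won't touch your data):

fio --name=seqread --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=read --bs=128k --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting
fio --name=randread --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

The first matches the sequential spec (128KB, QD32, 1 worker) and the second the random spec (4KB, QD32, 4 workers).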

Also, FWIW, if you're concerned about performance for virtual machine storage you probably shouldn't focus on sequential performance; random performance would typically play a more significant role in determining your VM I/O performance, unless you've profiled your VMs' I/O characteristics and determined you're routinely limited by sequential performance.
 

aidenpryde

New Member
Apr 30, 2020
This is what I get now. I think I configured it correctly.

settingschanged.png
Not much improvement with sequential, but a significant improvement with random.

I'm not sure what counts as a good speed for random I/O, since I'm coming from the consumer world, where sequential numbers are what get touted.
 

aidenpryde

New Member
Apr 30, 2020
See above, meant to reply.
 

amp88

Member
Jul 9, 2020
I'm not sure what counts as a good speed for random I/O, since I'm coming from the consumer world, where sequential numbers are what get touted.
It's impossible to really know until you test your VMs and see how they perform. If your storage isn't fast enough to keep up, you'll see symptoms like latency spikes in your guests (which could manifest as slow loading, hangs, dips in performance, etc.). ESXi/Proxmox/XCP-ng etc. will give you some way to monitor or profile your storage, either via built-in means or by exporting usage statistics to an external monitoring/metrics server. The user manual or wiki for whatever you use might be a better resource for that kind of performance profiling procedure, though.
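
As a quick check from the host side (assuming a Linux-based hypervisor with sysstat installed), extended iostat output will show per-device latency while the VMs are running:

iostat -xm 5 /dev/nvme0n1

The r_await/w_await columns are average read/write latency in milliseconds; sustained spikes there while your guests feel sluggish would point at the storage.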
 

Aardvark0

New Member
Jun 27, 2018
I think --erase is for SAS, and I don't think the Windows toolkit drivers support NVMe erase. Under Linux it should be: DCToolkitD --disk 0:c --nvme-format-namespace --user-data-erase
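
If the toolkit still won't cooperate, the generic nvme-cli route should do the same user-data erase (that's my assumption rather than anything from the Samsung docs, so double-check the device path and make sure it's not your OS disk first):

nvme format /dev/nvme0n1 --ses=1

--ses=1 requests a user data erase as part of the NVMe format.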
 

aidenpryde

New Member
Apr 30, 2020
I think --erase is for SAS, and I don't think the Windows toolkit drivers support NVMe erase. Under Linux it should be: DCToolkitD --disk 0:c --nvme-format-namespace --user-data-erase
I'll have to give that a shot.

Do you know much about firmware updates?

It's not clear to me how it works.