Samsung PM983a M.2 22110 SSD NVMe PCIe 3.0x4 1.88TB - open box - $125 OBO + free ship


jasonsansone

Member
Sep 15, 2020
A little bit of feedback on these drives. I tested with and without over-provisioning, and with both 512-byte and 4096-byte LBA sizes.

No Over Provisioning:
4K_IO1_1T: write: IOPS=45.9k, BW=179MiB/s (188MB/s)(10.5GiB/60001msec)
4K_IO256_1T: write: IOPS=47.0k, BW=184MiB/s (193MB/s)(10.8GiB/60001msec)
4K_IO1_4T: write: IOPS=138k, BW=541MiB/s (567MB/s)(31.7GiB/60002msec)
4K_IO1_16T: write: IOPS=179k, BW=699MiB/s (733MB/s)(40.9GiB/60002msec)
4M_IO256_1T: write: IOPS=329, BW=1319MiB/s (1383MB/s)(77.3GiB/60003msec)


Over Provisioning, 4096 LBA:
4K_IO1_1T: write: IOPS=48.5k, BW=190MiB/s (199MB/s)(11.1GiB/60001msec)
4K_IO256_1T: write: IOPS=48.6k, BW=190MiB/s (199MB/s)(11.1GiB/60001msec)
4K_IO1_4T: write: IOPS=187k, BW=732MiB/s (767MB/s)(42.9GiB/60001msec)
4K_IO1_16T: write: IOPS=331k, BW=1291MiB/s (1354MB/s)(75.7GiB/60002msec)
4K_IO1_32T: write: IOPS=339k, BW=1323MiB/s (1388MB/s)(77.5GiB/60002msec)
4M_IO256_1T: write: IOPS=325, BW=1302MiB/s (1365MB/s)(76.3GiB/60002msec)


Over Provisioning, 512 LBA:
4K_IO1_1T: write: IOPS=49.1k, BW=192MiB/s (201MB/s)(11.2GiB/60001msec)
4K_IO256_1T: write: IOPS=50.3k, BW=196MiB/s (206MB/s)(11.5GiB/60001msec)
4K_IO1_4T: write: IOPS=169k, BW=661MiB/s (694MB/s)(38.8GiB/60001msec)
4K_IO1_16T: write: IOPS=241k, BW=943MiB/s (988MB/s)(55.2GiB/60002msec)
4K_IO1_32T: write: IOPS=341k, BW=1331MiB/s (1396MB/s)(77.0GiB/60002msec)
4M_IO256_1T: write: IOPS=326, BW=1304MiB/s (1367MB/s)(76.4GiB/60003msec)
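
For anyone wanting to run the same style of test, here is a minimal sketch of what the jobs above probably look like in fio terms. The device path is a placeholder, and the exact options behind these results are a guess (the labels suggest block size / iodepth / thread count):

# guess at "4K_IO1_4T": 4K random writes, iodepth 1, 4 jobs, 60s (destructive: writes to the raw device)
fio --name=4K_IO1_4T --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=4 --group_reporting \
    --time_based --runtime=60

# guess at "4M_IO256_1T": 4M writes, iodepth 256, single job
fio --name=4M_IO256_1T --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=4m --iodepth=256 --numjobs=1 \
    --time_based --runtime=60

# switching between 512 and 4096 byte LBAs is done with nvme-cli (also destructive);
# check which format index maps to 4096 with "nvme id-ns /dev/nvme0n1" first
nvme format /dev/nvme0n1 --lbaf=1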
 

zac1

Well-Known Member
Oct 1, 2022
I'm not sure what happened here...
[Attached screenshot: iKVM_capture.jpg]

Retrying with diskspd
 

zac1

Well-Known Member
Oct 1, 2022
Thanks for the detailed feedback. I have the same card, 4 x M.2 (x4 each) in a PCIe x16 slot.

Out of interest, have you tried running concurrent benchmarks on each drive? I found some odd results where the card drops some of the NVMe drives during write loads on more than a single drive.

My tests would only yield 1100 MB/s (write) on 1 of the 4 drives, with the others at less than 200 MB/s.

I checked that the slot was rated at x16, and since I was seeing all 4 drives inside the OS, bifurcation was working.
This is reproducible on my end. Weird... thanks for the heads up.
[Attached screenshot: iKVM_capture (1).jpg]
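
A minimal sketch of the kind of concurrent test that triggers this, assuming the four drives enumerate as /dev/nvme0n1 through /dev/nvme3n1 (not necessarily the exact parameters shown in the screenshot):

# one 60s write job per drive, all started together; per-drive results are
# reported separately so a drive that stops writing stands out
# (destructive: writes directly to the block devices)
fio --ioengine=libaio --direct=1 --rw=write --bs=1m --iodepth=32 \
    --time_based --runtime=60 \
    --name=nvme0 --filename=/dev/nvme0n1 \
    --name=nvme1 --filename=/dev/nvme1n1 \
    --name=nvme2 --filename=/dev/nvme2n1 \
    --name=nvme3 --filename=/dev/nvme3n1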
 

CyklonDX

Well-Known Member
Nov 8, 2022
How were the temps for the PLX controller?
(and the temps of the NVMe drives?)

Potentially got too hot.
 

zac1

Well-Known Member
Oct 1, 2022
How were the temps for the PLX controller?
(and the temps of the NVMe drives?)

Potentially got too hot.
Drive temps seemed fine (<80C). I don't think there is a PLX controller, as the motherboard supports bifurcation natively. All the drives work fine individually; it's only when all are active that a couple seem unable to write.
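
For reference, the drive temperatures can be read with nvme-cli; a quick sketch, assuming the controllers show up as /dev/nvme0 through /dev/nvme3:

# composite and per-sensor temperatures from each drive's SMART/health log
for d in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
    echo "== $d =="
    nvme smart-log "$d" | grep -i temperature
done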
 

zac1

Well-Known Member
Oct 1, 2022
I suspect your card is not supplying sufficient amps (@3.3v) for 4 of those hungry M.2s.
I thought these cards were just simple traces, so would that imply the motherboard is the issue? That the x16 PCIe slot is unable to support the 4 drives?
 

mrpasc

Active Member
Jan 8, 2022
Munich, Germany
Those cards are simple traces. I used the same with four PM983 0.96TB drives on a Supermicro X10SDV and was able to write to all four of them at the expected speed.
PCIe x16 (electrical) should deliver 75W according to the PCIe specs.
I was even able to software-RAID0 them and got around 5800MB/s.
But of course it might be that your card isn't 100% working; the supercaps on mine didn't look as "high end" as they do on brand-name cards.
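
Roughly what that setup looks like; the device names and the throughput check are assumptions, not necessarily what was actually used:

# stripe the four drives into one md device (destroys existing data on them)
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# large sequential write to see the combined throughput
fio --name=raid0-seq --filename=/dev/md0 --ioengine=libaio --direct=1 \
    --rw=write --bs=1m --iodepth=32 --time_based --runtime=60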
 

zac1

Well-Known Member
Oct 1, 2022
Those cards are simple traces. I used the same with four PM983 0.96TB drives on a Supermicro X10SDV and was able to write to all four of them at the expected speed.
PCIe x16 (electrical) should deliver 75W according to the PCIe specs.
I was even able to software-RAID0 them and got around 5800MB/s.
But of course it might be that your card isn't 100% working; the supercaps on mine didn't look as "high end" as they do on brand-name cards.
Thanks, I'm also using PM983(a) drives with an X10SDV board. I opened a return request with the seller of the quad M.2 card.

What were you using to do your benchmarks?
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Those cards are simple traces ...
But your statement is an oversimplification :)...
yes, the actual data lanes are just traces, but the power (and ReferenceClock) are NOT.

A PCIe x16 slot is spec'd for 75W (total), BUT only 10W of that is at 3.3V, which is clearly insufficient for (almost) any 4 M.2 NVMe sticks. Therefore, all of these quad cards likely use buck regulator(s) on the (otherwise unused) 12V slot supply to get sufficient 3.3V power/current.

It is clear (to me) from those CDM results that the card is not supplying sufficient amps (at 3.3V) for 4 of those hungry M.2s.
... used the same with four PM983 0.96TB drives on a Supermicro X10SDV and was able to write to all four of them at the expected speed.
By "same", do you mean that you also used the same (not just similar) card [Link]?
If so, then zac1's (one) card IS defective (bad regulator). Otherwise the linked card is under-designed (in terms of the amperage/power spec of its regulator(s)) for 4x hungry (but still within-M.2-spec) drives.
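
Rough numbers behind that, taking the ~10.6W write figure for the PM983a quoted further down the thread (the per-drive figure is the only assumption here):

4 drives x ~10.6W (write)    = ~42W needed at 3.3V
slot 3.3V rail (3A x 3.3V)   = ~10W available
shortfall                    = ~32W, which has to come from the 12V rail via buck regulation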
 

zac1

Well-Known Member
Oct 1, 2022
But your statement is an oversimplification :)...
yes, the actual data lanes are just traces, but the power (and ReferenceClock) are NOT.

A PCIe x16 slot is spec'd for 75W (total), BUT only 10W of that is at 3.3V, which is clearly insufficient for (almost) any 4 M.2 NVMe sticks. Therefore, all of these quad cards likely use buck regulator(s) on the (otherwise unused) 12V slot supply to get sufficient 3.3V power/current.
Thank you for taking the time to enlighten me. It makes sense now that power would be the likely issue, since the data lanes are "simple traces." :oops:
 

mrpasc

Active Member
Jan 8, 2022
Munich, Germany
By "same", do you mean that you also used the same (not just similar) card [Link]?
If so, then zac1's (one) card IS defective (bad regulator). Else the linked card is under-designed (ala the amperage/power spec of its regulator(s)) for 4x hungry (but still within M.2 spec) drives.
By "same" i meant "same model" or "similar type of card by same manufacturer". English isn't my mother tongue.....
 

VMman

Active Member
Jun 26, 2013
I've tested the same card as @zac1 with the same result. I'll keep the card and use it for 2x M.2, which seems to work fine.
After searching, I found the Gigabyte model interesting; I managed to snag one before they sold out on Newegg, so I will test when it arrives.

GIGABYTE CMT4034 PCI-Express 3.0 x16 PCI-Express 4 x M.2 PCIe x16 Card
 

zac1

Well-Known Member
Oct 1, 2022
I've tested the same card as @zac1 with the same result. I'll keep the card and use it for 2x M.2, which seems to work fine.
After searching, I found the Gigabyte model interesting; I managed to snag one before they sold out on Newegg, so I will test when it arrives.

GIGABYTE CMT4034 PCI-Express 3.0 x16 PCI-Express 4 x M.2 PCIe x16 Card
I got this same Gigabyte card. It seemed like great quality, but the problem was that Newegg did not include the brackets or the black aluminum heat spreader...
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
I've tested the same card as @zac1 with the same result ...
Thanks for the confirmation; I guess this card IS "under-designed" ...
But after a bit of research, I do not fault, or criticize, the card. I feel that the PM983a (and possibly the PM983) is uniquely voracious in sucking power. The 1.92TB & 3.84TB models are spec'd for 8.7W/10.6W (r/w). I could not find ANY other M.2 that uses >= 9W.

This card uses 2 equivalent power circuits, one for each pair of M.2 sockets. Note that since simultaneous reads are OK, each circuit supplies at least 17.4W, but less than 21.2W. It is very possible that one read + one write (19.3W) would be OK. [I have a different quad card, and it uses (2x) 6A buck regulators.] My guess is that your card's circuits also use a buck regulator rated for 6A, yielding a max power of 19.8W.

If this is correct, you could "pair" each 983a with ANY other M.2 and be OK; you don't have to waste the sockets/lanes.

Good luck with the Gigabyte card ... but I wouldn't be surprised if it also uses 6A regulators (from the picture, it also looks like a 2-circuit card). Before you install it, could you identify the part number of the buck regulators? They are the 2 black rectangular packages positioned between the two silver rectangular ones, at the top (edge connector at the bottom). It will require either a loupe/magnifier or a high-res camera image [zoomed in]. Thanks.
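
Putting numbers on the per-pair circuit, using the 8.7W/10.6W figures above and the guessed 6A regulator:

2 x 8.7W  (read + read)     = 17.4W  -> OK
8.7W + 10.6W (read + write) = 19.3W  -> probably OK
2 x 10.6W (write + write)   = 21.2W  -> over the limit
6A x 3.3V                   = 19.8W  max per circuit, if it really is a 6A buck regulator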

PM983a M.2: ["gold star" if you can spot the error]
[Attached image: pm983a-power.jpg]
The "Notes:" were copied from the U.2 version of the doc.

------------
Gigabyte CMT4034:
[Attached image: Gig-cmt4034-x2.jpg]
 

aclysma

New Member
Nov 10, 2022
I haven't read through this thread, but I recently picked up some PM983 (not "a") drives on eBay. I was getting NVMe timeouts on a couple of them, and erratic latencies compared to two other drives that didn't show the same problems. All of the drives were from the same seller, with 0 power-on hours when I received them. At first I was concerned, but writing over the misbehaving drives once with fio resolved the issues. My guess is I didn't see it with the first two drives because I had already run them through some write workloads before running the test that demonstrated the latency spikes.

Moral of the story: I would precondition any drives that appear unhealthy before assuming they are bad.
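
A minimal sketch of that kind of preconditioning pass with fio; the device path is a placeholder, and it overwrites everything on the drive:

# one full sequential write pass over the whole device
fio --name=precondition --filename=/dev/nvme0n1 --ioengine=libaio \
    --direct=1 --rw=write --bs=1m --iodepth=32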
 

zac1

Well-Known Member
Oct 1, 2022
I haven't read through this thread, but I recently picked up some PM983 (not "a") drives on eBay. I was getting NVMe timeouts on a couple of them, and erratic latencies compared to two other drives that didn't show the same problems. All of the drives were from the same seller, with 0 power-on hours when I received them. At first I was concerned, but writing over the misbehaving drives once with fio resolved the issues. My guess is I didn't see it with the first two drives because I had already run them through some write workloads before running the test that demonstrated the latency spikes.

Moral of the story: I would precondition any drives that appear unhealthy before assuming they are bad.
In my case, the drives work fine independently. It's only when they're benchmarked concurrently while plugged into the same card that they show any problems.