Wherefore art thou PLP M.2 <=2280 NVME SSDs?!


StanLee

Member
Apr 10, 2022
So, this concern you all have for safeguarding one's file system is starting to rub off on me, dang it. Since a resilient file system seems to be off the table for me in NTFSlandia, I looked for a modicum of HW protection for the FS.

But, alas, most of the (paltry few) M.2 PLP SSDs I've found are either SATA or 22110; the first is needlessly slow for 2022 (IMHO), & the second is too darn long for my MB. The only solutions I've found are the smaller members of the Micron 7400 series, which are TLC, & I'm not entirely convinced 1.3 DWPD is sufficient, long-term, for a virtual memory application of the size I'm attempting. I have a feeling I'll find out.
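
For context, here's the back-of-envelope endurance math behind that worry. It's only a rough sketch; the drive capacity, warranty period, and especially the daily paging volume are my own guesses, not specs for any particular model.

# Back-of-envelope check: is 1.3 DWPD enough for heavy paging?
# Capacity, warranty period, and daily paging volume are hypothetical guesses.
capacity_gb = 960        # assumed drive size
dwpd = 1.3               # rated drive writes per day
warranty_years = 5
daily_paging_gb = 2000   # wild guess at virtual-memory write traffic per day

rated_tbw = capacity_gb * dwpd * 365 * warranty_years / 1000        # terabytes written
years_to_rated_limit = (rated_tbw * 1000) / (daily_paging_gb * 365)

print(f"Rated endurance: ~{rated_tbw:.0f} TBW")                                      # ~2278 TBW
print(f"At {daily_paging_gb} GB/day: ~{years_to_rated_limit:.1f} years to reach it")  # ~3.1 years

Under those made-up numbers the drive would last a few years, which is roughly why I'm unsure rather than outright dismissive.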

The alternatives would be to use an adapter to relocate the M.2, switch to U.2, thrash out to HDD, or worse.

So, does anyone know of any 0.5+TB M.2 NVMe <=2280 SSDs with PLP available in 2-bit MLC? If not, same question, but dropping the MLC requirement?

Thank you for any assistance.
 

acquacow

Well-Known Member
Feb 15, 2017
If you're going to be dealing with memory's worth of write activity, I'd recommend a pci-e carrier with multiple M.2 drives or an array of sata drives in order to get the throughput and wear-life that you need.
 
  • Like
Reactions: StanLee

StanLee

Member
Apr 10, 2022
To boil it down to the key points (which will still be rather long): there are unknowns to be explored, & there are variables, some of which I control, others I don't. Basically, much to my chagrin, which GPUs I can use & how I use them are limited by the amount of RAM available.

In theory, with the (maxed-out) RAM I have, I can use cards with up to 32GB of VRAM as long as I don't use NVLink for memory pooling with another card, or I could use pairs of 16GB VRAM cards if I do use it. For various efficiency reasons I'm not very interested in those GPUs.

Individual 24GB cards shouldn't present a problem with RAM usage. But they are also a smaller canvas to use for scenes.

If I use individual cards with 48GB of VRAM, or pairs of 24GB cards with NVLink for memory pooling, then I will have a theoretical RAM deficit of ~16GB. I hope that deficit can be covered with virtual memory, but I don't know whether it can.

Ideally, I would like to use 48GB VRAM cards with NVLink for memory pooling, but then I would have a theoretical RAM deficit of ~160GB, more than 100% of installed RAM. That was my optimal use case for GPUs.
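
For anyone checking my arithmetic, here's roughly how I'm estimating those deficits. It's a rough sketch: the 128GB of installed RAM and the ~3x-VRAM rule of thumb are my own working assumptions, not vendor guidance.

# Rough estimate of the host-RAM shortfall for each GPU option.
# Assumptions (mine): 128GB installed RAM, and the renderer wanting
# roughly 3x the pooled VRAM in host RAM while staging a scene.
INSTALLED_RAM_GB = 128
HOST_RAM_PER_VRAM = 3  # rule-of-thumb multiplier, not a spec

def ram_deficit(vram_per_card_gb, cards_pooled=1):
    """Estimated host-RAM shortfall in GB; negative means headroom."""
    pooled_vram = vram_per_card_gb * cards_pooled
    return pooled_vram * HOST_RAM_PER_VRAM - INSTALLED_RAM_GB

print(ram_deficit(32))     # single 32GB card   -> -32 (fits)
print(ram_deficit(16, 2))  # 2x 16GB via NVLink -> -32 (fits)
print(ram_deficit(48))     # single 48GB card   ->  16 (small deficit)
print(ram_deficit(24, 2))  # 2x 24GB via NVLink ->  16 (small deficit)
print(ram_deficit(48, 2))  # 2x 48GB via NVLink -> 160 (huge deficit)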

Each scene is processed via CPU/RAM/drives, then sent on to the GPUs, a process that is likely to be on the scale of seconds to minutes. The actual amount of RAM required will likely vary from frame to frame, depending on the resources used, the techniques used, maybe even camera angles. Not very many people seem to be working at this level, at least not who are discussing it. There's a lot of arguing among the few who claim to be in the know about how it works & how easy it is or isn't.

I knew the system I'm building now was always going to be a testbed, a minimal investment to figure out the process, & smoke out the bugs, as well as renewing my PC building skills. Really, a better system was always in the plans, but I needed to get real world experience & figure out how all the parts work together, or don't, in order to properly plan the next version. I intend to push on as far as I can, & learn the limits.

If you're going to be dealing with memory's worth of write activity, I'd recommend a pci-e carrier with multiple M.2 drives or an array of sata drives in order to get the throughput and wear-life that you need.
I looked into that, but I don't have the knowledge to figure out which enterprise versions would be useful & available, at least not without extensive research. I'm outside my expertise; that's why I'm here. To my bafflement, all the consumer versions I looked at were actually considerably slower in reviews than standalone M.2-socket SSDs. I *think* I mentioned this about ASUS's cards over in my earlier thread, but I didn't spend much time looking, as it wasn't panning out at the time.

PCIe slots are at a premium, but I could sacrifice a Gen 3 x8 or x16 slot for a solid solution. It doesn't have to be M.2-based either; it could be a standalone/monolithic (?) x16 SSD card. Specific MLC recommendations, in particular, would be welcome.

But before going that -- likely more expensive -- route, I should do tests with M.2, or perhaps U.2, to see if it's even required. One thought that occurred to me: while RAIDing M.2 NVMe is usually discouraged as unproductive, with TLC drives that are known to drop in speed once their SLC cache is full, RAID might be an option. Also, the smaller the Micron drives I discussed, the slower they get, so they might be candidates for RAID as well, to maintain write speeds.
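
To illustrate why striping might help with the post-cache slowdown, here's a toy model of average write speed once the SLC cache fills. All the speeds and sizes are placeholders I made up, not figures from any datasheet.

# Toy model: average sequential-write speed for a burst that outruns the
# SLC cache, with the workload striped evenly across n_drives.
def avg_write_gbps(cache_gbps, post_cache_gbps, cache_gb, total_write_gb, n_drives=1):
    per_drive = total_write_gb / n_drives
    cached = min(cache_gb, per_drive)     # portion absorbed at cache speed
    uncached = per_drive - cached         # portion written at post-cache speed
    seconds = cached / cache_gbps + uncached / post_cache_gbps
    return total_write_gb / seconds

# e.g. 3 GB/s into the cache, 1 GB/s after it fills, 100GB cache, 400GB written
print(avg_write_gbps(3, 1, 100, 400, n_drives=1))  # ~1.2 GB/s average
print(avg_write_gbps(3, 1, 100, 400, n_drives=2))  # ~3.0 GB/s average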

I'm a little fuzzy on what effect PLP has from a user perspective. Do all drives need PLP, even one used only for virtual memory, as in my case? Or does it also prevent HW damage to the SSDs? Obviously it does nothing for content in RAM that hasn't been written out, so the virtual memory contents should be considered disposable as well?
 


Sean Ho

seanho.com
Nov 19, 2019
FWIW, the ASUS x16 quad-M.2 cards take 22110 and work just fine with bifurcation; I'm using one now on an X9-gen board. JEYI also has a low-profile (half-height) quad-M.2 card. Cards that use bifurcation are really simple, just routing PCIe lanes. You can also get cards with dual or quad SFF-8643 (or OCuLink) ports, and then run cables to U.2 drives. Those cards, again, can either be simple bifurcating cards, or redrivers, retimers, or PLX switches (in roughly increasing order of cost).

Many SSDs come in both 22110 and U.2 formats; all else being equal, the U.2 versions tend to perform a little better thanks to their built-in heatsinks.
 
  • Like
Reactions: StanLee

StanLee

Member
Apr 10, 2022
The Micron 7300 Pro has PLP and is 2280 form factor.
[Several hundred web pages read later... sorry for the delay]

Thanks, Sean. I had found the U.2 version for sale. In M.2 they are 2280 up to 960GB and 22110 above that, and all TLC, as best as I can tell.

I'm beginning to think that finding the perfect M.2 SSD is as hard as finding the perfect woman. :eek:
 

StanLee

Member
Apr 10, 2022
FWIW, the ASUS x16 quad-M.2 cards take 22110 and work just fine with bifurcation; I'm using one now on an X9-gen board. JEYI also has a low-profile (half-height) quad-M.2 card. Cards that use bifurcation are really simple, just routing PCIe lanes. You can also get cards with dual or quad SFF-8643 (or OCuLink) ports, and then run cables to U.2 drives. Those cards, again, can either be simple bifurcating cards, or redrivers, retimers, or PLX switches (in roughly increasing order of cost).
I have been learning more about these cards, bifurcation issues, etc.


One thing I ran across was a Samsung enterprise SSD that was 30+mm wide, and so should arguably be called an M.3?? I haven't run across that term officially (I believe Samsung's name for the form factor is NF1 or NGSFF). Someone was wanting to stick these on an ASUS card, and it wasn't going to work. Not sure if they are too wide for desktop motherboards.

Are there a lot of wider models? The connector is the same. Since the sites selling stuff aren't info rich, and sometimes the manufacturer isn't either for discontinued items, I'm finding it hard to figure out what is what. There was an SSD WIKI site, but AFAIK it stopped having drives added quite some time ago.

I'm considering a U.2 adapter card or a dedicated x16 SSD, but it would complicate my PCIe issues with regard to GPUs. A U.2 adapter for an M.2 socket seems more likely, but I'm looking at what's available domestically, and a lot of stuff is out of stock. Do U.2 drives fit in standard 2.5" bays? They look beefier, taller at least, I think.

Many SSDs come in both 22110 and U.2 formats; all else equal the U.2 tend to perform a little bit better due to built-in heatsinks.
My bigger concern is finding posts elsewhere about U.2s overheating. Intel seems to rate them for an odd airflow requirement. Would a case fan rated for up to 100-150 CFM mounted in front of the bays be sufficient to cool them for extended periods of use, given that they look mostly enclosed? Or is it going to take unleashed Delta server fans?

In the consumer space, cooling M.2 seems to be mostly under control: either it's integrated with the motherboard cooling system, or the MB comes with M.2 heatsinks, or you buy them aftermarket. Or watercool. GPUs covering the M.2 slots seem to be the main problem. I ran across a post from someone putting them in a Dell WS, running benchmarks, and the M.2 failing. Is U.2 cooling as bad in mainstream workstations as that makes it seem?

Thanks, again.
 

StanLee

Member
Apr 10, 2022
31
4
8
For server/workstation probably no, for desktop/consumer stuff (especially at the lower end) I think it could be problematic...
It's always something. ;)

I'm not sure how thick things get; interference with MB heat shields or GPUs is the obvious issue. They are coming out with gigantic HSFs for M.2 SSDs.



 
  • Like
Reactions: nabsltd

itronin

Well-Known Member
Nov 24, 2018
I'm considering a U.2 adapter card or a dedicated x16 SSD, but it would complicate my PCIe issues with regard to GPUs. A U.2 adapter for an M.2 socket seems more likely, but I'm looking at what's available domestically, and a lot of stuff is out of stock. Do U.2 drives fit in standard 2.5" bays? They look beefier, taller at least, I think.
U.2s: all that I have seen are 2.5" x 15mm, which is the spec size for 2.5". In the consumer space, though, you're commonly looking at 7mm for SATA SSDs (there are a few 7mm SAS drives; Micron 630s come to mind), and a 2.5" WD Red spinning-rust 1TB is 9.5mm. But what you'll typically see for enterprise gear is 15mm high.

My bigger concern is finding posts elsewhere about U.2s overheating. Intel seems to rate them for an odd airflow requirement. Would a case fan rated for up to 100-150 CFM mounted in front of the bays be sufficient to cool them for extended periods of use, given that they look mostly enclosed? Or is it going to take unleashed Delta server fans?
Directed airflow might work in a consumer chassis, maybe with some ducting. I am using an Optane 900P in a case without directed airflow, and it is DOING OKAY, but I also have a Noctua PPC 3000 pulling air at the center of the rear wall of the chassis and across the U.2 drive on one side and a dual-NVMe bifurcation riser on the other. Intake is via vents on the side of the chassis at the front. It's definitely drawing the warm air out through the back of the chassis, and moving a lot of air too.

In the consumer space, cooling M.2 seems to be mostly under control: either it's integrated with the motherboard cooling system, or the MB comes with M.2 heatsinks, or you buy them aftermarket. Or watercool. GPUs covering the M.2 slots seem to be the main problem. I ran across a post from someone putting them in a Dell WS, running benchmarks, and the M.2 failing. Is U.2 cooling as bad in mainstream workstations as that makes it seem?

The rest of my U.2s are in server chassis (Supermicro), obviously with static pressure and, yeah, Delta/Sanyo San Ace fans, etc.
I'm only using Optane U.2s now too, 900P or 905P. The 900Ps are SLOG for spinning disks and the 905Ps are for VM storage. I may put a 905P in as a boot/general-use disk on the image-processing workstation I'm rebuilding, so that would then be in a consumer chassis (NZXT), and I'll probably mount it in the motherboard bay so it will have a front fan blowing air across it.

I also have an ASUS Hyper M.2 V2 quad card with 4x M.2 22110 Samsung 963s @ 960GB; it works great under load. The chassis is pushing air from 2x plain-Jane Corsair 140mm fans at the front. It hasn't caught fire yet, nor have I seen a significant degradation of performance, either under testing or as storage for transcoding video. I do run with the heatsink cover installed on the card, and it has that teensy fan in there too.
 
  • Like
Reactions: StanLee

StanLee

Member
Apr 10, 2022
U.2s: all that I have seen are 2.5" x 15mm, which is the spec size for 2.5". In the consumer space, though, you're commonly looking at 7mm for SATA SSDs (there are a few 7mm SAS drives; Micron 630s come to mind), and a 2.5" WD Red spinning-rust 1TB is 9.5mm. But what you'll typically see for enterprise gear is 15mm high.
I'm sure that's usually correct. I did see one exception, an enterprise U.2 marketed as low-power that was ~7-7.5mm; I think it might have been an Intel or a SanDisk. Sorry, I've looked at too many drives in a short time to remember. Might come in handy for an ultrabook U.2 mod. ;)

Directed airflow might work in a consumer chassis, maybe with some ducting. I am using an Optane 900P in a case without directed airflow, and it is DOING OKAY, but I also have a Noctua PPC 3000 pulling air at the center of the rear wall of the chassis and across the U.2 drive on one side and a dual-NVMe bifurcation riser on the other. Intake is via vents on the side of the chassis at the front. It's definitely drawing the warm air out through the back of the chassis, and moving a lot of air too.

The rest of my U.2s are in server chassis (Supermicro), obviously with static pressure and, yeah, Delta/Sanyo San Ace fans, etc.
I'm only using Optane U.2s now too, 900P or 905P. The 900Ps are SLOG for spinning disks and the 905Ps are for VM storage. I may put a 905P in as a boot/general-use disk on the image-processing workstation I'm rebuilding, so that would then be in a consumer chassis (NZXT), and I'll probably mount it in the motherboard bay so it will have a front fan blowing air across it.

I also have an ASUS Hyper M.2 V2 quad card with 4x M.2 22110 Samsung 963s @ 960GB; it works great under load. The chassis is pushing air from 2x plain-Jane Corsair 140mm fans at the front. It hasn't caught fire yet, nor have I seen a significant degradation of performance, either under testing or as storage for transcoding video. I do run with the heatsink cover installed on the card, and it has that teensy fan in there too.
Since my manual claims there is a shared U.2 port on my MB, I'm going to take the plunge, get one U.2 drive, and make sure there's a beefy fan directly in front, & go from there as I work out the rest of the fans (one for the rear will also be ordered; three came with the case, I believe; ultimately most of the fan mounts will be occupied one way or another). I hope to avoid Delta levels of shrieking (at least until the GPUs kick in) or toasted silicon, but we will see.

I spent all of today looking at M.2-to-U.2 & SAS adapters. I'll have two working M.2 sockets if I use the onboard U.2 port, and I may have some other tricks up my sleeve for more adapters, but I'm not sure on those yet. It almost seems like the people writing these vendor websites are speaking another language, and I have to struggle at times to figure out what a given product actually does.

BTW, someone was kind enough to Like one of my posts, reminding me that I should do the same for all the helpful people in my threads, so I've tried to do so today. I'm usually an anti-social-network person, and the pandemic didn't cause an improvement in that. Please yell at me if I'm ungrateful in the future.
 

Sean Ho

seanho.com
Nov 19, 2019
I haven't had any issues with U.2 drives overheating, as long as they have some sort of airflow across them, just like spinners need. Oftentimes, consumer towers will have poor airflow across the PCIe area or the nooks + crannies where 2.5" drives are fit, but a bit of ducting or supplementary fans can help with that.

At least here at STH, there's absolutely no expectation that you have to Like any post, so if you do so, let it be not out of obligation but voluntarily from appreciation or agreement.
 

StanLee

Member
Apr 10, 2022
I haven't had any issues with U.2 drives overheating, as long as they have some sort of airflow across them, just like spinners need. Oftentimes, consumer towers will have poor airflow across the PCIe area or the nooks + crannies where 2.5" drives are fit, but a bit of ducting or supplementary fans can help with that.
Good to know. Plan A is to make sure there's a higher-speed PWM fan blowing directly on the drives. Plan B, if A doesn't work (looking at my case's manual, it looks like they took great pains to ensure it won't), will be to put the 2.5" U.2 in a 3.5" bay adapter with an attached fan. Plan C, if the case layout makes B an issue, is to find a place for a third-party bay or build a 3D printer & design a solution.

Looking on eBay, my SATA cable order was shipped, canceled, & refunded?! No explanation provided. I may have to dig in boxes or try to buy locally. Anyway, everything but storage is here, and I should have a 2.5" drive somewhere, so it's time to build.

At least here at STH, there's absolutely no expectation that you have to Like any post, so if you do so, let it be not out of obligation but voluntarily from appreciation or agreement.
I cannot express how grateful I am to all of you. I know my posts must seem ridiculous on several levels, especially from a server/storage perspective. Some answers I could find with searches, but it might have taken days. Others really are based in experience, and this is about the best place to find that. If I had a brain, I would have been Liking as I went along, but when I'm hunting information I'm very goal-oriented, absent-minded, & blind to everything else. Liking helpful information is the least I should do.