[eBay] $100 offer accepted Intel Optane PMEM 200 512GB PC4-3200 NMB1XXD512GPS


zachj

Active Member
Apr 17, 2019
226
137
43
Unless you already have an ice lake Xeon system I think optane pmem200 is a bad way to spend money; it’s only compatible with ice lake Xeon so buying into the entire platform when it’s already three generations old seems quite silly.

The 100 gen is cheaper. Cascade lake processors and motherboards are cheaper. Ice lake isn’t all that much faster than cascade lake in most metrics.

I could see an argument for buying 300 generation and sapphire rapids platform but it’s still locked to sapphire rapids. So you’ll be throwing it away the moment you want to upgrade. To say nothing of the fact that 300 generation is basically unobtainium.
 

miraculix

Active Member
Mar 6, 2015
150
51
28
zachj said:
Unless you already have an ice lake Xeon system I think optane pmem200 is a bad way to spend money; it’s only compatible with ice lake Xeon so buying into the entire platform when it’s already three generations old seems quite silly.
I have exactly that... a couple of Ice Lake Xeons on X12SPM boards. I bought these CPU/mobo combinations because of PCIE 4.x NIC and NVME U.2 performance. Granted they're Silvers and for now I can only put these PMEM 200 sticks in the Xeon 4314 system, but I plan to upgrade my 2nd 4316 based X12SPM to a decent Gold PMEM 200 compatible Xeon when the price is right.

zachj said:
The 100 gen is cheaper. Cascade lake processors and motherboards are cheaper. Ice lake isn’t all that much faster than cascade lake in most metrics.
There's some validity to your assertions, but PCIe 4 is the glaring exception, and I think the PMEM 100 vs 200 pricing difference is negligible and should shrink more over time. Xeon SP gen3 pricing should also continue to drop over the next year.

Having said all that... I did recently purchase a single X11SPM-TPF and a Xeon 6230, and I look forward to trying out 128GB DDR-2933 RDIMMs plus 256GB or 512GB PMEM 100 sticks in the future.

zachj said:
I could see an argument for buying 300 generation and sapphire rapids platform but it’s still locked to sapphire rapids. So you’ll be throwing it away the moment you want to upgrade. To say nothing of the fact that 300 generation is basically unobtainium.
I'm not at all interested in PMEM 300 and Xeon SP gen4 at this time. But who knows maybe someone else here is.

For anyone looking at Intel PMEMs and trying to understand benefits, use cases, and applicability:
Supermicro PMEM 100 guide for Supermicro X11 servers
Supermicro PMEM 200 guide for Supermicro X12 systems
 
Last edited:
  • Like
Reactions: roswellian

Koop

Well-Known Member
Jan 24, 2024
396
302
63
I would argue time is money, too. But that's just me :)
I totally agree. It was actually a nightmare. Seemed like 1 out of 4 did not work for some reason. That was only after troubleshooting for hours on my end, making sure I wasn't the one at fault with other parts or my config, as I was totally (and still am, tbh) new to using PMEM.

And then in TrueNAS the modules did not show up with serials? I was very sad lol, still never really figured that out. I wanted to use a group of them for a metadata special vdev.
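For reference, the underlying ZFS command for that kind of special vdev is roughly the line below (pool name and device paths are just placeholders, and it assumes the modules show up as fsdax namespaces; TrueNAS would normally do this through the UI):

    zpool add tank special mirror /dev/pmem0 /dev/pmem1   # mirrored metadata special vdev on two PMEM namespaces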

miraculix said:
PCIe 4 is the glaring exception
The main argument I think would be made here is that if you want a PCIe 4 platform you are better served by EPYC and its many additional lanes. Of course you then lose out on PMEM. Maybe a bit of an apples/oranges argument. Not sure what would end up being the higher cost overall. Ultimately whatever you can find a deal on I guess.

My next question would be PMEM 100 vs 200 just comparing side by side- is it really much of a difference? Of course 200 series can run much faster but how does that translate to disk performance when in app direct mode for example? I would assume 'really good' but I'm not sure actually.
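For whoever ends up comparing them side by side, a rough way to sanity-check App Direct throughput on an fsdax-mounted namespace would be something like the fio runs below (the path and sizes are placeholders, and this is a crude check, not a proper libpmem benchmark):

    fio --name=pmem-seqwrite --filename=/mnt/pmem0/fio.test --size=16G --bs=256k --rw=write --ioengine=psync
    fio --name=pmem-randread --filename=/mnt/pmem0/fio.test --size=16G --bs=4k --rw=randread --ioengine=psync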

Oddly, for me, I have more PMEM 300 series modules vs 200. Just what I happened across with used hardware locally (I wasn't kidding earlier about local ewaste).
 
Last edited:

chrgrose

Active Member
Jul 18, 2018
146
80
28
Cascade Lake will clock down to 2666 if you pair 2933 RDIMMs with Optanes.
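If you want to confirm what the RDIMMs actually trained at after mixing in the Optanes, dmidecode shows it per slot (needs root):

    dmidecode --type memory | grep -i configured   # look for the Configured Memory Speed lines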

By the way, it seems to me that 100 series DIMMs are similarly priced or possibly more expensive compared to 200 series, precisely because Ice Lake was kind of a lame platform. But if you already have it...

I don't pay attention to 200 series Optane market prices, but in my opinion, current good prices for the 100 series are ~$20 (128GB), ~$50 (256GB), $125 (512GB).
 
  • Like
Reactions: omgwtfbyobbq

zachj

Active Member
Apr 17, 2019
226
137
43
miraculix said:
I have exactly that... a couple of Ice Lake Xeons on X12SPM boards. I bought these CPU/mobo combinations because of PCIE 4.x NIC and NVME U.2 performance. Granted they're Silvers and for now I can only put these PMEM 200 sticks in the Xeon 4314 system, but I plan to upgrade my 2nd 4316 based X12SPM to a decent Gold PMEM 200 compatible Xeon when the price is right.

There's some validity to your assertions, but PCIe 4 is the glaring exception, and I think the PMEM 100 vs 200 pricing difference is negligible and should shrink more over time. Xeon SP gen3 pricing should also continue to drop over the next year.
I intentionally omitted mention of the number or speed of pcie lanes simply because for the vast majority of home lab use cases there is no conceivable rationale for needing more/faster than cascade lake. What home lab use case needs 100GB/s worth of u.2 storage or 400gbE networking?

If you’re the .0001% who truly needs it then you have my full blessing :). If you’re like me and you just choose to spend your stupid money on computers instead of purses or cars or travel then you also have my full blessing. For everyone else there’s Mastercard—I mean cascade lake.

I’m not knocking ice lake. While I agree the platform came into this world a lame duck, that doesn’t make it bad; there are no bad products, just bad prices. I tend to agree with the comment above that unless you desperately NEED large memory support (and only assuming your use case doesn’t care about the speed difference between optane and actual memory) you’re better served by epyc. Epyc motherboards are unfortunately really expensive if you want support for Milan or newer—especially if you want more than 8 DIMMs. I’ve seen single socket ice lake Xeon boards with 16 DIMM slots on eBay for faaar less than epyc boards.
 
Last edited:

Cruzader

Well-Known Member
Jan 1, 2021
763
786
93
Times like that are when I grab another 20, and then, while I'm admiring the stack of them on my workbench, I start considering what to do with them.

Got 15 spare 100GbE cards sitting on my bench now because I needed just a few and had a good offer accepted.
 
  • Like
Reactions: nexox

Demonking

New Member
Feb 18, 2025
1
0
1
Just out of curiosity: could one buy the pmems and use a CXL-style adapter to turn them into a regular-style SSD that could be used on any device, or at least any device that uses CXL?

Just bc in terms of $/GB I think these are better than buying a 1.5TB drive, and they seem to be more available at the moment.
 

NablaSquaredG

Bringing 100G switches to homelabs
Aug 17, 2020
1,736
1,158
113
Could one buy the pmems and use a CXL style adapter to turn them into regular style SSD that could be used either on any device or any device that uses cxl?
If such an adapter existed: Sure.

But such an adapter does not exist. Optane DCPMM is very proprietary (DDR-T protocol, not standard NVDIMM-P). And such an adapter most likely won’t ever exist.

So the overall answer is: Nope.
 
  • Like
Reactions: nexox and chrgrose

wardtj

Member
Jan 23, 2015
97
30
18
48
Also, important to note that 100 series supports pmem and appdirect modes. 200 and later only app direct.

I have a 2x6230 with 128GB RAM and 512GB Optane on a Supermicro; it has population rules that must be followed. I was able to run this in pmem mode with the latest firmware, but had to do some configuration on Linux as it wanted appdirect and was a pain to switch to pmem. Once I did the switch I could get VMware, Linux, etc. to see the full 512GB of Optane. Performance was not noticeably worse since my working sets are below 128GB, which keeps the system happy. Remember these things can only write at 7GB/s or something, so 4 dimms is 28GB/s. Regular DDR4 is 75GB/s per channel.
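For anyone attempting the same switch, the rough ipmctl/ndctl sequence (from memory, and assuming "pmem mode" above means Memory Mode, so double-check against the Intel/Supermicro docs for your board) looks something like:

    ipmctl show -dimm                                  # list the installed modules
    ipmctl create -goal MemoryMode=100                 # provision 100% as Memory Mode; applied on the next reboot
    # or, to go back to App Direct and use them as persistent storage:
    ipmctl create -goal PersistentMemoryType=AppDirect
    ndctl create-namespace --mode=fsdax                # exposes the region as /dev/pmem0
    mkfs.xfs /dev/pmem0 && mount -o dax /dev/pmem0 /mnt/pmem0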

It was fun to put together but overall expensive. I have Sapphire Rapids which will take the Optane, but NVMe is a cheaper choice for my workloads. Its real benefit, no matter the series, is for DBs in AppDirect mode, loading that DB in seconds on startup versus hours for ginormous systems.
 

miraculix

Active Member
Mar 6, 2015
150
51
28
Koop said:
I totally agree. It was actually a nightmare. Seemed like 1 out of 4 did not work for some reason. That was only after troubleshooting for hours on my end, making sure I wasn't the one at fault with other parts or my config, as I was totally (and still am, tbh) new to using PMEM.
Man I hope to avoid that with these 512GB PMEM 200s but will find out soon enough. I'm fine doing swaptronics, reflashing bricked components etc, and I used to be all about breaking out the soldering station or at least isolating problems enough to send out for professional reflow work, but I just don't have the patience any more. And these PMEM 200s are going into "production homelab" servers where I'd rather avoid mechanically futzing with the DIMM slots too much.

Koop said:
The main argument I think would be made here is that if you want a PCIe 4 platform you are better served by EPYC and its many additional lanes. Of course you then lose out on PMEM. Maybe a bit of an apples/oranges argument. Not sure what would end up being the higher cost overall. Ultimately whatever you can find a deal on I guess.
I had a single bad experience with an unstable Epyc Rome system a few years back that left me stuck in an "Intel just works" mentality. But... I do want to look into small & efficient Epyc systems to replace my limited old Lenovo P330 Tiny Proxmox cluster... it's somewhere on my list of priorities :)

Koop said:
My next question would be PMEM 100 vs 200 just comparing side by side- is it really much of a difference? Of course 200 series can run much faster but how does that translate to disk performance when in app direct mode for example? I would assume 'really good' but I'm not sure actually.
I probably won't have them installed and testable for a couple of weekends (again, "production homelab") but I'll definitely send out whatever results I gather for the PMEM 200s. I won't be buying & testing PMEM 100s for my single cascade lake system though... @chrgrose's comment about the 100s knocking RAM speed down to 2666 saved me from a potential impulse buy.

Koop said:
Oddly, for me, I have more PMEM 300 series modules vs 200. Just what I happened across with used hardware locally (I wasn't kidding earlier about local ewaste).
A vendor I worked at many years ago had a pretty cool value recovery system... sometimes it was ewaste level, but typically it was current gen, maybe 1-2 generations old at most. Neat stuff for anyone who was handy enough.
 
Last edited:

zachj

Active Member
Apr 17, 2019
226
137
43
zachj said:
I intentionally omitted mention of the number or speed of pcie lanes simply because for the vast majority of home lab use cases there is no conceivable rationale for needing more/faster than cascade lake. What home lab use case needs 100GB/s worth of u.2 storage or 400gbE networking?

If you’re the .0001% who truly needs it then you have my full blessing :). If you’re like me and you just choose to spend your stupid money on computers instead of purses or cars or travel then you also have my full blessing. For everyone else there’s Mastercard—I mean cascade lake.
Yes. 200 and 300 will not do straight pmem. There's some updated docs that talk about it. You can do appdirect and storage modes, hence mm but not pmem.
The internet disagrees with you. There’s a Supermicro Optane configuration guide PDF that shows in detail how to configure memory mode for pmem200. Technically it’s still configured through the app direct provisioning flow, but you can specify 100% allocation to memory mode and the host will report the entire installed pmem capacity as its memory capacity after reboot.
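If anyone wants to double-check what actually got provisioned after the reboot, ipmctl will report it, e.g.:

    ipmctl show -memoryresources   # how capacity is split between Memory Mode and App Direct
    ipmctl show -goal              # the provisioning goal that was (or will be) applied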
 

mtg

Member
Feb 12, 2019
83
45
18
Can these be used for LLM inference?
I'm planning to test that out soon (~1 week probably, waiting on a 4198 cpu)!

It might be limited to whatever 7.45GB/s comes out to in tokens per second, or somewhere around the weighted average of the actual DDR4 + the Optane stick, unless the CPU is clever enough to be rotating weights through the pmem before and after they are actually needed. Which it almost 100% is not.

One specific thing it would be very good at is a very, very large embedding model, which is just a sparse lookup. So you have a massive table but only grab 4KB of that at a time. Recommender models like DLRM come into this category as well, I think.
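Back-of-the-envelope, with purely hypothetical numbers: if each token has to stream ~40GB of weights (roughly a 70B model at 4-bit) out of PMEM at ~7.45GB/s per module, that's on the order of 0.2 tokens/s; four modules interleaved (~30GB/s) gets you to maybe 0.7-0.8 tokens/s, plus whatever share of the weights happens to sit in DRAM. The embedding/DLRM case dodges that math entirely because each lookup only touches a few KB.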
 

miraculix

Active Member
Mar 6, 2015
150
51
28
miraculix said:
Man I hope to avoid that with these 512GB PMEM 200s but will find out soon enough. I'm fine doing swaptronics, reflashing bricked components etc, and I used to be all about breaking out the soldering station or at least isolating problems enough to send out for professional reflow work, but I just don't have the patience any more. And these PMEM 200s are going into "production homelab" servers where I'd rather avoid mechanically futzing with the DIMM slots too much.
For anyone interested in buying from this seller but worried about questionable/defective sticks, mine arrived in the original retail packaging, exactly as shown in the listing but still sealed by the sticker. I probably won't be able to install and test them until next weekend but so far so good...