Trying to saturate 100GbE with Optane: 3rd gen Xeon with Optane DIMMs vs. a DDR5 platform with an Optane U.2 drive array


Justin Utherdude

New Member
Mar 9, 2023
Howdy guys.

I haven't posted here much, but I figured you folks are some of the most knowledgeable around about homelab setups and Optane, so this seemed like the place to go. I'm getting ready to build out a NAS for my homelab. I was previously just using a Ryzen chip and motherboard, and I'm looking to move to something with Optane for longevity and a bit of speed.

My question is whether it's worth going with a 3rd gen Intel Xeon Scalable and Optane DIMMs, or whether I'd be better off with a DDR5 platform and an array of Optane U.2 drives. I know that in his original Optane DIMM video, Patrick said the DIMMs were supposed to be a cost-effective way to add a great deal of addressable memory that sits somewhere between storage and RAM. The prices of the 2nd generation P200 Optane memory are still pretty high, though: the eBay list price (and some independent vendors) is roughly $600 per 256GB DIMM, so six to eight of them would run $3,600-$4,800. Are these still relevant?

Or would it be better to go with a more modern chipset with DDR5 memory and then build a RAID array of U.2 Optane drives to quickly offload from storage? I'd like roughly 1.5TB or more of storage capable of saturating a 100GbE connection. I'm still trying to figure out which would best serve me, so please let me know if you have any ideas. Optane DIMMs vs. DDR5 plus RAIDed U.2 Optane drives: FIGHT! (Probably 905Ps, as they seem to be the most cost-effective U.2 drives!)

Thank you for your time!
PS - I don't know how active Mr. Kennedy is on the forums, but I'd love to get his take on this situation, as he probably has access to this equipment and tons of experience in the lab with stuff like this. I <3 ServeTheHome.

Edit: I have found that the gen 1 Optane DIMMs are quite a bit more affordable, around $140 for a 256GB DIMM. Would it be worth going with those instead of the 2nd gen DIMMs? Would it make a difference in this fight?
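Here's the quick back-of-envelope I ran in Python, using the prices above and rough spec-sheet throughput numbers (the per-device write rates are ballpark assumptions from memory, so double-check them before buying anything):

```python
# Devices needed to hit 1.5TB of capacity AND ~12.5 GB/s of 100GbE payload.
# Prices are the rough eBay figures from this thread; write throughputs are
# approximate spec-sheet ballparks, not measurements.

LINE_RATE_GB_S = 100 / 8  # 100GbE ~= 12.5 GB/s, ignoring protocol overhead

options = {
    # name: (price per device $, capacity GB, approx. seq write GB/s)
    "P200 Optane DIMM, 256GB": (600, 256, 3.0),  # assumed per-module rate
    "Gen1 Optane DIMM, 256GB": (140, 256, 1.5),  # assumed, gen1 is slower
    "905P U.2, 960GB":         (500, 960, 2.2),  # ~2.2 GB/s per spec sheet
}

for name, (price, cap_gb, write_gb_s) in options.items():
    n_capacity = -(-1536 // cap_gb)                 # ceil(1.5TB / capacity)
    n_bandwidth = -(-LINE_RATE_GB_S // write_gb_s)  # ceil(12.5 / write rate)
    n = int(max(n_capacity, n_bandwidth))
    print(f"{name}: {n} devices, ~${n * price:,}, "
          f"~{n * write_gb_s:.1f} GB/s aggregate write")
```

On paper a handful of devices covers the sequential side either way, so I assume the real question is what happens with random IO and how many clients are hammering it at once.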
 

Rand__

Well-Known Member
Mar 6, 2014
Saturating your connection depends only partly on the underlying hardware. Of course the hardware is relevant, but *how* you use it will be the more significant part.

By "how" I mean what kind of programs will access what kind of data, and how many parallel threads from how many client boxes.
You will need a lot of clients/processes to saturate 100G.

E.g., it's simple if you have 1,000 clients that each use 100MB/s, but difficult if each client needs to push 10GB/s and you've only got 10 of them...

Long story short: please provide more details about the use case and the technology used on the client side.
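To make that concrete, the arithmetic looks like this (client counts and per-client rates are made-up examples):

```python
# How many clients of a given speed it takes to fill ~12.5 GB/s of 100GbE.
LINE_RATE_GB_S = 12.5  # rough payload ceiling of a 100GbE link

for clients, per_client_gb_s in [(1000, 0.10), (24, 0.50), (10, 1.25), (6, 2.00)]:
    aggregate = clients * per_client_gb_s
    share = 100 * aggregate / LINE_RATE_GB_S
    print(f"{clients:4d} clients x {per_client_gb_s:.2f} GB/s "
          f"= {aggregate:6.1f} GB/s ({share:.0f}% of line rate)")
```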
 

i386

Well-Known Member
Mar 18, 2016
Why do people write everything in one gigantic paragraph? Modern displays can show more than 80 characters :D

I would go with a "standard" solution where you can replace parts. Optane currently looks dead and nobody is picking it up, so if something fails, eBay is (probably) your only source.
With standard U.2 SSDs you could buy another brand, new or used, and replace the CPU/mainboard without worrying about platform support the way you would have to with Optane DIMMs.

And saturating a 100GbE link with 4K random IO requires a massive storage system...
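Rough numbers on that last point (the per-drive IOPS figures are approximate spec values at high queue depth, not measurements):

```python
# What "massive" means for 4K random IO at 100GbE line rate.
LINE_RATE_B_S = 12.5e9   # ~12.5 GB/s payload
BLOCK = 4096             # 4K random IO

iops_needed = LINE_RATE_B_S / BLOCK
print(f"IOPS to saturate the link: {iops_needed / 1e6:.2f} million")

# Approximate 4K random read IOPS per drive (spec-sheet ballparks; real
# results depend on queue depth, read/write mix, and firmware):
drives = {
    "Optane 905P":    575_000,
    "Optane P5800X": 1_500_000,
}
for name, iops in drives.items():
    print(f"{name}: ~{iops_needed / iops:.1f} drives, best case")
```

And that's the best case with every queue kept full; at low queue depth you'd need far more.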
 

Justin Utherdude

New Member
Mar 9, 2023
Sorry for the formatting. The Chicago Manual of Style never sat well with me :p.

As far as saturation goes: this is for a small office lab setup, with multiple clients (6-24) all connected over 100GbE fiber. The idea is to be able to write 1.5TB to the "cache" at max speed over 100GbE before it gets written to the main HDD RAID array. I chose Optane largely for its darn-near-infinite write endurance, and because it works differently from modern QLC drives that only have small DRAM caches on the drive.

I guess my question is still whether to go with a large amount of Optane memory (like 1.5TB of 256GB sticks, probably 2666MHz) and write to that before shuttling the data off to the RAID array, or whether it would be better to write the data to a much smaller but much faster DDR5 RAM cache and then shuffle that off to an Optane U.2 based RAID array.

Whatever the setup, this would likely be used purely as a cache: absorbing writes before they land on the RAID array, or staging bulk copies off the RAID array before they're sent to a client.
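For scale, the fill/drain arithmetic on that workflow looks roughly like this (the HDD array throughput is a made-up example):

```python
# Fill the 1.5TB cache at 100GbE line rate, then drain it to the HDD array.
CACHE_GB = 1536           # ~1.5TB cache tier
LINE_RATE_GB_S = 12.5     # ~100GbE payload
HDD_ARRAY_GB_S = 2.0      # assumed: e.g. ~12 HDDs at ~170MB/s each

fill_s = CACHE_GB / LINE_RATE_GB_S
drain_s = CACHE_GB / HDD_ARRAY_GB_S
print(f"Fill at line rate:  {fill_s:.0f}s (~{fill_s / 60:.0f} min)")
print(f"Drain to HDD array: {drain_s:.0f}s (~{drain_s / 60:.1f} min)")
# Bursts arriving more often than the drain time will outrun the cache.
```

So with those assumed numbers, as long as full 1.5TB bursts land less often than every ~13 minutes, the cache never overflows.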
 

Rand__

Well-Known Member
Mar 6, 2014
So slow-and-big vs. small-and-fast?
That will totally depend on what your clients can deliver.

Do they all run PCIe 5.0 NVMe drives and connect via SMB over RDMA?
 

Justin Utherdude

New Member
Mar 9, 2023
Several are running 2x PCIe 4.0 NVMe drives. About half a dozen of the bigger units are running 4x PCIe 4.0 NVMe RAID 0 cards. For a few of the big players I'd like to get my hands on P5800X Optane drives at some point. Not all of them support SMB over RDMA, but we're working on upgrading the units that don't.

Thanks for your guidance!
 

Rand__

Well-Known Member
Mar 6, 2014
Your answer is "simple": if your clients can exceed the Optane memory's ingestion rate, then you need the faster cache.
Of course, if your faster cache is smaller, your permanent storage needs to be even faster than it would with the Optane DIMMs, since you have to evict the cache faster.

I'd recommend running a (small-scale) test setup, because all the theorizing won't really answer this for your particular (special) config/requirements.
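For the test itself, something as simple as fio driven from a script will tell you most of what you need. A minimal sketch in Python (assumes fio is installed; the target path and job parameters are placeholders for your setup, and randwrite with --direct=1 will clobber the target file):

```python
# Run one fio job against a candidate cache device and report the results.
import json
import subprocess

result = subprocess.run(
    ["fio", "--name=cachetest", "--filename=/mnt/cache/fio.test",
     "--size=10g", "--rw=randwrite", "--bs=4k", "--iodepth=32",
     "--numjobs=4", "--ioengine=libaio", "--direct=1",
     "--runtime=60", "--time_based", "--group_reporting",
     "--output-format=json"],
    capture_output=True, text=True, check=True)

job = json.loads(result.stdout)["jobs"][0]
write = job["write"]
print(f"write IOPS: {write['iops']:,.0f}")
print(f"write bandwidth: {write['bw'] / 1024 ** 2:.2f} GiB/s")  # 'bw' is KiB/s
```

Sweep the block size and iodepth to match what your clients actually generate, and you'll know whether the Optane DIMMs' ingestion rate is the bottleneck.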
 

zachj

Active Member
Apr 17, 2019
Depends a lot on the block size and the queue depth. A 4K block size with a deep queue wouldn't take very many P5800Xs to saturate 100Gb Ethernet, whereas a queue depth of 1 would require a pretty wide array of disks.
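To put rough numbers on that (latency and IOPS figures are approximate spec values):

```python
# Saturating 100GbE with 4K IO: deep queue vs queue depth 1.
IOPS_NEEDED = 12.5e9 / 4096   # ~3.05M IOPS at line rate

# Deep queue: P5800X is specced around 1.5M random 4K IOPS.
print(f"deep queue: ~{IOPS_NEEDED / 1.5e6:.1f} drives")

# QD1: each IO pays the full device latency (~6us on a P5800X) before the
# next one issues, so one drive manages only ~1/6us ~= 167K IOPS.
qd1_iops = 1 / 6e-6
print(f"QD1:        ~{IOPS_NEEDED / qd1_iops:.0f} drives")
```

At QD1 every IO serializes behind the device latency, which is why the drive count balloons even on the lowest-latency SSDs you can buy.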