EPYC NVMe 7443P

msg7086

Active Member
May 2, 2017
Good news! The below product is now back in stock as of 06/10/2021 and we are sending you this notification as per your request. If you are still interested in purchasing this product, please use the “Add To Cart” button below to start the checkout process. We recommend completing your purchase right away. We do not guarantee price or availability for the below item.

Fractal Design FD-A-TRAY-001 HDD Drive Tray Kit - Type-B for Define 7 Series and Compatible Fractal Design Cases - Black (2-pack)
Fractal Design FD-A-TRAY-002 HDD Drive Tray Kit - Type-B for Define 7 Series and Compatible Fractal Design Cases - White (2-pack)
They have stock right now on Newegg. If you are lucky, you may get some before they are gone. I already ordered what I need.
 

lihp

Member
Jan 2, 2021
Ah, I didn't know you are not US based. If you are having trouble getting them elsewhere, maybe you can source them from the US.
I got a special message from Fractal support, but from the wording I would almost bet it's the DACH CEO.

Here is a translation:
"Unfortunately, I currently have no delivery date due to the international logistics situation. There are a lot of goods on the way to distribution, but unfortunately I cannot confirm an exact delivery date."

Bottom line: I'm holding my breath.
 

jpmomo

Active Member
Aug 12, 2018
Have you done any testing with RAIDIX yet? I am curious how the RAIDIX folks got to 55.8 GB/s. Have you thought about using the Mellanox 516-CDAT card instead of the IB card? The 516-CDAT is PCIe 4.0 and can do NVMe-oF and RoCE. Are you using an IB switch?
Very interesting build and I like the added twist of involving your son :)
 

lihp

Member
Jan 2, 2021
Have you done any testing with RAIDIX yet?
Yes, but not on our hardware. Bottom line: I saw those Ultrastar NVMe drives fly, impressive enough to plan my own architecture around it. So far, from what I saw, I can confirm that read performance is around 90% of the combined read throughput of all drives in a RAID5 NVMe array, and write performance depends on the size of the array, so roughly 90% of the combined write throughput of n-1 drives. Kinda impressive.
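For a quick sanity check of that scaling rule, here is a minimal back-of-the-envelope sketch in Python. The drive count and per-drive throughput figures are illustrative assumptions, not measurements from this build; only the ~90% efficiency factor and the n vs. n-1 scaling come from the observation above.

```python
def raid5_estimate(n_drives, drive_read_gbs, drive_write_gbs, efficiency=0.90):
    """Estimate (read, write) GB/s for an n-drive RAID5 NVMe array.

    Reads scale with all n drives; writes scale with n-1 drives,
    since one drive's worth of bandwidth effectively goes to parity.
    """
    read = efficiency * n_drives * drive_read_gbs
    write = efficiency * (n_drives - 1) * drive_write_gbs
    return read, write

# Example: 10 drives at an assumed 3.5 GB/s read / 3.1 GB/s write each
r, w = raid5_estimate(10, 3.5, 3.1)
print(f"estimated read: {r:.1f} GB/s, estimated write: {w:.1f} GB/s")
```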

I am curious how the RAIDIX folks got to 55.8 GB/s.
You can actually read about it on their website. Basically, they sat down with a bunch of mathematicians and rewrote the RAID logic, optimized for NVMe drives.

Have you thought about using the Mellanox 516-CDAT card instead of the IB card?
I am a complete newbie to Mellanox cards, well, not 100% but close to it. I don't even have those cards up yet (they are inserted and connected, though), so I don't know the benefits of the 516-CDAT. The 455A cards were simply a bargain once I saw the pricing. Like: skip dinner with a business partner twice and you can buy both cards ;).

Are you using an IB switch?
No, just a direct link for now. Mellanox and IB are a hobby for me at the moment, though I have some great ideas for business scenarios.

Very interesting build
Exactly what I think. I want to push it as far as possible. I tested a lot around architecture and bottlenecks beforehand; the main issue in the end was single-core performance. That's why I wanted the 7443P: cost-effective, high single-core performance, new chip design, PCIe 4.0, and of course AMD for the PCIe lanes should I upgrade to more NVMe drives. Because once this really works, it's time to think about scaling it up... and then 55 GB/s total performance is actually not the end of the story in a multi-user environment.

Single-user, single-threaded, maybe even 15 GB/s is possible (I wouldn't bet on it, but I consider it possible as a close call). But with multi-user, asynchronous loads... at that point only the PCIe lanes are the limit.
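To put that lane ceiling in numbers, here is a rough PCIe 4.0 lane-budget sketch in Python. The per-lane throughput, the drive count, and the NIC lane counts are assumptions for illustration, not the exact configuration of this build.

```python
PCIE4_GBS_PER_LANE = 1.97   # approx. usable GB/s per PCIe 4.0 lane after overhead
EPYC_LANES = 128            # a single-socket EPYC 7443P exposes 128 PCIe 4.0 lanes

def link_bandwidth(lanes):
    """Approximate usable bandwidth of a PCIe 4.0 link with this many lanes."""
    return lanes * PCIE4_GBS_PER_LANE

# Assumed consumers: 8 NVMe drives at x4 each, plus two x16 NICs
nvme_lanes = 8 * 4
nic_lanes = 2 * 16
print(f"per-drive ceiling : {link_bandwidth(4):.1f} GB/s")
print(f"8-drive ceiling   : {link_bandwidth(nvme_lanes):.1f} GB/s")
print(f"lanes committed   : {nvme_lanes + nic_lanes} of {EPYC_LANES}")
```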

I like the added twist of involving your son :)
It was a blast. He is telling everyone in preschool that he built a server... he still does. Kinda funny.
 

jpmomo

Active Member
Aug 12, 2018
Thanks for all of your responses. My question about how they got to 55.8 GB/s was more about the configuration they used, not their technology: e.g., how many drives, and of which type, did they use to get that throughput? Which testing software did they use, and with what parameters? I have been experimenting with both HW and SW RAID on some NVMe drives and was curious to hear about RAIDIX. I have a lot of different hardware to experiment with, including the PM1735 that you mention. I also have a couple of the new Intel Optane P5800X drives that are supposed to be pretty good at IOPS.
With regards to the Mellanox cards, the 516-CDAT are dual-port 100GbE NICs. The main benefit of these cards is that they are PCIe 4.0 x16, which means the PCIe bus should not be a bottleneck for 200 Gbps out of a single slot. The cards are not that expensive, but probably more than the 455A (ConnectX-4 vs ConnectX-5) that you have.
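As a quick sanity check on that claim, here is a small sketch in Python comparing x16 slot bandwidth against 2 x 100GbE line rate; the ~1.97 GB/s per lane figure is an approximation after encoding and protocol overhead.

```python
PCIE4_GBS_PER_LANE = 1.97            # approx. usable GB/s per PCIe 4.0 lane
slot_gbs = 16 * PCIE4_GBS_PER_LANE   # x16 slot
nic_gbs = 200 / 8                    # 2 x 100 Gbps = 200 Gbps = 25 GB/s line rate

print(f"PCIe 4.0 x16 usable : {slot_gbs:.1f} GB/s")
print(f"2 x 100GbE line rate: {nic_gbs:.1f} GB/s")
print("slot has headroom" if slot_gbs > nic_gbs else "slot is the bottleneck")
```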
I also have a pair of the Milan CPUs (7763) that should not create any bottlenecks either :)
 

lihp

Member
Jan 2, 2021
Thanks for all of your responses. My question about how they got to 55.8 GB/s was more about the configuration they used, not their technology: e.g., how many drives, and of which type, did they use to get that throughput? Which testing software did they use, and with what parameters?
Ya know, I am all in regarding your environment, and hey, if you have a spare P5800X ;) you won't need to throw them away, and I'd put them to good use, promise...

Apart from that, I relayed your questions. I should have the answers by tomorrow...