EPYC NVMe 7443P


lihp

Active Member
Jan 2, 2021
186
53
28
Really? Could you provide a link? I have not been able to find that :)

Bottom of page: license type and "get your..." contact form.
 
  • Like
Reactions: Rand__

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
That's basically just the driver (ERA), as opposed to the full OS build, RAIDIX, right?
 

msg7086

Active Member
May 2, 2017
423
148
43
36
Good news! The below product is now back in stock as of 06/10/2021 and we are sending you this notification as per your request. If you are still interested in purchasing this product, please use the “Add To Cart” button below to start the checkout process. We recommend completing your purchase right away. We do not guarantee price or availability for the below item.

Fractal Design FD-A-TRAY-001 HDD Drive Tray Kit - Type-B for Define 7 Series and Compatible Fractal Design Cases - Black (2-pack)
Fractal Design FD-A-TRAY-002 HDD Drive Tray Kit - Type-B for Define 7 Series and Compatible Fractal Design Cases - White (2-pack)
They have stock right now on Newegg. If you are lucky you may get some before they are gone. I already ordered what I need.
 
  • Like
Reactions: lihp

lihp

Active Member
Jan 2, 2021
186
53
28
Ah, I didn't know you are not US-based. If you are having trouble getting them elsewhere, maybe you can source them from the US.
I got a special message from Fractal support, but from the wording I would almost bet it's the DACH CEO.

Here is a translation:
"Unfortunately, I currently have no delivery date due to the international logistics situation. There are a lot of goods on the way to distribution, but unfortunately I cannot confirm an exact delivery date."

Bottom line: I'm holding my breath.
 

gsrcrxsi

Active Member
Dec 12, 2018
291
96
28
Great build.

Can you load it up to 100% and report the all-core clock speed?
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
Have you done any testing with RAIDIX yet? I am curious as to how the RAIDIX folks got to 55.8 GB/s. Have you thought about using the Mellanox 516-CDAT card instead of the IB card? The 516-CDAT is PCIe 4.0 and can do NVMe-oF and RoCE. Are you using an IB switch?
Very interesting build and I like the added twist of involving your son :)
 

lihp

Active Member
Jan 2, 2021
186
53
28
Have you done any testing with RAIDIX yet?
Yes, but not on our hardware. Bottom line: I saw those Ultrastar NVMe drives fly - impressive enough to plan around it for my own architecture. So far, from what I saw, I can confirm that read performance is roughly 90% of the combined read throughput of all drives in a RAID 5 NVMe array, and write performance depends on the size of the array: basically 90% of the combined write throughput of n-1 drives. Kinda impressive.
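To make that scaling rule concrete, here is a minimal Python sketch; the drive count and per-drive throughput figures are hypothetical placeholders, not numbers from the tests I saw:

# Rough RAID 5 NVMe throughput estimate based on the ~90% scaling described above.
# All per-drive figures are hypothetical placeholders.

def raid5_estimate(n_drives, drive_read_gbs, drive_write_gbs, efficiency=0.90):
    """Return (read, write) in GB/s for an n-drive RAID 5 array.

    Reads scale with all n drives; writes scale with n-1 drives (one drive's
    worth of capacity goes to parity), each scaled by the observed efficiency.
    """
    read = efficiency * n_drives * drive_read_gbs
    write = efficiency * (n_drives - 1) * drive_write_gbs
    return read, write

# Example: 8 hypothetical PCIe 4.0 drives at 7.0 GB/s read / 4.0 GB/s write each.
r, w = raid5_estimate(8, 7.0, 4.0)
print(f"estimated read:  {r:.1f} GB/s")   # ~50.4 GB/s
print(f"estimated write: {w:.1f} GB/s")   # ~25.2 GB/s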

I am curious as to how the RAIDIX folks got to 55.8 GB/s.
You can actually read about it on their website. Basically they sat down with a bunch of mathematicians and rewrote the RAID logic, optimized for NVMe drives.

Have you thought about using the Mellanox 516-CDAT card instead of the IB card?
I am a complete newbie to Mellanox cards - well, not 100%, but close to it. I don't even have those cards up yet (though they are inserted and connected), so I can't speak to the benefits of the 516-CDAT. The 455A cards were simply a bargain once I saw the pricing - like: skip dinner with a business partner twice and buy both cards ;).

Are you using an IB switch?
No, a direct link for now. Mellanox and IB are a hobby for me at the moment. Then again, I have some great ideas for business scenarios.

Very interesting build
Exactly what I think. I want to push it as far as possible. I tested a whole lot beforehand regarding architecture and bottlenecks; the main issue in the end was single-core performance. That's why I wanted the 7443P: cost-effective, high single-core performance, new chip design, PCIe 4.0, and of course AMD for the PCIe lanes should I upgrade to more NVMe drives. Because once this really works, it's time to think about scaling up... and then 55 GB/s total is actually not the end of the story in a multi-user environment.

Single-user, single-threaded, maybe even 15 GB/s is possible (I wouldn't bet on it, but I consider it possible as a close call). But with multi-user, asynchronous loads... at that point only the PCIe lanes are the limit.
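For a rough sense of where that PCIe ceiling sits, here is a back-of-the-envelope sketch in Python; the drive count is a hypothetical placeholder and the ~2 GB/s per-lane figure is the approximate PCIe 4.0 rate, not a measured value:

# Back-of-the-envelope PCIe 4.0 bandwidth budget.
# Assumes roughly 2 GB/s per PCIe 4.0 lane (16 GT/s with 128b/130b encoding
# gives ~1.97 GB/s raw per lane; real-world usable is a bit less).
GB_PER_LANE = 2.0

nvme_drives = 10                 # hypothetical drive count, x4 lanes each
nvme_lanes = nvme_drives * 4
nic_lanes = 16                   # one x16 slot for the NIC

print(f"NVMe side:  ~{nvme_lanes * GB_PER_LANE:.0f} GB/s across {nvme_lanes} lanes")
print(f"NIC slot:   ~{nic_lanes * GB_PER_LANE:.0f} GB/s across one x16 slot")
print(f"EPYC total: ~{128 * GB_PER_LANE:.0f} GB/s across 128 PCIe 4.0 lanes")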

I like the added twist of involving your son :)
It was a blast - he's been telling everyone in preschool that he built a server... he still does. Kinda funny.
 

jpmomo

Active Member
Aug 12, 2018
531
192
43
Thanks for all of your responses. My question about "how did they get 55.8 GB/s" was more about the configuration they used, not their technology - e.g. how many drives did they use, and which type, to get that throughput? Which testing software did they use, and with what parameters? I have been experimenting with both HW and SW RAID on some NVMe drives and was curious to hear about RAIDIX. I have a lot of different hardware to experiment with, including the PM1735 that you mention. I also have a couple of the new Intel Optane P5800X drives that are supposed to be pretty good at IOPS.
With regards to the Mellanox cards, the 516-CDAT are dual-port 100GbE NICs. The main benefit of these cards is that they are PCIe 4.0 x16, which means the PCIe bus should not be a bottleneck for 200 Gbps out of a single slot. The cards are not that expensive, but probably more than the 455 (ConnectX-4 vs ConnectX-5) that you have.
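A quick sanity check of that claim (rough bandwidth figures, ignoring protocol overhead and assuming both ports are saturated at once):

# Why a PCIe 4.0 x16 slot can feed a dual-port 100GbE NIC while PCIe 3.0 x16 cannot.
nic_gbit = 2 * 100                 # two 100GbE ports
nic_gbyte = nic_gbit / 8           # = 25 GB/s of line rate

pcie3_x16 = 16 * 0.985             # ~15.8 GB/s (8 GT/s, 128b/130b encoding)
pcie4_x16 = 16 * 1.97              # ~31.5 GB/s (16 GT/s, 128b/130b encoding)

print(f"NIC line rate: {nic_gbyte:.1f} GB/s")
print(f"PCIe 3.0 x16:  {pcie3_x16:.1f} GB/s  -> bottleneck")
print(f"PCIe 4.0 x16:  {pcie4_x16:.1f} GB/s  -> headroom")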
I also have a pair of Milan CPUs (7763) that should not create any bottlenecks either :)
 

lihp

Active Member
Jan 2, 2021
186
53
28
Thanks for all of your responses. My question about "how did they get 55.8 GB/s" was more about the configuration they used, not their technology - e.g. how many drives did they use, and which type, to get that throughput? Which testing software did they use, and with what parameters?
Ya know, I am all in regarding your environment, and hey, if you've got a spare P5800X ;) - you won't need to throw 'em away, I'd put 'em to good use, promise...

Apart from that, I relayed your questions. I should have the answers by tomorrow...
 

lihp

Active Member
Jan 2, 2021
186
53
28
Thanks for all of your responses.
PM me and I'll get you a test license so you can create the same or a similar testbed - however you like. Your answers are still pending there.
 

lihp

Active Member
Jan 2, 2021
186
53
28
My question about "how did they get 55.8 GB/s" was more about the configuration they used, not their technology.
I asked them about the exact scenario you listed. So far I got first info on a closely related one - which was apparently the basis for the one you mentioned:

This result was obtained on one of our client's systems: write bw = 24.7 GiB/s, read bw = 42.7-52.2 GiB/s

read: IOPS = 2459k, write: IOPS = 435k

The result was obtained on a local system with an ERA RAID 6 of 24 disks (MO006400KWVND, HPE 6.40 TB Solid State Drive, PCI Express 3.0 x4, 2.5") on xfs.

The tests were run with fio on the local system with the following parameters:

fio --name=bandwidth --filename=/xfs3/bandwidth.fio --ioengine=libaio \
    --iodepth=128 --rw=read --bs=1024k --direct=1 --size=2048G --numjobs=16 \
    --runtime=120 --group_reporting

fio --name=bandwidth --filename=/xfs3/bandwidth.fio --ioengine=libaio \
    --iodepth=128 --rw=write --bs=1024k --direct=1 --size=2048G --numjobs=16 \
    --runtime=120 --group_reporting

fio --name=iopsread --filename=/xfs3/bandwidth.fio --ioengine=libaio \
    --iodepth=1 --rw=read --bs=4k --direct=1 --size=2024G --numjobs=128 \
    --runtime=120 --group_reporting

fio --name=iopswrite --filename=/xfs3/bandwidth.fio --ioengine=libaio \
    --iodepth=1 --rw=write --bs=4k --direct=1 --size=2024G --numjobs=16 \
    --runtime=120 --group_reporting
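For what it's worth, a quick cross-check of those quoted figures in Python (treating the reported aggregates as-is and assuming the stated 24-drive RAID 6 layout):

# Implied per-drive share of the quoted ERA RAID 6 results (24 disks, xfs).
# Reported: read ~42.7-52.2 GiB/s, write ~24.7 GiB/s.
drives = 24
read_gib_s = 52.2
write_gib_s = 24.7

print(f"read per drive:  {read_gib_s / drives:.2f} GiB/s")         # ~2.2 GiB/s, plausible for a PCIe 3.0 x4 SSD
print(f"write per drive: {write_gib_s / (drives - 2):.2f} GiB/s")  # RAID 6 has n-2 data drives, ~1.1 GiB/s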
 

lihp

Active Member
Jan 2, 2021
186
53
28
Actually, considering the NVMe drives and the IOPS, it becomes obvious again that drive latency in particular is a limiting factor. I figure an array of 16+ P5800X drives would play in a completely different league...
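That latency point can be made concrete with a small sketch: at queue depth 1 each fio job has exactly one I/O in flight, so per-job IOPS is roughly 1 / average latency. The latency figures below are hypothetical ballpark numbers, not measurements:

# At iodepth=1, per-job IOPS ~= 1 / average completion latency.
def iops_qd1(avg_latency_us, numjobs):
    per_job = 1_000_000 / avg_latency_us   # I/Os per second for one job
    return per_job * numjobs

# Hypothetical QD1 read latencies: ~80 us for a typical NAND NVMe drive,
# ~10 us for an Optane-class drive such as the P5800X.
print(f"NAND,   80 us, 128 jobs: {iops_qd1(80, 128):,.0f} IOPS")   # ~1.6M
print(f"Optane, 10 us, 128 jobs: {iops_qd1(10, 128):,.0f} IOPS")   # ~12.8M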