EPYC 7443P NVMe


jpmomo

Active Member
Aug 12, 2018
Thanks. As is the case with most of this type of benchmarking, the devil is in the details. I would still like to compare the HighPoint software RAID with RAIDIX in a more apples-to-apples test. I was using RAID 0 vs. RAID 6, and 8 drives vs. 24, though mine were PCIe Gen4. There was also Windows NTFS vs. XFS and several other differences.
 

lihp

Active Member
Jan 2, 2021
Thanks. As is the case with most of this type of benchmarking, the devil is in the details. I would still like to compare the HighPoint software RAID with RAIDIX in a more apples-to-apples test. I was using RAID 0 vs. RAID 6, and 8 drives vs. 24, though mine were PCIe Gen4. There was also Windows NTFS vs. XFS and several other differences.
Here is the apples-to-apples data - the exact info on the case you mentioned:
Platform: Supermicro X11DPU-ZE+
CPU: Intel(R) Xeon(R) Gold 6240 @ 2.60GHz
Drive: Western Digital (0TS1954) WUS4CB032D7P3E3 3.2TB
Kernel: 5.4.17-2011.3.2.1.el7uek.x86_64

We used a RAIDIX ERA RAID 5 across 20 drives with a 64 KB stripe size.

Tests were run with fio on the local system with the following parameters:

direct=1
bs=1024k
rw=read
numjobs=72
offset_increment=50G
size=100%
group_reporting
ioengine=libaio
runtime=20m
stonewall
ramp_time=10s
time_based
random_generator=tausworthe64
qd=256
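
For anyone wanting to reproduce this, the parameters above roughly translate into a fio job file like the sketch below. The device path is a placeholder and "qd=256" is assumed to map to fio's iodepth option; adjust both to your own array.

  [global]
  direct=1
  bs=1024k
  ioengine=libaio
  ; assumed equivalent of "qd=256"
  iodepth=256
  numjobs=72
  offset_increment=50G
  size=100%
  runtime=20m
  time_based
  ramp_time=10s
  random_generator=tausworthe64
  group_reporting

  [seq-read]
  stonewall
  rw=read
  ; placeholder path for the ERA RAID 5 block device
  filename=/dev/era_raid5

Run it with "fio seq-read.fio" and read the aggregate bandwidth from the group_reporting summary.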
 

jpmomo

Active Member
Aug 12, 2018
I installed CentOS 8.2 on one of my systems (Supermicro H12SSL-i, AMD EPYC 7502). I reran some benchmarks with fio on the software-based NVMe RAID 0 (HighPoint 7505 cards) and was able to get around 30 GB/s write. I will remove the HighPoint drivers and test with RAIDIX once I get their software. The HighPoint setup is two 7505 cards (4x 1TB M.2 PCIe Gen4 per card) for a total of 8x 1TB M.2 SSDs, and I am able to combine the two cards into what they call cross-sync. Keep in mind that these tests use multiple threads/jobs, vs. your goal of optimizing your (or your son's :)) server:

[Attached screenshot: fio benchmark results]
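
For reference, a sequential-write run of this shape can be launched as a one-liner like the sketch below; the device path, queue depth and job count are assumptions for illustration, not the exact invocation used above, and writing to a raw block device destroys whatever is on it.

  fio --name=seqwrite --filename=/dev/your_raid0_device --rw=write \
      --bs=1024k --direct=1 --ioengine=libaio --iodepth=64 --numjobs=16 \
      --runtime=60 --time_based --group_reporting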
 

lihp

Active Member
Jan 2, 2021
The HighPoint setup is two 7505 cards (4x 1TB M.2 PCIe Gen4 per card) for a total of 8x 1TB M.2 SSDs, and I am able to combine the two cards into what they call cross-sync. Keep in mind that these tests use multiple threads/jobs, vs. your goal of optimizing your (or your son's :)) server:
RAIDIX really comes alive with at least 4 NVMe drives, ideally 8 or more. RAID 0 and RAID 1 are simple enough that I wouldn't expect a big difference, if any. 8 NVMe drives in one RAID 5 is where I'd expect it to shine. Also, the 55 GB/s wasn't single-threaded (no single core I know of can handle that).

Great testing, btw. Very curious what you can achieve in either configuration and what the final outcome is. I am still waiting for my NVMe drives...
 

jpmomo

Active Member
Aug 12, 2018
The HighPoint system only allows RAID 0, RAID 1, or RAID 1+0. I ran a couple more tests on that setup with RAID 1+0 and got very good read throughput (30 GB/s) but much lower write (8 GB/s). I downloaded the RAIDIX drivers for CentOS 8.2 this morning and will try to install and test shortly. I think I can only use 4 of the drives and may have to switch them from the HighPoint card to a simple PCIe Gen4 x16 AIC (Gigabyte Aorus Gen4). This would allow me to test 4 of the 1TB M.2 drives in multiple RAID formats.
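
Before building any array on the AIC, a quick sanity check that all four drives enumerate can be done with standard tools (assuming the nvme-cli and pciutils packages are installed):

  # list NVMe controllers seen on the PCIe bus
  lspci -nn | grep -i 'non-volatile memory'
  # list the NVMe namespaces with model and capacity
  sudo nvme list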
 

lihp

Active Member
Jan 2, 2021
This would allow me to test 4 of the 1TB M.2 drives in multiple RAID formats.
I am so curious how that turns out. My PM1735 cards are (of course?!) delayed, so it'll take more time until I can test as well...
 

ca3y6

New Member
Apr 3, 2021
Why not use 8 RAM modules? I understand the CPU has 8 channels. Aren't you leaving memory bandwidth on the table?
 

lihp

Active Member
Jan 2, 2021
Why not use 8 RAM modules? I understand the CPU has 8 channels. Aren't you leaving memory bandwidth on the table?
They are dual-rank x4 DIMMs, so I am effectively using 8 memory channels with 4 modules, meaning it takes advantage of the full bandwidth.
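
If anyone wants to check how their own channels and ranks are actually populated, something along these lines works on most Linux boxes (the exact field names vary by BIOS):

  # show populated DIMM slots, sizes, ranks and configured speed
  sudo dmidecode -t memory | grep -E 'Locator|Size|Rank|Speed'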
 

lihp

Active Member
Jan 2, 2021
The HighPoint system only allows RAID 0, RAID 1, or RAID 1+0. I ran a couple more tests on that setup...
Availability is a b*tch right now. It seems I have to change the whole setup - I simply can't get my SSDs... Three times already I had fixed arrival dates - all gone...
 

jpmomo

Active Member
Aug 12, 2018
Which SSDs are you looking to get? Are you OK with U.2, or do you still want PCIe add-in cards?
 

lihp

Active Member
Jan 2, 2021
Which SSDs are you looking to get? Are you OK with U.2, or do you still want PCIe add-in cards?
Just looking at the higher end right now - I even ordered cables to be prepared for any outcome (PCIe NVMe HBA, SATA HBA, motherboard NVMe connectors, ...).

Options for me are:
  • 4x Kioxia CM6-V 3.2TB, 3 DWPD - right now my first choice if I can get them, but pricey (the 3.2TB because of its performance) - thinking about ordering them too ;)
  • 4x Samsung PM9A3 in 1.9TB or 3.8TB - an OK choice, fine with me, and available at decent pricing IF in stock - ordered 4 already
  • 4x Samsung PM1735 (right now my least favorite - the U.2 versions performed pretty badly under load, so it's safe to assume the PCIe versions may too) - ordered and paid for 4 already - still not arrived
Getting decent SSDs at normal pricing right now is a b*tch. All ordered wholesale, but the waiting times... Probably the PM1735 by mid/end of the month; the PM9A3 should have been here today but got postponed. The CM6-V was available in 1.5TB, but performance is subpar compared to the 3.2TB and the pricing on the CM6-V 1.5TB is off.
 

lihp

Active Member
Jan 2, 2021
Which SSDs are you looking to get? Are you OK with U.2, or do you still want PCIe add-in cards?
And yeah, fine with U.2 also - I ordered the cables for the H12SSL-NT for U.2 drives today. Just in case...
 

jpmomo

Active Member
Aug 12, 2018
CDW had the PM1735s in stock briefly at a decent price, but ironically the website just went down for maintenance! I purchased a PM1735 3.2TB for around $875 just before the Chia stuff disrupted everything. CDW didn't change their prices, but they have been mostly out of stock. They had the PM9A3 U.2 at a very good price but it also just went out of stock. Have you considered the Intel D7-5510 U.2 drives? And if you really want to go high end... the Intel Optane P5800X 1.6TB might make for some good storage :)
 

lihp

Active Member
Jan 2, 2021
CDW had the PM1735s in stock briefly at a decent price, but ironically the website just went down for maintenance! I purchased a PM1735 3.2TB for around $875 just before the Chia stuff disrupted everything. CDW didn't change their prices, but they have been mostly out of stock. They had the PM9A3 U.2 at a very good price but it also just went out of stock. Have you considered the Intel D7-5510 U.2 drives?
I live in Europe, where availability is quite different...

On the Intel D7-5510 drives: hmm, I wasn't aware of them - didn't really check them before. And pricing seems decent too atm; at least they are close to the Kioxia drives... gonna check.

And if you really want to go high end... the Intel Optane P5800X 1.6TB might make for some good storage :)
Normally the P5800X are out of my league, pricing-wise. What I am doing here is just a private endeavor. But I am already this deep in and am considering those drives nonetheless. Obviously my "dream drives" - 4x 1.6TB of them... sick IOPS, sick latency... No clue on their availability here - they are listed but nowhere in stock, and I don't see any orders... meaning it seems not many are buying them (due to their pricing?).
 

jpmomo

Active Member
Aug 12, 2018
The Intel Optane P5800X is actually a pretty good value when compared directly to its predecessor, the P4800X. They are roughly the same price (admittedly high on a per-GB basis), but the P5800X is a big step up in performance and endurance. The 800GB version seems to be available (at least in the US). There is a US-based seller on eBay that has both the 800GB and 1.6TB versions for roughly their retail price. That seller also had the 400GB version but just sold it. I was trying to find a couple of the 400GB versions for a project I was working on but couldn't find any, so I wound up getting 2 of the 800GB versions. Intel came out with the D7-5510 series of NVMe PCIe Gen4 SSDs at the same time. They seem to be easier to find lately and are priced like the non-Optanes that they are!
 

lihp

Active Member
Jan 2, 2021
The Intel Optane P5800X is actually a pretty good value ...
We are on the same page there. If I do it, it would have to be 4 drives of the 1.6TB version to make sense - in my case as hot storage. That's a total of 11-14K € atm - no way I invest that right now into a wager and proof of concept (even though I'd like to, being the nerd I am ;) ).
 

jpmomo

Active Member
Aug 12, 2018
There is also another seller on eBay that has a lot of the PM9A3 3.84TB SSDs in stock. They are not Optanes but seem to be a good all-round drive for a good price. If you are still considering the Samsung PM1735 PCIe card, CDW has them in stock for $489, but that is for the 1.6TB. Even though your motherboard has a lot of x16 Gen4 PCIe slots, I would be hesitant to use one on only a 1.6TB drive. It might be more interesting to do what I did with a similar motherboard and use the HighPoint 7505 NVMe RAID controllers with 4x M.2 NVMe Gen4 drives. It is a good way to fully utilize at least a couple of the x16 slots!
 

lihp

Active Member
Jan 2, 2021
There is also another seller on eBay that has a lot of the PM9A3 3.84TB SSDs in stock. They are not Optanes but seem to be a good all-round drive for a good price. If you are still considering the Samsung PM1735 PCIe card, CDW has them in stock for $489, but that is for the 1.6TB. Even though your motherboard has a lot of x16 Gen4 PCIe slots, I would be hesitant to use one on only a 1.6TB drive.
I am confident that I'll have 4 drives by the end of this month - more likely even 8 or 12. I might as well become one of the bad boys and sell PM1735 3.2TB at sick prices. I am actually quite positive on the PM9A3 - the U.2 versions seem to be a steal, price- and performance-wise.

It might be more interesting to do what I did with a similar motherboard and use the HighPoint 7505 NVMe RAID controllers with 4x M.2 NVMe Gen4 drives. It is a good way to fully utilize at least a couple of the x16 slots!
I am not a fan of your HighPoint cards at all; frankly, I am quite sceptical about them. That's exactly why I am glad that you are also testing - opinion is one thing, testing is another. I am all in for M.2 drives for the OS partition or similar, but I have doubts about those M.2 drives as hot storage or even hot cache.

If we are talking about conventional RAID controllers for U.2 drives etc., I believe the time of hardware RAID controllers is over, or soon will be. Again, that's why I am so interested in these tests. ZFS is imho ill-suited even though it has a huge fanbase, simply because ZFS is a monolithic approach to filesystem and storage, and monoliths are destined to fail in the long or short run. mdadm, RAIDIX and the like are imho the future: granular control, higher flexibility, no artificial/secret on-disk formats for recovery - instead a bunch of devices you can just plug into another system, and given the right driver it just works with minimal effort. This also enables a whole lot of optimization and control that a RAID controller does not. Apart from that, a RAID controller is a SPOF too. Nothing wrong with getting rid of it if performance and reliability don't suffer.
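
To make the mdadm point concrete, here is a minimal sketch of that portability, with placeholder device names (and note the create step is destructive):

  # build a 4-drive software RAID 5 out of NVMe namespaces
  sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
  # on any other Linux box, the array re-assembles from the metadata on the members
  sudo mdadm --assemble --scan

Per the argument above, RAIDIX ERA falls into the same software-RAID category, just with its own kernel driver and management tooling.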
 