System absurdly underperforming on Madmax plotter (chia)


ari2asem

Active Member
Dec 26, 2018
The Netherlands, Groningen
It's not normal; it should be around 100 MB/s (I think), so yeah, something is definitely wrong, but I have no idea what it could be.
My case with the SAS expander backplane is a Gooxi RM4024; the Adaptec BIOS reads it as GOOXI 4024_36V1.0.


This backplane has 3 SFF-8087 ports. It doesn't matter which port I connect to the Adaptec 78165, I always get around 25 MB/s.

Could this backplane be the bottleneck?
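
One way to narrow it down is to read straight from a single drive behind the expander and see what it sustains. A minimal sketch, assuming the drive shows up as /dev/sdX (check lsblk first, the device names here are placeholders):

Code:
lsblk -d -o NAME,MODEL,TRAN,SIZE    # identify the SAS disks first
# Raw sequential read from one drive, bypassing the filesystem and page cache.
# A healthy 10k SAS disk should sustain well over 100 MB/s here; if every
# disk tops out near 25 MB/s, the expander/backplane link is the suspect.
sudo dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct status=progress
# Check the link rate each phy negotiated through the expander:
grep -H . /sys/class/sas_phy/*/negotiated_linkrate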
 

mirrormax

Active Member
Apr 10, 2020
I'm getting closer to 260 MiB/s straight to a JBOD over 6 Gb SAS/SATA3 on Ubuntu.
As long as it's not super old disks, or those terrible SMR disks that can get really slow, something must be wrong.
 

boomheadshot

Member
Mar 20, 2021
I've got 256 GB of RAM, and when I try to make two plots in parallel, my system freezes. It was fine when I did a single plot with 128 GB of RAM. The screen just freezes and the keyboard turns off; I'm guessing I might be running out of RAM or something.

All I want is a program that shows the max RAM usage. Are there any beginner-friendly solutions for people like me?

I've tried googling, and there were some ways of doing it, but I was hoping to get something like this:
[attached screenshot: 1627108252461.png]

That would let me monitor these fields (or at least some of them), but on Ubuntu.
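
For the one-shot case, GNU time can report the plotter's peak resident memory after it exits. A minimal sketch, assuming the madmax binary is ./chia_plot; note that a tmpfs RAMDisk does not count against any process and shows up in free's "shared" column instead:

Code:
# Run the plotter under GNU time; peak RSS is printed on exit, in kB.
/usr/bin/time -v ./chia_plot <your usual arguments> 2> plot_time.log
grep "Maximum resident set size" plot_time.log

# Live view: poll overall memory every 5 seconds while the plot runs.
free -h -s 5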

Edit: Never mind, I just tried making only 1 plot and the system still froze up on me, so it must be something with the SAS 10k mdadm array as temp1. It might also be the kernel patch I applied to get the P410 working in HBA mode (that card has been causing strange problems, such as a reboot loop after I power the system on, which forces me to pull the CMOS battery after turning it off; what has helped instead is running the init 6 command, loading into Windows on a dual-boot, and then shutting down).

Edit 2: mdadm seems to be the cause: if I plot on the SAS drives independently, everything is perfectly smooth, but when I make a 2- or 4-drive RAID 0 array with mdadm (following these instructions), it crashes right at the end of phase 1, sometimes a bit earlier or later. Really strange.
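
For reference, the recipe boils down to something like this (a rough sketch only, since the linked instructions aren't reproduced here; the device names and 4-drive count are assumptions):

Code:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sd[b-e]
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/temp1
sudo mount /dev/md0 /mnt/temp1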

Edit 3: It took me a while to narrow it down and find people with a similar problem:
xfs + mdadm + discard seems to be the problem.
Trying btrfs right now (it was the second fastest for me after xfs); it seems to be working well (not crashing during phase 1 right now).
Edit 4: btrfs didn't crash. Phew.
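
For anyone hitting the same thing, both workarounds look roughly like this (a sketch, assuming the array is /dev/md0 as above; nodiscard is actually the XFS default, so the crash implies discard was being passed in somewhere):

Code:
# Stay on XFS but make sure online discard is off:
sudo mount -o nodiscard /dev/md0 /mnt/temp1

# Or reformat the array as btrfs, which is what ended up working here:
sudo umount /mnt/temp1
sudo mkfs.btrfs -f /dev/md0
sudo mount -o nodiscard /dev/md0 /mnt/temp1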
 

boomheadshot

Member
Mar 20, 2021
Sorry guys, gonna bump the thread.

So, it takes me 27 min to make 1 plot with 4x SAS 10k drives as temp1 and a RAMDisk as temp2. But when I try to make 2 plots in parallel (4x SAS for each temp1 and a RAMDisk for each temp2), it takes 40 minutes to make both plots. Is it really supposed to be THAT much of a performance hit, or am I doing something wrong? I was hoping to do 3 plots in parallel with 384 GB of RAM, but it seems much slower than I was expecting.
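
For context, the two-instance layout looks roughly like this (a sketch only; the mount points, key placeholders, and the thread split are assumptions to adapt):

Code:
# One 110G tmpfs RAMDisk per plot:
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram1
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram2

# Instance 1: first 4-drive array as temp1, first RAMDisk as temp2
./chia_plot -n 1 -r 16 -u 512 -t /mnt/md0/ -2 /mnt/ram1/ \
    -d /mnt/dst/ -f <farmer_key> -p <pool_key> &

# Instance 2: the other array and RAMDisk
./chia_plot -n 1 -r 16 -u 512 -t /mnt/md1/ -2 /mnt/ram2/ \
    -d /mnt/dst/ -f <farmer_key> -p <pool_key> &
wait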
 

ari2asem

Active Member
Dec 26, 2018
The Netherlands, Groningen
boomheadshot said:
(quoted post above)
This would make sense: with 2 parallel plots on the same temp1 and temp2, you will indeed see a decrease in speed, because now you are writing twice the data to the same temp folders.

But I wonder how you get a 27 min total plotting time.

I use Windows 10 and I have:
- 4x SAS 10k as a striped volume (as temp1) (HBA card is an HP H240 in HBA mode, PCIe v3 x8)
- RAMDisk as temp2 (119 GB = 110 GiB)
- threads 32, buckets 512... I get a plotting time of about 2 hours and 20 min.

Another configuration, with the same plotting time of 2 hours and 20 min:
- temp1 (240 GB) and temp2 (119 GB) in RAMDisk (drive letters X and Z)
- threads 32, buckets 512

I can set the threads to 64, but this gives me an even longer plotting time.

Pretty strange, because my hardware is:
- dual EPYC 7551 (128 threads / 64 cores total)
- 512 GB of 2666 MHz RAM (16x 32 GB sticks)
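
One thing worth ruling out on a dual-socket board: a single plot spanning both sockets pays cross-socket memory latency, which would fit 64 threads being slower than 32. On Linux this is easy to test with numactl (a sketch only; Windows has a rough equivalent in cmd's start /NODE switch):

Code:
numactl --hardware    # see how many NUMA nodes the two EPYCs expose
# Pin the plotter's threads and memory to one node:
numactl --cpunodebind=0 --membind=0 ./chia_plot -r 32 -u 512 <other args>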
 

boomheadshot

Member
Mar 20, 2021
ari2asem said:
(quoted post above)
I think you misunderstood me: I actually had 8 SAS drives running and 256 GB of RAM, so 4 SAS drives and one 110 GB RAMDisk for each parallel plot. I guess even 1 plot on a RAMDisk gets somewhat close to unleashing the full potential of the CPU, which is why the times were so much worse with 2 plots in parallel.

Regarding your surprise about my 27 min time: it's nothing spectacular for Linux; people get even faster times. Please read my thread from the beginning and just accept that Windows is trash. Try Ubuntu; I'm not a tech-savvy guy and I figured it out, so you can do it too.
NTFS temp1 and an NTFS RAMDisk are trash (on some platforms), especially on EPYC/Threadripper systems for some reason (people on Ryzens actually do fine on Windows), but our EPYCs just don't work well. I've documented this whole thread to show others how big a difference there can be on some systems.

I'm testing 4 parallel plots (1 SAS 10k for temp1 and 1 SAS 10k for temp2), and I see the CPU threads in htop only hitting 30% max (and not even the whole time), but with one RAMDisk plot it was close to 70%, and it almost completely maxed out with 2 parallel plots, so I guess the CPU is just becoming the bottleneck.
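
To confirm which side is the bottleneck, it helps to watch disk utilisation alongside the cores; a minimal sketch using the sysstat tools:

Code:
iostat -xm 5       # %util near 100 on the SAS drives => disk-bound
mpstat -P ALL 5    # per-core load; busy cores with low disk %util => CPU-bound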
 