System absurdly underperforming on MadMax plotter (Chia)


boomheadshot

Member
Mar 20, 2021
64
3
8
Yes, I haven't found any settings that make a massive difference. I leave SMT on, though I disabled most of the memory encryption options, set the CPU TDP to 280 W, and set determinism to power.
How many kH/s did you get mining?
40 kH/s at 2.6 GHz, 50 kH/s at 3.6 GHz. I kept it at 2.6 because it drew half the power (going by the HWiNFO numbers, which could be wrong: 400 W vs 800 W).
 

mirrormax

Active Member
Apr 10, 2020
225
83
28
Yes, that sounds the same as mine; I did about 100 kH/s overclocked. Try Ubuntu, there's a Linux port of ZenStates that works.
 

boomheadshot

Member
Mar 20, 2021
64
3
8
Yes, that sounds the same as mine; I did about 100 kH/s overclocked. Try Ubuntu, there's a Linux port of ZenStates that works.
I did an overnight test plot on a 7200 RPM 2 TB drive, but it was extremely slow (it didn't even finish phase 1 after about 10 hours). Screw SMR, lol.


I couldn't get ZenStates to work.
When I ran "sudo apt install pip3 python3-tk wheel", it failed with "E: Unable to locate package pip3", and the same happened with tkinter.
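For reference, I believe pip3 is the name of the command rather than the apt package; on Ubuntu the packages should be the ones below (package names are my assumption of what the guide meant):

Code:
# assumed fix: the pip3 command comes from python3-pip, tkinter from python3-tk
sudo apt install python3-pip python3-tk python3-wheel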

Then, when I installed PySimpleGUI, there would be an error importing it at execution time, even though it was installed.

I got a friend of mine to help me out, so we installed Anaconda and used the Python from there.
Even though I got the program to launch and the GUI to work, I don't think my commands changed anything: in the IPMI sensors the VCORE always stayed the same, and "cat /proc/cpuinfo" didn't reflect the frequencies being applied. Under load, the frequencies actually dropped back to 400 MHz (-_-)
I tried googling and such, but the guides were either too complicated to follow or the information was just too scarce. I really tried to fix it for two days, but I gave up after a bunch of mental breakdowns like a psycho. I can google and follow instructions, but Ubuntu is too steep for me: a lot of the instructions assume you already know what you're doing and omit a lot of details, so whenever I tried to apply what was written on the internet, it just didn't work out for me.

After switching back from my X99 build, my SAS drives don't show up when I use the HP P410 on my ASUS KRPA-U16. When I switch back to the X99 build, it goes into a boot loop (I can't even get into the BIOS), so I pulled the card out, got into the BIOS, disabled storage booting in the CSM settings, and then plugged the RAID controller back in. That works for the X99 board, but the drives still don't show up when I switch to the EPYC board (it worked before), so now I'm trying to flash it to an earlier firmware.
 

boomheadshot

Member
Mar 20, 2021
64
3
8
Are you using ZenStates-Rome-ES?
All I need to run is sudo ./ZenStates-Rome-ES/zenstates.py --oc-frequency 3200
and it works, but there are also some presets that can be modified; they were a bit wonky for me.

edit: link https://forums.servethehome.com/ind...ocking-epyc-rome-es.28111/page-17#post-273175
I downloaded the file from here: irusanov/ZenStates-Linux; I found the link on page 1 of the same thread.

So it wasn't the one that Zhang posted.

*facepalm*, I just realized that that one is for Ryzen CPUs, OMG LOL, silly me.
I downloaded from the GitHub via terminal, but I didn't download infrared's file from page 1 of that thread. Myyyyy bad.

Thank you, @mirrormax , I'll give it another go after I get my HP p410 to work


zenstates.py from the GitHub is 22,244 bytes, while Zhang's is 22,544 bytes... nvm, I realized the problem: I downloaded from the GitHub, and not the file that infrared attached on page 17. FML LOL
 
Last edited:

mirrormax

Active Member
Apr 10, 2020
225
83
28
Yeah, I've tried to get them to sticky zzhang's version, which actually works even for dual socket.
 

boomheadshot

Member
Mar 20, 2021
64
3
8
Yeah, I've tried to get them to sticky zzhang's version, which actually works even for dual socket.
I decided to just download it from GitHub because I figured it would be safer, but I didn't notice that it wasn't the one for EPYCs, so my fault.

I'll give it one more shot, thanks for giving me a fresh look at the situation! Now I just need to test it
 

boomheadshot

Member
Mar 20, 2021
64
3
8
Are you using ZenStates-Rome-ES?
All I need to run is sudo ./ZenStates-Rome-ES/zenstates.py --oc-frequency 3200
and it works, but there are also some presets that can be modified; they were a bit wonky for me.

edit: link https://forums.servethehome.com/ind...ocking-epyc-rome-es.28111/page-17#post-273175
So I got the overclocking tool to work and the frequencies didn't downclock, but it was still extremely slow.

It seems that I used too many threads, because it was faster when I used only 20 threads, WITHOUT even setting the frequency higher.

root@KRPA-U16-Series:~# sudo /home/oem/Desktop/chia-plotter/build/./chia_plot -n 2 -r 120 -u 256 -t /mnt/sas10k1/ -2 /mnt/sas10k2/ -d /mnt/sas10k3/ -p [redacted] -f [redacted]
Multi-threaded pipelined Chia k32 plotter - 9e649ae
Final Directory: /mnt/sas10k3/
Number of Plots: 2
Crafting plot 1 out of 2
Process ID: 11741
Number of Threads: 120
Number of Buckets P1: 2^8 (256)
Number of Buckets P3+P4: 2^8 (256)
Pool Public Key: [redacted]
Farmer Public Key: [redacted]
Working Directory: /mnt/sas10k1/
Working Directory 2: /mnt/sas10k2/
Plot Name: plot-k32-2021-06-23-03-29-622f6e23fd097e7fb5e6cb0376704985a7b63f62eab20541f5ccfc8ef801f5cc
[P1] Table 1 took 389.553 sec
[P1] Table 2 took 1393.22 sec, found 4294890630 matches
[P1] Table 3 took 2592.5 sec, found 4294756473 matches
[P1] Table 4 took 3180.69 sec, found 4294502503 matches
[P1] Table 5 took 3086.4 sec, found 4294079867 matches
[P1] Table 6 took 2451.58 sec, found 4293205547 matches
[P1] Table 7 took 1277.49 sec, found 4291356391 matches
Phase 1 took 14399.3 sec
[P2] max_table_size = 4294967296
[P2] Table 7 scan took 1147.29 sec
[P2] Table 7 rewrite took 734 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 625.65 sec
[P2] Table 6 rewrite took 439.77 sec, dropped 581534591 entries (13.5455 %)
[P2] Table 5 scan took 678.599 sec
[P2] Table 5 rewrite took 252.155 sec, dropped 762202854 entries (17.7501 %)
[P2] Table 4 scan took 677.013 sec
[P2] Table 4 rewrite took 277.329 sec, dropped 829034425 entries (19.3046 %)
[P2] Table 3 scan took 653.554 sec
[P2] Table 3 rewrite took 249.003 sec, dropped 855231159 entries (19.9134 %)
[P2] Table 2 scan took 618.086 sec
[P2] Table 2 rewrite took 269.626 sec, dropped 865685855 entries (20.1562 %)
Phase 2 took 6734.89 sec
Wrote plot header with 268 bytes
[P3-1] Table 2 took 1063.86 sec, wrote 3429204775 right entries
[P3-2] Table 2 took 809.864 sec, wrote 3429204775 left entries, 3429204775 final
[P3-1] Table 3 took 847.377 sec, wrote 3439525314 right entries
[P3-2] Table 3 took 757.431 sec, wrote 3439525314 left entries, 3439525314 final
[P3-1] Table 4 took 1773.14 sec, wrote 3465468078 right entries
[P3-2] Table 4 took 823.068 sec, wrote 3465468078 left entries, 3465468078 final
[P3-1] Table 5 took 904.794 sec, wrote 3531877013 right entries
[P3-2] Table 5 took 864.15 sec, wrote 3531877013 left entries, 3531877013 final
[P3-1] Table 6 took 1510.05 sec, wrote 3711670956 right entries
[P3-2] Table 6 took 897.115 sec, wrote 3711670956 left entries, 3711670956 final
[P3-1] Table 7 took 2624.79 sec, wrote 4291356391 right entries
[P3-2] Table 7 took 1207.83 sec, wrote 4291356391 left entries, 4291356391 final
Phase 3 took 14111.1 sec, wrote 21869102527 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 615.556 sec, final plot size is 108786780893 bytes
Total plot creation time was 35870.2 sec (597.837 min)
Started copy to /mnt/sas10k3/plot-k32-2021-06-23-03-29-622f6e23fd097e7fb5e6cb0376704985a7b63f62eab20541f5ccfc8ef801f5cc.plot

[attached screenshot: 1624444977991.png]
^^^ this is at 3.0 GHz, 1.05 VID; decided to attach it just in case.




What seems to be wrong is that in htop you can really see that the threads don't get loaded up (only rarely are there spikes where all of the threads get loaded; other than that, nothing). On other systems I've seen all of the threads constantly under load when you look at the "progress bars", but there's nothing like that for me.

Even slower than on Windows, I don't get it... :(
[attached screenshot: 1624448360764.png]
^ Windows seems to do a better job of putting those threads to work.


Are there any kinds of drivers that I need, or something? (I thought you don't need those on Linux.)

I remember in Sloth Tech TV's video about the plotter for Windows, people commented on how the thread settings didn't work correctly, and I never gave it much thought. But below you can see that with 16 threads set, it actually puts 58 threads to work (correct me if I'm understanding this right).
[attached screenshot: 1624448878946.png]
 
Last edited:

mirrormax

Active Member
Apr 10, 2020
225
83
28
If you want to see full 128-thread usage you need to run several plots in parallel: phase 1 uses up to ~32 threads, phase 2 I've seen go up to 128, and phase 3 seems to use 16 threads max.
That said, something looks very wrong with your numbers. I would leave SMT on; it seems like you have it off, but you're still trying to use 120 threads?
Optimal for your setup would probably be SMT on, running 4 plots in parallel with 32 threads each if you have the NVMe storage to do that, or at least 2.
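e.g. something like this, one instance per terminal (paths and keys are placeholders):

Code:
# sketch: two parallel madmax instances, 32 threads each, on separate temp drives
./chia_plot -n 2 -r 32 -u 256 -t /mnt/nvme0/ -2 /mnt/nvme0/ -d /mnt/dest/ -p <pool key> -f <farmer key>
# ...and in a second terminal:
./chia_plot -n 2 -r 32 -u 256 -t /mnt/nvme1/ -2 /mnt/nvme1/ -d /mnt/dest/ -p <pool key> -f <farmer key>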

edit: nothing explains the low benchmark scores though; maybe try a BIOS reset. Have you tested XMR mining lately to make sure it still performs like it should there?
 
Last edited:

Nubbins

New Member
Mar 19, 2019
6
3
3
The clue is in your disks. If you look at your disk "busy" you'll see that it's at 100%. The CPUs are simply waiting for the disks to finish a read or write before they move on to the next step. Use nmon rather than htop; it will show you your CPU and disk I/O on the same screen.

Why are you using a Threadripper paired with mechanical drives? Buy a couple of decent NVMe disks and some RAM.

I use dual E5-2690 v2 with 256 GB and a couple of Intel PCIe NVMe disks in mdadm RAID 0, which probably cost a fraction of your setup, and I'm doing plots in under 25 minutes.

Also, yes, NTFS is SHOCKING in Ubuntu. I can only read at ~90 MB/s from the same disks that Windows reads at over 200 MB/s.
 

boomheadshot

Member
Mar 20, 2021
64
3
8
If you want to see full 128-thread usage you need to run several plots in parallel: phase 1 uses up to ~32 threads, phase 2 I've seen go up to 128, and phase 3 seems to use 16 threads max.
That said, something looks very wrong with your numbers. I would leave SMT on; it seems like you have it off, but you're still trying to use 120 threads?
Optimal for your setup would probably be SMT on, running 4 plots in parallel with 32 threads each if you have the NVMe storage to do that, or at least 2.

edit: nothing explains the low benchmark scores though; maybe try a BIOS reset. Have you tested XMR mining lately to make sure it still performs like it should there?
[attached screenshot: 1624484478781.png]

The CPU is working fine, 3.0 GHz, 1.05 V, and guess what? Now the temps don't go up too much, and it's only showing 120 W consumption?!?!
WTF, this CPU works in strange ways. Before, the CPU die temp used to be 60 degrees higher than the CCD temps under load, and the power consumption used to be through the roof.
Only the mobo temps are still off.
[attached screenshot: 1624484998058.png]
The only thing I'm doing differently is locking the frequency in the "Extra" tab of the overclocking tool; I've noticed that it helps a lot in R15, and I assumed it would help here too, since the plotter doesn't have to wait for the cores to turbo back up.
I was switching SMT ON/OFF because one of my Win 10 installs didn't work when SMT was ON, so I was fiddling back and forth with it.

The clue is in your disks. If you look at your disk "busy" you'll see that it's at 100%. The CPUs are simply waiting for the disks to finish a read or write before they move on to the next step. Use nmon rather than htop; it will show you your CPU and disk I/O on the same screen.

Why are you using a Threadripper paired with mechanical drives? Buy a couple of decent NVMe disks and some RAM.

I use dual E5-2690 v2 with 256 GB and a couple of Intel PCIe NVMe disks in mdadm RAID 0, which probably cost a fraction of your setup, and I'm doing plots in under 25 minutes.

Also, yes, NTFS is SHOCKING in Ubuntu. I can only read at ~90 MB/s from the same disks that Windows reads at over 200 MB/s.
Can you please tell me where it says the disk is "busy"? htop? iostat? I really don't see where, sorry, lol.
I was thinking that it might be because of NTFS, because most people on Linux kind of scoff at it, but I never knew why. I mentioned that I had NTFS and nobody brought it up, so I guessed it didn't matter. I didn't bother changing the file system because I'm still going back and forth between Windows and Ubuntu, so I never really delved into it.
Will ext4 be better on Ubuntu, or is there some other file system that I should be using for plotting?

Thank you for bringing this up, I kept thinking that something was really off, but I just never knew what.

I'm using mechanical drives because I had already kind of overstretched the budget, and I was preparing for traditional plotting (just before MadMax came out). Since I have a shitload of cores, I decided that the most cost-effective solution would be to plot in parallel on a ton of SAS drives. I've got 19 of them and 3 RAID cards, and I'm just trying to find the optimal setup before I use all of them.
I can't really afford to buy more NVMes (not that it makes sense anymore with the current Chia price and netspace size), but I thought I'd just make plots for people for cheap (especially considering that I've got the SAS drives). I was disappointed with my NVMe anyway (I guess because it was NTFS), so I never really bothered with getting more. I've only got 2 x 1 TB Corsair Force MP600s (yeah, not the best choice because of their low sustained write speeds, but I bought them looking at TBW first and foremost, a mistake on my part).

edit: Another thing I forgot to add: after switching to X99 and then back to the EPYC mobo, my RAID controller wouldn't start at all, and I wasted a whole day trying to get it to work again. I downgraded the firmware (I'd heard that helps with compatibility), still nothing. Then I tried another PCIe slot and everything started working again (WTF?!). The silver lining is that this older firmware actually seems to perform better anyway.
 
Last edited:

Nubbins

New Member
Mar 19, 2019
6
3
3
Chia plotting is massively I/O intensive. Honestly, you won't have much joy on that CPU with those disks. Also, you say you have RAID cards... don't use hardware RAID. Use mdadm to create a RAID 0.
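For example (device names and mount point are placeholders for your own):

Code:
# sketch: stripe two fast drives with mdadm, then format (ext4 here just as an example)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/chia-tmp1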

As for disk busy, install nmon. When you start it, press c and d (that will show you your CPU and disk stats).
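That is:

Code:
sudo apt install nmon
nmon    # then press 'c' for CPU stats and 'd' for disk stats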

How much RAM do you have?

Edit: Also, traditional plotting hammers the disks more than MadMax. With traditional plotting the I/O is much more random with X plots in parallel, and only phase 1 is multithreaded, so I hate to say it, but if you bought this rig for plotting, you could have spent the money in better ways.

What hardware do you have available? Perhaps we can suggest a build that works a bit better.
 
Last edited:

mirrormax

Active Member
Apr 10, 2020
225
83
28
He did say he tried with NVMe in the original post. What were your times with the NVMe disk? Even my non-enterprise NVMes can do sub-2000 s plot times.
Don't trust HWMon for power usage; get a watt meter. At 1.05 V full load you are definitely using 200-400 W. I would also check temps in IPMI, where they are probably labeled correctly, so you know which one is VRM/memory etc. How are you cooling this? It seems like you are pushing it near the thermal throttling limits.
 

boomheadshot

Member
Mar 20, 2021
64
3
8
Guys, I think the problem is with NTFS. On ext4, I've been getting much better times with the SAS drives and the NVMe, about 30% faster. But they max out at like 100 MB/s, so they're crap. I don't get how people get much better times on them. Is it because they use an HBA or something?

I finally decided to just try plotting with a RAM disk on Ubuntu: temp2 was the RAM disk in tmpfs and temp1 was the NVMe drive in ext4. 1655 seconds total completion time (~27.5 minutes), versus 75 minutes on an NTFS RAM disk + temp1 NVMe drive in NTFS. That's a huge difference.

I think it's like a bug on Ubuntu or something, because I did 2 tests back to back with 4 SAS drives: 2 drives were NTFS, and the other 2 were ext4. When I monitored the plotting process on NTFS, it seemed that only temp2 would work, but when plotting on ext4, both drives were working simultaneously. I only partly tested phase 1, but there was a noticeable difference.

I will try xfs and btrfs; I think there is more performance to be had.
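For anyone doing the same, reformatting a temp drive looks roughly like this (/dev/sdX1 is a placeholder for the actual partition, and mkfs wipes it):

Code:
# pick one filesystem per drive
sudo umount /mnt/sas10k1
sudo mkfs.xfs -f /dev/sdX1    # or: sudo mkfs.ext4 /dev/sdX1, or: sudo mkfs.btrfs -f /dev/sdX1
sudo mount /dev/sdX1 /mnt/sas10k1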

Thanks to everyone for the help (especially @mirrormax), but I think @Nubbins should get the $20 that I promised, because he was the first to point out the problems with NTFS.
 

mirrormax

Active Member
Apr 10, 2020
225
83
28
Happy you figured it out. Sounds like you should just stick to RAM disk plotting; maybe get a few higher-capacity sticks if you are low.
 

Nubbins

New Member
Mar 19, 2019
6
3
3
Not here for the $20, just here to offer some advice. I spent MANY hours working out how to plot fast and figured I'd see if I could help.

As a side note, I'm not sure what you did with the RAM disk to make it NTFS, but simply do as the README.md says. Delete your current RAM disk and do:

Code:
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram/
Don't format it. Just point your TMP2 to it.
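You can sanity-check that it's mounted with:

Code:
df -h /mnt/ram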

Also, don't use the RAID 0 modes on the P410i; they're crap and halved the performance of my 4x Intel S4510 SSD RAID 0. Switch to HBA mode (you'll need to google how) and create an mdadm RAID 0.
 

boomheadshot

Member
Mar 20, 2021
64
3
8
Not here for the $20, just here to offer some advice. I spent MANY hours working out how to plot fast and figured I'd see if I could help.

As a side note, I'm not sure what you did with the RAM disk to make it NTFS, but simply do as the README.md says. Delete your current RAM disk and do:

Code:
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram/
Don't format it. Just point your TMP2 to it.

Also, don't use the RAID 0 modes on the P410i; they're crap and halved the performance of my 4x Intel S4510 SSD RAID 0. Switch to HBA mode (you'll need to google how) and create an mdadm RAID 0.
On Windows I used a program called ImDisk, and NTFS was the default file system when you configure it. I don't remember what the other options were, but I never touched it because I had no idea what that would do.

On Ubuntu I'm doing it exactly as it says in the plotter wiki, exactly that line.

The P410 doesn't have an HBA mode, so I'm stuck with a RAID 0 on each disk. I'll try different cache settings, and turning the cache off altogether. It's on right now and set to 50% read / 50% write, which was the best option on Windows, but it might be different on Ubuntu. I'm only doing this with 1 plot right now, so I'm expecting it to be much slower with a few plots in parallel.

btrfs seems to be even better (btr, ha-ha) than ext4. I can't give you the exact numbers because I didn't do all of phase 1, but it's noticeable.

I have the P410 (non-i), and I've read about how to flash it into IT mode, but that's a bit too advanced for me right now. I'm quite happy with the performance boost so far; I'm just going to test different file systems / cache settings / threads / buckets for now.

He did say he tried with NVMe in the original post. What were your times with the NVMe disk? Even my non-enterprise NVMes can do sub-2000 s plot times.
Don't trust HWMon for power usage; get a watt meter. At 1.05 V full load you are definitely using 200-400 W. I would also check temps in IPMI, where they are probably labeled correctly, so you know which one is VRM/memory etc. How are you cooling this? It seems like you are pushing it near the thermal throttling limits.
My IPMI readings are pretty scarce, but the temps are about the same as the CCD temps (in the high 50s). This is how little info there is (not under load):

ID | Name | Type | Reading | Units | Event
1 | CPU1 Temperature | Temperature | 44.00 | C | 'OK'
2 | +VCORE1 | Voltage | 1.02 | V | 'OK'
3 | +VSOC1 | Voltage | N/A | V | N/A
4 | +VDDIO_ABCD_CPU1 | Voltage | N/A | V | N/A
5 | +VDDIO_EFGH_CPU1 | Voltage | N/A | V | N/A
6 | +12V | Voltage | 12.12 | V | 'OK'
7 | +5V | Voltage | 5.14 | V | 'OK'
8 | +5VSB | Voltage | 4.97 | V | 'OK'
9 | +3.3V | Voltage | 3.34 | V | 'OK'
10 | +3.3VSB | Voltage | 3.31 | V | 'OK'
11 | VBAT | Voltage | 3.10 | V | 'OK'
12 | FRNT_FAN1 | Fan | N/A | RPM | N/A
13 | FRNT_FAN2 | Fan | 800.00 | RPM | 'OK'
14 | FRNT_FAN3 | Fan | 800.00 | RPM | 'OK'
15 | FRNT_FAN4 | Fan | N/A | RPM | N/A
16 | FRNT_FAN5 | Fan | N/A | RPM | N/A
17 | FRNT_FAN6 | Fan | N/A | RPM | N/A
18 | REAR_FAN1 | Fan | N/A | RPM | N/A
19 | REAR_FAN2 | Fan | N/A | RPM | N/A
20 | PMBPower1 | Power Supply | N/A | W | N/A
21 | PSU1 Over Temp | Temperature | N/A | N/A | N/A
22 | PSU1 AC Lost | Power Supply | N/A | N/A | N/A
23 | PSU1 Slow FAN1 | Fan | N/A | N/A | N/A
24 | PSU1 PWR Detect | Power Supply | N/A | N/A | 'Power Supply Failure detected'
25 | CPU1_ECC1 | Memory | N/A | N/A | 'Presence detected'
26 | Memory_Train_ERR | OEM Reserved | N/A | N/A | 'OK'
27 | DIMMA1_Temp | Temperature | N/A | C | N/A
28 | DIMMA2_Temp | Temperature | N/A | C | N/A
29 | DIMMB1_Temp | Temperature | N/A | C | N/A
30 | DIMMB2_Temp | Temperature | N/A | C | N/A
31 | DIMMC1_Temp | Temperature | N/A | C | N/A
32 | DIMMC2_Temp | Temperature | N/A | C | N/A
33 | DIMMD1_Temp | Temperature | N/A | C | N/A
34 | DIMMD2_Temp | Temperature | N/A | C | N/A
35 | DIMME1_Temp | Temperature | N/A | C | N/A
36 | DIMME2_Temp | Temperature | N/A | C | N/A
37 | DIMMF1_Temp | Temperature | N/A | C | N/A
38 | DIMMF2_Temp | Temperature | N/A | C | N/A
39 | DIMMG1_Temp | Temperature | N/A | C | N/A
40 | DIMMG2_Temp | Temperature | N/A | C | N/A
41 | DIMMH1_Temp | Temperature | N/A | C | N/A
42 | DIMMH2_Temp | Temperature | N/A | C | N/A
44 | Watchdog2 | Watchdog 2 | N/A | N/A | 'OK'

This is how I'm cooling it. Please don't laugh :D . An IKEA dish rack + Immeln shower basket work like a charm for the 2.5-inch drives. This plate holder is a good fit for the 3.5-inch drives. I thought this was pretty ingenious, ha-ha. r/RedneckChiaFarmer for the win!

So I don't think I'm thermally throttling XD


Happy you figured it out. Sounds like you should just stick to RAM disk plotting; maybe get a few higher-capacity sticks if you are low.
If I had the cash, I would just get 512 gigs of RAM, because the RAM disk seems to be the next bottleneck. With 256 gigs you can still only do 1 parallel plot in a RAM disk (two 110G tmpfs disks alone would be 220 GiB, leaving little headroom); if you do 2, you're really limited on threads, like I was: that 27.5 min time is with only 16 threads, because I was afraid of running out of RAM... But I don't have the money for that, so I'll have to optimize a bunch of SAS drives for now. Thank you for your help!

Edit: XFS actually turned out to be much faster in phase 3, and that makes up for its drawbacks in phases 1 and 2. Phase 4 is also faster than on btrfs. So it does seem to be the better choice.

This is btrfs↓
oem@KRPA-U16-Series:~/Desktop/chia-plotter/build$ sudo ./chia_plot -n 2 -r 32 -u 256 -t /mnt/sas10kbtrfs1/ -2 /mnt/sas10kbtrfs2/ -d /mnt/hdd1/chia/ -p [redacted] -f [redacted]
[sudo] password for oem:
Multi-threaded pipelined Chia k32 plotter - 9e649ae
Final Directory: /mnt/hdd1/chia/
Number of Plots: 2
Crafting plot 1 out of 2
Process ID: 100975
Number of Threads: 32
Number of Buckets P1: 2^8 (256)
Number of Buckets P3+P4: 2^8 (256)
Pool Public Key: [redacted]
Farmer Public Key: [redacted]
Working Directory: /mnt/sas10kbtrfs1/
Working Directory 2: /mnt/sas10kbtrfs2/
Plot Name: plot-k32-2021-06-25-01-42-05a014c560fc3a0897b8043d7528ee8b4fa161240a90fa614130ae01057b42de
[P1] Table 1 took 194.272 sec
[P1] Table 2 took 687.062 sec, found 4295039141 matches
[P1] Table 3 took 1358.74 sec, found 4295079574 matches
[P1] Table 4 took 1723.49 sec, found 4294933818 matches
[P1] Table 5 took 1620.12 sec, found 4295014589 matches
[P1] Table 6 took 1288.18 sec, found 4294973874 matches
[P1] Table 7 took 808.838 sec, found 4294916267 matches
Phase 1 took 7680.77 sec
[P2] max_table_size = 4295079574
[P2] Table 7 scan took 146.502 sec
[P2] Table 7 rewrite took 192.834 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 159.919 sec
[P2] Table 6 rewrite took 123.088 sec, dropped 581282465 entries (13.534 %)
[P2] Table 5 scan took 334 sec
[P2] Table 5 rewrite took 146.915 sec, dropped 762006171 entries (17.7416 %)
[P2] Table 4 scan took 277.278 sec
[P2] Table 4 rewrite took 177.166 sec, dropped 828841876 entries (19.2981 %)
[P2] Table 3 scan took 233.01 sec
[P2] Table 3 rewrite took 145.591 sec, dropped 855149411 entries (19.91 %)
[P2] Table 2 scan took 255.942 sec
[P2] Table 2 rewrite took 88.4865 sec, dropped 865621825 entries (20.154 %)
Phase 2 took 2323.16 sec
Wrote plot header with 268 bytes
[P3-1] Table 2 took 293.145 sec, wrote 3429417316 right entries
[P3-2] Table 2 took 167.358 sec, wrote 3429417316 left entries, 3429417316 final
[P3-1] Table 3 took 355.395 sec, wrote 3439930163 right entries
[P3-2] Table 3 took 152.394 sec, wrote 3439930163 left entries, 3439930163 final
[P3-1] Table 4 took 729.451 sec, wrote 3466091942 right entries
[P3-2] Table 4 took 221.746 sec, wrote 3466091942 left entries, 3466091942 final
[P3-1] Table 5 took 422.651 sec, wrote 3533008418 right entries
[P3-2] Table 5 took 514.36 sec, wrote 3533008418 left entries, 3533008418 final
[P3-1] Table 6 took 493.065 sec, wrote 3713691409 right entries
[P3-2] Table 6 took 783.113 sec, wrote 3713691409 left entries, 3713691409 final
[P3-1] Table 7 took 1619.62 sec, wrote 4294916267 right entries
[P3-2] Table 7 took 1908.04 sec, wrote 4294916267 left entries, 4294916267 final
Phase 3 took 7672.57 sec, wrote 21877055515 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 646.671 sec, final plot size is 108834923212 bytes
Total plot creation time was 18323.3 sec (305.388 min)


And this is XFS↓
oem@KRPA-U16-Series:~/Desktop/chia-plotter/build$ sudo ./chia_plot -n 2 -r 32 -u 256 -t /mnt/sas10kxfs1/ -2 /mnt/sas10kxfs2/ -d /mnt/sdg2/chia/ -p [redacted] -f [redacted]
[sudo] password for oem:
Multi-threaded pipelined Chia k32 plotter - 9e649ae
Final Directory: /mnt/sdg2/chia/
Number of Plots: 2
Crafting plot 1 out of 2
Process ID: 101048
Number of Threads: 32
Number of Buckets P1: 2^8 (256)
Number of Buckets P3+P4: 2^8 (256)
Pool Public Key: [redacted]
Farmer Public Key: [redacted]
Working Directory: /mnt/sas10kxfs1/
Working Directory 2: /mnt/sas10kxfs2/
Plot Name: plot-k32-2021-06-25-01-42-bebcca109c74eb7b75c039292aaffbd5a0d01adef9beca52f05f5fc253a83f20
[P1] Table 1 took 193.733 sec
[P1] Table 2 took 737.067 sec, found 4294898626 matches
[P1] Table 3 took 1569.28 sec, found 4294892720 matches
[P1] Table 4 took 1695.52 sec, found 4294775625 matches
[P1] Table 5 took 1593.61 sec, found 4294615134 matches
[P1] Table 6 took 1352.24 sec, found 4294089837 matches
[P1] Table 7 took 1000.04 sec, found 4293192544 matches
Phase 1 took 8145.56 sec
[P2] max_table_size = 4294967296
[P2] Table 7 scan took 290.979 sec
[P2] Table 7 rewrite took 354.516 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 228.482 sec
[P2] Table 6 rewrite took 168.975 sec, dropped 581374327 entries (13.5389 %)
[P2] Table 5 scan took 347.717 sec
[P2] Table 5 rewrite took 166.808 sec, dropped 762125691 entries (17.7461 %)
[P2] Table 4 scan took 317.125 sec
[P2] Table 4 rewrite took 336.656 sec, dropped 828926885 entries (19.3008 %)
[P2] Table 3 scan took 227.433 sec
[P2] Table 3 rewrite took 184.671 sec, dropped 855147880 entries (19.9108 %)
[P2] Table 2 scan took 289.571 sec
[P2] Table 2 rewrite took 127.379 sec, dropped 865606147 entries (20.1543 %)
Phase 2 took 3069.45 sec
Wrote plot header with 268 bytes
[P3-1] Table 2 took 261.695 sec, wrote 3429292479 right entries
[P3-2] Table 2 took 140.268 sec, wrote 3429292479 left entries, 3429292479 final
[P3-1] Table 3 took 340.78 sec, wrote 3439744840 right entries
[P3-2] Table 3 took 152.976 sec, wrote 3439744840 left entries, 3439744840 final
[P3-1] Table 4 took 315.486 sec, wrote 3465848740 right entries
[P3-2] Table 4 took 171.027 sec, wrote 3465848740 left entries, 3465848740 final
[P3-1] Table 5 took 300.838 sec, wrote 3532489443 right entries
[P3-2] Table 5 took 174.087 sec, wrote 3532489443 left entries, 3532489443 final
[P3-1] Table 6 took 317.332 sec, wrote 3712715510 right entries
[P3-2] Table 6 took 174.976 sec, wrote 3712715510 left entries, 3712715510 final
[P3-1] Table 7 took 727.86 sec, wrote 4293192544 right entries
[P3-2] Table 7 took 327.776 sec, wrote 4293192544 left entries, 4293192544 final
Phase 3 took 3415.87 sec, wrote 21873283556 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 92.7084 sec, final plot size is 108811925901 bytes
Total plot creation time was 14723.7 sec (245.395 min)

Edit 2: I'm getting ready to scale up my plotting and decided to test without the 1 GB cache module on the P410. XFS is even faster (14%) than with the cache, but btrfs is much, much slower. Adding this in case anyone is in the same boat.
 
Last edited:

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Interesting read.

I am running Ubuntu 20.04.2 on remote machines (i.e. not my main desktop) and have the following two plotter configs running.

Supermicro based: X9DR3-LN4F
  • Dual E5-2690 v1 (8c/16t, 2.9 GHz --> 3.8 GHz each)
  • 192 GB RAM
  • Tmp1 = enterprise NVMe (NVMe-to-PCIe adapter)
  • Tmp2 = RAM drive (tmpfs)
  • Plot times: 32 min (single plots)
HP ML350p G8
  • Dual E5-2690 v1 (8c/16t, 2.9 GHz --> 3.8 GHz each)
  • 96 GB RAM
  • Tmp1 = enterprise NVMe (NVMe-to-PCIe adapter)
  • Tmp2 = 4x 400 GB enterprise SAS SSD (RAID 0)
  • Plot times: 36 min (single plots)
I run with

Code:
nohup ./build/chia_plot -n 32 -r 30 -u 128 -t /mnt/chia-tmp1/ -2 /mnt/chia-tmp2/ -d /mnt/spacepool/chia-plots-1/ -c [Pool contract ID] -f [Farmer Public Key] &
The nohup and & push the process into the background (&) and keep it running with stdout going to a nohup.out file, so I can continue doing things. I can tail -100f nohup.out to see what is going on, and log off from the server without killing the process.

If you want to run plotters in parallel, just start them in different directories so the nohup.out files do not overwrite each other.
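e.g. (the directories, temp mounts, and chia_plot path here are just placeholders):

Code:
# sketch: each instance launched from its own directory, so each gets its own nohup.out
mkdir -p ~/plot-a ~/plot-b
cd ~/plot-a && nohup ~/chia-plotter/build/chia_plot -r 30 -u 128 -t /mnt/chia-tmp1/ -2 /mnt/chia-tmp2/ -d /mnt/spacepool/chia-plots-1/ -c [Pool contract ID] -f [Farmer Public Key] &
cd ~/plot-b && nohup ~/chia-plotter/build/chia_plot -r 30 -u 128 -t /mnt/chia-tmp3/ -2 /mnt/chia-tmp4/ -d /mnt/spacepool/chia-plots-1/ -c [Pool contract ID] -f [Farmer Public Key] &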

MadMax also now has a -K option that sets a thread multiplier for phase 2 (i.e. 16 threads for P1 and 32 for P2 by using -r 16 -K 2).

For monitoring I tend to either tail the nohup file or use the following:

Code:
watch -n5 sudo ipmitool sdr
Install the packages with:

Code:
sudo apt install lm-sensors ipmitool
Monitor file creation

Code:
watch -d 'ls -lt [Plot directory]/*.plot | head '
I also dump the contents of the Chia debug log to a Splunk server and have a basic dashboard running, so I can see at a glance from my web browser that things are working.

HBA cards:
HP H240 (8 drives, IT mode) - very cheap. Possibly not vendor-locked, but I have not tested that (mine is in my HP server).
Supermicro AOC-S3008L-L8E - if you have a Supermicro board, these branded LSI 3008 cards tend to be cheaper.

I also posted some timings in the Chia thread here when MadMax came out.

Hope some of this helps.
 
  • Like
Reactions: Jonesgold

ari2asem

Active Member
Dec 26, 2018
745
128
43
The Netherlands, Groningen
Off-topic question regarding Chia...

When I transfer a completed plot (about 110 GB) from the TEMP folder (SATA 6 Gbps SSD) to the DEST folder (SATA 6 Gbps 7200 RPM HDD), my transfer rate in Windows 10 is around 23-25 MB/s.

Is this a normal transfer speed?

I'm using an Adaptec 78165 for the DEST SATA HDD, and an HP H240 (in RAID 0 mode) for the TEMP SATA SSD.
 

boomheadshot

Member
Mar 20, 2021
64
3
8
Off-topic question regarding Chia...

When I transfer a completed plot (about 110 GB) from the TEMP folder (SATA 6 Gbps SSD) to the DEST folder (SATA 6 Gbps 7200 RPM HDD), my transfer rate in Windows 10 is around 23-25 MB/s.

Is this a normal transfer speed?

I'm using an Adaptec 78165 for the DEST SATA HDD, and an HP H240 (in RAID 0 mode) for the TEMP SATA SSD.
It's not normal; it should be around 100 MB/s (I think), so yeah, something is definitely wrong, but I have no idea what it could be.