THE CHIA FARM


gb00s

Well-Known Member
Jul 25, 2018
1,177
587
113
Poland
SWAR Plot Manager ...
... destination_directory - Can be a single value or a list of values. This is the final directory where the plot will be transferred once it is completed. If you provide a list, it will cycle through each drive one by one.
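For comparison, a minimal sketch of how that list form might look in a SWAR config.yaml (the paths are placeholders and the surrounding job settings are omitted):

Code:
        # A single path or a list; with a list, SWAR cycles through the
        # drives one by one as plots finish.
        destination_directory:
          - /mnt/dst/00
          - /mnt/dst/01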
Plotman ...
# One or more directories; the scheduler will use all of them.
# These again are presumed to be on independent physical devices,
# so writes (plot jobs) and reads (archivals) can be scheduled
# to minimize IO contention.
dst:
- /mnt/dst/00
- /mnt/dst/01
 
  • Like
Reactions: Rand__

Rock

Member
Jan 28, 2020
74
47
18
Northern California
Some comments posted online today regarding Swar from a very new user:

When drives get full, the skip to the next drive is not handled gracefully: Swar appears to assign a fixed destination drive when a plot begins, instead of checking before the plot is written. Likewise, a slow drive that is already being hammered by plot writes will get additional plots assigned to it, further burying it under writes it cannot handle.
I was out for a few days, 3 USB drives filled up, I ended up with several stuck plots on each of them, and the new plots then assigned work to the remaining USB drive (black port) and a networked drive. That USB drive now has 8 plots moving to it from temp at the same time: total gridlock.
 

gb00s

Well-Known Member
Jul 25, 2018
1,177
587
113
Poland
I think you can only avoid this situation if you queue the archiving of the plots. I don't know of any script that can do that, as when a plot finishes and gets archived is not predictable.
 

bash

Active Member
Dec 14, 2015
131
61
28
42
scottsdale
I think that is an issue he cannot solve. He is just passing variables when starting the plot. I don't know of a way to change the destination location once a plot has been initiated. To be fair, I have not spent time researching it.
 
  • Like
Reactions: gb00s

gb00s

Well-Known Member
Jul 25, 2018
1,177
587
113
Poland
The only way I see is to set up a fast dest target (SSD) and then move each plot, once the transfer from temp has completed, from the fast dest to the final drive one by one with a script. You could reduce plot time and avoid 'locks'.

Like with ENTR or so, or simply a plain polling loop as sketched here (the .plot.temp rename keeps half-transferred files hidden from the harvester):

while true; do
    for plot in /dest_target/*.plot; do
        [ -e "$plot" ] || continue
        name=$(basename "$plot")
        mv "$plot" "$plot.temp"                              # mark as in transit
        mv "$plot.temp" "/final_target/$name.temp"           # move to the final drive
        mv "/final_target/$name.temp" "/final_target/$name"  # drop the .temp once it has landed
    done
    sleep 60
done
You get the idea. You just need a short loop to decide which final_target to use.
 
  • Like
Reactions: Marsh

Marsh

Moderator
May 12, 2013
2,644
1,496
113
Yep, my design is to finish the plot on SSD staging, then move plots to harvester storage.
No pooling.

I run 3 FIO SSDs with 3 staging SSDs, 800 GB per SSD and up; that gives me a 6-8 hour buffer for transfers to the harvester.

If I am using 600 GB SSDs, then I use mdadm to pool 3 x 600 GB SSDs for the staging area.
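A minimal sketch of that kind of mdadm staging pool, assuming hypothetical device names and an XFS filesystem (neither is specified above):

Code:
# Stripe three 600 GB SSDs into one staging array; RAID 0 is fine here since
# plots only sit in staging briefly before moving to the harvester.
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdx /dev/sdy /dev/sdz
mkfs.xfs /dev/md0
mkdir -p /mnt/staging
mount /dev/md0 /mnt/staging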
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
I'm kinda waiting for someone to fork/alt Chia into a non-premine version. I don't really see any point in the business paper/company/IPO side of it, at least not with such a huge premine; gives me Ripple vibes. You could probably make a Chia alt that could be farmed with the same plots, and I'm sure someone is working on it already.
The code is pretty terrible -- I see no purpose in forking it. The value here is the hype train, like all Ponzi schemes. This was well hyped, which is the only reason it has any value.
 
  • Like
Reactions: rootpeer and Marsh

gb00s

Well-Known Member
Jul 25, 2018
1,177
587
113
Poland
Tonight, I got rid of MergerFS on all pools. Now I'm farming single disks. Proof/response times improved from 0.3877 s (... and rising with more plots) to 0.1689 s this morning. Still looking for a solution to avoid locks. Maybe, as suggested earlier, via a fast dest target and moving files to a shared final target, or just 'netcat'ing them to a specific target.
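A rough sketch of the netcat idea; host name, port, and file names are placeholders, flags differ between netcat variants, and there is no integrity check in this form:

Code:
# On the receiving box (harvester), listen and write the incoming plot to disk
# (some netcat variants want 'nc -l 9000' without -p):
nc -l -p 9000 > /mnt/farm/incoming.plot
# On the plotter, stream the finished plot to that box:
nc harvester.local 9000 < /mnt/ssd_dst/finished.plot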
 

gb00s

Well-Known Member
Jul 25, 2018
1,177
587
113
Poland
Not sure that explanation is correct when you see a steady decline for almost 2 days now. I've seen sharp declines before, and the 'explanation' might make sense for those. But an almost 48-hour decline? We were at almost 16 EiB; that's a 5% 'decline' ... Prices of HDDs in the lower-capacity segment are falling for the first time in weeks. The coin price is testing the 'lows' again. So one market segment doesn't play well with this market ... I measured sentiment on FinTwit for futures and some cryptos for years just from analyzing tweets. Twitter is amazing for these things. Believe me, sentiment on Chia took a big hit in the last week ...

REDDIT .... community members are being obnoxiously entitled

Maybe you are right. I just noticed.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
I'm sure you have much more experience with these things than I do; the guy on Reddit just sounded reasonable :) (as the "drops" have been there before, every now and then)
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
Some of the people who bought loads of drives before the price spiked surely have finished plotting by now. I'm nearly there.

At normal HDD prices it was easy to say "well if this crypto crashes I'll still own hdds" -- this reduces risk dramatically when deciding if you want to ride this train. With used drives going for 2x the price now vs 2 months ago, you need to make back half your money before chia falls apart or you're left holding the bag -- that's a much riskier proposition for someone looking to get into this now.

Meanwhile, chia price has dropped far from the peak, and block difficulty is way up. Between those factors, I'd expect that's slowed down chia hdd purchases quite a lot.

However, plotting can still take a long time, so there are surely some larger farms that will still be generating plots through all of June.

Beyond that, it's only a matter of time before malware / botnets end up with a zombie horde of farmers.

If network growth slows enough to cause HDD prices to drop back to (close to) normal, you're also likely to see renewed interest in buying more drives to farm Chia, on the assumption they can be sold at any time for what you paid for them. So for a while yet there will be some floor on the price of hard drives, where Chia miners will gladly buy more of them so long as they're only a little overpriced.

So there are certainly some factors working to slow down the growth, but overall, network storage space will continue to rise, and will most likely well overshoot past the level where anyone can make a decent ROI.
 
  • Like
Reactions: T_Minus and gb00s

bash

Active Member
Dec 14, 2015
131
61
28
42
scottsdale
The netspace is calculated from how quickly blocks are discovered; it is not an exact number.

So you might wonder why the netspace has recently sometimes been decreasing. In fact it is not decreasing; this phenomenon is evidence that netspace growth is consistently slowing down.

The netspace is calculated from the number of proofs that are found compared to the difficulty. Every day the difficulty is adjusted to match the netspace, such that there are always ~4,608 blocks added to the blockchain per day.

However, for the difficulty adjustment to be relevant, it needs to account for the growth rate of the netspace, not only a "snapshot", and it does. Until recently, the growth rate of the netspace (its derivative) was itself increasing, so we never noticed this "sawtooth", but the derivative of the netspace may have stopped growing. So what is happening?

The netspace calculation algorithm takes a snapshot of the growth rate and adjusts the difficulty accordingly. But since this growth rate is now decreasing, the algorithm "over-predicts" the growth: it assumes the netspace will grow by X, but in fact it grows by Y, with Y < X.
Basically, shortly before a difficulty adjustment the netspace is over-estimated, because the previous adjustment expected a certain growth but the actual growth was lower. After an adjustment the netspace is re-evaluated to its real value, and the GUI shows a decreasing netspace; in effect, it applies a correction. To be more precise, the difficulty is adjusted based on the previous growth rate, which is higher than the actual growth rate, so fewer proofs than intended are found shortly after a difficulty adjustment, until the netspace reaches the level needed to generate the expected number of proofs on each challenge.

When the derivative of the netspace was growing, this correction was not noticed, because the netspace was constantly predicted lower than it actually was, so the correction did not show up as a decrease but rather as a smaller increase. But the sawtooth we see recently indicates that the netspace is now sometimes calculated higher than it actually is. This is evidence that netspace growth is slowing down and has reached an inflection point.

Note that the netspace curve can pass through multiple inflection points before really slowing down; especially with the pooling protocol imminent, it is expected to hit another inflection point (the derivative starts growing again) before hitting a third inflection point and slowing down for good.
Stolen from reddit.
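A purely illustrative sketch of that over-prediction effect with made-up numbers (this is not the real Chia netspace formula, just the correction described above):

Code:
# The estimator extrapolated the previous growth rate (+8%), but the network
# only actually grew by +3%, so the displayed netspace "drops" once the next
# adjustment corrects it.
prev=15.0                              # EiB at the last adjustment (made up)
shown=$(echo "$prev * 1.08" | bc -l)   # what the old growth rate predicts
real=$(echo "$prev * 1.03" | bc -l)    # what the network actually reached
echo "displayed before correction: $shown EiB"
echo "displayed after correction:  $real EiB"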
 
  • Like
Reactions: funkywizard

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Quick plotman question

Code:
        tmpdir_stagger_phase_major: 2
        tmpdir_stagger_phase_minor: 1
        # Optional: default is 1
        tmpdir_stagger_phase_limit: 1

        # Don't run more than this many jobs at a time on a single temp dir.
        tmpdir_max_jobs: 7

        # Don't run more than this many jobs at a time in total.
        global_max_jobs: 28

        # Don't run any jobs (across all temp dirs) more often than this, in minutes.
        global_stagger_m: 10

        # How often the daemon wakes to consider starting a new plot job, in seconds.
        polling_time_s: 20
Any idea why this would happily start 10+ plots right after each other (one every polling_time_s) when I only have one tmpdir configured?

I would have expected a max of 7 until phase 2:1 has been reached on the first, and only then the next one started
 

Bert

Well-Known Member
Mar 31, 2018
822
383
63
45
Quick plotman question

Code:
        tmpdir_stagger_phase_major: 2
        tmpdir_stagger_phase_minor: 1
        # Optional: default is 1
        tmpdir_stagger_phase_limit: 1

        # Don't run more than this many jobs at a time on a single temp dir.
        tmpdir_max_jobs: 7

        # Don't run more than this many jobs at a time in total.
        global_max_jobs: 28

        # Don't run any jobs (across all temp dirs) more often than this, in minutes.
        global_stagger_m: 10

        # How often the daemon wakes to consider starting a new plot job, in seconds.
        polling_time_s: 20
Any idea why this would happily start 10+ plots right after each other (one every polling_time_s) when I only have one tmpdir configured?

I would have expected a max of 7 until phase 2:1 has been reached on the first, and only then the next one started
Which OS? Are there 14 jobs?
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Ubuntu - and I think it starts them...
Code:
(venv) chia@chialplot2:~$ plotman plot
...starting plot loop
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_28_03.841520+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_28_23.895136+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_28_43.948490+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_29_04.001964+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_29_24.055518+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_29_44.108963+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_30_04.162373+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_30_24.216286+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_30_44.269752+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_31_04.323292+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_31_24.376840+00_00.log')
...sleeping 20 s: (True, 'Starting plot job: chia plots create -k 32 -r 6 -u 128 -b 3389 -t /mnt/nvme/00 -d /mnt/chiashares -f -p  ; logging to /home/chia/chia/logs/2021-06-01T18_31_44.430392+00_00.log')
^CTraceback (most recent call last):
 
  • Like
Reactions: Bert

gb00s

Well-Known Member
Jul 25, 2018
1,177
587
113
Poland
I believe it keeps going until you hit global_max_jobs: 28 ..... Set it to 7 and it will just do the expected 7 plots. Until then the script wakes every 20 seconds and starts new plots. SWAR is much clearer there. If you had defined a second temp_ dir, then the 8th plot job would hit that target ...
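In config terms that would be something like the sketch below (keys taken from the config quoted above; this is just my reading of the behaviour, not a confirmed fix):

Code:
        # Cap both the per-tmpdir and the total job count at 7 so nothing
        # beyond the expected 7 plots can be started, whatever the stagger does.
        tmpdir_max_jobs: 7
        global_max_jobs: 7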
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
but I want a global maximum of 28 to be runnable (21 in phases 2+) (actually I would have wanted to set a max of 7 jobs per tmpdir and at the same time a total max of 7 before phase 2:1, but it's not smart enough for that)
 

gb00s

Well-Known Member
Jul 25, 2018
1,177
587
113
Poland
A plot is associated with blocks from the blockchain in proportion to the percentage of the total network space that the farmer has allocated.
If the 'Total Network Space' is defined by 'found blocks' and the ever-rising 'difficulty' then it should tell you that the 'found blocks' have a huge negative impact in this calculation at the moment. The sharper the fall in 'found blocks' the more the adjustment takes until the 'difficulty' increases at least at the same pace of the fall. That's clear. Either lots of people are no longer plotting or lots of farms got burned.