THE CHIA FARM


gb00s

Well-Known Member
Jul 25, 2018
1,163
569
113
Poland
but I want a global maximum of 28 to be runnable (21 in phases 2+) (actually I would have wanted to set max 7 jobs per tmpdir and at the same time a total max of 7 before 2:1, but it's not smart enough for that)
An alternative plot manager was provided :rolleyes: ;)
 

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
If the 'Total Network Space' is derived from 'found blocks' and the ever-rising 'difficulty', then it should tell you that 'found blocks' have a huge negative impact on this calculation at the moment. The sharper the fall in 'found blocks', the larger the downward adjustment, until the 'difficulty' catches up with the pace of the fall. That's clear. Either lots of people are no longer plotting, or lots of farms got burned.
I think it's that lots of people are no longer plotting. Very likely, a whole lot of people who got into this like a month ago are just now finishing plotting the space they bought back then, and more recently have decided against buying more space to plot because the ROI is no longer there.
 
  • Like
Reactions: gb00s

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Seems likely that the initial wave is plateauing. Drive prices are silly and availability is now terrible.

I have one local distributor quoting a timeline of end-2022 for 16TB drives. I suspect this is because they just can't get a date from the manufacturer. For other sizes, ETAs are July and August.

Unfortunately it now takes 3 weeks or so for eBay items to reach me, which means the first items I initially bought have only just arrived, while subsequent purchases are still slowly appearing.

On the upside, I did get 4x 400GB HGST SAS SSDs yesterday for US$200, so I will see how they look (TBW remaining) after they arrive.

Chia price is now 1/3 what it was when I started this journey :).

My direction now will pretty much be:
  • Wait for the last items to arrive from EBay / others.
  • Final build, tuning & reporting.
  • Only long-term storage, replacement parts (burned-up SSDs etc.) and significant upgrades at low prices.
At around 20 plots a day I am barely keeping up with the predicted payout period. With 214 plots I have an estimated 5-month payout wait; stopping plotting for 12 hours to rebuild a machine pushes that back to 6 months. The other 62 plots on my 2 remote harvesters are not included in that calc.
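As a sanity check on numbers like these, the estimated time to win (ETW) can be roughed out from farm size and network space. This is a sketch under stated assumptions, not Chia's internal calculation: ~4608 blocks/day on mainnet, a k32 plot at ~101.4 GiB, and a network-space figure read off the GUI (the 14 EiB below is purely illustrative):

```python
# Rough ETW model: expected days between block wins is the inverse of
# your share of network space, scaled by the average block interval.
K32_TIB = 101.4 / 1024  # approximate size of one k32 plot in TiB

def etw_days(plots: int, network_eib: float) -> float:
    """Expected days between block wins for a farm of `plots` k32 plots."""
    farm_tib = plots * K32_TIB
    network_tib = network_eib * 1024 ** 2   # EiB -> TiB
    seconds_per_block = 86400 / 4608        # ~18.75 s between blocks
    return (network_tib / farm_tib) * seconds_per_block / 86400

# 214 plots against a hypothetical 14 EiB network:
print(round(etw_days(214, 14)))  # ~150 days, i.e. roughly 5 months
```

The model also makes the stated sensitivity visible: ETW scales linearly with network space, so any pause in plotting while the network grows stretches the wait.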

Just as a guide for others...

I am currently running the following machines
  • Supermicro X9DR3 - dual E5-2690, 192GB 1600MHz RAM (awaiting replacement motherboard due to faulty memory channel, plus drives)
  • HP ML350p G8 - single E5-2690, 32GB 1600MHz RAM (awaiting heatsink, RAM, 8-bay 2.5" cage, HP P420 controller, SAS 10k drives)
  • My desktop - i7-7800X, 32GB RAM.
One tmp dir per plot (parallel, not staggered) gives the following phase 1 timings using E5-2690s and 1600MHz RAM with fairly standard plotting settings, not really tuned at all. Plotting is controlled via the Swar plot manager on the E5 machines, manually on my desktop.
  • NVMe (Samsung) - 4h
  • SSD (Intel / Samsung) - 4h
  • HGST 2TB HDD - approx 5h
Total time is between 9.5h and 11h (NVMe & SSD)
Total time is between 7h and 16h (SSD & hard drive)

My plots have recently rolled over, or I would have taken a screenshot.

The timings are pretty much the same for my E3-1270v1 / v2 machines (currently off-line due to fan noise and lack of drives).

Interestingly, just taking a look at this... the plots with higher CPU usage (i.e. 130%) are over twice as fast as those with less (71%). Understandable, due to the CPU-intensive portions, but how is 130% CPU provisioned......

Will look closer at the logs tonight.

I have Splunk grabbing debug and plotter logs from one harvester but have not linked the others in yet, and have not sorted out the props / transforms / searches & dashboards. Another job on the list ;).

It would be interesting to see others' current specs and timings.
 

Bert

Well-Known Member
Mar 31, 2018
764
363
63
45
but I want a global maximum of 28 to be runnable (21 in phases 2+) (actually I would have wanted to set max 7 jobs per tmpdir and at the same time a total max of 7 before 2:1, but it's not smart enough for that)
This is exactly how plotman works for me. I am confused why it is not working for you. On the other hand, I use plotman interactive.
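For reference, the relevant part of plotman's plotman.yaml looks something like this (values are illustrative for the '28 global / 7 per tmpdir' case discussed above; note that tmpdir_stagger_phase_limit caps jobs before the stagger phase per tmpdir, not globally, which is exactly the limitation gb00s ran into):

```yaml
scheduling:
  # Don't start a new plot in a tmpdir until fewer than `limit` jobs
  # in that tmpdir are before phase 2:1 (per-tmpdir, not global)
  tmpdir_stagger_phase_major: 2
  tmpdir_stagger_phase_minor: 1
  tmpdir_stagger_phase_limit: 1
  tmpdir_max_jobs: 7        # hard cap per temp dir
  global_max_jobs: 28       # hard cap across all temp dirs
  global_stagger_m: 30      # minutes between any two plot starts
  polling_time_s: 20
```

With `plotman interactive`, these limits are applied continuously as jobs progress through phases.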
 

Rand__

Well-Known Member
Mar 6, 2014
6,622
1,762
113
This is exactly how plotman works for me. I am confused why it is not working for you. On the other hand, I use plotman interactive.
No idea - I installed Swar now, will see if that's working as expected.
In the end I don't care which one starts the plots ;)
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Any guesstimate, now that growth is flattening off, as to when we should all stop buying used 'Chia-burned' SSDs and NVMe drives?

I would expect there to be a wave of them hitting the market soon if not already.
 

Rand__

Well-Known Member
Mar 6, 2014
6,622
1,762
113
Don't think we're there yet (in masses). Most will stop when/if pooling does not come, or the payout is not as good as hoped for...
 

gb00s

Well-Known Member
Jul 25, 2018
1,163
569
113
Poland
Is anybody else noticing a 22-24hr cycle in proof/response times? The screenshot below shows response times; the cleaner the 'picture', the lower the proof/response times. I've been seeing this for around a week now. It can't be caused by the number of plots being generated, as that is spread evenly across the 24 hours. Could this be something from the 'network'?

phase.png
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
Is anybody else noticing a 22-24hr cycle in proof/response times? The screenshot below shows response times; the cleaner the 'picture', the lower the proof/response times. I've been seeing this for around a week now. It can't be caused by the number of plots being generated, as that is spread evenly across the 24 hours. Could this be something from the 'network'?

Are you copying plots onto these drives, say, once a day?

Our response times shoot up when there's other disk activity, such as copying new plots onto farming drives.
 

gb00s

Well-Known Member
Jul 25, 2018
1,163
569
113
Poland
No no, plots are written directly to the target as they come in. The sequence in which they come in is always the same. It's weird.
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Woohoo, I am in the one-billionth club.

I have started to grab the daily payout from Chia faucets so I have something to set up Chia pool contracts with when they become available.

327 plots (32TB) - no earned Chia yet. Still plotting, still hoping :) .
 
  • Like
Reactions: Bert

Bert

Well-Known Member
Mar 31, 2018
764
363
63
45
At this point I am not sure if mergerFS is really the right solution for storing plots. Will mergerFS scale up to petabytes and hundreds of drives? Can someone help here? I am at 100TB and so far so good, but I don't want to hit a dead end with mergerFS. I am also annoyed by the wasted space, but I guess that is unavoidable.


A ZFS/Storage Spaces kind of solution seems more flexible, but they are hard to get working properly, especially for parity builds, and parity is a must-have here. I also don't know how ZFS/Storage Spaces store metadata and ensure its safety, but I worry that the whole filesystem would be lost if the disks holding the metadata are lost. That is one advantage of mergerFS: the filesystem is distributed.
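For what it's worth, a typical mergerfs setup for plot storage looks like the sketch below (paths are hypothetical). `category.create=mfs` sends each new plot to the branch with the most free space, and `minfreespace` set just above one k32 plot makes mergerfs skip drives that can't fit another whole plot:

```shell
# One-shot mount of four data drives into a single pool
mergerfs -o use_ino,allow_other,category.create=mfs,minfreespace=110G \
    /mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4 /mnt/plots

# Equivalent /etc/fstab entry using a glob, so additional /mnt/disk*
# branches are picked up on the next (re)mount:
# /mnt/disk*  /mnt/plots  fuse.mergerfs  use_ino,allow_other,category.create=mfs,minfreespace=110G  0 0
```

Because mergerfs is just a union over independent filesystems, each drive stays individually readable: losing one drive loses only the plots on it, not the pool's metadata.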
 
Last edited:

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
At this point I am not sure if mergerFS is really the right solution for storing plots. It seems like a solution such as ZFS or Storage Spaces would be best, as we can add parity support
All of that is silly.

One mount point for each drive, no RAID.

A drive fails, you lose those plots, who cares.
 
  • Like
Reactions: NateS

azee

Member
Jan 7, 2017
43
8
8
Stockholm, Sweden
Late to Chia itself, but trying to reuse the existing servers for something ...

I set up a couple of servers with dual E5-2640 v3, but I'm having a problem with one that is using the Swar-Chia-Plot-Manager.

python3 manager.py status

gives the following output

----------------------------------------------------start-----------------------------------------------------------------------------

==========================================================================================================================
num  job     k   plot_id  pid    start                elapsed_time  phase  phase_times    progress  temp_size
==========================================================================================================================
1    970Pro  32  5979ad5  10728  2021-06-04 11:13:23  07:21:25      3      03:09 / 01:21  93.46%    155 GiB
2    970Pro  32  f603a14  10983  2021-06-04 11:50:47  06:44:01      3      03:14 / 01:27  82.35%    154 GiB
3    970Pro  32  71edb42  11924  2021-06-04 14:37:06  03:57:42      2      03:02          49.50%    202 GiB
4    970Pro  32  6bab110  12221  2021-06-04 15:24:12  03:10:36      1                     32.57%    159 GiB
5    970Pro  32  4293601  12681  2021-06-04 16:29:19  02:05:28      1                     21.81%    163 GiB
6    970Pro  32  f4a9c71  12878  2021-06-04 16:54:22  01:40:25      1                     17.10%    159 GiB
7    970Pro  32  912cd23  13453  2021-06-04 17:59:29  00:35:18      1                     7.01%     94 GiB
==========================================================================================================================
Manager Status: Running
===============================================================================================
type  drive            used     total     %      #    temp           dest
===============================================================================================
t/-   /mnt/md0         1.06TiB  1.83TiB   58.5%  7/-  1/2/3/4/5/6/7
-/d   /mnt/OS_14TB_01  1.09TiB  12.63TiB  9.1%   -/7                 1/2/3/4/5/6/7
===============================================================================================
CPU Usage: 38.5%
RAM Usage: 14.31/125.81GiB (12.2%)
Plots Completed Yesterday: 0
Plots Completed Today: 11
Next log check at 2021-06-04 18:35:48

----------------------------------------------------end-----------------------------------------------------------------------------

The problem is that the Swar plot manager is starting each plot an hour apart, sometimes even later.

while I have my “global” settings as

max_concurrent: 7
max_for_phase_1: 4
minimum_minutes_between_jobs: 25



and the “jobs” settings as

- name: 970Pro
max_plots: 999
temporary_directory: /mnt/md0/
temporary2_directory:
destination_directory: /mnt/OS_14TB_01/
size: 32
bitfield: true
threads: 6
buckets: 128
memory_buffer: 4096
max_concurrent: 7
max_concurrent_with_start_early: 7
initial_delay_minutes: 0
stagger_minutes: 25
max_for_phase_1: 4
concurrency_start_early_phase: 4
concurrency_start_early_phase_delay: 0
temporary2_destination_sync: false
exclude_final_directory: false
skip_full_destinations: true
unix_process_priority: 10
windows_process_priority: 32
enable_cpu_affinity: false
cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]



From my understanding, the stagger in the "job" and "global" settings are equal, and each plot should start 25 minutes apart until a total of 4 jobs are in phase 1.

Any idea what I am missing?
 

Bert

Well-Known Member
Mar 31, 2018
764
363
63
45

MergerFS allows me to add new drives while plotman is still running against the pool. At this point that is useful for me, since I still have a few drives left to add.



Btw, my luck turned around. I just farmed another block! Given that my ETW is 20 days, this is amazingly good!

I think I am going to join HPool now, as I have very few drives left and network growth will soon shrink my chances to nil. When the official pools come I will have to replot in any case, so I don't mind giving up my private keys.

I also notice that ROI on a hard-drive investment is now getting close to 3 months. Are you guys buying and adding more hard drives at the current ROI?
 
Last edited:
  • Like
Reactions: Marsh

gb00s

Well-Known Member
Jul 25, 2018
1,163
569
113
Poland
minimum_minutes_between_jobs: 25
If you have different jobs, they all start 25 min apart, one by one. It has nothing to do with staggering! If you have a job named '970Pro' and a job '980Pro', they would never start at the same time; '980Pro' would start 25 min after '970Pro' with your settings. I hope this makes sense to you.
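A toy model of the interaction (this is not Swar's actual scheduler code, just an illustration): minimum_minutes_between_jobs is a global gate on any plot start, while stagger_minutes applies per job, so the effective wait is whichever constraint is satisfied later:

```python
def earliest_start(last_any_start: float, last_job_start: float,
                   minimum_minutes_between_jobs: float,
                   stagger_minutes: float) -> float:
    """Earliest minute a new plot may start: it must clear both the
    global gap since *any* plot started and this job's own stagger."""
    return max(last_any_start + minimum_minutes_between_jobs,
               last_job_start + stagger_minutes)

# Job '980Pro' has never started a plot (its own stagger is satisfied),
# but '970Pro' started one at t=0, so the global gate delays it to t=25.
print(earliest_start(0.0, float("-inf"), 25, 25))  # 25.0
```

With a single job and both values at 25, the two constraints coincide, which is why the per-job stagger alone looks like it explains the behaviour until a second job is added.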
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
MergerFS allows me to add new drives while plotman is still running against the pool. At this point that is useful for me, since I still have a few drives left to add.



Btw, my luck turned around. I just farmed another block! Given that my ETW is 20 days, this is amazingly good!

I think I am going to join HPool now, as I have very few drives left and network growth will soon shrink my chances to nil. When the official pools come I will have to replot in any case, so I don't mind giving up my private keys.

I also notice that ROI on a hard-drive investment is now getting close to 3 months. Are you guys buying and adding more hard drives at the current ROI?
At "3 months" roi, and 5% daily network growth, you might never earn a return, certainly not in 3 months.


advanced mode

Assume 500 PB/day growth for 60 days and zero growth after that.

71 plots (fits on an 8TB drive, which currently costs $200): after 30 days it earns $35; after 3 months, $58; after 1 year, $117.

So no, I would not buy drives for Chia at the current price, which is 100% above the pre-mania price.
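The shape of a calculation like this can be sketched as follows (assumptions, not Chia internals or the calculator actually used above: ~4608 blocks/day at 2 XCH farmer reward, a k32 plot at ~0.109 TB, linear network growth for a fixed window then flat; all input values are illustrative):

```python
K32_TB = 0.1089  # approximate size of one k32 plot in TB

def earnings_xch(plots: int, net_pb: float, growth_pb_per_day: float,
                 growth_days: int, total_days: int) -> float:
    """Cumulative XCH earned over `total_days`, with the network growing
    linearly for `growth_days` and staying flat afterwards."""
    farm_tb = plots * K32_TB
    total = 0.0
    for day in range(total_days):
        if day < growth_days:
            net_pb += growth_pb_per_day
        # your share of the day's 4608 blocks at 2 XCH each
        total += (farm_tb / (net_pb * 1000.0)) * 4608 * 2
    return total

# 71 plots against a hypothetical 16,000 PB starting network,
# growing 500 PB/day for 60 days: earnings flatten sharply as
# growth dilutes your share, which is why "3 month ROI" headline
# figures stop holding the moment the network keeps growing.
month1 = earnings_xch(71, 16000, 500, 60, 30)
year1 = earnings_xch(71, 16000, 500, 60, 365)
```

Multiplying the XCH totals by a price assumption then gives dollar figures like the ones quoted above; the dominant sensitivity is the growth term, not the price.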
 
  • Like
Reactions: Bert

gb00s

Well-Known Member
Jul 25, 2018
1,163
569
113
Poland
I may not have enough history, of course, but it's quite visible that proof times can be negatively affected by the growth rate (plotting) of the whole 'Network Space'. It seems clear that in times of 'Network Space' consolidation the proof times drop largely in favor of the plotter, and increase largely when 'Network Space' growth accelerates.

phase_1.png
In the last 6-8 hours the reported 'Network Space' growth accelerated a lot. This was preceded by a large increase in proof times 4-6 hours earlier. Given what is written about how 'Network Space' is calculated, this makes sense to me. On the left, i.e. 2 days ago, you can see the super-short proof times when 'Network Space' was consolidating the most since the beginning of May. This was followed by network growth with higher proof times.

It's concerning in terms of how this 'blockchain' is designed. For me, it seems that larger pools can negatively affect smaller pools simply by outperforming them, building up 'latency' in the market or jumping the queue, if you want to call it that. This looks like an error in design and structure. Intentional or not, it is concerning. There's now one single pool just 8-9% shy of the network majority. If Chia doesn't get the 'official' pools running in a very short time, Chia is dead before .... If 1 dominant plotting pool can disadvantage other pools, where is the way forward? Whoever is 'involved' in HPool may never leave, or you have to start over from scratch with new 'wallets'.

Plotting now for exactly 1 month and 1 day. 1832 plots stored; around 40 more to go to fill all available space with plots. Not a single reward here, other than reactivating my sense for analyzing markets.
 
Last edited:
  • Wow
Reactions: Bert