THE CHIA FARM


Bert

Well-Known Member
Mar 31, 2018
820
383
63
45
Can someone advise me whether there is a way to make sure the drives are always spinning and never go idle? Some of my drives are on the RAID card, and I assume the RAID card controls that behavior directly. For the others, I don't know how to control the "power options" in Linux to ensure they never spin down.

I've heard of some kind of cron job that pokes each drive regularly. Perhaps I need something like that.
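A hedged sketch of that approach: a cron entry can read one uncached block from each disk every few minutes so the firmware never sees it as idle. The file path and device list below are assumptions; adjust for your system.

```shell
# /etc/cron.d/keep-spinning -- hypothetical cron file; adjust the device list.
# Every 5 minutes, read one block from each data drive with the page cache
# bypassed (iflag=direct), so the read actually hits the platters.
*/5 * * * * root for d in /dev/sd[a-h]; do dd if="$d" of=/dev/null bs=4k count=1 iflag=direct 2>/dev/null; done
```

Where the drive honors it, disabling its standby timer directly (`sudo hdparm -S 0 /dev/sdX`) is the cleaner fix; drives behind a RAID controller usually need the controller vendor's own tool instead, as you suspected.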
 

Marsh

Moderator
May 12, 2013
2,643
1,496
113
My Chia journal.

Woke up this morning, found 2 more XCH, total of 4 XCH in 16 days.
I have 1400 plots now.

I was going to wind down plotting in the next 2 days.

Now I am hooked; more plotting.
Moving my backup files from an 8 TB drive to 4 TB drives.
I am committing another 100 TB to Chia.

Correction: it was about 7 days, not 16.
 
Last edited:

BobTB

Member
Jul 19, 2019
81
19
8
Woke up this morning, found 2 more XCH, total of 4 XCH in 16 days.
I have 1400 plots now.
You are extremely lucky. I have 1500 plots and 0 XCH after 20 days now. This is a lottery, sadly, and it does not pay off at all. A waste of space and time, except for a few very lucky ones :)
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Woke up this morning, found 2 more XCH, total of 4 XCH in 16 days.
I have 1400 plots now.
Good for you - I've had two days of "fun" - my setup basically exploded ...

Of course I made the mistake of simply deploying a few full nodes and then moving the plots to a single large central directory...
That worked without issues up to maybe 600-700 plots.

Then the first filer became slow (ZFS filled up), so copy times got worse - of course the GUI looked fine...

Then I found the comment re mergerfs, so I built an OMV box as secondary storage... bad idea - mergerfs is significantly slower than individual disks on writes and - more importantly - I had no idea about its options, so one night the pool reported full despite only a single disk being full...
So then I had some 50 plots not moved over, plus first suspicions that the single ZFS filer might be a problem...
so I started moving plots around...

Now, 100 GB plots on spinners don't move fast... so this is taking ages...
Then I noticed that my proof & copy times had become abysmal.

After playing around for two days I came to these conclusions:

1. Running multiple full nodes against shared storage is a bad idea: all of them try to prove at once, and since the storage is application-agnostic, the disk tries to serve all of them at the same time.
2. Multiple 100 GB read/write operations on a single disk are not good - especially while proofs are running in parallel.
3. UnionFS/MergerFS is slower than expected; don't use it exclusively. I now run a local harvester that includes all the individual disks and just push through unionfs for space balancing.
4. Do a reboot after running `chia init -c` to set up a harvester - otherwise it won't connect.

But it's better now; I only need to fix the 100 TB TrueNAS filer to use a local harvester somehow, or migrate the plots, or convert to ZoL.

Just as a warning - all looks good until it goes south - check the logs ;)
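For anyone hitting point 4: the harvester certificate step usually looks something like the sketch below (per the Chia docs; the hostname and paths here are examples, not from this post):

```shell
# Copy the farmer/full node's certificate authority over to the harvester,
# then re-sign the harvester's own certificates against it.
scp -r fullnode:~/.chia/mainnet/config/ssl/ca ~/ca-from-farmer
chia init -c ~/ca-from-farmer

# Point the harvester at the farmer in ~/.chia/mainnet/config/config.yaml
# (harvester.farmer_peer.host), then restart -- or reboot, as noted above.
chia start harvester -r
```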
 

bash

Active Member
Dec 14, 2015
131
61
28
42
scottsdale
When you start hitting a large number of plots, 100 TB+, you see these issues and realize that maybe you should have put more thought into your setup. Even on a 10 Gb network you run into the speed limits of spinning rust when moving these files all over the place.

My setup is as follows. I have four 4U servers that act as both plotters and remote harvesters (one is the full node). I also have two desktops that act only as plotters. Each of the 4Us has a 2-3 TB RAID 0 of SSDs connected to the motherboard SATA ports. I plot with the RAID 0 SSDs as the destination drive and have a script that round-robins movement from there to individually partitioned drives.

The two desktops rsync plots in round-robin fashion to the remote harvesters. This has pretty much solved all the issues of slamming a specific disk too hard and sending proof response times through the roof. If you are plotting with high thread counts on a harvester/full node, you run the risk of CPU spikes causing missed challenges (30 s+ response times). If I were starting from scratch today, I would have set everything up like this gentleman from github.
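A minimal sketch of the round-robin mover described above, with hypothetical paths; a real script would also want free-space checks, and `rsync --remove-source-files` instead of `mv` when the destinations are over the network:

```shell
#!/bin/sh
# Distribute finished plots from a staging directory across several
# destination disks in rotation. Arguments: staging dir, then dest dirs.
move_plots_round_robin() {
    src=$1; shift            # staging directory (e.g. the SSD RAID 0)
    i=0; n=$#                # the remaining arguments are destinations
    for plot in "$src"/*.plot; do
        [ -e "$plot" ] || break      # no plots waiting
        i=$(( i % n + 1 ))           # 1, 2, ..., n, 1, 2, ...
        eval "dest=\${$i}"           # pick the next destination in turn
        mv "$plot" "$dest"/
    done
}
```

Calling `move_plots_round_robin /mnt/ssd-staging /mnt/disk1 /mnt/disk2 ...` from cron would approximate the behavior described.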

 
  • Like
Reactions: Bert

gb00s

Well-Known Member
Jul 25, 2018
1,175
586
113
Poland
I'm also stuck, as one pool is full and the other is already at 38%. Things went south yesterday with the setup here. Since I still don't have a single Chia with 128 TB of plots, and the 'Estimated Time To Win' increases every day as the 'Total Network Space' keeps growing, I have to decide ...
  1. Stop plotting and clean up the second pool to be able to reorganize and move plots
  2. Continue to buy more hardware and reorganize ... unlikely without any Chia and with more costs on all fronts
  3. Sell disks with keys, if someone thinks this Chia hype is worth betting on 'as a business'
  4. Totally give up ...
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
I'm also stuck, as one pool is full and the other is already at 38%. Things went south yesterday with the setup here. Since I still don't have a single Chia with 128 TB of plots, and the 'Estimated Time To Win' increases every day as the 'Total Network Space' keeps growing, I have to decide ...
  1. Stop plotting and clean up the second pool to be able to reorganize and move plots
  2. Continue to buy more hardware and reorganize ... unlikely without any Chia and with more costs on all fronts
  3. Sell disks with keys, if someone thinks this Chia hype is worth betting on 'as a business'
  4. Totally give up ...
Drives have gone up in price. Maybe you can make a profit just liquidating them on eBay.
 

gb00s

Well-Known Member
Jul 25, 2018
1,175
586
113
Poland
The problem here is that the gap between the 'height' reported by my local 'blockchain' and the 'heights' reported by the other nodes I'm connected to was widening immensely. So I wasn't even sure my 'blockchain' here was correct. Let's keep calling it a blockchain even though it isn't a blockchain per se, but .... I thought this might come from replicating the blockchain to several new 'nodes'. They all synced fine until 1 1/2 days ago. So I had to get rid of all my local 'blockchains'.

Another issue seems to be that I barely get 2 'eligible' plots, while others report 5-9. 95% of the time the number of eligible plots logged and reported is 1. All plots are checked once a day. I had one issue in the beginning when I was 'stupid' enough to simply move a plot as it was, and it got corrupted. There are also no bit-flipped plots or other such errors, as the system is protected with ECC.
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
The problem here is that the gap between the 'height' reported by my local 'blockchain' and the 'heights' reported by the other nodes I'm connected to was widening immensely. So I wasn't even sure my 'blockchain' here was correct. Let's keep calling it a blockchain even though it isn't a blockchain per se, but .... I thought this might come from replicating the blockchain to several new 'nodes'. They all synced fine until 1 1/2 days ago. So I had to get rid of all my local 'blockchains'.

Another issue seems to be that I barely get 2 'eligible' plots, while others report 5-9. 95% of the time the number of eligible plots logged and reported is 1. All plots are checked once a day. I had one issue in the beginning when I was 'stupid' enough to simply move a plot as it was, and it got corrupted. There are also no bit-flipped plots or other such errors, as the system is protected with ECC.
Getting the blockchain to stay reliably connected is a mission. We have all of our farmers and plotters whitelisting each other as a big cluster, so they all learn about blocks sooner from each other. The default settings for the full node software are atrocious. At minimum, bump the max outbound connections up from the default of 8 to something like 30.
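In the Chia config that setting lives in `~/.chia/mainnet/config/config.yaml`, under the `full_node` section; a hedged excerpt (key names per the Chia docs, the 30 being the suggestion above):

```yaml
# ~/.chia/mainnet/config/config.yaml (excerpt)
full_node:
  target_outbound_peer_count: 30   # default is 8; how many peers we dial out to
  target_peer_count: 80            # overall connection cap (default)
```

Restart the full node after editing so the new values take effect.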
 

gb00s

Well-Known Member
Jul 25, 2018
1,175
586
113
Poland
The local blockchains always had the same state. Not a problem. I got out of sync with outside nodes. Even when I introduced myself to others and hopped around ... I just sometimes had the feeling I was being systematically disadvantaged, like on an exchange ... Without a properly synced blockchain, you get nothing.
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
Without a properly synced blockchain, you get nothing.
Correct.

I know someone who wrote a script to ban anyone who connected to him that was more than 100 blocks behind. Apparently, trying to help people sync up is a huge CPU suck and can cause your own node to desync.

Chia: This ponzi scheme is in beta.
 
  • Like
Reactions: zunder1990

Bert

Well-Known Member
Mar 31, 2018
820
383
63
45
How do you know you are out of sync? Does it show in the UI as "syncing", and do your farmers stop answering the challenges? The reason I am asking is that I have never had a sync issue, so I am curious whether I am missing it or just being plain lucky.
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
How do you know you are out of sync? Does it show in the UI as "syncing", and do your farmers stop answering the challenges? The reason I am asking is that I have never had a sync issue, so I am curious whether I am missing it or just being plain lucky.
If other nodes learn about blocks much earlier than you do, you're out of sync. There isn't necessarily a warning for that, if all your neighbors are just as far behind as you are. However, there may be a warning for "desync" when your node -knows- it's way behind (or disconnected) and hasn't been able to catch up.

If it takes more than a few seconds for your farmer to learn the new block height, you basically can't win blocks.
 

Marsh

Moderator
May 12, 2013
2,643
1,496
113
My mistakes:

For the first couple of days, I wanted to get a taste of Chia.
Initially I installed the full node on a Windows machine and used the GUI to plot.
After the first day, when I saw the Chia GUI say it would take 2 years to win,
I lost interest.

I started building the Chia system piecemeal: flat network, no monitoring, not committing drives to the farm.

What I learned:
It takes many hours to tune the plotter, observing each setting change.
After 4-5 days, I was able to tune the plotters to a total output of 180 plots per day.

The second problem was the flat network.
During plot transfers, the system was not fast enough to answer the Chia challenge within 30 seconds.
Big mistake: when I moved plots from plotter to harvester, the main network was saturated.

Solution:
I deployed a private network to transfer plots between harvester and plotter.

Problem:
The plotters produced more plots than the harvesters could ingest.

Solved it by using a few mini-ITX harvester storage pods to decentralize between farmer and harvester:
reducing the daily output to 140 plots (shutting off plotter machines),
and rsyncing the plots to 2 separate harvester/storage boxes.
I have 5 harvesters now, so at least 3 harvesters are unaffected by plot transfers and keep on farming.

Future plans:
Move the full node from Windows to a Linux OS, or use a VM.

Implement a full 2.5 GbE network to transfer plots between plotter and harvester.
(May not happen soon, not until I can take the system down.)

Build 1 more high-powered plotter to test the new Chia plotter-manager release.

Currently my Ubuntu PXE auto-deployment server serves Ubuntu 18.x;
build another PXE deployment server to serve Ubuntu 20.x.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
How do you know you are out of sync? Does it show in the UI as "syncing", and do your farmers stop answering the challenges? The reason I am asking is that I have never had a sync issue, so I am curious whether I am missing it or just being plain lucky.
I had the problem that the challenges were no longer coming in every 10-20 s; i.e. the full node was synced fine, but the harvester died after a few minutes (caused by too many broken [mid-move] plots), and of course there was no visible error, since even critical log issues don't get surfaced in the GUI...
 
  • Like
Reactions: Bert

gb00s

Well-Known Member
Jul 25, 2018
1,175
586
113
Poland
For me, very often, if I don't delete a random node from my connections, my 'blockchain' gets stuck and I don't get any challenges. It seems I have to limit the connections. Right now I have 78 nodes connected .... :rolleyes:
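The Chia CLI can do that pruning by hand; a hedged sketch (commands per the `chia show` help, the node ID below is a made-up example):

```shell
# List current peer connections, including each peer's node ID prefix
# and its reported height.
chia show -c

# Drop a stuck or misbehaving peer by (a prefix of) its node ID.
chia show -r b3724f2a
```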
 

Attachments

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Ah right, I had that too for a while, and saw a couple of posts on reddit... some people got the impression that some (Chinese) IPs were trying to block other people from farming... no idea if that's true, and it got better after the last update (for me)
 

gb00s

Well-Known Member
Jul 25, 2018
1,175
586
113
Poland
Reminds me of 'some guys' flooding order books with orders priced outside the market, so you were stuck and couldn't add anything to the market for a fraction of a millisecond. Just keeping you from entering the game. I don't want to go down that road, but imagine you control a large pool of nodes .... ;)
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Yeah, easy to forget, but while it's friendly rivalry for most of us here (I'd assume), it's a tough business worth a lot of money for some other folks out there :/