THE CHIA FARM


rthorntn

Member
Apr 28, 2011
Hi,

For the farmers out there running big JBODs with no redundancy: what software are you using to manage them? I'm looking at maybe trying an Unraid array with no parity, thinking that a single share that fairly evenly allocates plots across a bunch of disks would be the best method. Having dozens of shares (one share per disk) would be a pain to manage. On top of that, if this is all happening on a bare *nix OS, how do you monitor SMART for all of the drives?
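For the SMART part, something like this sweep is roughly what I have in mind (a sketch assuming smartmontools is installed and the drives show up as /dev/sd*; device names are illustrative):

#!/usr/bin/env bash
# Print the overall SMART health verdict for every drive in the JBOD.
# Matches both ATA ("overall-health ... PASSED") and SAS ("Health Status: OK") output.
# (Extend the glob if you have more than 26 drives.)
for dev in /dev/sd?; do
    verdict=$(smartctl -H "$dev" | awk -F: '/overall-health|Health Status/ {print $2}')
    echo "$dev:$verdict"
done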

Thanks.
Richard
 

rthorntn

Member
Apr 28, 2011
You probably want to tell us about your topology first.
Thanks, I'm not really familiar with separate harvesters. Currently I'm running an all-in-one Threadripper Ubuntu system while I'm still doing a lot of plotting; that's running out of disk capacity and has no free PCIe slots, so I'm building up a file server with a JBOD.

I was thinking that it would be nice to use something like Unraid to get all the management and monitoring.
 

msg7086

Active Member
May 2, 2017
I think you should use a separate harvester to make things a whole lot easier. Currently my farming machine runs Debian 10 (soon 11) and a harvester. Just an SSD running the OS, and JBOD single drives in XFS. Format the drives, mount them somewhere like /mnt/chia-01..12, and add all of those to the farming directories. Write a script to start the harvester and put it in a cron @reboot job or a systemd service. Fire and forget.
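A minimal sketch of that layout, assuming twelve drives /dev/sdb through /dev/sdm (device names, mount points, and the start-script path are illustrative):

# Format each drive as XFS, mount it under /mnt/chia-NN, and register it with the harvester.
i=1
for dev in /dev/sd{b..m}; do
    mnt=$(printf '/mnt/chia-%02d' "$i")
    mkfs.xfs -f "$dev"          # destroys anything already on the drive
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
    chia plots add -d "$mnt"    # adds the mount to the farming directories
    i=$((i+1))
done

# crontab entry to start the harvester at boot (script name is hypothetical):
# @reboot /home/chia/start-harvester.sh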

If you are more comfortable with GUI-based systems like Unraid, that's fine. But I didn't bother looking at how 3rd-party GUIs work, and I think they are too complicated to start with.
 

Rand__

Well-Known Member
Mar 6, 2014
I set up OpenMediaVault as the GUI for my two newer storage boxes, using a local harvester to farm and a mergerfs/SMB share to push data to them (add the individual drives to the harvester, not the mergerfs drive, as that's slower).
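For example, a sketch of that harvester setup (assuming the member drives are mounted at /mnt/disk01..06 and the mergerfs union at /mnt/pool; all paths illustrative):

# Point the harvester at each underlying disk rather than the union mount.
for d in /mnt/disk0{1..6}; do
    chia plots add -d "$d"
done
# i.e. deliberately NOT: chia plots add -d /mnt/pool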

Installation (of Chia) is simple following the official Ubuntu installation guides - OMV takes a bit to get used to.

Haven't found a simple way to migrate my TrueNAS data over to another OMV instance yet - the ZFS plugin in OMV doesn't work for me, and TNC does not want to mount ext2-4 for writing at all... so I'm left with copying over the network :(
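For the network copy, something like rsync over SSH is the obvious fallback (hostname and paths are illustrative):

# Pull the plots from the TrueNAS box; --partial makes the copy resumable.
rsync -av --partial --progress truenas:/mnt/tank/plots/ /srv/plots/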
 

gb00s

Well-Known Member
Jul 25, 2018
Poland
Do NOT(!!!) use MergerFS for this if your pool gets bigger than 50TB or so .... The more plots you get on it, the slower it becomes. Sometimes, with parallel writes, I get 13-16MB/s write performance on SAS drives, which impacts plotting times massively. Yes, for storing data without the need for performance and constant availability, MergerFS is cool and nice. But not for these pools.

For now I just added all the disks manually and adjust destinations by hand. If I continue with pooling, the setup will be reorganized. You have to replot for pooling anyway. So ....
 

Rand__

Well-Known Member
Mar 6, 2014
Do NOT(!!!) use MergerFS for this if your pool gets bigger than 50TB or so .... The more plots you get on it, the slower it becomes.
Yeah, I noticed it was quite slow, and it needs to be watched. The only reason I use it is load distribution, since I run only 6TB drives... I'll verify it stays acceptable when I reach 50TB on those.

But yes, it's an excellent point that pooling will provide an opportunity to reorganize...

So what are the alternatives to mergerfs?
I need something that I can simply move plots to without caring which disk to use, how much free space is left, etc. ...
I could set up a temp location and pull files down from the storage boxes, I guess; they know what drives they have and where they have free space.
 

mirrormax

Active Member
Apr 10, 2020
mergerfs as destination was a really bad idea, at least with default settings; I ended up with 100 plots overnight all stuck copying at 2MB/s, straight to a JBOD. Still using it for farming, which seems fine.

Just writing straight to individual disks when plotting now, for that solid 200+MB/s speed. Would be sweet to use mergerfs as a plot destination if someone can find settings that work, but as of now it seems there's too much overhead, and it goes nuts when transferring several files, even with plenty of CPU available.
 

mirrormax

Active Member
Apr 10, 2020
I'm kinda waiting for someone to fork Chia into a non-premine version. I don't really see any point in the business paper/company/IPO side of it; at least, not with such a huge premine - it gives me Ripple vibes. You could probably make a Chia alt that could be farmed with the same plots, and I'm sure someone is working on it already.
 

rthorntn

Member
Apr 28, 2011
OK, thanks everyone, I think I'm getting somewhere. So (to anyone with Unraid experience) can I:

  1. Set up Unraid on a file server, set up an array for Plex, and maybe use "unassigned devices" for the Chia plots
  2. Run something like "Partition Pixel" on Unraid as just a harvester, pointed at the "unassigned devices"

Is this optimal? Could I have an Unraid Chia farm array with no cache or parity, or would "unassigned devices" be the lo-fi way to do it without causing undue latency? It seems like a low-to-no-overhead array with "allocation" would make admin easier, admin here meaning "balancing" plots easily across drives.

I have a box that's plotting 24/7 now; it has 80TB of HDD, so I would be moving the 80TB of plots to Unraid.

I realise as I type this that this might be better posted on the Unraid forums. Anyway, here we go!

Cheers
Richard
 

gb00s

Well-Known Member
Jul 25, 2018
Poland
I think Swar's Plot Manager can be fed several destination targets and can 'decide' where to put the plots. I will test tonight. If that works out, I'll just set all the disks up independently. I also thought about LVM with bcache on 2x older SSDs to reduce write times, but .... You could just easily export the VG and move it somewhere else, or ....
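A rough sketch of the LVM-plus-bcache idea (untested here; device names are illustrative, bcache-tools and LVM2 assumed installed):

# Old SSD as write cache in front of a backing HDD, with LVM on top of the bcache device.
make-bcache -C /dev/sdx1                 # SSD partition becomes the cache set
make-bcache -B /dev/sdy                  # HDD becomes the backing device (appears as /dev/bcache0)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach   # attach the cache set; UUID elided
pvcreate /dev/bcache0
vgcreate chia_vg /dev/bcache0
lvcreate -l 100%FREE -n plots chia_vg
mkfs.xfs /dev/chia_vg/plots

# And the 'move it somewhere else' part: deactivate and export the VG, reimport on the new box.
vgchange -an chia_vg && vgexport chia_vg
# ... move the drives, then on the new machine:
# vgimport chia_vg && vgchange -ay chia_vg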
 

amalurk

Active Member
Dec 16, 2016
mergerfs as destination was a really bad idea, at least with default settings; I ended up with 100 plots overnight all stuck copying at 2MB/s ...
This shouldn't happen if you have lots of disks the same size and you use the mount option to write to the disk with the most free space. Were you doing something different? If you have a JBOD with a bunch of partially filled disks and then add only one new empty disk, then yes, all the writes will go to that one disk with the MFS setting until its free space drops; but if you add the disks all at the beginning, or in groups of the same size, it should work. Or are you saying you hit a limit on the speed at which mergerfs can proxy requests between its pseudo-filesystem and the actual disks? It seems ideal for Chia...
 

mirrormax

Active Member
Apr 10, 2020
This shouldn't happen if you have lots of disks the same size and you use the mount option to write to the disk with the most free space ...
I had 6 disks in it at the time, which should have been more than enough; all empty, same size, and mergerfs dispersed the plots evenly. These were my mount settings:

mergerfs -o defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=110G /mnt/farm* /mnt/mergefarm

The mergerfs documentation lists a lot of performance tweaks, but I did not have time to start testing them all. Are you using it for the dest dir of plots? I'm outputting 200+ plots a day and it could not keep up; individually the disks handle it fine.
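For anyone who wants to experiment, a hedged, untested variant of that mount using two caching options from the mergerfs README (whether they actually help with bulk plot copies is an open question):

mergerfs -o allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=110G,cache.files=partial,dropcacheonclose=true /mnt/farm* /mnt/mergefarm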
 

Rand__

Well-Known Member
Mar 6, 2014
I'm outputting 200+ plots a day and it could not keep up; individually the disks handle it fine.
Man, and I thought I was doing a lot with 70, but I am dwarfed by you guys - 200 plots/day here, petabytes of storage there ... :O
 

Bert

Well-Known Member
Mar 31, 2018
We need a file system that works at the block layer to merge multiple drives and stores multiple copies of the file-system index, so that a single disk failure can't bring down the whole system. MergerFS wastes space.
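For what it's worth, btrfs can get close to that today: data stored as 'single' across the drives, metadata (the index) mirrored. A sketch, not tested for this workload, with illustrative device names:

# One filesystem spanning three drives: each file stored once, metadata kept in two copies.
mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt/plots
# If a drive dies, the metadata survives on the remaining drives; recovering the
# still-readable plots typically means mounting with -o degraded,ro.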
 

gb00s

Well-Known Member
Jul 25, 2018
Poland
I see the first 8TBs touching 150 EUR, 6TBs closing in on ~100 EUR, and 4TBs touching the 60s again .... Even 3TBs are back below 50, into the low-40 EUR range. Just everything >= 10TB stays 'elevated'.

Btw .... this is when your MergerFS doesn't know where to put any plots anymore. Came home. Lost a pool out of sight. All jobs related to it stuck. Could see remotely that something was wrong. Total I/O overkill. 34 parallel plots just gone. Pool dead, I thought ... and we just talked about it. Reboot. Show must go on.

[screenshot: MergerFS.png]
 

bash

Active Member
Dec 14, 2015
scottsdale
I gave up on all the pooling solutions. The only way forward is a Python or bash script that first checks the free disk space on all your designated plot-storage mounts and copies plots over one by one. I know it sounds dumb, but once you're caught up it really is the most foolproof way to handle it.
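A minimal sketch of that mover (paths are illustrative; assumes finished plots land in /plotting/done and the farm disks are mounted at /mnt/farm*):

#!/usr/bin/env bash
# For each finished plot, pick the first farm mount with enough free space
# and move the plot there, one file at a time.
PLOT_SRC=/plotting/done
NEED_KB=$((110 * 1024 * 1024))   # ~110GB per k32 plot, in 1K blocks

for plot in "$PLOT_SRC"/*.plot; do
    [ -e "$plot" ] || continue
    for mnt in /mnt/farm*; do
        free_kb=$(df --output=avail -k "$mnt" | tail -1)
        if [ "$free_kb" -gt "$NEED_KB" ]; then
            echo "moving $plot -> $mnt"
            mv "$plot" "$mnt"/ && break
        fi
    done
done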