THE CHIA FARM


Bert

Well-Known Member
Mar 31, 2018
845
399
63
45
That the protocol rules are not as clear as they should be is worrisome. If the assumption is true that the 'size' of the farm decides at a race collision, then this gets very unattractive due to protocol design. Big farms would finance themselves and small farmers' fields would just dry out. But no documentation, no certainty. I'm always suspicious when someone doesn't provide proper documentation.

Btw, passed 800 plots and still 0 coins. Running out of space at ~1000. So slaughtering some cows in the future or ... idk. Everyone I talk to has been unlucky for 2 weeks now. The big rise of the network has its impact.
I believe they have several papers explaining their algorithms. They are not some joke-coin builders copy-pasting code but a serious software company. I listened to their talk but didn't have time to drill into the math in their white paper. There are timelords in the system to coordinate such events, but I don't know for sure.

Is there anyone in the community experienced and resourced enough to build a chia pool?
 

Bert

Well-Known Member
Mar 31, 2018
845
399
63
45
Further to my experience (problems) space-wise: the GUI itself must have a very negative impact on the size of the temporary plots. I set up 2 absolutely identical test machines. Same CPUs, same RAM, 2x 756GB ioDrive2 drives in RAID0, target storage a 1.92TB Intel SSD. Plotting on Linux (Ubuntu Server 20.04) via CLI, I can always have 8 plots running in parallel with a specific setup. On the system plotting with the GUI, I can be happy to get 6 parallel plots running; otherwise, the md0 drive hits its space limit quickly. I also note that in CLI mode the max size of a temporary plot file doesn't exceed 239GB. Temporary plot files generated on the system running the GUI always hit 256GB.

Also, the same setup plotting through CLI only generates a plot in 8 hours (2x E5-2630v3 with 2x ioDrives in RAID0). With the correct sequence and after a 12-hour warm-up phase, I'm able to plot 24 plots every day (1 plot per hour). So with 3 machines, I'm able to generate 72 plots a day. Not much, but ... Running the same plotting sequence with the GUI leaves me with 12 hours per plot, or 50% more. Of course, I swapped the hardware between the machines. Still the same.

So the (buggy) GUI adds to some curious 'ideas' ...
They specifically call out: don't use the GUI for serious plotting. It is all about staggering. Use plotman and you can squeeze out more. It took me 3 weeks to go from the GUI to plotman, and with every step I discover more about how the system works and get more efficient.
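The staggering idea can also be sketched with nothing but the CLI. A minimal example, assuming a recent `chia` command-line client; the paths, job count, and one-hour offset are placeholders, not a recommendation:

```shell
#!/bin/bash
# Staggered CLI plotting -- paths, counts and timings here are examples only.
TMP=/mnt/md0        # fast temp storage (e.g. the ioDrive RAID0)
DST=/mnt/farm       # final plot destination
STAGGER=3600        # offset each start by one hour (seconds)
PARALLEL=8          # number of plotters to keep running

for i in $(seq 1 "$PARALLEL"); do
    # -k 32 is the standard plot size; check `chia plots create -h` for flags
    chia plots create -k 32 -t "$TMP" -d "$DST" > "plot_$i.log" 2>&1 &
    sleep "$STAGGER"
done
wait
```

Offsetting the starts keeps the phase-1 temp-space peaks of the jobs from coinciding, which is one plausible explanation for the larger simultaneous temp footprint seen with the GUI's start-everything queue.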

As a matter of fact, there is really no reason to use expensive flash for plotting. It is not heavy on IOPS.

How do you guys create your storage pool? ZFS? I am having a hard time managing so many drives, and I keep losing mdadm arrays. mdadm is very counterintuitive, not a replacement for hardware RAID, which maintains the array configuration. mdadm loses it after a restart or if drives disconnect temporarily.

I only do RAID0 or JBOD to combine my small 3TB disks to get to a more manageable size of 8TB.
 

Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
How do you guys create your storage pool? ZFS? I am having a hard time managing so many drives, and I keep losing mdadm arrays. mdadm is very counterintuitive, not a replacement for hardware RAID, which maintains the array configuration. mdadm loses it after a restart or if drives disconnect temporarily.
Yes, run Z1 pools
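For anyone who hasn't touched ZFS before, a Z1 pool is only a couple of commands. A minimal sketch; the device names and pool/dataset names are placeholders:

```shell
# raidz1 (Z1) pool -- device names are placeholders; prefer /dev/disk/by-id paths.
zpool create -o ashift=12 tank raidz1 \
    /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 /dev/disk/by-id/ata-DRIVE3

# A dataset for plots; atime=off avoids needless metadata writes on lookups.
zfs create -o atime=off -o mountpoint=/mnt/plots tank/plots
zpool status tank
```

Using by-id paths sidesteps the device-reordering problem that bites mdadm setups, since ZFS also stores the pool layout on the disks themselves.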
 

Marsh

Moderator
May 12, 2013
2,646
1,497
113
How do you guys create your storage pool?
I use mergerfs to pool all my drives to store media video files. It works great.
It is not RAID; if you lose a drive, only the bad drive's data is gone.
Just plot more plots to replace the lost plots.

I don't use mergerfs for the Chia harvester storage.
I use big drives (10TB and up, to save power).

trapexit/mergerfs

Takes 5 min to set up, no RAID rebuild problem.
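For reference, a minimal mergerfs setup over already-mounted drives might look like this; the package name assumes Debian/Ubuntu and the paths are examples:

```shell
sudo apt install mergerfs        # Debian/Ubuntu package name

mkdir -p /storage/pool
# Pool three mounted drives; losing one member only loses that drive's files.
mergerfs -o defaults,allow_other,use_ino,minfreespace=110G \
    /mnt/disk1:/mnt/disk2:/mnt/disk3 /storage/pool
```

Setting `minfreespace` just above one k32 plot (~102GB) stops mergerfs from directing a new plot at a drive that can't actually fit it.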
 
  • Like
Reactions: gb00s and Rand__

msg7086

Active Member
May 2, 2017
423
148
43
36
How do you guys create your storage pool?
I run JBODs to minimize the risk and maximize the space. Imagine, if you run 6x 10TB drives, you got 60TB of raw space. If you make a Z1, you'll lose 10TB from day 1, so you'll be farming 50TB all day long. However if you JBOD, you'll be farming 60TB from day 1, until your first drive is broken and you are down to 50. Obviously, having JBOD makes more sense.
 
  • Like
Reactions: Bradford and Bert

Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
Yeah, but I didn't want to have 48 drives to distribute plots on :p
And since I still (lazily) farm on Windows and definitely don't want a JBOD attached to that, I need to share it out, so TNC was simplest.
 

msg7086

Active Member
May 2, 2017
423
148
43
36
Yea, different settings for smaller drives. I only use 14TB and 16TB drives, so it's no problem for me, and I run my harvester on Linux.
 

Bert

Well-Known Member
Mar 31, 2018
845
399
63
45
I have small drives, 3TB etc., and I bought a bunch of 8TB and 5TB drives, so manually managing these drives is not an option for me as I am looking at 50+ drives. I am not sure mining will be feasible in the long run, so I don't want to put more money into it. I also haven't earned anything although I have more than 1000 plots, and I'm not planning to buy big drives until I make some chia.


mergerfs seems to be the right choice for me. I also heard of Greyhole, but I noticed it is not used heavily and I am worried about starting with that. At least @Marsh has been using mergerfs and he seems to be fine with it, so I'll go with that. @Marsh, can I use mergerfs for my USB drives, which are susceptible to disconnects etc.?

Btw, I use XFS since this is what we use at work as a fast/reliable file system. What do you use? ext4? The fragmentation in the Linux world is crazy.
 

Bert

Well-Known Member
Mar 31, 2018
845
399
63
45
Don't mount mdadm arrays at reboot from /etc/fstab. Set them up and mount them with a script from a cron job @reboot (root).

Never fails here ...
I tested what happens to USB drives where I set up a "linear" array. I cut the power to the drives without unmounting them. Brought them back, and assembly failed! I was not able to figure out how to reassemble it. Perhaps that was because I didn't save the configuration in the mdadm.conf file. Now I cannot even stop the failed array; it is stuck like that.

I am pretty sure it is an operator error, but mdadm is too tricky for a novice. I have 0 such issues with RAID-card-based solutions.
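On the reassembly failure: saving the array layout once, then assembling and mounting from a script (the @reboot approach described above), would look roughly like this. The array name, members, and mount point are placeholders:

```shell
#!/bin/bash
# /usr/local/sbin/mount-plots.sh -- placeholder names throughout.
# Install with a root crontab line:   @reboot /usr/local/sbin/mount-plots.sh
#
# One-time prep so mdadm can reassemble after an unclean power loss:
#   mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf

# Try the named array first; fall back to scanning all members.
mdadm --assemble /dev/md0 || mdadm --assemble --scan

# Only mount if assembly actually produced the block device.
if [ -b /dev/md0 ]; then
    mount /dev/md0 /mnt/plots
fi
```

With the layout saved in mdadm.conf, `--assemble` can usually bring a cleanly stopped linear array back even after the drives disappeared for a while; a mid-write power cut is still risky, as noted below.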
 

Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
I also haven't earned anything although I have more than 1000 plots

...for my USB drives which are susceptible to disconnects etc?
You are checking that your lookup times are not too long (>5s), are you? I think many USB drives are quite susceptible to that, from what I read.
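A quick way to watch for that is to pull the lookup times out of the harvester log. A sketch, assuming the usual "... Found N proofs. Time: X s. ..." line shape in debug.log; verify against your own log first:

```shell
# Print any plot lookup slower than 5 seconds from a harvester log file.
# Assumes lines like "... Found 0 proofs. Time: 0.00512 s. ..." -- check yours.
slow_lookups() {
    grep -o 'Time: [0-9.]*' "$1" | awk '$2 + 0 > 5 { print $2 }'
}

# Usage: slow_lookups ~/.chia/mainnet/log/debug.log
```

Anything this prints is a lookup that would have been at real risk of missing the signage-point deadline.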
 

Marsh

Moderator
May 12, 2013
2,646
1,497
113
mergerfs is pooling software; mergerfs will still start if a drive is missing.

I came from the old school; I don't trust the USB interface for longer-term storage.
No USB storage for Chia.

I lost a couple days of work in the beginning, troubleshooting a Fusion-IO SSD card and an XFS problem.
The SSD card would just get corrupted.
I didn't want to spend more time troubleshooting, so I switched to the Linux ext4 format; all is well now.

I use mergerfs to pool 8x 14TB drives; it is a simple solution.

Add: I also have 2 pools of 16x 6TB using mergerfs, and 4 pools of 16x 4TB drives.
 
  • Like
Reactions: gb00s and T_Minus

msg7086

Active Member
May 2, 2017
423
148
43
36
I tested what happens to USB drives where I set up a "linear" array. I cut the power to the drives without unmounting them. Brought them back, and assembly failed! I was not able to figure out how to reassemble it. Perhaps that was because I didn't save the configuration in the mdadm.conf file. Now I cannot even stop the failed array; it is stuck like that.

I am pretty sure it is an operator error, but mdadm is too tricky for a novice. I have 0 such issues with RAID-card-based solutions.
I would strongly suggest AGAINST using arrays on USB drives. A glitch on a USB port or power adapter and your whole array is in danger. If you set up 10x 3TB in a linear array, one glitch and you may lose 30TB of plots.
 
  • Like
Reactions: NateS

Bert

Well-Known Member
Mar 31, 2018
845
399
63
45
You are checking that your lookup times are not too long (>5s), are you? I think many USB drives are quite susceptible to that, from what I read.
Yes, I tested that and it worked fine, but I am going to keep an eye on it.

mergerfs is pooling software; mergerfs will still start if a drive is missing.

I came from the old school; I don't trust the USB interface for longer-term storage.
No USB storage for Chia.

I lost a couple days of work in the beginning, troubleshooting a Fusion-IO SSD card and an XFS problem.
The SSD card would just get corrupted.
I didn't want to spend more time troubleshooting, so I switched to the Linux ext4 format; all is well now.

I use mergerfs to pool 8x 14TB drives; it is a simple solution.

Add: I also have 2 pools of 16x 6TB using mergerfs, and 4 pools of 16x 4TB drives.
Yes, it seems like XFS is pickier but more resilient to corruption. I have no option but to use USB drives; this is all I have left :) I can shuck them, but I want to make some chia first.

I would strongly suggest AGAINST using arrays on USB drives. A glitch on a USB port or power adapter and your whole array is in danger. If you set up 10x 3TB in a linear array, one glitch and you may lose 30TB of plots.
Yes, looks like it. I am going to give mergerfs a try now.

I may need help on mergerfs. It has lots of options, and I don't want to apply trial and error while putting my plots over there. Which policy should I use for mergerfs? I am considering ff, as it will fill drives one at a time.

Also, I noticed I should use by-uuid in fstab for stable mounting.
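For completeness, the by-uuid approach looks like this; the UUID and mount point below are placeholders:

```shell
# Find each partition's filesystem UUID (device name is an example):
blkid /dev/sdb1

# Reference the UUID in /etc/fstab so the entry survives device reordering;
# "nofail" keeps boot from hanging when a USB drive is disconnected.
# The UUID here is a placeholder:
#
#   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/disk1  xfs  defaults,nofail  0  2
```

Device names like /dev/sdb shuffle between boots (especially with USB), while the filesystem UUID stays with the partition.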
 

Marsh

Moderator
May 12, 2013
2,646
1,497
113
For storing plot files, just take the mergerfs defaults.
Plots and the farmer have no preference for which hard drive hosts the plot files.


For media files, I want to keep TV shows separate from movie files, in case I need to restore a single bad drive.

Example: my media storage

/dev/disk/by-id/wwn-0x5000cca23b10fcf0-part1 /mnt/disk1 ext4 defaults 0 2

/mnt/disk* /storage/media fuse.mergerfs nonempty,direct_io,defaults,allow_other,minfreespace=20G,moveonenospc=true,use_ino,category.create=eplfs,fsname=mergerfsPool 0 0
 

Bert

Well-Known Member
Mar 31, 2018
845
399
63
45
Sorry for the dumb question: should I mount by disk or by UUID of the XFS partition?