I have fallen in love with MooseFS


zunder1990

Active Member
Nov 15, 2012
I started using it about a month ago after being a long-time ZFS user. I have 15 disks spread across 7 servers for a total of 101 TB of raw disk space, with 5 of the servers being single-board computers holding 1-2 hard drives each. Each drive is formatted with XFS; MooseFS takes care of all data sync and keeping up the number of copies. For normal media files I have it set to two copies, so MooseFS makes sure there are 2 copies on different servers, and for more important data I have it set to 3 copies. This can be set per folder or inherited from the mount. It has no problems handling drives of different sizes. With my current hardware of all HDDs and no SSDs I have no problems hitting 2+ Gbit/s reads and writes, which is more than enough for my needs.
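For anyone who wants to try it, the number of copies (the "goal") is set with the standard client-side tools. A quick sketch; the mount point and folder paths here are just examples from my setup, nothing special:

    # two copies for media, three for the important stuff
    mfssetgoal -r 2 /mnt/mfs/media
    mfssetgoal -r 3 /mnt/mfs/important

    # check what a folder is set to / inherits
    mfsgetgoal /mnt/mfs/media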

It also has a nice web UI.
 

tjk

Active Member
Mar 3, 2013
Nice! Do they have clients for the servers that want to access files, or will it work with native NFS clients?
 

zunder1990

Active Member
Nov 15, 2012
Nice! Do they have clients for the servers that want to access files, or will it work with native NFS clients?
All of my Linux boxes (Proxmox, SickChill, Radarr, and Plex) run the FUSE client, which connects directly to the storage nodes. There is a native Windows client, but it is locked behind a paywall, so my Windows clients connect over SMB to a storage node that is also running the FUSE client.
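In case anyone wants to replicate the setup, it looks roughly like this; the hostname and paths are examples, not defaults you can count on:

    # mount the MooseFS namespace with the FUSE client
    mfsmount /mnt/mfs -H mfsmaster

    # then re-export part of it over SMB for the Windows boxes,
    # e.g. a share definition in smb.conf:
    #   [media]
    #   path = /mnt/mfs/media
    #   read only = no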
 

bleomycin

Member
Nov 22, 2014
This is cool, thanks for letting us know about it! Their website mentions the option for tiered storage. Do you know if this operates similar to an SSD cache drive in Unraid, where data will be written there initially and moved off on a schedule or as the cache drive runs low on space? I need to be able to transfer files over initially at 10 Gbit/s, but subsequent reads can be slow. I'm also a little confused: is this writing data equally amongst all drives, meaning all the drives need to be online all the time, or is it like Unraid where only one disk needs to be awake at any one time depending on the files being accessed?
 

zunder1990

Active Member
Nov 15, 2012
Do you know if this operates similar to an SSD cache drive in Unraid, where data will be written there initially and moved off on a schedule or as the cache drive runs low on space?
You set the tiers on a folder as a MooseFS attribute; the feature is called storage classes. The example they use on the website is: say you have two storage classes, SSD and archive, and you put reports in folders by year. The current year's folder would be on the SSD class and past years on the archive class. Then in the new year you would change the attribute on, say, 2022 to archive and set 2023 to the SSD class, and MooseFS would start moving the 2022 folder onto archive-class storage.
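Roughly what that looks like with the command-line tools. The class names, label letters, and paths below are made up for the example, so check the mfsscadmin man page for the exact label-expression syntax:

    # define the two classes, assuming chunkservers are labeled
    # S (SSD) and H (archive HDD)
    mfsscadmin create -K 2S ssdclass
    mfsscadmin create -K 2H archiveclass

    # new year rollover: retire 2022 to archive, put 2023 on SSD
    mfssetsclass -r archiveclass /mnt/mfs/reports/2022
    mfssetsclass -r ssdclass /mnt/mfs/reports/2023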

I'm also a little confused: is this writing data equally amongst all drives, meaning all the drives need to be online all the time, or is it like Unraid where only one disk needs to be awake at any one time depending on the files being accessed?
This is my understanding and what I have observed.
Say you have copies set to 2 on a mount or folder and you are connecting to the MooseFS system with the FUSE client. The client will write to two nodes/disks at the same time, making sure there are a minimum of two valid chunks for that file before telling the client the write was a success. At this time I am not sure how it picks which disks get written to by the client. During the copy from the client, and afterwards if needed, MooseFS will move chunks between all of the online disks to try to keep them all at about the same used %. It is my understanding that if you use a goal of more than 2, the client will still only write to two, and the chunkservers will take care of making the 2+n copies required.
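If you want to see where the copies actually landed, mfsfileinfo lists every chunk of a file and which chunkservers hold it (the path is just an example):

    mfsfileinfo /mnt/mfs/media/movie.mkv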

Part 2 of the question, about disks being online: MooseFS treats an offline disk as a failed disk and will start moving data and making copies to get back up to the storage goals. If you know you are about to work on a node/disk, you can put it into maintenance mode, which pauses MooseFS from treating the server as offline. There is a built-in timeout (I think it is 72 hours, but it can be changed) for how long a node can be in maintenance and offline before MooseFS starts to treat the server as dead.
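If it is just one disk you want to pull rather than a whole node, you can also mark the disk for removal in that chunkserver's mfshdd.cfg and MooseFS will replicate its chunks elsewhere before you take it out. The paths here are examples:

    # /etc/mfs/mfshdd.cfg on the chunkserver
    /mnt/disk1
    */mnt/disk2   # leading '*' marks this disk for removal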

I am not sure how MooseFS handles disks that are spun down, as I never bothered looking into it; my workload accesses the disks so much that they would never really get a chance to spin down.
 

bleomycin

Member
Nov 22, 2014
I am not sure how MooseFS handles disks that are spun down, as I never bothered looking into it; my workload accesses the disks so much that they would never really get a chance to spin down.
Awesome, thanks for the explanation, that was extremely helpful! I'll keep this bookmarked for possible future projects; it is very cool and I had no idea it existed, it just doesn't fit my exact needs right now. This is the type of stuff I visit this forum for; there is not a chance I'd have stumbled across MooseFS any other way.
 

Mithril

Active Member
Sep 13, 2019
Yeah, I have learned my lesson on any form of "must pay for" storage in my personal life. At bare minimum I want to be able to boot up *nix/BSD or even a Windows ISO (which will run fully functional for 30 days from Microsoft, even without a license, if you need it in a pinch) for file recovery. But "need to pay for a client, or use a workaround for network access" is an issue.

Even for a small or medium business, I wouldn't want the headache for production/important data.

As to XFS, it looks like it relies fully on the underlying block device for all device-level redundancy/resiliency. Until fairly recently I had some level of faith in that, but there's a great L1 Techs video (and I reproduced their results) on why that's at best illusory with consumer drives, and even some enterprise drives. And then you still have all of the failings of traditional RAID.

All that being said, I'm still somewhat tempted to mess around with this, as storage tiering that works is something I'm interested in, but I may try to see if I can build on top of ZVOLs as the block devices. I may have made some bad assumptions, and it's interesting enough to want to test.

Now, to fire up the cloning machine to handle this backlog of projects...
 

Mithril

Active Member
Sep 13, 2019
Care to share the link to that video @Mithril?

I will rewatch later to refresh my memory, but the key takeaway I remember is that with modern hardware and software RAID combined with modern drives, there's little to no protection against any uncaught data errors (be that bitrot, incorrect/failed writes, etc.). *Some* enterprise drives may handle it better (report an error) when able, but without the correct support it doesn't matter.

One thing to keep in mind is that modern drives are CONSTANTLY doing ECC and other correction of one kind or another as part of normal operations, such as translating the effectively analog level of a NAND cell or disk sector into a digital value. (Complete side rant: there's no such thing as "digital signals" in the real world unless you are detecting single electrons and ignoring quantum effects; it's all just nice "frictionless plane" levels of abstraction. The faster the signal needs to switch, the harder it gets to keep the line between 1 and 0 clear.) Since the error correction needs to be fast, it can't be too sophisticated, so it's entirely possible to reconstruct an incorrect value on a read retry, and most drives will pass along the first "correct" value for a sector. Much like memory bitflips, this will often go unnoticed. Many files are to some degree fault-tolerant (a glitched frame in a movie doesn't usually make it unplayable, for example) or may even have some level of built-in correction as well.

For things where you are more focused on uptime, I think RAID still has a place. But for data integrity (which I personally view as the first job of a NAS or backup solution), I still have not found anything better than ZFS that doesn't have a serious price tag attached. ZFS does come with drawbacks: you will inherently have lower performance, deduplication is resource-heavy and thus situational, and tiering/caching is also situational (and arguably does not work the way people often think it does).
 

zunder1990

Active Member
Nov 15, 2012
Any price info?
It's always a bad sign when you have to "get a quotation" to buy a product.
I have only used the free open-source version. For my use case the biggest thing that comes with the Pro version is the native Windows client; while that would be nice to have, it is not required.
 

MrCalvin

IT consultant, Denmark
Aug 22, 2016
The Pro version delivers "Data Redundancy with Erasure Coding" (whatever that means), but it sounds like something you don't wanna run without.
 

zunder1990

Active Member
Nov 15, 2012
Yeah, I have learned my lesson on any form of "must pay for" storage in my personal life. At bare minimum I want to be able to boot up *nix/BSD or even a Windows ISO (which will run fully functional for 30 days from Microsoft, even without a license, if you need it in a pinch) for file recovery. But "need to pay for a client, or use a workaround for network access" is an issue.

Even for a small or medium business, I wouldn't want the headache for production/important data.

As to XFS, it looks like it relies fully on the underlying block device for all device-level redundancy/resiliency. Until fairly recently I had some level of faith in that, but there's a great L1 Techs video (and I reproduced their results) on why that's at best illusory with consumer drives, and even some enterprise drives. And then you still have all of the failings of traditional RAID.

All that being said, I'm still somewhat tempted to mess around with this, as storage tiering that works is something I'm interested in, but I may try to see if I can build on top of ZVOLs as the block devices. I may have made some bad assumptions, and it's interesting enough to want to test.

Now, to fire up the cloning machine to handle this backlog of projects...
I am not sure you read that much about it. Only the Windows client is paid; Linux is free and open source, and for Windows access you can mount the MooseFS system on Linux and share it out over SMB.

"As to XFS, it looks like it relies fully on the underlying block device for all device level redundancy/resiliency." I am not sure you read anything about the system. Moosfs handles this really well, you are able to tell the system how many copies of the chucks/files you want storage. When a drive files moosfs will automatically start coping data to get back up to the storage goal.

You can use ZFS under MooseFS, and I did at first, but it was so SLOW. I am getting 5-10x the speed by using XFS at the drive level.
 

zunder1990

Active Member
Nov 15, 2012
The Pro version delivers "Data Redundancy with Erasure Coding" (whatever that means), but it sounds like something you don't wanna run without.
Erasure coding is like RAID 5/6: it can rebuild a whole file/chunk from the remaining parts. What I use in the open-source version is full copies of the file/chunk, think more like RAID 1. Yes, it uses more raw disk space, but it is fine for my needs; for example, with a goal of 2 copies, 10 TB of data takes 20 TB of raw space, where 8+2 erasure coding would take only 12.5 TB. For really important files you can tag a minimum of 4-5 copies or more, so MooseFS will store the files/chunks on 4-5 disks/servers.
 

MrCalvin

IT consultant, Denmark
Aug 22, 2016
I am not sure you read that much about it. Only the Windows client is paid; Linux is free and open source, and for Windows access you can mount the MooseFS system on Linux and share it out over SMB.

"As to XFS, it looks like it relies fully on the underlying block device for all device-level redundancy/resiliency." I am not sure you read anything about the system. MooseFS handles this really well: you tell it how many copies of the chunks/files you want stored, and when a drive fails MooseFS will automatically start copying data to get back up to the storage goal.

You can use ZFS under MooseFS, and I did at first, but it was so SLOW. I am getting 5-10x the speed by using XFS at the drive level.
The bad performance must be a matter of enabling/disabling synchronous I/O. If you disable it on ZFS (which is the same as the XFS default), I assume the speed would be fine.
It's not fair to compare ZFS with synchronous I/O on and XFS with synchronous I/O off... it has to be the same ;-)
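If someone wants to test that theory, the knob is the per-dataset sync property on ZFS; the pool/dataset name below is made up:

    # disable synchronous write semantics for the dataset under the
    # chunkserver (trades crash-safety of in-flight writes for speed)
    zfs set sync=disabled tank/mfschunks

    # check the current value
    zfs get sync tank/mfschunks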
 

Mithril

Active Member
Sep 13, 2019
I am not sure you read that much about it. Only the Windows client is paid; Linux is free and open source, and for Windows access you can mount the MooseFS system on Linux and share it out over SMB.

"As to XFS, it looks like it relies fully on the underlying block device for all device-level redundancy/resiliency." I am not sure you read anything about the system. MooseFS handles this really well: you tell it how many copies of the chunks/files you want stored, and when a drive fails MooseFS will automatically start copying data to get back up to the storage goal.

You can use ZFS under MooseFS, and I did at first, but it was so SLOW. I am getting 5-10x the speed by using XFS at the drive level.

If I had a *nix-only ecosystem that wouldn't be an issue, but I need NFS and/or SMB (and it's not just Windows clients; I have other devices that connect to my current NAS). If I were to use this as my NAS solution, I'd need to solve that; but that's a bit "cart before the horse" :)

Ah ok, so ZFS vs XFS was a *user choice*, gotcha. I'm not sure how well MooseFS would actually be able to do what it claims reliably if XFS itself fails to "notice" data issues, unless it's adding its own parity/checksum or reading both/all copies of files and comparing (and in the case of mirrored files, which one wins? :D)

This actually reminds me a bit of StableBit DrivePool for Windows, except that is a per-machine solution; both use an underlying file system and do per-folder duplication work. I've used that in the past since it doesn't handle the network share (the machine it runs on does) and leaves the files "native" on the actual drives. I ran into too many annoyances to keep using it, thus my continued search for my ideal: bitrot protection, redundancy, deduplication, compression, and tiering. Very much a "cake and eat it too" :D
 

zunder1990

Active Member
Nov 15, 2012
If I had a *nix-only ecosystem that wouldn't be an issue, but I need NFS and/or SMB (and it's not just Windows clients; I have other devices that connect to my current NAS). If I were to use this as my NAS solution, I'd need to solve that; but that's a bit "cart before the horse" :)

Ah ok, so ZFS vs XFS was a *user choice*, gotcha. I'm not sure how well MooseFS would actually be able to do what it claims reliably if XFS itself fails to "notice" data issues, unless it's adding its own parity/checksum or reading both/all copies of files and comparing (and in the case of mirrored files, which one wins? :D)
In the past I only used NFS for Linux clients, but I have changed all of those over to the MooseFS FUSE client, so SMB is the only non-native protocol I still have to support.

Each chunkserver checksums every chunk it stores on a schedule and stores the checksum in the master's metadata table. If a chunk's checksum does not match the table, the chunk is marked as damaged. After it is marked as damaged, MooseFS finds another copy of that chunk and copies it over to replace the damaged one.
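On the chunkserver versions I have used, the background test rate is tunable in mfschunkserver.cfg; the value below is just an example, check your version's config comments:

    # /etc/mfs/mfschunkserver.cfg
    # period between background chunk checksum tests, in seconds
    HDD_TEST_FREQ = 10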

Since MooseFS does its own checksumming, you don't really need disk-level protection like you get from ZFS or RAID.

Files in the MooseFS system are not stored directly on the native disk; each file is broken up into 64 MB chunks. After a chunk is written to a chunkserver, the system may move chunks around to help balance usage, so when it comes time to read a large file you may end up reading chunks from many different servers/disks, which should help with speed. If a file is smaller than 64 MB, its chunk is smaller too.