unRAID Behemoth Build (Stage 2)


Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
Funny, their material says 30 drives Max, pool and parity, no more than 30.
As for mine, I run 64 currently, have another 16-bay DAS sitting there as well.
Nice, what OS are you running, and how much usable storage is that? That's a lot of drives for sure.
 

Markess

Well-Known Member
May 19, 2018
1,162
780
113
Northern California
As for mine, I run 64 currently, have another 16-bay DAS sitting there as well.
Dude! You need to get out more o_O But, in the meantime, yeah, probably not for you.

The change to the cache drive count limit is recent.

Plus, I have to mention: there are folks out there that run Unraid on bare metal, and then run more Unraid instances in VMs, and have 100+ drives connected at a time. Which just goes to show that where there's a will, there's a freak out there somewhere who will figure out a way. :cool:
 
  • Like
Reactions: TubaMT

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
Damn, that was easy to move, not a single hiccup. It took about an hour to uninstall the old drives, tray all of them, install the enclosures, and recable power in the new box. Another 45 minutes waiting on the cache to balance as I removed drives one by one. <3 unraid.

It's late, so I'll get pics... sometime.
 

TubaMT

Member
Jul 26, 2014
112
21
18
Dude! You need to get out more o_O But, in the meantime, yeah, probably not for you.

The change to the cache drive count limit is recent.

Plus, I have to mention: there are folks out there that run Unraid on bare metal, and then run more Unraid instances in VMs, and have 100+ drives connected at a time. Which just goes to show that where there's a will, there's a freak out there somewhere who will figure out a way. :cool:
This sounds awesome! How exactly would you go about doing that? Do you add the Unraid VMs' drives to the bare-metal Unraid's unassigned devices? Or do you just add them to shares? Really interested in this, as I'm looking to switch to Unraid soon and don't like the 30-drive array limitation.
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
This sounds awesome! How exactly would you go about doing that? Do you add the Unraid VMs' drives to the bare-metal Unraid's unassigned devices? Or do you just add them to shares? Really interested in this, as I'm looking to switch to Unraid soon and don't like the 30-drive array limitation.
Note that Unraid doesn't officially support it; that said, you just run Unraid like you normally would, only in a VM.
You use ESXi passthrough for a USB controller (or the USB bus) with the Unraid flash drive on it, and then the HBA(s) of the drives you want running in it.
Here's an example passing through 16 drives (on 2 HBAs) and a USB controller:
[Screenshot: ESXi VM settings showing the two passed-through HBAs and the USB controller]
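For anyone who'd rather script the discovery side, here's a rough pyVmomi sketch (not from this build; the host name and credentials are placeholders) that just lists the host's PCI devices so you can pick out the HBAs and USB controller to mark for passthrough. The actual passthrough toggle and the VM's device assignment are still easiest to do in the ESXi web UI, as in the screenshot above.

```python
# Rough sketch, assuming pyVmomi is installed and the ESXi host details below
# are replaced with your own. It only lists PCI devices; enabling passthrough
# and attaching devices to the Unraid VM is done in the ESXi UI.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_host_pci_devices(host_ip: str, user: str, password: str) -> None:
    ctx = ssl._create_unverified_context()  # typical for a self-signed lab host
    si = SmartConnect(host=host_ip, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            for dev in host.hardware.pciDevice:
                # dev.id looks like "0000:03:00.0"; match it against your HBAs
                print(f"{dev.id}  {dev.vendorName}  {dev.deviceName}")
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_host_pci_devices("esxi.lab.local", "root", "changeme")  # placeholders
```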
 

TubaMT

Member
Jul 26, 2014
112
21
18
Note that Unraid doesn't officially support it; that said, you just run Unraid like you normally would, only in a VM.
You use ESXi passthrough for a USB controller (or the USB bus) with the Unraid flash drive on it, and then the HBA(s) of the drives you want running in it.
Here's an example passing through 16 drives (on 2 HBAs) and a USB controller:
[Screenshot: ESXi VM settings showing the two passed-through HBAs and the USB controller]
Ahh! So just running multiple HBAs and USB controllers passed through to the Unraid VMs in ESXi? I've seen people run Unraid VMs on ESXi and Proxmox but never thought to run multiple VMs of it. Sometimes the solutions to potential problems are right there in front of you lol!

I'm guessing there is no way to connect or link all of the Unraid instances to one Unraid dashboard or one share. You would have to have the different Unraid instances as different shares?
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
You can connect them individually via SMB or NFS, but not as a single large data pool, if that's what you're asking.
The instances would be separate.
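As a concrete sketch of what "individually" means from a Linux client's point of view (the hostnames, share names, and mount points below are made-up examples, not from this thread): each Unraid instance exports its own shares, and you mount them side by side.

```python
# Minimal sketch, assuming two hypothetical Unraid instances named "tower1" and
# "tower2" exporting a "media" share over SMB and NFS respectively. Run as root.
import pathlib
import subprocess

MOUNTS = [
    # (filesystem type, source, mount point, options)
    ("cifs", "//tower1/media", "/mnt/tower1-media", "guest,vers=3.0"),
    ("nfs", "tower2:/mnt/user/media", "/mnt/tower2-media", "ro"),
]

for fstype, src, dst, opts in MOUNTS:
    pathlib.Path(dst).mkdir(parents=True, exist_ok=True)
    # Each instance stays a separate mount; nothing merges them into one pool.
    subprocess.run(["mount", "-t", fstype, "-o", opts, src, dst], check=True)
```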
 
  • Like
Reactions: TubaMT

Markess

Well-Known Member
May 19, 2018
1,162
780
113
Northern California
You can connect them individually via SMB or NFS, but not as a single large data pool, if that's what you're asking.
The instances would be separate.

And from the references I found on the Unraid forums (I'm spending a lot of time there lately as I plan and build my updated machine), each instance had its own license. One person had something like 4 Unlimited licenses, with 30 pool disks on each.
 
  • Like
Reactions: TubaMT

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
Correct, I'm currently running one unlimited (i.e. Pro) license, and my backup box is on a trial; it's been running like that for about 6 months :D.
But I'm finally at the end; when I move, I'm gonna have to buy my second license.
 
  • Like
Reactions: TubaMT

TubaMT

Member
Jul 26, 2014
112
21
18
Thank you both for the information. It's nuts that they have this artificial limitation, especially when I've seen Linus Tech Tips, IIRC, receive a custom Unraid license that let them run more than the allowed 30 drives in an array. But I guess they want people to buy more licenses and use clever methods to get around the limitation lol.

I'm still oscillating between Unraid, OMV, and Windows 10/Server. All my data is currently on a Windows box, but I think I'd like something more NAS-oriented for my upgrade.
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
Eh, for me it's not an issue; I don't have plans for more than 20 drives currently, due to my desktop's physical size limitation.
For all the features it has and how it works, it's ideal for me and a lot of other users.

The limit has to do with support and demand rather than just being an artificial limiter (same for a multiple pool/array feature).
You can have more than 30 drives once you count the cache, but you can only have up to 28 data + 2 parity drives even on the 'unlimited' license.
The product is marketed to the hobbyist/home lab user, who generally isn't getting into 30+ drive territory; only a small handful of users want or need more than that.

If you are thinking about 30+ drives, Unraid just might not be the solution for you; check out FreeNAS, Ceph, or Linux ZFS if so.
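To put rough numbers on that cap: usable space in Unraid is just the sum of the data disks, since parity disks add protection rather than capacity. With an illustrative 8TB drive size, a maxed-out array works out like this:

```python
# Illustrative arithmetic only; 8TB is an arbitrary example drive size.
data_disks, parity_disks, drive_tb = 28, 2, 8

usable_tb = data_disks * drive_tb                   # parity adds no capacity
raw_tb = (data_disks + parity_disks) * drive_tb

print(f"{usable_tb} TB usable out of {raw_tb} TB raw")  # 224 TB usable / 240 TB raw
```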
 
  • Like
Reactions: TubaMT

TubaMT

Member
Jul 26, 2014
112
21
18
Eh, for me it's not an issue; I don't have plans for more than 20 drives currently, due to my desktop's physical size limitation.
For all the features it has and how it works, it's ideal for me and a lot of other users.

The limit has to do with support and demand rather than just being an artificial limiter (same for a multiple pool/array feature).
You can have more than 30 drives once you count the cache, but you can only have up to 28 data + 2 parity drives even on the 'unlimited' license.
The product is marketed to the hobbyist/home lab user, who generally isn't getting into 30+ drive territory; only a small handful of users want or need more than that.

If you are thinking about 30+ drives, Unraid just might not be the solution for you; check out FreeNAS, Ceph, or Linux ZFS if so.
Thank you for all the info! Super super helpful! :)

Ya, I'm totally fine with the 30-drive limit right now. I'm just looking farrrrr into the future, but by then 20 TB drives will probably be more viable. But only 2 parities for 20 TB drives would be kind of scary lol. I think I saw some talk about them implementing the ability to have multiple cache AND array pools, which would be really cool. I don't know if they'll increase the limit, but being able to split the arrays just to have more parity drives would help.
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
To be fair, 2 parities for 28x 2TB or 4TB drives is scary, much less 8TB+.
I know it's been suggested, but the last time multiple array pools were mentioned, they said there wasn't anything planned on the roadmap.
I can't imagine needing more than the current cache max of 24 drives; a bigger main data array I could see being useful, though. They have steadily increased that over the years: IIRC they started at a 16-drive max, then went to 24, 28, and now the current max of 30.
 
  • Like
Reactions: TubaMT

TubaMT

Member
Jul 26, 2014
112
21
18
To be fair, 2 parities for 28x 2TB or 4TB drives is scary, much less 8TB+.
I know it's been suggested, but the last time multiple array pools were mentioned, they said there wasn't anything planned on the roadmap.
I can't imagine needing more than the current cache max of 24 drives; a bigger main data array I could see being useful, though. They have steadily increased that over the years: IIRC they started at a 16-drive max, then went to 24, 28, and now the current max of 30.
I saw this comment from one of their administrators:

" On 3/18/2020 at 4:13 AM, Gdtech said:
Anybody know when multiple cache or array pools be available ?
Multiple cache pools being internally tested now. Multi array pools not in the cards for this release."


When they eventually release multiple array pools it'll be great! But ya, it'd be great to have more than the max of 2 parity drives too. At least you only lose the data on the failed disk, not the whole data pool, if your parity also fails.
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
I saw this comment from one of their administrators:
"Multiple cache pools being internally tested now. Multi array pools not in the cards for this release."
Ooo, nice, good to know. I might consider adding an HDD cache pool for surveillance storage to get a RAID 1 out of it (just running an unassigned disk, direct attached, currently).

When they eventually release multiple array pools it'll be great! But ya, it'd be great to have more than the max of 2 parity drives too. At least you only lose the data on the failed disk, not the whole data pool, if your parity also fails.
That was actually one of the two biggest reasons I went with Unraid: you DON'T actually lose the whole data pool even if 3+ drives fail.
The second was expandability by a single drive vs ZFS/FreeNAS; I didn't want to have to replace every drive with a larger one or create a new pool with an equivalent number of drives (I likely would have done 6-disk vdevs with 1 parity if I had gone with ZFS).
Didn't particularly like the idea of having to drop $800 or so to make a new vdev instead of expanding by one 8TB drive.
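For a back-of-envelope comparison using the ~$130-per-8TB figure mentioned further down the thread (prices are the thread's examples, not current):

```python
# Back-of-envelope comparison using the ~$130/8TB figure quoted later in the thread.
drive_cost, drive_tb = 130, 8
vdev_drives, vdev_parity = 6, 1          # the 6-disk RAIDZ1 layout mentioned above

zfs_expand_cost = vdev_drives * drive_cost                   # ~$780 for a whole new vdev
zfs_added_usable = (vdev_drives - vdev_parity) * drive_tb    # ~40 TB usable added

unraid_expand_cost = drive_cost                              # one more data disk
unraid_added_usable = drive_tb                               # 8 TB usable added

print(f"ZFS:    ${zfs_expand_cost} for +{zfs_added_usable} TB")
print(f"Unraid: ${unraid_expand_cost} for +{unraid_added_usable} TB")
```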
 

Markess

Well-Known Member
May 19, 2018
1,162
780
113
Northern California
So, as part of the changes, I opted for a pair of DC HC510 SAS drives I picked up from @redeamon that I'm running preclear on now. I'm hoping this will speed up the parity check a bit so I can justify running it more than once a month; 19 hours is just too long.
Did you complete the changeover on this and get that DC HC510 set up as a parity drive, and does it spin down on its own?

I've been reading conflicting info on the Unraid forums (so much conflicting stuff there, and I don't know enough to be able to sift out what's really accurate), and it seems like the LimeTech devs are (or were) saying SAS drives won't spin down and they're not going to address it anytime soon, if ever, because SAS isn't a priority for them. But there are also users on there saying that, for them, SAS drives spin down just fine on their own out of the box.

So, I'm not sure what to make of it. I'm seeing some right priced SAS drives out there, but don't want to invest if they aren't going to spin down. My parity drives just don't need to be up that much.
 

Spartacus

Well-Known Member
May 27, 2019
788
328
63
Austin, TX
I did complete the change; it reduced the parity check time by about 3-4 hours (it averages about 16 hours to complete now).
Due to the length, I did reduce the bi-monthly backup to only monthly (it happens on the 1st for my main box, and the 15th for my backup box).
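For a rough sense of why the check takes that long: a parity check reads every array disk end to end, so the time is roughly the largest disk's capacity divided by the average sustained read speed. The 140 MB/s below is an assumed average, not a measurement from this build:

```python
# Illustrative only: assumes an 8 TB largest disk and ~140 MB/s average
# sustained read across the whole platter (outer tracks are faster, inner slower).
largest_disk_bytes = 8e12
avg_read_mb_s = 140

hours = largest_disk_bytes / (avg_read_mb_s * 1e6) / 3600
print(f"~{hours:.1f} hours")   # ~15.9 hours, in line with the ~16 hours above
```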
I'm of the personal belief that drives should always be spinning rather than being put into standby (I have fairly high usage, so that makes a difference too, even in the early morning hours).
So spin-down isn't an issue for me; that said, I am able to manually spin down the SAS drives, so they at least accept the command and I can audibly hear them whine down.

I got my SAS drives when they were the same price as or cheaper than SATA, 6+ months ago ($130/drive for 8TB SAS).
My drives are model: HUH721008AL5201 for reference.
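For anyone wanting to test spin-down by hand, below is a hedged sketch of one way to do it on SAS drives: sdparm issues the SCSI START STOP UNIT command, which is what SAS drives respond to (hdparm's SATA-style standby commands generally don't apply). The device node is a placeholder.

```python
# Hedged sketch: manually spinning a SAS drive down/up with sdparm (from the
# sdparm package). "/dev/sdx" is a placeholder; run against your actual device.
import subprocess

def spin_down(dev: str) -> None:
    # Sends SCSI STOP UNIT; the platters spin down but the device stays attached.
    subprocess.run(["sdparm", "--command=stop", dev], check=True)

def spin_up(dev: str) -> None:
    subprocess.run(["sdparm", "--command=start", dev], check=True)

if __name__ == "__main__":
    spin_down("/dev/sdx")
```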
 
Last edited:
  • Like
Reactions: edge and Markess