Storage Strategy for Large Plex Libraries


ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
I’d like to gather some insights from those with experience managing large media libraries exposed through a unified media server, specifically Plex/Emby. I’d also appreciate any thoughts on setups that may be adjacent to this idea.

I rapidly filled up about 70 TB of storage, out of 108 TB usable space, on my Plex NAS (a Synology DS1821+ filled with 18 TB drives) within a year, between ongoing weekend hobby BD rips of my physical library and various media hoarding. I’m now thinking about future expansion. The question is how to do this while making efficient use of storage across split libraries. The Plex library is further split into sub-libraries per genre, all stored within one main directory (“Libraries”). My initial target is probably to double the available storage.

Is it better to:
  • Expand the storage by adding additional expansion units/additional NAS?
    • An issue I see here is efficient usage of available storage, as certain sub-libraries will fill up faster than others. Generally connectivity between the pools won’t be an issue, whether over an eSATA expansion unit or an additional NAS over the network.
  • Build a bigger NAS with more storage capacity?
    • An issue here is eventually a bigger NAS will run out of space as well, thus circling back to the original issue.
  • Expand the storage by adding additional expansion units/additional NAS, and tie it together with MergerFS, so that only a single directory is exposed to the media server?
    • I don’t have much hands-on experience with MergerFS beyond the conceptual level, so comments are appreciated. My general understanding is that MergerFS dynamically allocates files to whatever storage is available. Wouldn’t this fragment sub-libraries? (See the sketch at the end of this post.)
  • Use a cloud storage provider such as Google to store the bulk of the library in the cloud, while using MergerFS to expose a single directory to the media server?
    • This introduces complexity by having part of the storage in the cloud and no longer available locally. The scenarios I’ve read about have most files pushed to the cloud, then re-downloaded when a file is requested by the media server, then re-uploaded once the file is no longer needed after a set period of time. There is also an upload limit with most cloud providers, e.g. Google is 50 GB/day.
For expansion units, it would be two Synology DX517 (eSATA). An additional NAS would be another DS1821+, a Synology DS2422+, or a new TrueNAS server. For a cloud-based direction, I’d use a TMM (TinyMiniMicro) node to tie everything together via MergerFS.
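To illustrate the fragmentation worry, here is a toy Python sketch. It is purely illustrative: the branch and file sizes are made up, and it only mimics how I understand MergerFS create policies (something like mfs, “most free space,” versus a path-preserving policy such as epmfs):

[CODE=python]
# Toy simulation of two MergerFS-style create policies (illustrative only;
# branch sizes, file sizes, and paths below are made up).
branches = {"/mnt/disk1": 4000, "/mnt/disk2": 4000}   # free space in GB
files = [(f"Libraries/SciFi/movie{i:02}.mkv", 60) for i in range(10)]  # 60 GB rips

# Policy 1: "most free space" -- every new file goes to the branch with the
# most free space, so writes alternate and one sub-library gets scattered.
free, placed_mfs = dict(branches), {}
for path, size in files:
    target = max(free, key=free.get)
    free[target] -= size
    placed_mfs[path] = target

# Policy 2: path-preserving -- once a sub-library directory lives on a branch,
# keep writing there (until it fills up and has to spill over).
free, placed_ep, sublib_home = dict(branches), {}, {}
for path, size in files:
    sublib = path.split("/")[1]                        # e.g. "SciFi"
    target = sublib_home.setdefault(sublib, max(free, key=free.get))
    free[target] -= size
    placed_ep[path] = target

print("most free space :", sorted(set(placed_mfs.values())))  # both branches used
print("path-preserving :", sorted(set(placed_ep.values())))   # one branch used
[/CODE]

So as far as I can tell, whether a sub-library stays together comes down to which create policy is configured, not MergerFS itself.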
 

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
I organize my media files like this (I don't use plex, just a windows fileserver and kodi on nvidia shields):
"must watch" -> stuff many people recommend or has a high rating on imdb/metacritic
"interesting" -> similar to "must watch", recommendations from forums/people with similar tastes in movies/series
"maybe" -> stuff with an imbd rating of >6.5
"the rest" -> stuff that probably nobody will ever watch (the biggest part of the media collection :D)
An issue here is eventually a bigger NAS will run out of space as well, thus circling back to the original issue.
I built my first "server" with 4x 3tb hdds, now I'm using 14x 16tb hdds. "Running out of space" will always be a problem until you start to sort things out.
Expand the storage by adding additional expansion units/additional NAS?
I don't know how expensive this will be. When I looked up Synology (& QNAP) NAS/storage extensions in the past they were pretty expensive and vendor locked...
Build a bigger NAS with more storage capacity?
This is what I would do (because I like to tinker with hardware).
Something like a Supermicro 846 will allow you to use 24x 18tb (~430tb raw capacity). This "should" be enough for some time :D
You could also use gpus to improve transcoding performance.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
The classical approach would be to have a file server like the aforementioned 846 with an expander backplane which gets expanded with new JBODs as space requirements increase.

You need a lot of space and a lot of cash, but then it's only a matter of which of the two runs out first ;)

The alternative approach is similar only you'd not use jbods but would use distinct boxes and join them over the network with something like ceph or some other distributed file system...


Realistically I guess it's more sensible to define a target amount that you think is the maximum you want to have in the mid-term future...
Let's say 500 TB - that's 25x 20 TB drives, which means you can almost fit it in an 846 or easily fit it in an 847 (36 bays).

OK, let's be realistic and say you get 18 TB drives and use ZFS to keep the data safe, so Z2 with 12 drives per vdev; that's 20 usable drives x 18 TB, so 360 TB.
That's going to keep you running for 3.5 years on the 846 or 5.4 years on an 847.

Or you double that.
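If you want to play with the layout, here's a quick back-of-the-envelope in Python (purely illustrative; the ~100 TB/year fill rate is just a guess based on your first year, and it ignores TiB-vs-TB and ZFS overhead):

[CODE=python]
# Rough usable-capacity / runway estimate -- illustrative assumptions only.
DRIVE_TB = 18          # per-drive capacity
VDEV_WIDTH = 12        # disks per RAIDZ2 vdev
PARITY = 2             # RAIDZ2
FILL_RATE_TB_YR = 100  # guesstimated growth per year

for bays in (24, 36):  # 846 vs 847 chassis
    vdevs = bays // VDEV_WIDTH
    usable = vdevs * (VDEV_WIDTH - PARITY) * DRIVE_TB
    years = usable / FILL_RATE_TB_YR
    print(f"{bays}-bay: {vdevs} vdevs, ~{usable} TB usable, ~{years:.1f} years of runway")
[/CODE]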

I don't think going forward with Synology will provide the kind of expandability you're looking for in the long term...

If you want to go cloud then I'd say cloud all the way, i.e. all content online only, and you access only what you actually watch (basically implement your own Netflix); a mixed mode where you download things to watch locally seems silly.
O/c you can push rarely used files to the cloud as cold storage, that's another matter.
 

Stephan

Well-Known Member
Apr 21, 2017
920
698
93
Germany
If ZFS (RAIDZ2 or RAIDZ3) is not part of the equation with this much data, OP will soon be a datahoarder of slightly corrupted files. ;-)

Noise a concern? If not, big Supermicro case with many slots and just use TrueNAS. As long as it has a backplane or whatever that supports staggered spinup.

Better imho would be a bunch of MD1200s, because then the server can be anything as long as it can talk SAS2. Also no problems with overloading the PSU or having to supersize it because all HDDs want to start at once. Mount your stuff below a directory /mnt/stuff like /mnt/stuff/shelf1, /mnt/stuff/shelf2 ... and export via CIFS or use as a Plex repo. Fill up the MD1200 with disks and when full, buy and connect the next one. They can be daisy-chained with up to four in total. At RAIDZ2 that would be 12 disks per shelf = 10 disks data + 2 disks parity (size-wise), or 40 data disks in total, or 40*18 TB = 720 TB. If you get a SAS 3008 8e with two ports and suitable cables (quadratic SAS to rectangle) you can install 8 MD1200s for a total of 720*2 = 1.4 PB. But for all that is holy, at that point consult a doctor. ;-)
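Quick sanity check of that math in Python, size-wise only (ignores TiB vs TB and ZFS overhead, assumes one RAIDZ2 vdev per shelf):

[CODE=python]
# Shelf-chaining capacity, size-wise only -- illustrative assumptions.
DRIVE_TB = 18
DISKS_PER_SHELF = 12     # MD1200
PARITY = 2               # one RAIDZ2 vdev per shelf
SHELVES_PER_CHAIN = 4    # max daisy-chain depth
HBA_PORTS = 2            # external ports on the -8e HBA

per_shelf = (DISKS_PER_SHELF - PARITY) * DRIVE_TB   # 180 TB usable per shelf
per_chain = SHELVES_PER_CHAIN * per_shelf           # 720 TB per chain
total = HBA_PORTS * per_chain                       # 1440 TB, ~1.4 PB
print(per_shelf, per_chain, total)
[/CODE]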

Cloud... never. MergerFS ugh no, just use mountpoints correctly. eSATA = toy. Synology = no ZFS.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
I organize my media files like this (I don't use plex, just a windows fileserver and kodi on nvidia shields):
"must watch" -> stuff many people recommend or has a high rating on imdb/metacritic
"interesting" -> similar to "must watch", recommendations from forums/people with similar tastes in movies/series
"maybe" -> stuff with an imbd rating of >6.5
"the rest" -> stuff that probably nobody will ever watch (the biggest part of the media collection :D)
Are you organizing this manually? It would be great if there were a way to automatically move files that aren’t viewed as often to slower warm storage, and keep frequently viewed content in hot storage.
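Something along these lines is what I have in mind, as a rough sketch only (the mount points are hypothetical, and it relies on atime/relatime being enabled on the hot filesystem; with noatime it would demote everything):

[CODE=python]
# Rough hot/warm tiering sketch -- hypothetical mount points, illustrative only.
# Demotes files not accessed in N days from the hot pool to the warm pool,
# preserving the Libraries/<genre>/... layout.
import os
import shutil
import time
from pathlib import Path

HOT = Path("/mnt/hot/Libraries")     # fast pool (hypothetical path)
WARM = Path("/mnt/warm/Libraries")   # big/slow pool (hypothetical path)
MAX_IDLE_DAYS = 90

cutoff = time.time() - MAX_IDLE_DAYS * 86400

for src in HOT.rglob("*"):
    if not src.is_file():
        continue
    if os.path.getatime(src) >= cutoff:
        continue                                 # accessed recently, keep it hot
    dst = WARM / src.relative_to(HOT)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dst))              # copies across filesystems, then removes the source
    print(f"demoted {src} -> {dst}")
[/CODE]

Plex would then need both locations in the library (or a MergerFS/union mount over both) so items don’t disappear after being demoted.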

An issue with data hoarding is I have literally hundreds of DVDs and Blu-rays that I got one way or another (some retail, some gifts, most garage sales). That’s not counting my music CD collection, hah. Ripping them used to be a hobby. My former workstation had 4 Blu-ray drives :confused:

I built my first "server" with 4x 3tb hdds, now I'm using 14x 16tb hdds. "Running out of space" will always be a problem until you start to sort things out.

I don't know how expensive this will be. When I looked up Synology (& QNAP) NAS/storage extensions in the past they were pretty expensive and vendor locked...
There is some software on Windows, such as DrivePool, but it doesn’t live up to my expectations. Aside from my workstations, all my systems run some form of Linux.

There are quite a few people who swear by the ease of expansion with Unraid or OMV. I think it’s a nice concept with MergerFS JBODs + SnapRAID, though I would prefer a proper RAID-like storage system out of habit.

This is what I would do (because I like to tinker with hardware).
Something like a Supermicro 846 will allow you to use 24x 18tb (~430tb raw capacity). This "should" be enough for some time :D
You could also use gpus to improve transcoding performance.
I have a spare SM 846 lying around doing nothing, but the chassis I really set my heart on is the Chenbro NR40700/IBM 3448. The top-load layout would be great for density, and the 120mm fans would be great for noise. I’m a bit sensitive to noise. A gentle constant whir is fine, but I cannot stand the jet-intake noise level of SM chassis.

I am transcoding on a Lenovo P330 with P1000.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
The classical approach would be to have a file server like the aforementioned 846 with an expander backplane which gets expanded with new JBODs as space requirements increase.

You need a lot of space and a lot of cash, but then it's only a matter of which of the two runs out first ;)
How quiet can SM 846/847 be made with fan swaps? I have a spare SM 846, and the fan noise bothers me a lot.

I wish I could say money is not an issue, but I’m not above spending money within reason if it brings me enjoyment. The big cost isn’t even the chassis and platform, it’s the hard drives. Consolidating as many drives as possible into one system is nice, though, from the standpoint of networking and not babysitting more systems. I completely understand concerns about HA or availability, but that’s an intention for a future project.

The alternative approach is similar only you'd not use jbods but would use distinct boxes and join them over the network with something like ceph or some other distributed file system...
I only understand Ceph from a conceptual standpoint. My understanding, though I may be completely wrong about this, is that Ceph is a bit… complicated to set up and maintain. That is before the hardware costs of multiple nodes and the network fabric. I was thinking more along the lines of TrueNAS, as I already run/have run FreeNAS, though those systems are no longer expandable (looking to retire them).

Another issue is how my network is set up. Currently I have a bunch of stuff in the garage, connected by a bonded 2x1 Gbps link to my office, which is on the opposite side of the house, where the ISP drop point also comes in. It is capable of being upgraded to 10 Gbps with switch upgrades, which I’m also working on. I’ve been searching for suitable Mikrotik/Brocade switches off and on. Mikrotik seems to have supply issues, while due to being distracted I haven’t kept up on eBay listings for Brocade switches long enough before someone buys them out from under me.

Realistically I guess it's more sensible to define a target amount that you think is the maximum you want to have in the mid-term future...
Let's say 500 TB - that's 25x 20 TB drives, which means you can almost fit it in an 846 or easily fit it in an 847 (36 bays).

OK, let's be realistic and say you get 18 TB drives and use ZFS to keep the data safe, so Z2 with 12 drives per vdev; that's 20 usable drives x 18 TB, so 360 TB.
That's going to keep you running for 3.5 years on the 846 or 5.4 years on an 847.

Or you double that.
Wouldn’t a 12-wide Z2 vdev be a bit too wide? In my existing FreeNAS my vdevs are 6-wide, with 2 vdevs in the zpool.

I think a Chenbro NR40700 would be perfect, but unfortunately it’s very hard to get one since the Chia craze.

My target would probably be to double my existing media storage, so about 200 TB, preferably with room to grow to 400-500 TB. I wouldn’t buy all the drives at once. *If* I go with TrueNAS I probably would add a vdev a year, perhaps two vdevs if I’m feeling ambitious and can get away with it with the SO.

Can you share your thoughts on zpools with many vdevs? Of course there’s no hard limit on how many vdevs a zpool may have, but is there a point where it would be unwise to add additional vdevs?

I don't think going forward with Synology will provide the kind of expandability you're looking for in the long term...

If you want to go cloud then I'd say cloud all the way, i.e. all content online only, and you access only what you actually watch (basically implement your own Netflix); a mixed mode where you download things to watch locally seems silly.
O/c you can push rarely used files to the cloud as cold storage, that's another matter.
You’re right that Synology/prosumer NAS units have a lot of constraints, mainly the large initial cost that could otherwise go toward hardware/drives if I build my own NAS. I’m also not happy with Synology using scare tactics/removing functionality to push people towards buying their rebranded drives. I have to admit, though, that the Synology (which I originally bought for my father, but I ended up buying him a Thunderbolt DAS instead) is extremely easy to admin/maintain. For some that is a huge draw.

Quite a few people use cloud storage as a Plex library, though it’s not that common. It would certainly require internet fast enough that the experience is not impacted. Take a look here [link], which is the basis of my idea to move the bulk of my library into the Google cloud.

I’d prefer to keep frequently accessed files local though, and haven’t seen an example with this scenario yet.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
If ZFS (RAIDZ2 or RAIDZ3) is not part of the equation with this much data, OP will soon be a datahoarder of slightly corrupted files. ;-)
Bitrot is a concern, though not an overwhelming one. I could say I’d just re-rip everything, but that mentality may change given the extra work required.

My existing FreeNAS is on quite old hardware, and can no longer be easily expanded. I’m considering a new TrueNAS build though as part of the equation.

Noise a concern? If not, big Supermicro case with many slots and just use TrueNAS. As long as it has a backplane or whatever that supports staggered spinup.
Such as a SM 846/847 JBOD? I have a spare 846 but it is quite “loud,” at least to me. Would you suggest any other high-density SM chassis besides an 846/847?

Better imho would be a bunch of MD1200s, because then the server can be anything as long as it can talk SAS2. Also no problems with overloading the PSU or having to supersize it because all HDDs want to start at once. Mount your stuff below a directory /mnt/stuff like /mnt/stuff/shelf1, /mnt/stuff/shelf2 ... and export via CIFS or use as a Plex repo. Fill up the MD1200 with disks and when full, buy and connect the next one. They can be daisy-chained with up to four in total. At RAIDZ2 that would be 12 disks per shelf = 10 disks data + 2 disks parity (size-wise), or 40 data disks in total, or 40*18 TB = 720 TB. If you get a SAS 3008 8e with two ports and suitable cables (quadratic SAS to rectangle) you can install 8 MD1200s for a total of 720*2 = 1.4 PB. But for all that is holy, at that point consult a doctor. ;-)
This is a great idea. I wasn’t aware of this disk shelf. However, after some initial research, quite a few people have mentioned MD1200s are very loud :eek:

Cloud... never. MergerFS ugh no, just use mountpoints correctly. eSATA = toy. Synology = no ZFS.
Can you please expand on your thoughts about Cloud and MergerFS?
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
No one has mentioned backups of this data. That just adds to the cost. That is a lot of time to re-rip your Blu-ray collection...

Chris
You bring up a great point. At this moment I would be willing to re-rip, though I haven’t encountered the pain yet. This thought may change…

Ideally I would have an HA setup, or at least cold storage on an older system. Unfortunately, my existing FreeNAS is quite old and cannot be expanded further, at least without throwing tons of money at an obsolete setup. For future consideration I would like to have a secondary NAS to push a backup copy to, perhaps when my 18 TB drives are retired from the main system, but as of now 18 TB is pretty much the biggest drive that can be bought at a reasonable price. Would you consider a full backup in the cloud though? Let’s say rclone to cloud?
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
You need a ceph cluster, it worked for me ;).
Can you please share some details on your Ceph setup? I don’t know anything about Ceph beyond reading high-level overviews of it over the years. I haven’t had exposure to Ceph in the past since none of my clients use Red Hat.

My general understanding is that Ceph needs a lot of hardware and a robust network fabric to work, and that it is complicated to maintain. Can nodes be dissimilar? For example, is it possible to build a small Ceph cluster, then expand in the future by adding bigger and bigger nodes?

Also worried about data recovery if something goes wrong.
 

Stephan

Well-Known Member
Apr 21, 2017
920
698
93
Germany
Such as a SM 846/847 JBOD? I have a spare 846 but it is quite “loud,” at least to me. Would you suggest any other high-density SM chassis besides an 846/847?
You could get rid of the plexiglass air guide and cool the CPU with a cooler that has a fan. Replace the mid fan assembly with a 3D-printed bracket and put Noctuas in there. Replace the backside fan(s) with something slower. The case should now be much quieter.

This is a great idea. I wasn’t aware of this disk shelf. However, after some initial research, quite a few people have mentioned MD1200s are very loud
You need a serial port cable and then set fan speed to 20. Will cut loudness to 1/16th.

Can you please expand on your thoughts about Cloud and MergerFS?
Cloud has small entry costs but big exit traffic costs. Also, why store your stuff on other people's computers? See Atlassian: one person at the company starts the wrong script and all your data is gone. Periodic integrity checks are also much harder. Costs in the long run (5 years) are probably higher or much higher than with your own solution.

MergerFS is a solution to a problem you don't have. At two shelves max at the moment, I think you can manually distribute stuff to A or B using mount points.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Your wish for silent(ish) operations might be a stumbling block here...
Hard drives produce heat; many hard drives produce a lot of heat. Hard drives that run too hot tend to die sooner, or even suddenly.

O/c you can silence chassis as @Stephan said, but you'll need to find a balance between heat and noise. This very much depends on the location of the box(es) and your environmental conditions. Ideally you'd find a place where you're not directly prone to the noise - the garage might be ideal if you can cool it adequately (talking from experience here).

Upgrade to 10G should be no issue in the mid-term; it's prolly a question of what you're looking to pay for switches, but there should be plenty around $200-300...

Re # of vdevs - I don't think it will be an issue at the sizes you're looking at (which is fewer than 10, realistically).
Size of the vdev - 12 (10+2) should be fine; this is more a matter of how many disks you have to replace at once than a real technical issue with distributing data over too many drives. It's within the iX recommendation IIRC, so I wouldn't worry about it. O/c you can also do 9+3, or smaller ones, or 9+2 + hot spare if you're willing to reduce the available space. In the end the goal is to fully use the storage box with this, so whatever you've got...
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,417
468
83
Two things I looked into in the past were StorSimple and Morro Data. Both use a local VM and keep the data in the cloud.

They use local HDD/SSD as a local cache.

Chris
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
You could get rid of the plexiglass air guide and cool the CPU with a cooler that has a fan. Replace the mid fan assembly with a 3D-printed bracket and put Noctuas in there. Replace the backside fan(s) with something slower. The case should now be much quieter.
One concern I have about fan swaps in a chassis with 80mm fans is that 80mm Noctuas have much less airflow. With a board swap I would likely use an HSF anyway instead of a passive heatsink. I also don’t have any experience with 3D printing, so I don’t know how difficult making a custom bracket would be.

You need a serial port cable and then set fan speed to 20. Will cut loudness to 1/16th.
I did some more reading about disk shelves, specifically the MD1200. There are a lot of reports that performance isn’t that great. What would be the benefit of going with a disk shelf from IBM, NetApp, or HGST over a much simpler JBOD chassis connected over SAS?

Cloud has small entry costs but big exit traffic costs. Also, why store your stuff on other people's computers? See Atlassian: one person at the company starts the wrong script and all your data is gone. Periodic integrity checks are also much harder. Costs in the long run (5 years) are probably higher or much higher than with your own solution.
There are a few providers with explicit “data buffet” policies, or unenforced limits. Fair point though; it would be better practice not to rely on such services, as their TOS may change or the company itself may go under.

Interestingly, I was affected by the Atlassian outage. A client has a Jira subscription and it was a big deal that we couldn’t access it. OK, crossing out cloud as the main data store, though I may still consider it as an off-site backup.

One big consideration is how much I’m willing to spend on what is essentially a hobby. In the past I had 12U filled with 36 disks across 3 FreeNAS servers, but I downsized greatly after retiring old systems. The disks were 2 TB, which was massive… 13 years ago. I did a quick calculation of how much 36-48 Exos X18 drives would cost, and it’s not pretty :)

MergerFS is a solution to a problem you don't have. At two shelves max at the moment, I think you can manually distribute stuff to A or B using mount points.
I believe in the blog post I shared, MergerFS plus some cron scripts is used to move frequently watched files around. The less admin work I have to do for something watched mostly by the SO and my brother, the better. Building stuff is the only gratification I get out of it :p
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Your wish for silent(ish) operations might be a stumbling block here...
Hard drives produce heat; many hard drives produce a lot of heat. Hard drives that run too hot tend to die sooner, or even suddenly.
There was a time, directly after the fan-wall phase for enthusiast PCs (I still have a box of old brand-new 80mm Delta fans from then), when I got into silent PCs by helping out here and there at SilentPCReview. I’m no longer so uncompromising about passive/near-passive builds, but the quieter the better. While I may not like it, I’m fine with hearing airflow, just not the high-speed whine of a server room. That being said, if I do build out of a used server chassis meant for a data center, I understand it’s tough to shoehorn in a hard requirement of quietness.

I definitely would be worried about disk temps, especially in a dense drive wall. Have you had any experience with server chassis with 120mm fan walls? Such as the Chenbro NR40700 or their IBM equivalents? I understand why most server chassis don’t have 120mm fans due to space constraints and some standardization between 2-4U chassis.

O/c you can silence chassis as @Stephan said, but you'll need to find a balance between heat and noise. This very much depends on the location of the box(es) and your environmental conditions. Ideally you'd find a place where you're not directly prone to the noise - the garage might be ideal if you can cool it adequately (talking from experience here).
I do have some stuff in my garage; however, the garage is unfinished, and here in Southern California our summers can get a bit hot. Actually, now that I think about it, I’m not sure the stock chassis fans would even be able to adequately cool the drives in the summertime :confused: Late fall/winter/early spring is no problem though, as it can get fairly cold.

Upgrade to 10G should be no issue in the mid-term; it's prolly a question of what you're looking to pay for switches, but there should be plenty around $200-300...
I’m mostly looking for used Brocade switches, or new Mikrotik. My main issue is I haven’t put in the time to actively camp eBay listings for Brocade switches. I tried to take the easy way out with Mikrotik, but they have been having supply issues for quite a while now.

Re # of vdevs - I don't think it will be an issue at the sizes you're looking at (which is fewer than 10, realistically).
Size of the vdev - 12 (10+2) should be fine; this is more a matter of how many disks you have to replace at once than a real technical issue with distributing data over too many drives. It's within the iX recommendation IIRC, so I wouldn't worry about it. O/c you can also do 9+3, or smaller ones, or 9+2 + hot spare if you're willing to reduce the available space. In the end the goal is to fully use the storage box with this, so whatever you've got...
A rough zpool idea I’ve had for a 24/36-bay chassis would be 12-wide vdevs. A big concern is resilver times with larger drives. Previously I’ve designed around 8-wide Z2 vdevs.
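To put a very rough floor under that concern (illustrative assumptions only: a resilver has to rewrite the whole drive at an optimistic sustained average rate, ignoring fragmentation and pool activity, which make it worse):

[CODE=python]
# Very rough lower bound on resilver time -- illustrative assumptions only.
DRIVE_TB = 18
AVG_MB_S = 150   # optimistic sustained average write rate across the platter

hours = (DRIVE_TB * 1e12) / (AVG_MB_S * 1e6) / 3600
print(f"~{hours:.0f} hours minimum to rewrite one {DRIVE_TB} TB drive")
# -> ~33 hours; a real resilver on a busy, fragmented pool can take much longer
[/CODE]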
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Two things I looked into in the past were StorSimple and Morro Data. Both use a local VM and keep the data in the cloud.

They use local HDD/SSD as a local cache.

Chris
I think I’ll have to shelve the cloud-as-main-store idea. There is a way around Google Workspace to get unlimited storage, but it is a bit of an ethical gray zone, as well as risking losing the entire drive if Google deems it a breach of TOS (mainly excessive usage in the petabytes, from what I’ve seen).

I had a quick look into StorSimple and it looks like Microsoft has EOL’d it. The pricing is also excessive.

Morro Data looks like it uses commercial cloud as its backend. That would inevitably add another layer of SLA. I’m not sure I would be willing to sink that much into a hobby project.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Yeah switch prices are ridiculous nowadays ... totally @fohdeesha fault o/c;)

I have not used modded fan walls in my boxes due to the lack of a 3D printer (and being too cheap to get one printed), but then I am usually not too worried about temps in the first place. That's what a Z2 (and hopefully backups) is for. I know, difficult at the sizes you are going for ;)

You could build a separate A/C'ed cold room in the garage for the stuff, there are a couple of examples around... If you've got the server in your office you'll have to deal with that heat in the summer as well.

12-wide vdevs should be fine, but the only way to be sure is to simply test it ;) Depending on your timeframe, getting 12 drives now (as opposed to all at once) and tinkering with the 846 while you still run the Synos as primary storage might be a more sensible approach.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Yeah switch prices are ridiculous nowadays ... totally @fohdeesha fault o/c;)

I have not used modded fan walls in my boxes due to the lack of a 3D printer (and being too cheap to get one printed), but then I am usually not too worried about temps in the first place. That's what a Z2 (and hopefully backups) is for. I know, difficult at the sizes you are going for ;)

You could build a separate A/C'ed cold room in the garage for the stuff, there are a couple of examples around... If you've got the server in your office you'll have to deal with that heat in the summer as well.

12-wide vdevs should be fine, but the only way to be sure is to simply test it ;) Depending on your timeframe, getting 12 drives now (as opposed to all at once) and tinkering with the 846 while you still run the Synos as primary storage might be a more sensible approach.
My office is adjacent to my home gym, and while there's no AC there, the room is quite cool year-round. The garage, on the other hand... can get quite hot. I don't have sufficient space in there to build a sub-room: while it is a "two car" garage, the available space on the side is quite a bit smaller than in homes built in the 1990s, or even the 1970s, so effectively it's a one-car garage. I have space for two racks, but not enough space around them to build anything else.

I'm one of those crazy people who want to fill out all the drive bays at once. I mean, if I wanted to add storage later, I'd just go with something like Unraid right? hah! :cool:

I've been hoping Fractal Design would release a new storage-oriented tower. The Define 7 XL/Meshify 2 XL have plenty of room in them, but it seems like Fractal just likes to rehash the same thing over and over again, unlike the old days with their Array line, which was a Define series optimized for storage. While I'd rather have stuff in racks, a pedestal-type tower would be able to use bigger fans. One can only hope.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
How about getting an actively cooled rack then?

You could also try TrueNAS Scale when the clustering part becomes part of the GUI (it works now IIRC, but it's manual config work); that might enable scaling over multiple nodes as well.