Designing The Poor Man's Drobo v1 (16-33 drive NAS)

Slloooooooow progress. I'm not even planning on buying hardware until summer 2017; I want to have all my ducks in a row for what hardware to buy first. (I also won't have time to configure/test/etc. until then.)

This is a place for public feedback on "the whole system". I'll have a "topic or two to solve first" before wanting to jump too far ahead, but if I'm designing myself into a corner, here's the place to give me a warning or correct gross misunderstandings I may have.

For instance, until I've hammered down the SAS expanders and PSUs I won't even be shopping for motherboards yet. I have separate threads for discussing those specific topics elsewhere, but I'll be compiling/updating the critical parts in this thread as they get narrowed down.


Build's Name: The Poor Man's Drobo (named because Drobo-like progressive upgradeability will be designed in on the lowest possible budget, while actually exceeding Drobo performance, capacity, protection levels, and data recoverability, with no proprietary formats. Designed so disks can be added or upgraded in size in place, and primary hardware upgraded over time as well. Zero concern for cosmetics; willing to tolerate some increased hassle at the beginning, like lacking hot-swap bays and server-rated hardware, to save money for drives.)

Operating System / Storage Platform: TBD, probably virtualized Linux with SnapRAID as the primary storage array

CPU: TBD, at least quad-core to start

Motherboard: TBD, probably consumer-level at first

Chassis: 2-4 common, inexpensive (probably even used) ATX cases to start, with 8-11 effective bays each (so 3x11 or 4x8), literally bolted together to keep the cost overhead per drive down. It is acceptable to power down the server to change drives for now, with the option to upgrade to hot-swap trays later. (When I have the money for a fancy 36-bay hot-swap chassis and can put motherboards in rackmounts I'll be happy; first I have to get enough data online.)

Drives: minimum 16 hard drives to start (common 3.5" SATA, some de-cased from USB enclosures I already have) of 3-8TB sizes, expecting 19-21 hard drives + 3 opticals as a likely configuration; the max configuration allowed for in this build is about 29-30 hard drives + 3 opticals. External Ultrium LTO6 tape drive. One hot-swap tray to ease loading data into or out of the array (i.e. copying a failing drive to its physical replacement). 1-2 drives treated as hot spares: any indication of failure while I'm out of town results in a drive-copy command to reconstruct the data and, if successful, pointing the SnapRAID array at the hot spare until I can come home and physically replace the drive (a rough sketch of this failover logic follows the Power Supply entry below). If the copy fails, a SnapRAID recovery has to run to rebuild it; that's what it's for.

RAM: TBD, probably 24-32GB to start, mostly for virtualization needs

Add-in Cards: still deciding which SAS HBA and SAS expander cards to use, but I will probably be using one 24-port to 36-port expander card for the drives, though triple HBAs aren't ruled out. Eventually a 10Gb Ethernet card when I get a 10GbE switch.

Power Supply: Multiple PSUs (2-4x), each powering a separate drive set and able to power off a stripe of drives that won't be needed for a while (under server virtualization/consolidation some are only intended to be accessed by one session), which also forces a form of staggered spinup whether the drives support it or not. (Since SAS and SATA both tolerate hot swapping, this shouldn't bother them at all as long as I don't cut power before data is done writing.) It also reduces cost per reliable watt, as PSUs of 1kW and above seem to climb rapidly in price.
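Following up on the hot-spare note under Drives above, here's a rough sketch of the remote failover logic I have in mind. Device names, the mount point and the SnapRAID disk name are placeholders, and the one-line snapraid.conf edit is glossed over; this is only meant to show the decision, not be a finished script.

```python
import subprocess

# Rough sketch of the "drive looks like it's failing while I'm away" plan.
# Device names and the SnapRAID disk name below are placeholders.
FAILING_DEV = "/dev/sdX"    # drive throwing SMART errors
SPARE_DEV = "/dev/sdY"      # pre-installed hot spare
SNAPRAID_DISK = "d7"        # SnapRAID name of the failing data disk

def run(cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode

# 1. Try to clone the failing drive onto the spare; ddrescue skips/retries
#    bad areas instead of aborting the way a plain dd would.
clone_ok = run(["ddrescue", "-f", FAILING_DEV, SPARE_DEV, "/root/rescue.map"]) == 0

if clone_ok:
    # 2a. Repoint the disk's entry in snapraid.conf at the spare's mount
    #     (a one-line edit, omitted here), then sync so parity is current.
    run(["snapraid", "sync"])
else:
    # 2b. Clone failed: rebuild just this disk's contents from parity
    #     onto the spare instead.
    run(["snapraid", "fix", "-d", SNAPRAID_DISK])
```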

Other Bits:
GOAL: "minimum overhead cost per drive", maximized to what a single common 120VAC 15-20A house circuit allows. This seems to be about 32-33 drives (the arithmetic is sketched just after these GOAL entries). I'm aware more is possible, but at increasing cost per drive or attention to detail (choosing specific drives over others rather than being able to use "any"), so if I need that much data I'll be splitting into two full-time servers, possibly even before I reach the max configuration. Though I'm hoping 32x 8TB, 256TB raw, is enough for anybody. I want a MAID-like configuration (an array of idle disks, to save power, and because it's no different than manually powering up the drives I want to use to serve data; some will sleep more than others and would have spent all night off anyway). It has to use the drives I already have until they die, despite their higher power use; I hope to put the smaller ones to sleep when not in use so it doesn't matter much anyway.

GOAL: Basic server virtualization, though not to the extreme; what was originally going to be 2-3 systems I'm trying to force into one for power and cost savings. When I start needing much more than four cores, a second server might happen instead of a dual-socket board, as the cost overhead per chip is less for two mobos than for one dual-socket board. This will just be what I call a financially optimized build, where a second or even third server could be powered up as needed. Intended to run things like the firewall, routing, home media/data server, and some basic virtualized sessions from thin clients. A "learning server/home lab" until I upgrade to something fancier; first learn and see how things work, what bottlenecks show up in real-world usage, etc.

GOAL: Optimized for progressive upgradeability; that's one reason for using SnapRAID. The budget Drobo without the downsides. I can literally get a new server board, plug in the same drives, and it shouldn't matter at all. If my hardware becomes deficient (i.e. expanding virtualization needs) it will get upgrades later, don't worry.
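To sanity-check the "one 15-20A house circuit" figure from the first GOAL above, here's roughly how I'm figuring it. The per-drive and system wattages are ballpark assumptions, not measurements of my actual hardware.

```python
# Ballpark check of how many drives one household circuit can support.
# All wattages are rough assumptions for common 3.5" drives, not measurements.
CIRCUIT_VOLTS = 120
CIRCUIT_AMPS = 15        # use 20 for a 20A circuit
DERATE = 0.80            # continuous loads are normally held to 80% of breaker rating
PSU_EFFICIENCY = 0.85    # usable DC watts per watt pulled from the wall

SPINUP_W = 30            # worst-case surge per drive while spinning up
IDLE_W = 8               # typical per-drive draw once spinning
SYSTEM_W = 250           # mobo + CPU + fans + HBA/expander, rough guess

wall_budget = CIRCUIT_VOLTS * CIRCUIT_AMPS * DERATE    # 1440 W on a 15 A circuit
dc_budget = wall_budget * PSU_EFFICIENCY - SYSTEM_W    # what's left for drives

print("all drives spinning up at once:", int(dc_budget // SPINUP_W), "drives")
print("steady state with staggered spinup:", int(dc_budget // IDLE_W), "drives")
```

With those assumptions a 15A circuit lands right around 32 drives if everything spins up at once, which lines up with the 32-33 figure; with staggered spinup the steady-state draw is nowhere near the limit.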

Usage Profile:

Home media/file/backup server - the eventual plan is to use this for serious video production (my original post about a 300gb storage server), but I don't want to trust it with irreplaceable footage until I've tested it, so until then it will just serve unimportant Blu-rays and such for at least a few months. Also a single always-on server that drive backups from the house workstations will mirror to, and a general NAS file share for groupware/collectively accessed projects from three workstations (to start).

House firewall - undecided which/still learning, but probably pfSense

House QoS router/switch - plan to have this prioritize the right packets for VoIP and certain uses over others (console gaming latency, because I'm not the only one in the house and I'm not currently paying for the ISP :) )

Virtualized sessions - via Xen or VMware ESXi and VNC or similar, replacing a few computers in the house with thin clients; fairly light use to start and for a while ("learning lab")

Tape preparation/writing - compiling data to be written to Ultrium tapes in a RAIT array (redundant array of inexpensive tapes: each set of 10 tapes or so will have 1-2 parity tapes, providing an alternative restoration method if the mirror set fails; a toy sketch of the parity idea follows this list). This was originally meant to be on a physically separate server (and may be again), but doing it here avoids some duplication of data that is both served and written. Talks with the SnapRAID author suggest that tape-writable parity sets should be viable for restoring lost volumes, though it will be the first thing to test.

Home automation experiments/security footage motion flagging/recording type stuff (mild use/"learning lab"; this is being played with for a separate side-business project with another house member) - just like the tape prep role, this can also break away into a separate box if it ends up being a problem/too many virtualized sessions going on
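For the curious, the parity-tape idea mentioned under tape preparation above is just plain XOR across the staged tape images, something like this toy sketch. The file names are made up; the real images would be staged on disk and then streamed out to tape.

```python
# Toy sketch of single-parity RAIT: XOR a set of staged tape images together
# into one parity image. Losing any ONE image means the survivors plus the
# parity image XOR back into the missing one. File names here are made up.
CHUNK = 1 << 20  # work 1 MiB at a time so big images never need to fit in RAM

def xor_blocks(blocks):
    out = bytearray(max(len(b) for b in blocks))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def write_parity(image_paths, parity_path):
    sources = [open(p, "rb") for p in image_paths]
    try:
        with open(parity_path, "wb") as parity:
            while True:
                chunks = [f.read(CHUNK) for f in sources]
                chunks = [c for c in chunks if c]   # drop streams that ended early
                if not chunks:
                    break
                parity.write(xor_blocks(chunks))
    finally:
        for f in sources:
            f.close()

# e.g. a set of 10 data tapes -> 1 parity tape
write_parity([f"tape{i:02d}.img" for i in range(10)], "parity.img")
```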

v1, v2, v3?
I have to get my feet wet somewhere, so this is just v1; as I learn from any bottlenecks or mistakes, eventually at least the drives will be migrated into a v2 solution, likely with different, newer hardware. I'm also trying to be careful enough that I don't hit "limits" too fast, so that if I just keep slapping drives in or upgrading existing drives it will continue to serve as a reliable Drobo-esque bit bucket for years.

This won't be the only server in the house, just the only "always on" 24/7 multipurpose one for now. I plan to separately be learning FreeNAS ZFS, but that is less suited for the "easy Drobo-like upgrades" and maximal space utilization SnapRAID is great for. I've heard too many stories of total data-pool loss to want to start with ZFS as a learner. At some point, if loads are too high, the single overutilized server becomes two always-on servers or even three. If more than 32 drives are needed, one file server becomes two on separate house circuits.
 

i386

Well-Known Member
This is another post where I don't know where to start...

When you're talking about low budget, how much money do you have in mind?
 

T_Minus

Build. Break. Fix. Repeat
- Describe what 'low' cost is to you. Is it lowest retail price, or lowest price for an old generation of server gear, i.e. $100 or less for CPU, mobo, etc.?

- Hard to know what to buy if you're not buying for 2-4 more months, as prices change that fast for this used gear.

- Trying to do 2-3 'cheap' ATX style cases is a mess and you'll regret it. Save the $$ and buy a SuperMicro 847 (36 bays); if that's not gonna happen, get 2x 846 (2x 24). If I was stuck on a big ATX style case or a few of them I'd do a custom disk shelf/storage within the Thermaltake Core.

- Get an expander backplane. Now you only need 1 HBA for all drives; grab a 2nd for a spare if you like.
 
This is another post where I don't know where to start...

When you're talking about low budget, how much money do you have in mind?
That sounds depressing. :) I won't re-defend myself ad nauseam, nor do I want to burn out the good will of others suggesting things (even well-meaning) on something I consider less important. Please don't misunderstand, I AM thankful for advice, and I might be right or I might be wrong in some cost-cutting or cost-postponing measures, but some things I'm determined to try regardless of advice to the contrary (the physical drive mounting issues) because some of these things are TEMPORARY. Just like the guys who build a computer literally into a pizza box, I know all the reasons that is not ideal. It's starting to reach a point where "I almost have to do it this way just to show people you can", unless radically new information is presented.

Low budget is just not spending more than I have to. The main budget I'm trying to lower is "costs beyond the drives", the cost overhead per drive. I expect to spend a few thousand, but each $100 I save can buy another 3TB to put online early. The first need for new space for film-shoot footage could be as soon as this fall, which is why I want the server done this summer.

I already have something like 20x 3TB drives; they're just all in USB cases and need to have their data migrated off before I repurpose them as nearline storage. I'm budgeted to buy probably 3 more 8TB drives, the LTO6 drive, an SAS expander, and a compatible SAS HBA controller of a model to be decided.

I'm aware there's a bunch of uncommon or unusual stuff I may be considering doing; I'd prefer criticisms to be sharply delineated to a specific issue. A few things I'm aware of that may be controversial:
- Disagreement with the multiple-PSU plan (my main defense is "despite overclockers having done it in the past")
- Thinking I shouldn't power down a partial rack of drives within a case (this may be a legit issue! so school me; the only difference was either, for instance, two servers/mobos and powering one whole system down when not used, vs. one server/mobo with all the same drives and powering down the unused ones)
- Disagreement with the physical casing issues/multi-ATX and lacking hot-swap bays (my main defense is "I have 11 drives with a consumer mobo inside a tower now and I'm not complaining? Doubling that doesn't seem world-ending?")
- SAS issues (being discussed in a separate thread, with no complaints about the advice I've gotten so far, and things seem straightforward to me)
- Virtualization or usage issues (this is totally open to discussion because I haven't yet asked questions on it, but someone talked me into trying to merge 2-3 servers into 1, so I'm taking their word on it that it's not hard to do)

If it's something other than those, please leave a one-line comment.

Biggest thing to remember is that "if plan X doesn't work I fall back to something else", as long as I don't break hardware in the trying. I may or may not buy someone else's external SAS2 expander chassis if a great used deal turns up; if it doesn't, I'm planning what I'm planning. Also, I'm aware some people will want to talk me into believing I can't possibly survive without dual 22-core Xeons, an absolute minimum of 256GB of RAM, and quad hot-swapped 2000W power supplies, but I'd prefer to come to that conclusion on my own; I think some people are overestimating the usage profile/task scheduling, or consistently missing my remarks about rolling upgrades as needs show themselves.

One thing I've already changed plans on was dumping SATA port multipliers to commit to SAS, after realizing it wasn't as expensive as I thought.


- Describe what 'low' cost is to you.

- Hard to know what to buy if you're not buying for 2-4 more months, as prices change that fast for this used gear.

- Trying to do 2-3 'cheap' ATX style cases is a mess and you'll regret it. Save the $$ and buy a SuperMicro 847 (36 bays)

- Get an expander backplane. Now you only need 1 HBA for all drives; grab a 2nd for a spare if you like.
Just not spending more than I have to before it's necessary to. I know I keep saying that, but I'm not sure how else to put it. My previous "video server" right now has 11 drives mounted in it already (though I've disconnected power to some, they're still in there), plus it had as many as 20 external USB drives connected until I ran out of letters and had to mount a half dozen or so as drive paths in Windows. Now that's a wiring mess! All I'm wanting to do is migrate much of what I have into some internal cases and put it under an SAS expander as I de-case drives. Moving to a tri-ATX case is an improvement over the mess I have now! I'm sorry if I can't afford to go instantly to the perfect solution, but I'll be happy to have an improvement!

Eventually I'll migrate the same physical disk sets, probably into someone's all-in-one SAS expander backplaned hot-swap case. That will be a further improvement! That's all I'm trying to do: rolling improvements over what I can afford at the time, while adding some capacity and, as drives die, replacing/upgrading those drives in size. Looking at things like a 36-bay SAS case on eBay for $800 shipped is nice and all, but I still don't see how it's light-years beyond the ATX cases I already have unused on the shelf, when $800 will buy me dozens of terabytes. All I get is hot swapping. I won't pay $800 now just to hot swap drives for the first few years. I don't see how everyone can insist 22 cold-mounted drives is the end of the world when I have 11 and am not complaining; I need a different, new reason why it's such a problem. Some people are sounding like my dad when he was so sure I'd regret getting a manual transmission car and kept reminding me "but you have to shift all the time", "don't you realize you have to shift all the time?" Yes, I know that; that's what I expected when I checked "manual transmission". I'm just saying I need different reasons why X is bad, beyond what I've already considered, contemplated and analyzed for each issue.

Plus, at the point in the future where I decide I have the $800 and am willing to pay it just to hot swap drives, nothing prevents me from upgrading then, when I want the job to be easier. If I need it that badly I can buy hot-swap trays and add them to the ATX cases too, and if I did that someone would still be saying my idea to save money sucks. Part of my reason for choosing SnapRAID is how easily I can upgrade the hardware, point it at the same drives, and have life continue as normal. I'm much less confident in my ability to make FreeNAS do the same, even though it's probably possible.

I'm already probably using an SAS expander, just not one built into a backplaned hot-swap case. The one thing I'm still hemming and hawing on is whether to put the server mobo in with the drives, or to mount an SFF-8088 to SFF-8087 adapter on the multi-ATX case. I'm favoring the adapter just to ease upgrading or cold-swapping the server. Alternately, if I don't do it immediately, that will be one of the first upgrades I do, so I can work on the server and the drive enclosures separately more easily.

For used gear recommendations: I'm not seeking immediate used-gear advice, just general strategy advice. Maybe cheap 24-bay cases will turn up by summer and I'll abandon chunks of my strategy, for instance, or buy some ex-server mobo available by then. This is more of a worst-case "I can always fall back on this" plan if I don't find the deals I want in time. I'm just putting up the total picture of the build and will slowly update things if/when the big strategy changes.
 

T_Minus

Build. Break. Fix. Repeat
Sorry, I think you're going to be waiting for replies... I can't see people spending time and energy to provide you help and opinions when you're not listening and couldn't care less about changing your mind/build unless it fits 'your rules'.

- You ask people to tell you about specific issues with your build, not general advice, yet your build has nothing definitive except 3TB drives, so how can we provide you any feedback on a million ideas if it can't be general and we don't know WHAT you're doing exactly?

- You then state you're not going to listen to people and will "do your own thing" to "show them" too; that doesn't help.

- Things don't progress linearly the way you have them envisioned in your head. This is going to hold true with cramming 22 drives into an 'on the shelf' tower case with no hot swap/cages; it's going to hold true not only in mounting those drives but in powering them, cooling them, running cables to them, etc... What you don't realize is that the COST associated with cabling is HUGE. You're easily going to blow $100+ on cables, or lots of time making HDD mounts, you're going to spend $$ on a SAS expander, etc... You can find the 847 for $500-600 shipped if you watch. It may sound stupid expensive but it COMES WITH 1000W+ rated power supplies (2), it COMES WITH the built-in expander, comes with built-in power, comes with built-in appropriate cooling, etc...

By giving you generalizations you can calculate your numbers, savings, etc... this is why we give 'general' advice, so that the person posting can make an educated decision on what route to go. From what you're saying it sounds like you are not even spending your own time figuring your own project out, and instead are posting here asking us to do it, and then asking us to tell you SPECIFICALLY what/where/why to change something. This isn't learning or helping, for you or anyone else reading the thread now or in the future.

You also don't see people cramming 20+ drives into numerous ATX cases with multiple power supplies, a snake of wiring, etc... because most people with 20+ drives value their data, and understand all the 'weirdness' can make way for many, many more mistakes. Mistakes that may crash your system, ground it out, fry it, catch it on fire, and other avoidable mistakes. It's like putting too narrow a tire on a rim; it may work for a short time but eventually the whole thing will be torn apart, and a total loss.

If you want a free thermaltake core (the huge huge case) you could probably make mounts and fit 20+ drives in there, fits 2x PSU, etc... PM me and you can pick it up anytime, I purchased it brand new and never used it. It's yours if you can get it from me. No shipping at all for this huge item.


Good luck!
 

Evan

Well-Known Member
I have to agree with the other guys; if you want a number of drives, just buy a proper storage case to start. As mentioned, you would be surprised by the cost of all the little things like cables. They really do add up: a few $ here, a few $ there, and suddenly it's $100+.

I am also of the opinion that in the end it's cheaper to use 8TB drives and a lot fewer bays. So much less to go wrong, and way smaller. 3TB drives (unless they are 2.5", where they have some use) are really just throw-them-in-the-bin material to most people. Which may mean you get them free, but then you can at least afford a decent case.
 
Sorry, I think you're going to be waiting for replies... I can't see people spending time and energy to provide you help and opinions when you're not listening and couldn't care less about changing your mind/build unless it fits 'your rules'.

- You then state you're not going to listen to people and will "do your own thing" to "show them" too; that doesn't help.
Well, that's a complete inversion of tone there. >_< I thought it was pretty obvious from my links what I found frustrating. Backblaze built their storage pods doing things "many people" would consider kind of silly (down to wrapping drives in rubber bands to control vibration in a 45-drive case), yet they "showed people" a uniquely low-cost way to solve the identical problem of absolute minimum overhead cost per drive. All I'm doing is trying to do the same: design up my own 'Backblaze' where the main limit is just normal 15-20A house plug power. I considered their public posts about how they solved their problems to be uniquely cool too; I mean, I wouldn't be accusing them of "doing their own thing to show everyone" in a detrimental tone.

I'm still waiting to figure out "why it's such a big deal" that the only proper case is an $800 36-bay hot-swap external backplane case. Is the only answer to that "so I can hot swap drives"? Is not wanting to spend $800 just so I can hot swap drives proof I'm being belligerent and ignoring all advice?

I'm trying to understand where the magic occurs. If I build three servers with 11 drives each instead, will people still be telling me to buy an $800 case and leave it 2/3 empty? There is some jump I don't understand. When I looked at the numbers for SATA port multipliers, combined with reports of problems, it was self-evident to me why SAS was superior and I "learned" quickly. Whatever it is that is obvious to you all about why an SAS backplane case is the only possible working solution is not obvious to me.

Let me also try the exact opposite - is there any situation where you would give the opposite advice and tell me to just throw the drives into a large ATX case?
 

Evan

Well-Known Member
There's absolutely nothing stopping you from running external SAS to another case and using an expander; it's done all the time in a professional way with an external disk shelf.
I think people are really just saying: why bother, when there is a far easier way to manage it?

First, I am sure you can find what you need for less than $800, but aside from that, how much do 3 cases cost, 3 SAS expanders instead of 1 (or none if it's included in your proper case's backplane), 2 SAS cards instead of 1, 3 PSUs instead of the included one(s), external and internal SAS cables, etc.?

By all means go ahead and do it; it will work, but will it be worth the effort and cost?

Maybe I am jaded from building everything from PCs to servers and storage systems, but this is why it's even nice to use a mainboard with inbuilt SAS; the fewer cards and components, the better for assembly, risk of problems, power consumption, etc.
 

T_Minus

Build. Break. Fix. Repeat
Well, that's a complete inversion of tone there. >_< I thought it was pretty obvious from my links what I found frustrating. Backblaze built their storage pods doing things "many people" would consider kind of silly (down to wrapping drives in rubber bands to control vibration in a 45-drive case), yet they "showed people" a uniquely low-cost way to solve the identical problem of absolute minimum overhead cost per drive. All I'm doing is trying to do the same: design up my own 'Backblaze' where the main limit is just normal 15-20A house plug power. I considered their public posts about how they solved their problems to be uniquely cool too; I mean, I wouldn't be accusing them of "doing their own thing to show everyone" in a detrimental tone.

I'm still waiting to figure out "why it's such a big deal" that the only proper case is an $800 36-bay hot-swap external backplane case. Is the only answer to that "so I can hot swap drives"? Is not wanting to spend $800 just so I can hot swap drives proof I'm being belligerent and ignoring all advice?

I'm trying to understand where the magic occurs. If I build three servers with 11 drives each instead, will people still be telling me to buy an $800 case and leave it 2/3 empty? There is some jump I don't understand. When I looked at the numbers for SATA port multipliers, combined with reports of problems, it was self-evident to me why SAS was superior and I "learned" quickly. Whatever it is that is obvious to you all about why an SAS backplane case is the only possible working solution is not obvious to me.

Let me also try the exact opposite - is there any situation where you would give the opposite advice and tell me to just throw the drives into a large ATX case?
o_O I thought I did explain where "the magic occurs" and the reasoning behind going with a purpose-made case.


Comparing yourself to Backblaze... I guess that could work? I mean, their pods weren't exactly known for their quality, or for working too well in the first couple of revisions, and even then they were not "cheap". Or are you comparing yourself to Backblaze because you have VC funding now and can fix the issues, lol :) Either way it's kind of ridiculous, because you're going to have 1 shelf, maybe 2, sometime in the distant future... you have absolutely 0 need to be "like Backblaze" in any such way; you have the choice to "do it right".

In the end it's up to you... stringing tower cases together with SAS expanders will work fine, but why, when you can do it right and it fits your needs? Why not?
 
There's absolutely nothing stopping you from running external SAS to another case and using an expander; it's done all the time in a professional way with an external disk shelf.
I think people are really just saying: why bother, when there is a far easier way to manage it?
I totally get that; I'm just frustrated that I can't do things "the right way" already. I'll give you an example.

When I last investigated this 'problem' of storing tens of gigs of video files I was led to FreeNAS and ZFS, which rapidly turned into an endless series of "you can't", "you can't", "you can't" until I gave up and just literally stuck external USB drives off Windows, saving files that way. Which led to a situation where I had massive bitrot and data corruption, and after several drives died without warning I gave up trying to do more and put the drives on the shelf.

I was frustrated that the FreeNAS advice all amounted to "you can't afford to do anything - do it perfectly, and hire a professional and pay him $100,000 a year to manage your problem", which is ridiculous when I had explained the kind of shoestring I was starting with.

Basically ANYTHING is better than the original solution I came up with, of just slapping NTFS-formatted external USB drives onto Windows until I had major corruption and no longer even knew which takes were still good or not. I felt some of the people I'd asked for advice made things harder than they had to be by insisting that the way they did it, which they could afford and I could not, was the only right way, and they were dismissive and said I was ignoring advice even when I later changed the question.

I very much need progressive upgrades I can pay for piecemeal as cash becomes available. There are lots of priorities, and dropping huge monoliths of money all at once, like $10k on a proper server, just makes it impossible. "Best" is buying all brand-new 8TB drives up to my desired size, maybe even just slipping them into a 12-bay Synology chassis or something for simplicity, then adding a second later. "Feasible" is using SnapRAID, starting with the something like 20 drives of 3TB I already have, slowly migrating what data I can salvage from the past off to tape, then, once cleared, using those drives in my array. SnapRAID lets me fill drives to 100% full without complaint, can use drives already full of data, lets me progressively upgrade drive sizes or add drives in place, lets me add or subtract levels of parity, and even if there is a "SnapRAID failure" none of the data has been put in some proprietary format (like it is with ZFS or Drobo, for which no recovery tools exist); it's just standard ext3 or whatever. And it looks a lot simpler to manage than FreeNAS ZFS: less planning, less potential for mistakes. So that was my first barrier.
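As a concrete example of why the management looks simpler to me: the whole nightly routine I'm picturing is basically two SnapRAID commands wrapped in a script, something like this rough sketch. The scrub numbers are just commonly suggested starting points, and snapraid.conf is assumed to already be set up.

```python
import subprocess
import sys

# Rough sketch of a nightly SnapRAID routine. Assumes snapraid.conf already
# lists the data disks, parity files and content files; thresholds below are
# only commonly suggested starting points, not tuned values.

def run(args):
    print("+", "snapraid", " ".join(args))
    return subprocess.run(["snapraid"] + args)

# 1. Update parity to cover whatever changed since the last run.
if run(["sync"]).returncode != 0:
    sys.exit("snapraid sync failed - investigate before scrubbing")

# 2. Re-read a slice of old data so silent corruption gets noticed early:
#    scrub about 5% of the array, only touching blocks not checked in 20 days.
if run(["scrub", "-p", "5", "-o", "20"]).returncode != 0:
    sys.exit("snapraid scrub reported errors - check 'snapraid status'")
```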

The next big plan change was shifting from SATA port multipliers to SAS, especially since things like the PCIe card expanders were way cheaper than I thought. That wasn't a hard sales job, since the financial difference was minimal, and I'd also read about port multipliers creating problems (despite Backblaze being willing to use them). This site was totally the one that sold me on SAS, for which I'm endlessly thankful, as I doubt I would have considered it otherwise.

The third shift, when I can afford it, will be putting drives in a proper external chassis for simplicity. That WILL happen, but while I'm working at a lower hourly income than I'd like I had to ask the question: am I willing to spend hundreds of dollars to reduce the hassle of swapping a handful of drives over the next 3 years or so? Not really, not if that's the biggest benefit. When I start scaling out for possible v2 or v3 options then probably YES; I'd already looked at some "unexpected growth, other side projects, paying work" scenarios, and once I'm looking at 48+ online drives (a maxed-out SnapRAID config of 42 data + 6 parity for the video archive, which is 336TB of data if using 8TB drives, plus some other drives shared outside SnapRAID, which it's less suited for - simple RAID1 mirroring for things like many small files), by that point paying work makes me not want to play sysadmin, power off anything interrupting work, and similar.

I also at that point have to look at upgrading to the latest Ultrium drives out by then, because there's literally a limit to how many tapes I can be baking out every day, especially when I need 2-4 mirror sets depending on importance. Yet someone could make just as big an issue of talking me into LTO7 already, instead of going with LTO6 which costs less per gig. It's just that right now the cost overhead per gig is the biggest barrier to my project, rather than convenience or array performance. Doing it slower and less conveniently still means at least having the capability, actually being able to DO IT, rather than it being impossible and wasting the opportunity/abandoning video footage we won't have an easy chance to get again.

First, I am sure you can find what you need for less than $800, but aside from that, how much do 3 cases cost, 3 SAS expanders instead of 1 (or none if it's included in your proper case's backplane), 2 SAS cards instead of 1, 3 PSUs instead of the included one(s), external and internal SAS cables, etc.?

By all means go ahead and do it; it will work, but will it be worth the effort and cost?
3 cases - I already have them; I'll bolt the main frames together and just leave off the "sides" between them so cables can pass through, allowing me to use one 24-port expander for all three together. Another thread shows as low as $80 for the recommended Intel SAS2 6Gb expander, maybe a bit more for the 36-port HP one. PSUs I'm still deciding on, but tolerable-looking ones with Active PFC are sometimes seen on sale at $30-35 for 500W, which made me think three could power it for $100. That leaves a fair bit of money for cables, unless the 4-star-rated cables on Amazon are actually junk/fraudulent ratings.

Vs. a totally bare case for $800... or even let's say the case was $300, still bare, with no PSU in what I was seeing on eBay. It saves me buying a few cables and an $80 expander... maybe? I mean, if I can reliably score a used expander case for less than past searches indicated, while believing the used PSUs it comes with will outlive a couple of new-in-box ones and/or not cost too much to replace if they crap out, then sure, I'm all for it.

I mean, I just see it as an extension of the kind of philosophy that built Backblaze, or even the Google "corkboard" server of 1999 - people using things so ghetto that any professional would scoff at them. Yet the proof is in the pudding that they did it anyway, and such bootstrapping methods put them into the game and worked well enough at what was most important, until they could grow to where they are now. When I said I wanted to "show people", THAT'S what I was talking about: appreciating the almost ghetto chic of figuring out just how little you can get by with, and surprising the people who insisted you would have needed ten times the budget to even start work, that what you did was impossible. I'm confused why that would be perceived as a hostile "not wanting help" attitude.

Anything that's an upgrade from the crap I have now with a tangle of external USB drives (which looks about like the Google corkboard server) is a step in the right direction. SnapRAID and SAS are two big critical keys to making it happen, simply because they are future-proof. When I don't know exactly how big I will grow or how fast I will need new drives to suck up video data (because the opportunities to shoot the footage are not under my control), I pick the scalable option over the best option. Because if the other parts of my video startup completely fail, I'll feel like an idiot having spent thousands of dollars of college loans on cases and future-proof servers and 128GB of RAM and other stuff that doesn't have a secondary use. Whereas if I slowly inch up to what I need, when I need it, and properly protect the data I already have and will be getting, and have a plan of where to go with it and how to implement it so this time I'm not painted into a corner like last time, I won't regret anything I'm buying - I needed those hard drives to store footage anyway, after all.


I mean, when it comes down to it, I understand that other people don't get "ghetto chic". Maybe they look at the Google corkboard server and scoff, or at how the Backblaze storage pod is put together and say "what contemptible amateurs, spend the money and do it right". Maybe I can convince people that there is something cool about doing more with less, or maybe I've utterly failed (as it seems based on the tone of responses to certain queries so far), but for me it's the difference between doing it and not doing it at all. I either go ghetto or I go home, because I won't be able to afford "the right way" for a decade in the worst case (post-school/post paying off critical previous loans), and I would throw away 100% of every opportunity to shoot and save and keep footage now. I'd accomplish nothing, be nothing, and mostly just wait... for ten years... until I get a chance to go at this again.

When having the hardware to properly save, serve, and verify the integrity of the data seems so close within reach, yes, I'm inclined to ask people to "see it my way", because the budget for the better way just isn't there, and maybe if they help me brainstorm up some other cleverly "cheap but yes it actually works" way to achieve the goal, perhaps others stuck in a situation like mine, with creative needs and a smaller-than-needed budget, will learn about the build and want to emulate it in the future.
 

T_Minus

Build. Break. Fix. Repeat
Your post completely baffles me. People have given you the why; you're simply not caring, not doing additional research, not doing your own Excel spreadsheet with the information provided, etc... If it is frustrating and you can't understand, then you need to say that and actually listen to advice. Last time you posed the question you were frustrated, gave up, and did it how you thought would be fine; it wasn't... why are you going down that same road again...? Baffling...

If you can't understand, it's too frustrating, etc... no one cares if you can't do something... but at the same time you need to realize this, and if that's the case then yes, in the end it will cost you more.
 

MBastian

Active Member
When I last investigated this 'problem' of storing tens of gigs of video files I was led to FreeNAS and ZFS, which rapidly turned into an endless series of "you can't", "you can't", "you can't" until I gave up and just literally stuck external USB drives off Windows, saving files that way. Which led to a situation where I had massive bitrot and data corruption, and after several drives died without warning I gave up trying to do more and put the drives on the shelf.

I was frustrated that the FreeNAS advice all amounted to "you can't afford to do anything - do it perfectly, and hire a professional and pay him $100,000 a year to manage your problem", which is ridiculous when I had explained the kind of shoestring I was starting with.
So you want to store huge amounts of video files. Go and buy your multiple cases, but don't try to chain them all together to build one huge storage pool, because that will most likely end very badly.
Think about doing storage tiers. How about keeping just one box running full up, one box in standby, and one off (PXE bootable)? Ask yourself what percentage of the data you need "instantly", what percentage can tolerate a delay of 1 minute max, and whether the rest of the files can reside on cold storage that may take a few minutes. Size the storage arrays on your three nodes accordingly. I bet there are some projects that automatically reshuffle your data according to its access pattern.
 

K D

Well-Known Member
@MBastian's reply is probably the closest to what you are looking for. The others have mentioned very valid points. The cost and maintenance of mad-scientist-type solutions keep ballooning. Those cases are designed with a specific usage in mind and are not designed to be bolted together.

For a short term I used a second case as a JBOD enclosure by running 2 8087 breakout cables from the main box to the second case. Though they were in a stable place where no one except me could access them, it was not a very comfortable feeling trusting my data to a hacked-together solution.

In the past I ran a huge Lian Li tower with 18 500GB-2TB drives and a few USB drives attached to it, with a Core i5 and 6GB of RAM. You don't need 22-core Xeons for a storage server. I would have spent way more on fans and cables for that case than what the case, motherboard and CPU cost. There is a reason why people recommend specific types of cases.

You don't have to get an $800 Supermicro, but even a cheaper Norco, Chenbro or Rosewill case would be a better option. There are some tower cases that can support up to 16 drives without hot swap. You can get them for a couple of hundred bucks NIB.

The last purchase I made was a Supermicro SC836 with 48GB of RAM, dual Xeons and redundant 1200W power supplies. It cost me 500 bucks on eBay. It is fully capable of running FreeNAS Corral out of the box with capacity to spare for virtualization.

It all boils down to what your use case is. Are you looking for a solid solution to maintain your data, or are you looking to build a frankenmachine just because you can? Both are perfectly fine. This is the right forum to get assistance for both. But I have noticed that you get a good response when your questions are specific. Make a problem statement and request assistance.

Just my 2c as a new member here.
 

cperalt1

Active Member
This is really what I think you are trying to achieve: Addonics Product: Storage Tower XIII. Another solution would be something like a Rackable SE3016, where you replace the backplane with something newer but are left with a cheap case with room for drives and a board.
 

T_Minus

Build. Break. Fix. Repeat
LOL @cperalt1, I just moved one of those Rackable units and a dozen+ 300GB SAS 15K RPM drives; I cannot believe how heavy everything is compared to "new" stuff. Then I moved an even older 2U that weighed 2x as much as my 846 (w/out drives)... lol!!

Sorry to get off track there...

The free chassis I offered up can technically be stacked with others as well.

Of course they make purpose-made 4U/towers, but they aren't cheap!

If you're only thinking 11-33 drives TOTAL, then why not scour eBay for an SM 846 with SAS2 backplane/power/etc. for <$400 to your door, and then save up and get a 2nd one for a single-SAS-cable JBOD connection when you need more capacity... less than $400 to get started 'right', and then a 2nd or 3rd chassis as JBOD to add capacity/etc. You could even power down the JBOD when not in use :)
 

Aestr

Well-Known Member
@Twice_Shy I have an SC847 36 bay and a decent CPU/Mobo combo with HBA that would get you started. PM me if you're interested in talking about it. Might save you some time and money as your drives grow and still be at a price point you're happy with, although since you don't have an actual price in mind I won't make any promises.
 
Your post completely baffles me. People have given you the why; you're simply not caring, not doing additional research, not doing your own Excel spreadsheet with the information provided, etc... If it is frustrating and you can't understand, then you need to say that and actually listen to advice. Last time you posed the question you were frustrated, gave up, and did it how you thought would be fine; it wasn't... why are you going down that same road again...? Baffling...

If you can't understand, it's too frustrating, etc... no one cares if you can't do something... but at the same time you need to realize this, and if that's the case then yes, in the end it will cost you more.
As for being frustrated and giving up: it was over things less technical and more human. For instance, I asked for advice on building a 96TB storage server. I was told this would need 96GB of RAM and a server-grade motherboard. So I then re-asked: how about building three NAS boxes of 32TB each to get there (because I didn't care if it was monolithic or not), so I could use a consumer-level $40 board like you see on sale all the time? But by that point people had decided that only enterprise-grade solutions would work, and I was being told to hire six-figure professionals, because at the time basically nobody but professionals were building ZFS boxes over about 32-40TB. I was then almost stigmatized, because I bet if I had just asked "help me build a 32TB ZFS box" and not told anyone I would build three of them, I could have gotten some helpful answers. I felt that people were overcomplicating the solution.

Which, if anything, seems to be repeating now. Maybe the mistake was taking someone's advice and suggestion here to consolidate my servers? I need 12 drives of 3TB in my "tape preparation server" for writing off LTO6 tapes. That one can be turned on and off when I'm writing tapes. That leaves me a system starting with 3 drives and growing probably to at least 8 (which with new drives is already at least 64TB, probably enough), but maybe 12 drives over time for the file server (plus some virtualization learning). Yet I'm afraid that because I asked the "96" question now (the mistake of thinking 24 drives in one case is cheaper) I can't even ask the "3x32" question (returning to the original plan of separate servers), when I'm regretting ever asking the consolidation question if I'm going to get negative responses for it and make people upset thinking I'm a deliberate blockhead. The whole point of consolidation is to save money, not spend more.


Yet I feel that I'm now already going to have been stigmatized as someone with unrealistic expectations, that 12 drives in a cold-swap case will be too much to handle when I have 11 already. Now if I even start asking about "building three 12-bay NASes" with SAS, I'm going to be "that guy who never listens to advice". Part of why I asked this "big picture" question was to see if things like server consolidation even made sense or not, separate from any other attempted multi-case hacks and such. That, or maybe I should at least postpone server consolidation for 2-3 years (and just run two separate NAS boxes for now) because it adds more complexity than will be gained in saved money. (Really, if people should be talking me out of anything, it's server consolidation, it seems like?)


Let me give an example of why I think the math made sense, even if I was doing a 24-bay case. I'll publicly go through the process, hoping someone can point out what error I'm making. Please pick it apart or tell me what I'm calculating wrong.

[minor edit made]
Amazon.com: CableDeconn 18' Mini SAS 36P SFF-8087 To 4 SATA 7Pin 90 Degrees Target Hard Disk Data Cable 0.5M: Computers & Accessories - SFF-8087 to quad SATA cables are only $9 each; six of them drive 24 drives, so it's $54. One will be coming off the HBA internally, the other five off the expander.
Amazon.com: Thermaltake SMART 700W Continuous Power ATX 12V V2.3 / EPS 12V 80 PLUS Active PFC Power Supply PS-SPD-0700NPCWUS-W: Computers & Accessories - 500W PSU with active power factor correction for $40, or 700W for $55, and I'll bet two 700W units would run it as well as three 500W ones, so $110-120 worth of PSUs that can be easily replaced with similar in the future.
ATX cases - free; I used to build computers and literally have 15-20 still in the garage.
Intel RES2CV240 SAS Expander - one user got one for $80, so I was assuming I could hopefully do likewise. I'll say $80.
Amazon.com: CableDeconn 0.7M Internal Mini SAS 36-Pin To SFF-8087 Cable - Black: Computers & Accessories - cable for HBA to expander, about $9.
Amazon.com: CableCreation Dual Mini SAS 26pin SFF-8088 to 36pin SFF-8087 Adapter in PCI Card Bracket: Computers & Accessories - SFF-8088 to 8087 bracket if I want to later convert this to an external case (which I probably will, though I'd lose four drives).

So about $300-ish shipped, all in, to make this an external case for 20 drives, or about $270 to stick the server mobo in with it and run 24 drives. Probably cheaper by $30 if I don't really need 1400W of power for 24 drives; 30W startup surges x 24 = 720W. So $240 for brand-new PSUs, expander and all cables gives me a cold-swap case. If the PSUs don't have enough SATA power connectors, they're about $5.50 to turn one into four: Amazon.com: StarTech.com 4x SATA Power Splitter Adapter Cable (PYO4SATA): Home Audio & Theater
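Here's the same tally as a quick script so anyone can plug in their own prices and re-check me. The numbers are the rounded ones from the listings above, except the 8088/8087 bracket, which I haven't priced yet, so that line is a placeholder.

```python
# Tallying the multi-ATX-case plan with the (rounded) prices quoted above.
# The bracket price is a placeholder; I haven't actually priced that part yet.
parts = {
    "SFF-8087 to 4x SATA breakout cables (6 @ $9)":  6 * 9,
    "700W ATX power supplies (2 @ $55)":             2 * 55,
    "ATX cases (already own them)":                  0,
    "Intel RES2CV240 24-port SAS expander (used)":   80,
    "SFF-8087 to SFF-8087 cable, HBA to expander":   9,
    "SFF-8088 to SFF-8087 bracket (price TBD)":      15,
}

total = sum(parts.values())
for name, cost in parts.items():
    print(f"{name:<48} ${cost}")
print(f"{'TOTAL':<48} ${total}")

# Quick power sanity check: worst-case simultaneous spinup for 24 drives
print("spinup surge:", 24 * 30, "W vs", 2 * 700, "W of PSU capacity")
```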

When I peruse eBay I see Supermicro 846 cases for cheaper, but with people saying the backplane won't work with drives over 2TB. I also consider most of the $230 to be "no risk" money, because the most expensive part of the build, the PSUs, would just get used to power a few computers anyway. Even if some of the SM cases come with used PSUs, I don't know if they've been in use for one year or nine, so I don't know how much more life to expect out of them. I can't seem to find a replacement PSU listing easily, but I was guessing they probably don't cost $70, and I probably can't reuse them in a desktop if the server build goes to crap and I redistribute the drives to a bunch of desktops in the future.

If somebody could please tell me where I've gone so far off the rails, I would very much appreciate it, because everything seems very straightforward to my brain right now. I don't want to respond to other comments until this core "issue" of my sanity with the numbers is settled one way or another.
 

Evan

Well-Known Member
It's not wrong, it's just not neat and tidy; too many things to go wrong. I don't know what RAID/ZFS sets of disks you will do, but when one power supply fails I hope it takes out a complete set of disks and only that set, not half a RAID group, for example.
 

T_Minus

Build. Break. Fix. Repeat
It's not wrong, it's just not neat and tidy; too many things to go wrong. I don't know what RAID/ZFS sets of disks you will do, but when one power supply fails I hope it takes out a complete set of disks and only that set, not half a RAID group, for example.
EXACTLY THIS.

This is why people on the other forums are trying to steer you toward enterprise gear.

You made a mistake last time and did it wrong, and it's coming back to bite you in the rear now; don't make the mistake again of thinking that the way you have it in your head is the only correct path.

You want to use ZFS for data integrity and safety, but you can't do that when you throw the cheapest consumer hardware at it and hope for the best; it will leave you feeling upset and wanting more.

I personally have no problem with 10-12 disks in 'cold' storage. I have a number of older Norco chassis I got before I got SM that I was 'fine' with, but after dealing with a snake of wires, ghost problems, and other random issues I was BLOWN AWAY by how the SuperMicro chassis simply just worked.

If you don't need 24x drives in one chassis maybe a SuperMicro 836 with 16 drives would be better?

Build 1 of them, and then a 2nd as you need.


If you are SERIOUS about data integrity and keeping your data as safe as possible, as well as storage and VMs, then here are some more suggestions:

- Use ECC RAM (RDIMM is cheapest): $5 for 4GB or $12-15 for 8GB. Start with 16? 32? Whatever you can afford; ZFS and VMs need RAM. You can find $35 16GB RDIMMs, but with 1366 you have 12 slots per CPU, so lots of room to keep it low capacity per DIMM if you wanted.

- The Intel 1366 generation is very cheap; throw in some "L" (low power) CPUs and it's not much $$, as in $100 total for mobo+CPUs.

- Be sure to use a Power Loss Protection capable drive for your SLOG device if you go that route, which I assume you will at some point because of your "VM" usage and a mainly spinning 3.5" pool.

- SuperMicro PSUs are $15-$100+ depending on which model you get. I know everyone here runs what comes in their servers and eventually, if needed, will try to get an "SQ" (Super Quiet) model as it makes things more silent in the house. The Gold 1200W are CHEAP CHEAP CHEAP, so cheap that most places include them or throw them in with bare chassis.

- If you buy a SM chassis from a recycler you can very likely have them "throw in" some SAS cables to get you going.

- $25-$40 for an HBA. If you go with an expander chassis you don't need an expander ($80 to $125) and you don't need 3x HBAs, so right there you've saved approx. $160 between an expander and 2 more HBAs. Put that toward the chassis; it's a huge chunk!


Rosewill has an internal 4U chassis that's not too bad. They have a 12-bay hot swap that I have and like, but for the cost I would never buy it again and would go with a SM instead; still, if you want 3x 120mm fans for silent cooling, an ATX power supply, etc., it fits the bill.
 

Patrick

Administrator
Staff member
Digging in the STH archives (Q2 2010!) you will see posts on the "Big WHS". That project was one of the genesis projects for what is now STH and ran from 2009-2011. STH started in June 2009 as a reference point. The build pre-dated the forums as well so it was certainly well before this community was established.

The Big WHS was a build to make a giant Windows Home Server on a shoestring budget. I think I was a Senior Associate at PwC at the time and I had to buy every part incrementally.

Here are a few articles to browse:
The Big WHS Archives - ServeTheHome
The Big WHS: May Update 60TB Edition

If you look at the evolution and the key learnings from that activity:
  • The build started with consumer parts, then migrated to multiple chassis using SAS expanders. That was still back in the SAS1/SATA2 days, when running SATA drives on SAS was less reliable than with SAS3 implementations.
  • It involved a lot of trial and error. I was constantly buying components off of ebay, or more expensive sources, and finding they would not work. If you look at the evolution, the "cheap" at the time Norco cases were probably the only thing that managed to survive the iterations. There were also fun trials such as mating an ebay redundant PSU to the Norco case which resulted in a small electrical fire.
  • Failures. I had an Areca card die and take the entire thing down, as an example. This was also at the height of WD 2TB drives failing and the Seagate 7200.11's popularity (I actually had low failure rates on mine). Sometimes a chassis would not come up. Beyond the expense, the time involved dealing with failures was surprising. Remember, if you have a 30 drive array and drives fail at 5% AFR, you can plan for at least one drive failure per year (there is a quick worked version of this right after this list). When that happens, and the array is healing, large arrays may take a long time to recover.
  • The used IT equipment market has gotten a lot better. Back then, there were a lot of hard to reconcile parts and less availability to make custom projects like this.
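To put a rough number on that failure point, here is the quick expected-failures arithmetic; 5% AFR is just an illustrative figure, and real rates vary a lot by model and age:

```python
# Expected yearly drive failures for an array, given an annualized failure
# rate (AFR). The 5% default is only an illustrative figure.
def expected_failures(drives, afr=0.05):
    return drives * afr

def chance_of_any_failure(drives, afr=0.05):
    # probability that at least one drive in the array fails within a year
    return 1 - (1 - afr) ** drives

for n in (16, 24, 30, 36):
    print(f"{n} drives: ~{expected_failures(n):.1f} failures/yr expected, "
          f"{chance_of_any_failure(n):.0%} chance of at least one")
```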
If you wanted to build a 30(ish) drive bay system, here is my advice after going through the Big WHS evolution, which eventually led to the STH you see today:
  1. You can easily get above 33 drives assuming you are using <6000rpm drives and have staggered spinup enabled.
  2. It seems like your build will cost more than $700. Here is your yardstick for building your own and making it cost less: Supermicro SRX4200 36-Bay Storage Server 1x Intel Xeon X5650 3GB RAM / 0 HDD | eBay. That is $700 and already has a Westmere-EP Xeon and HBAs installed. You can upgrade to V3 at a later date, but that will cost $400+ and will not be difficult. Adding RAM and swapping CPUs will be dirt cheap.
  3. Even shucking external hard drives for 8TB WD drives, 30x 8TB hard drives will cost you $6000, so $700 up front is not bad.
  4. Optical drives - maybe use your workstation and then push data over the network for this. You will not want to be near 30+ spinning disks.
  5. Remember 30+ spinning disks create a lot of vibration/ heat/ noise. You do not want to be near them.
  6. Keep complexity as low as possible.
  7. For reference, I now have ZFS servers that I can use as replication targets for Proxmox ZFS. All Proxmox ZFS is setup as two drive mirrors. FreeNAS is used mostly for bulk storage with RAIDZ2. I can then use ZFS send/ receive to push backups to one another. I do have one Ceph pool in the STH hosting cluster. That was a nightmare with <5 nodes in the cluster. Ceph is super cool, but it is also harder to get running well due to complexity.
I fully appreciate what you are trying to do. Check the STH archives. I went through the process when 60TB was "big" using 1TB/ 2TB drives. STH did largely spawn out of my desire to help others after that experience.

Also, check the STH main site tomorrow morning. We are covering a NAS vendor announcement that will show you where this market is headed.