Designing The Poor Man's Drobo v1 (16-33 drive NAS)


MBastian

Active Member
Jul 17, 2016
Düsseldorf, Germany
- Use ECC RAM (RDIMM is cheapest): $5 for 4GB or $12-15 for 8GB. Start with 16GB? 32GB? Whatever you can afford; ZFS and VMs need RAM. You can find $35 16GB RDIMMs, but with 1366 you have 12x per CPU, so lots of room to keep it low capacity per DIMM if you wanted.
For a more or less single-user NAS box on a single 1GBit network interface with relatively few, large files (the OP said he wants to store media), you can get away with considerably less memory. Just be sure to read up on the risks (limit zfs_arc_max) and limitations (no deduplication).

Edit: This sums it up pretty well: ZFS ARC on Linux, how to set and monitor on Linux? - Fibrevillage
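A minimal monitoring sketch along those lines, assuming a ZFS on Linux box where the ARC counters live in /proc/spl/kstat/zfs/arcstats (the cap itself usually goes in the zfs_arc_max module parameter, e.g. via /etc/modprobe.d/zfs.conf):

```python
#!/usr/bin/env python3
"""Rough ARC check for ZFS on Linux.

Assumes the ZFS on Linux kstat file /proc/spl/kstat/zfs/arcstats exists
(typical on ZoL installs); adjust the path if your distro differs.
"""

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

def read_arcstats(path=ARCSTATS):
    """Parse 'name type data' rows into a dict of integers."""
    stats = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3 and parts[2].isdigit():
                stats[parts[0]] = int(parts[2])
    return stats

if __name__ == "__main__":
    s = read_arcstats()
    gib = 1024 ** 3
    print(f"ARC size : {s['size'] / gib:6.2f} GiB")
    print(f"ARC c_max: {s['c_max'] / gib:6.2f} GiB (effective zfs_arc_max)")
```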
 

Marsh

Moderator
May 12, 2013
i'm just frustrated that I can't do things "the right way" already.
Take the free advice as a fun adventure.

I have purchased many 8GB server ECC RDIMMs for as low as $5-$6 each; a 12 x 8GB RAM stack = $80.
I have purchased dual LGA2011 SuperMicro systems on this forum and on eBay for $100-$150.
I have purchased E5-2650 v3 production CPUs (not ES) for less than $100; that has happened twice.
There are many E5 v1 CPUs that cost less than a big lunch.

Used enterprise hardware has provided the greatest bargains.
 

ttabbal

Active Member
Mar 10, 2016
My old server sounds like what you want to do... A full ATX tower with hot swap adapters for the drives. It holds 12 3.5" HDDs in front, with 2 2.5s for the mirrored boot drives. It worked well for years, and lives on as a backup server.

My new server is a supermicro. Here's why...

The build using consumer parts worked, but ended up costing more than getting a server chassis did. Those adapter bays add up fast. My Supermicro holds 24 3.5" drives in less overall space and cost much less. Look for chassis using TQ backplanes: no SAS1 2TB limits to worry about with those, and they are less desirable, so they are cheaper. More cabling, but not difficult. It's also easy to get power supplies properly designed to power larger disk counts and such. I've had ATX supplies have issues far under their rated power levels. Server supplies can usually be driven over their ratings without issues. Keep in mind that a PSU failure can take out everything attached to it. Saving a few bucks here can cost you a box full of drives later.

Total memory is limited, as consumer motherboards generally only take 4 DIMMs, and you can't use RDIMMs, which are cheaper. This wasn't a huge limitation for my storage server, but it's something to be aware of if you want large amounts of storage later. Same with slots for HBAs and such.

Used server gear is so available now that if you search and are patient you can pick up some great deals. My motherboard, 2xCPU, 98GB RAM was $150, total. It's a 1366 based platform, but it's more than enough for my needs. I wouldn't mind a CPU upgrade for transcoding later, but it's a good place to start. And it has nice features like remote management that simply don't exist on consumer platforms.

So, you want lots of storage. Just how much are we talking about? How fast does it need to be? What network speed? Sounds like just a couple of clients. Big arrays at 1Gb/s are easy. 10Gb/s gets more difficult.

You mentioned being able to grow storage as needed. Consider ZFS mirror pools (RAID10). Add or upgrade 2 drives at a time. Faster, easier to expand, still has all the data safety ZFS provides.

The FreeNAS forum can be a little harsh, but they mean well. Frankly, I don't see any reason why my setup couldn't do the 96TB array you mentioned. Perhaps with an external disk box connected to an SAS expander. As well as the light VM duty it's doing now, consolidating boxes in my environment.

It sounds like at least some of your storage needs are for a staging area for tape backup. I would keep that separate, use your consumer level gear you already have for that. Just make a big stripe set (RAID0) for the HDDs, as they are just to store data moving to/from tape. Maybe 8GB RAM should be enough. I would use ZFS just for its checksums to make sure the data isn't corrupted while in transit.

Enterprise gear isn't expensive if you get used gear. It can often cost less than consumer gear in the same generation if you go a little older. It's also designed for 24/7 use and to last years. Having done it both ways, I wouldn't go back for my "production" server.
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
Experimentation is generally more expensive than buying what you need, as is buying cheap and replacing it 4 times (I do it consistently). When you have a real use case to work from, it can be a lot easier and cheaper to constrain yourself to your real requirements. Do you need fast space for editing or cold storage for completed projects? The thing you seem to be describing is mostly storage that rarely needs to be powered up, i.e. finish a project, store it to disk, then basically forget it. For years I attempted to keep every bit, and along the way realised that after a year or two I simply have no interest in the old data. 1) What are your real storage needs?

I'm not sure what is wrong with creating mirrored vdevs with ZFS, 2 disks at a time, with whatever disks are the best price per TB for cold storage. I would guess that technology will outpace your budget, and that every year there will be bigger, cheaper drives and faster SSDs, so worrying about the long term is a bit of a mug's game... "I need to buy X drives of Y size now against a future need" is usually a poor financial decision.

Similarly, for performance, NVMe is obscenely hard to beat for scratch disk. While building SATA/SAS striped SSDs is doable, it seems a lot cheaper and easier just to stage your working data onto your workstation on an NVMe drive and then offload it to slow disk as required.

The SC846's can be had cheap and for back end storage you don't really care whether it is SAS1, SAS2, or delivered on a donkey.

I doubt this spoke to Drobo, ZFS, or a poor man's 16-33 drive anything, but I suspect that you could scope down to something that meets your needs this year and worry about next year when a) you have more money, b) you have a specific need, or c) disk prices get driven down lower.
 
It's not wrong, it's just not neat and tidy; there are too many things to go wrong. I don't know what RAID/ZFS sets of disks you will do, but when one power supply fails, I hope it has all of (and only) a complete set of disks, not half a RAID group, for example.
EXACTLY THIS.
I totally agree that it's not ideally neat and tidy; my only request was to realize that neat and tidy is itself a luxury that some can afford more than others. What is "no big deal" when you're making $80-100/hr skilled professional money (to avoid even a few hours of hassle messing about with a server) is a bigger deal when you are making 1/10th of that as a student struggling through school without additional sources of loans or work-study, and with old debts you're fighting through. Even if time is limited, you still have more time than money, so the urge to be extremely frugal is high. Every little thing needs to be really important to justify itself. [EDIT:] And just to be extra clear, "maybe you can't afford to do this at all" is not, to me, an acceptable answer (not that anyone is saying it, just that if you knew some of what I've been through to overcome obstacles it would be clearer). I've struggled too hard and too long to get back into a position where it might even be possible, despite setbacks, with literally hundreds of hours spent in the planning stages alone trying to figure out how to do more with less. We might disagree on which 'hacks' are worth it or are a tested dead end, but I disagree if anyone thinks the motivation is wrong, which was why the FreeNAS boards frustrated me.

Like when I read a random post where someone here used a couple of zip ties to suspend a drive in a case to reduce vibration by making it free-floating, I thought that was brilliant, amazingly frugal, and I'm pretty sure I upvoted it with a Like. My guess is most of the board might roll its eyes at doing it that way, though? You are all running hardware well beyond me, and I'm trying to do the impossible on absolutely too little budget, but that's the story of my life.

Thank you to the people here for still "working with me" to try and help me understand; I'm not saying that begrudgingly, and I hope you realize I don't post these things to make myself out to be the town idiot either. I'm frustrated that there's a communication disconnect, and sometimes it feels like I'm being accused of not thinking anything through at all when I've put more hours than I care to admit into trying to shave pennies. (Because if the first build works well, others in my group may be up for having me build them clones, since we will all be working with similarly large datasets in the future.)


You made a mistake last time and did it wrong, and it's coming back to bite you in the rear now; don't make the mistake again of thinking that your way, in your head as you say, is the only correct path.

You want to use ZFS for data integrity and safety, but you can't do that when you throw the cheapest consumer hardware at it and hope for the best; it will leave you feeling upset and wanting more.

I personally have no problem with 10-12 disks in 'cold' storage. I have a number of older Norco chassis I got before I got SM that I was 'fine' with, but after dealing with a snake of wires, ghost problems, and other random issues, I was BLOWN AWAY by how the SuperMicro chassis simply just worked.

If you don't need 24x drives in one chassis maybe a SuperMicro 836 with 16 drives would be better?

Build 1 of them, and then a 2nd as you need.
In no way do I think the way I think in my head is the only correct path. It just seems like the most affordable and scalable option (start anywhere, upgrade piecemeal, and future downgrading/break-apart lets me reuse absolutely everything, having taken zero risks) for me to do NOW while I see whether I can set aside enough to upgrade LATER. If that money is not eaten by new drive purchases in a desperate attempt to save incoming video footage, it could be thrown at the nicer case in as little as six months. If video is coming in fast (to avoid lost opportunity cost) and the system seems to be reliable, I just have to make do and postpone the external case. Had I had something like SnapRAID on my original system, I wouldn't have had corruption OR lost data from total drive loss. SnapRAID seems really nice in that even if I were to start saving files on some thrift store special, I could migrate the physical drive right into a SAS chassis and it won't miss a beat. Perhaps FreeNAS will do the same, but I'm not confident in my ability to set it up in a way to do that.

At this point I'm not using ZFS; I'm definitely locked into SnapRAID. ZFS was just a way to get to the most important goals of data integrity verification, minor snapshotting ability, and recovery/surviving the loss of whole drives. SnapRAID does all that, without some of the downsides: proprietary, non-data-recoverable drive formats, forced underutilization of drives (i.e. staying under ~80%), and a seemingly greater risk of "total loss from newbie configuration errors". ZFS has other things that would be nice (storage pools), but they are not essential and/or there is other software capable of achieving the same goal.

Also I have to use SnapRAID for my RAIT server (there is no FreeNAS equivalent for that) so using the same system for the home server is a good starting point. One thing to learn instead of two.

I'm also about 90% switched back to abandoning server consolidation attempts right now. Two separate servers are going to be both easier and safer to start: my original plan of a D2D2T system, with a primary 24/7 always-on NAS, which migrates/mirrors data to a secondary NAS (which, being shut off during normal daily use, has further protection from things like lightning strikes), which then writes the tapes. If I need files only on the second disk system, I can turn it on manually; my workaround hacks are not essential right now.

So the 24/7 online server basically becomes something like 2-8 data disks (possibly up to 12 internally; more than that and we may be talking about your external expander cases again), starting with new 8-10TB drives so it's more data on fewer channels for simplicity, and a quad-core CPU for some minor playing around with learning virtualization. (Since the system is always on, power use is a concern, so I'm just sticking with however much work can get done with four cores; if that's not enough, I can build a second server to turn on for virtualization demand.)

The RAIT tape prep server is still as outlined: an expected 12 drives, all 3TB (decased/repurposed from what I already had), and it will be off when it's not receiving data or writing tapes.

I may skip SAS expanders under these conditions and just use two HBA cards for now (as one user in the other thread helpfully suggested, I think some Dell cards for as little as $40 or so for 8 ports each), since that covers both the hard drives and the LTO drive. Perhaps when SAS4 comes out, the SAS2 gear will have dropped even more and I'll score a deal on an external chassis (or two!) that I could still plug right in. I should be able to have enough drives for now to mostly buffer any sudden inflows of data until I can slightly organize it and write it to tape.



If you are SERIOUS about data integrity and keeping your data as safe as possible, as well as storage and VM use, then here are some more suggestions:

- SuperMicro PSUs are $15-$100+ depending on which model you get. I know everyone here runs what comes in their servers and, if needed, will eventually try to get an "SQ" (Super Quiet) model as it makes it quieter in the house. The Gold 1200W are CHEAP CHEAP CHEAP, so cheap that most places include or throw them in with bare chassis.
Thank you for the specific suggestions. I wasn't aware SM PSUs were THAT cheap; 1200W Gold units are not normally a low-ticket item (now I'm wondering if I could throw one in a high-power SLI desktop >_>), but if those are the going used rates then I can reconsider it, I guess. I was going to stuff the NAS out of the way (probably the basement laundry room, where it's cool and away from bedrooms), so I wasn't too worried about noise. I'm serious about data integrity, but my biggest implementation of that is having multiple mirrored tape backups plus parity tape recovery methods, including offsite storage and tape rotation/snapshotting strategies. The SnapRAID hash/parity data will be saved to tape with it, and I'll be relying on that to test and ensure integrity when data is restored or rolled back.
 
Digging in the STH archives (Q2 2010!) you will see posts on the "Big WHS". That project was one of the genesis projects for what is now STH and ran from 2009-2011.
That was actually one of the things that led me over here; it was either linked by someone on smallnetbuilder or something I read before deciding I needed to sign up.

If you look at the evolution and the key learnings from that activity:
  • Remember, if you have a 30 drive array, and drives fail at 5% AFR, you can plan for at least one drive failure per year. When that happens, and the array is healing, large arrays may take a long time to recover.
If you wanted to build a 30(ish) drive bay system, here is my advice after going through the Big WHS evolution which eventually led to the STH you see today:
Oh trust me, I'm completely and utterly sold on the idea that SAS expanders are THE way to go, certainly longer term, and for any larger scale-out. My primary question was: can I at least get by for 6 months to several years doing things my way? Is there any reason to believe that the electrical complexity of 8 SAS drives in a case is that much worse for some reason at 12 or 16 drives? Because in my mind, more wires is just more wires; they are what they are in any case. [EDIT:] Also, again in my mind, "plan for one drive failure per year": I wasn't willing to spend too much to be able to hot-swap one drive failure per year when that was perceived as the primary benefit.
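(For what it's worth, the quoted "one drive failure per year" figure does check out as an expected value; a quick sanity check, assuming the 5% AFR from the quote and treating failures as independent, which is optimistic for drives from the same batch:)

```python
# Quick check of the quoted failure-rate claim: 30 drives at ~5% AFR.
# Assumes independent failures, which is optimistic for drives from one batch.
afr = 0.05      # annual failure rate per drive (from the quoted advice)
drives = 30

expected_failures = drives * afr                 # mean failures per year
p_at_least_one = 1 - (1 - afr) ** drives         # chance of >=1 failure in a year

print(f"Expected failures/year : {expected_failures:.1f}")   # ~1.5
print(f"P(at least one failure): {p_at_least_one:.0%}")      # ~79%
```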

It's actually even cheaper than I outlined; my reason for listing prices was partly in case (apparently "ha ha" now) anyone wanted to clone my design, because I already have something like five Antec Basiq 350s or so laying around from years ago, along with the ATX cases. So my out-of-pocket costs to move into SAS territory were literally just going to be an HBA, an expander, and cables. Then I'd see how far I could take that, and what the soft upper limit is where you can keep adding drives at the same minimal cost, or even a lower cost, before it goes back up at the 'Enterprise' side of the U-shaped curve.

My largest planned scale-out was going to be the lesser of "what I can run off one 20 amp circuit of common plug power" and somewhere around 48-64 drives, because SnapRAID has a max recommended setup of 42 data + 6 parity, plus additional separate drives (not under SnapRAID, such as for much smaller files, e.g. millions of audio sample files; SnapRAID prefers big data blocks per volume). I thought 30 drives was a fair guess; I wasn't sure about 48 or more.
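(The 20 amp ceiling is easy to rough out. The per-drive wattages and overhead below are my assumptions for typical 3.5" drives and supporting hardware, not measurements:)

```python
# Back-of-envelope power budget for one 120V/20A circuit (80% continuous rule).
# Per-drive wattages and overhead are assumptions, not measurements.
circuit_watts   = 120 * 20 * 0.8   # ~1920W usable continuous
drive_idle_w    = 8                # assumed idle/active average per 3.5" drive
drive_spinup_w  = 25               # assumed peak during spin-up
overhead_w      = 250              # assumed CPUs, fans, HBAs, expanders, PSU loss

for drives in (30, 48, 64):
    running = drives * drive_idle_w + overhead_w
    spinup  = drives * drive_spinup_w + overhead_w   # worst case: all at once
    print(f"{drives} drives: ~{running:4d}W running, ~{spinup:4d}W if spun up together "
          f"(circuit budget ~{circuit_watts:.0f}W)")
```

Which suggests 48-64 drives only stays inside the circuit budget if spin-up is staggered.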

It's my responsibility to have something that can both rapidly migrate data to tape (simply to preserve it for the future) and have a disk scale-out plan that can keep it online if we are actively editing, doing VFX, doing color grading, whatever, even for multiple projects. Both SnapRAID and SAS scale out right up to 48 drives, which is a lot of damn storage for a single server. The sole thing I've been trying to figure out is what the "bridging NAS" is to take care of my needs from now until I can set it up "properly" AND need the 100+ TB category beyond what the two servers will fairly easily provide.


One semi related question, does Supermicro standardize and let you upgrade backplanes in the future? Like could I buy a 24 bay hot swap case, and change the backplane from SAS2 to SAS3 later? Or am I stuck with the backplane it came with?

  1. Even shucking external hard drives for 8TB WD drives, 30x 8TB hard drives will cost you $6000. $700 up front is not bad.
  2. Remember 30+ spinning disks create a lot of vibration/ heat/ noise. You do not want to be near them.
  3. For reference, I now have ZFS servers that I can use as replication targets for Proxmox ZFS. All Proxmox ZFS is set up as two-drive mirrors. FreeNAS is used mostly for bulk storage with RAIDZ2. I can then use ZFS send/receive to push backups to one another. I do have one Ceph pool in the STH hosting cluster. That was a nightmare with <5 nodes in the cluster. Ceph is super cool, but it is also harder to get running well due to complexity.
I fully appreciate what you are trying to do. Check the STH archives. I went through the process when 60TB was "big" using 1TB/ 2TB drives. STH did largely spawn out of my desire to help others after that experience.
Thanks for the single positive word of enthusiasm. ^O_O^ I was hoping people would be happier at my out-of-the-box thinking, or consider it something worth cloning, and it's a bit of a downer so far. That said, I still think I was right in my original plan to have separate servers, which lets me postpone the need to consolidate 20-30 drives for longer, and thus not have to buy the case until later. A 36TB tape prep server and a 64-80TB primary NAS are plenty to buffer even massive opportunistic video shoots, right up to Red Weapon 8K footage and 16-camera mocap stage data, if I keep the tapes swapped and writing out nonstop from the tape prep server, which I can probably conveniently do three times per day (before school, after getting home, and before bed) without much annoyance.

To your points: #1, the drives will be bought progressively, not up front (and my goal is to primarily migrate data to tape when possible anyway, because the heavier editing/access doesn't have to be done right away; I just need to save the raw video footage without it corrupting or becoming lost until we can afford to deal with it). #2, it's going to go in the laundry room; it's fine. #3, I don't quite understand all of that yet, but in the future I can learn about better solutions than even SnapRAID potentially (i.e. more automated, for instance). My primary current plan is to have mirrored 2.5" laptop drives from a video shoot, which get copied onto the main NAS the moment we are back (to empty them), plus the secondary NAS. A tape starts writing right away, and we're able to run out and get more data if needed because we have mirrored disk systems, with a backup to tape starting immediately so the laptop drives can be wiped without much fear.


My old server sounds like what you want to do... A full ATX tower with hot swap adapters for the drives. It holds 12 3.5" HDDs in front, with 2 2.5s for the mirrored boot drives.

The build using consumer parts worked, but ended up costing more than getting a server chassis did. Those adapter bays add up fast.

Keep in mind that a PSU failure can take out everything attached to it. Saving a few bucks here can cost you a box full of drives later.

So, you want lots of storage. Just how much are we talking about? How fast does it need to be? What network speed? Sounds like just a couple of clients. Big arrays at 1Gb/s are easy. 10Gb/s gets more difficult.

You mentioned being able to grow storage as needed. Consider ZFS mirror pools (RAID10).

It sounds like at least some of your storage needs are for a staging area for tape backup. I would use ZFS just for its checksums to make sure the data isn't corrupted while in transit.
Yes, I'd considered adding hot swap adapters to an ATX case. One thing I liked about "my" way was that if my whole project bites the dust, every single thing I've bought can be repurposed as normal desktop gear; there's not as much need for a 24 bay backplane case, I mean. Though I think by splitting my servers (not consolidating into one) I can "get by" with a max of 8-12 drives per case for separate purposes and it will be okay. By the time I need a SAS expander case, it should hopefully be several years in the future already.

To the PSU failure, my assumption (if I'm wrong PLEASE correct me) is that anything with proper protection circuitry should avoid that..? I've seen it happen on garbage Chinese PSUs, but I assumed any "real" brand (Antec, Corsair, Seasonic) wouldn't be killing everything attached. If there is a risk then yes, I need to reconsider. (But by the same mindset, is there any risk of the server PSUs doing that, and what's the lifespan of a used PSU anyway? I was under the impression I shouldn't expect a used PSU to last 5 more years.)

My planned storage was dynamic in the sense that I didn't know how much I would need and when, but I wanted to be sure I could scale it out to meet needs. The minimum was "enough to buffer 8K multiple-camera cinema footage" and 16-camera motion capture (so I'd estimated several TB per day as possible) and rapidly migrate it to Ultrium LTO6 tapes, keeping the tape machine working overtime (about 7.5TB/day, or 2.5TB of triple-redundant mirroring at max throughput). The maximum was if we start editing all that in real time, doing VFX, and/or working on multiple projects; in the last case, some money would be coming in to help fund the drive expansion. But the idea was to have a plan in place so that we could just grow the storage monolith without dicking around worrying about data corruption, bit rot, and dead drives. What caught me flat-footed years ago was having incoming footage, nowhere to put it, and no time to research how to properly protect it outside simple mirroring (and I even had two mirrorsets die on me, with the same data corrupt on both, before realizing it). Just like one external USB drive on your computer is fine but 20 on my PC turned into an implosion, I didn't want to be painted into a corner again. So I guesstimated 300TB as an upper figure, though more could even be possible. For reference, Star Wars 7 had a total dataset over 1PB.

How fast? Not very, but getting full drive throughput to several workstations at once would be nice. One workstation might be fed a file at 150MB/second from a single storage drive (which would then be worked on from local flash, then re-exported back to the server once or twice during the day), but the goal was to have that possible for three to six workstations (three people, but one realtime PC and one processing/slave PC each) at once later on; hence a desire to go to 10GigE before too long. Saturating 10Gig is not required (it would be nice, but not required); transfer speeds over 1Gig are probably needed though, so that forces 10Gig so the network is not a bottleneck. Playing with Fibre Channel or InfiniBand may still happen, but the simplicity of just sticking with Ethernet will probably win out in the end.
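(Rough numbers behind the 10GigE conclusion; the line rates are standard, and the ~70% usable-throughput factor is just my assumption for protocol/SMB overhead:)

```python
# How many 150 MB/s video streams fit through 1GbE vs 10GbE?
# The efficiency factor is an assumption for protocol overhead.
stream_mb_s = 150            # one workstation fed from a single HDD (per the post)
efficiency  = 0.70           # assumed usable fraction of line rate

for name, gbit in (("1GbE", 1), ("10GbE", 10)):
    usable_mb_s = gbit * 1000 / 8 * efficiency
    print(f"{name}: ~{usable_mb_s:5.0f} MB/s usable -> ~{usable_mb_s / stream_mb_s:.1f} "
          f"simultaneous {stream_mb_s} MB/s streams")
```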


Do you need fast space for editing or cold storage for completed projects? The thing you seem to be describing is mostly storage that rarely needs to be powered up, i.e. finish a project, store it to disk, then basically forget it. For years I attempted to keep every bit, and along the way realised that after a year or two I simply have no interest in the old data. 1) What are your real storage needs?

I'm not sure what is wrong with creating mirrored vdevs with ZFS, 2 disks at a time, with whatever disks are the best price per TB for cold storage. "I need to buy X drives of Y size now against a future need" is usually a poor financial decision.

Similarly, for performance, NVMe is obscenely hard to beat for scratch disk. While building SATA/SAS striped SSDs is doable, it seems a lot cheaper and easier just to stage your working data onto your workstation on an NVMe drive and then offload it to slow disk as required.

I suspect that you could scope down to something that meets your needs this year and worry about next year when a) you have more money, b) you have a specific need, or c) disk prices get driven down lower.
At the beginning it's mostly going to be an offline archive. It WILL come out again, but it's an archive, not a backup, and it's on tape because that wins on cost-per-gig, the lack of need for regularly powering up drives (it can sit for 5-10 years if needed), easy offsite backup, no dropping a box full of hard drives and ruining both copies of your mirrorset (ask dummy me how I discovered the fragility of hard drives and why cardboard boxes are not a suitable way to move them to the car! a dropped Ultrium tape in its case won't even care), etc. I'm totally set on that plan right now: migration of video files to tape, with data mostly being on disk long enough to serve as a mirror until it's verified as good writes.

The second stage (which will come, I'm not sure when, but will come) is needing more online data for active editing, VFX, color grading, and other work. This will store snapshots and backups (not archives) to tape as well for the duration of a project. At the finish of a project, the workfiles we choose will be stuffed into deep cold storage. It's foolish to dump anything representing a lot of work like that, but keeping it "alive" on spinning rust is annoyingly expensive, so tape is the way. Lost work is lost money; the cost of long-term storage on tape is FAR less than disk, especially when you start talking about replacing disks every 5 years and more. It's worth saving and it's worth storing right, just not overpaying for storage.

Spinning disks for archival storage still have issues with needing to be powered up and maintained, migrated to other disks usually after 5 years, vulnerability to lightning strikes, all sorts of things. Tapes you stick in a climate-controlled storage facility offsite, and they can literally come back 30 years later. No stiction seizing the drive, no bit rot from not getting a scrub every 3 weeks, none of it. It's not a bad idea to migrate data from one set of tapes to a newer set (and a newer standard, i.e. LTO6 to LTO8) every 5-10 years, but if you don't, it isn't the end of the world. So I'm pretty sold on Ultrium tape for multiple purposes: large media library archive, deep archive, and backups/snapshots.

Drives and tapes will definitely only be bought on an as-needed basis, and both the media tape library and the primary NAS grown as necessary for the need at hand. Minimizing cost overhead per drive and "having a scale-out plan" is mostly about the luxury of not having to think much in the future: just buy drives and slap them in the case if I'm working 16 hrs a day between school and film shoots, letting SnapRAID do its thing, and swapping Ultrium tapes 3 times a day. No thinking, no stress, just follow the plan.

NVMe will likely be used in future workstations. The plan is to load the files planned to be worked on early in the day (before someone starts), making transfer rates a little less important (if they aren't already on the drive), then once or twice per day (i.e. over lunch and after work) snapshot working data back to the NAS server, which can happen in the background. Then it gets mirrored to the second NAS and written to tape religiously in some revolving-use strategy in case we have to roll something back (the workstations themselves probably having either RAID1 or at least a local HD also storing data separate from the flash).

To the last comment, the whole point was hyperscalability. :) Starting at 16 drives included drives I already have that would go in the tape prep server, and was a server consolidation plan that I think I'm not going to do anymore, because the added complexity doesn't have any critical benefits at this moment but does have some downsides.
 

acquacow

Well-Known Member
Feb 15, 2017
"Poor mans Drobo"
"Lowest possible budget"
"2-4 chassis"
"33 drives"
"Scale out"
"tape prep server"
"swapping tapes 3 times per day"

I'm not sure I can be of much help.

My recommendation would be to sell everything, and buy something like this:
Supermicro | Products | SuperServers | Mini-ATX | 5028D-TN4T

Load that up with larger drives, adding drives as needed. Run your pfsense VM on it along with everything else. It has plenty of connectivity, is super simple, has fewer failure points, and will be much easier to troubleshoot.

If that box doesn't have enough drive slots, use this case with that same motherboard:
Amazon.com: SilverStone Technology Premium Mini-ITX / DTX Small Form Factor NAS Computer Case, Black (DS380B): Computers & Accessories
 

ttabbal

Active Member
Mar 10, 2016
One semi related question, does Supermicro standardize and let you upgrade backplanes in the future? Like could I buy a 24 bay hot swap case, and change the backplane from SAS2 to SAS3 later? Or am I stuck with the backplane it came with?
Yes, you can swap backplanes out, and many people do. The direct attach versions (non expander) have no current limitations. I have an older TQ backplane (individual SATA type connectors) and don't need to worry about size/speed limits. The downside is more wiring, but you only have to do that once.

Yes, I'd considered adding hot swap adapters to an ATX case. One thing I liked about "my" way was that if my whole project bites the dust, every single thing I've bought can be repurposed as normal desktop gear; there's not as much need for a 24 bay backplane case, I mean. Though I think by splitting my servers (not consolidating into one) I can "get by" with a max of 8-12 drives per case for separate purposes and it will be okay. By the time I need a SAS expander case, it should hopefully be several years in the future already.
Perhaps not much need, but it's not like you couldn't put standard desktop gear in one. Or you could just sell it if you no longer need it. It's not like it suddenly becomes completely useless/worthless. Server cases also come with better cooling for the drives and such. It's not that you can't achieve similar results in a desktop case, just that the server cases are already built that way.

To the PSU failure, my assumption (if I'm wrong PLEASE correct me) is that anything with proper protection circuitry should avoid that..? I've seen it happen on garbage Chinese PSUs, but I assumed any "real" brand (Antec, Corsair, Seasonic) wouldn't be killing everything attached. If there is a risk then yes, I need to reconsider. (But by the same mindset, is there any risk of the server PSUs doing that, and what's the lifespan of a used PSU anyway? I was under the impression I shouldn't expect a used PSU to last 5 more years.)
Quality of all consumer level PSUs has been going down of late. Even the big names. And even those could fail in a way that would pass a surge into the case. I'd trust a used server PSU over any new consumer brand. For consumer stuff, I don't expect NEW units to last 5 years. Server gear is still designed well because big businesses are the customers and they are willing to pay for it. It can still happen even with the best stuff, that's one reason why people recommend UPS and surge protection. It's one more layer of protection.

If you know enough to be safe around line voltage capacitors, open up a couple of consumer PSUs sometime. Compare the "garbage Chinese" ones to an Antec etc. You won't find a lot of difference in components. Then open up a real server PSU from Supermicro/HP/IBM etc.: more complex protection circuits, higher voltage-rated components (less likely to fail in a surge), more robust coils and cores, thicker wiring, the list goes on. Even just weight... I have a decent PC Power and Cooling 700W in my old server. It weighs less than half what a SM 900W does. Now, part of that is the higher rating, but still... Power handling components are an area that hasn't seen as dramatic a shift to miniaturization as other components. It's just physics: if you want to push 100A@12VDC, you need heavy components. Or superconductors I suppose, but I don't want to buy liquid helium for my server. :)

How fast? Not very, but getting full drive throughput to several workstations at once would be nice. One workstation might be fed a file at 150MB/second from a single storage drive (which would then be worked on from local flash, then re-exported back to the server once or twice during the day), but the goal was to have that possible for three to six workstations (three people, but one realtime PC and one processing/slave PC each) at once later on; hence a desire to go to 10GigE before too long. Saturating 10Gig is not required (it would be nice, but not required); transfer speeds over 1Gig are probably needed though, so that forces 10Gig so the network is not a bottleneck. Playing with Fibre Channel or InfiniBand may still happen, but the simplicity of just sticking with Ethernet will probably win out in the end.
If you intend to go faster than 1G, you should consider designing for it from the start. It's a lot harder to do later. Transferring data to/from more than one workstation at once complicates things as seek times will kill you. But if you don't mind doing the transfers in off hours etc, it could work out fine. Some of that can be mitigated with cache, lots of RAM helps quite a bit. SSD caching strategies might be helpful as well, but will need more tuning.

I know you've been discouraged by some of the comments, but keep in mind there are reasons for them. Servers are so specialized that there are reasons they are designed the way they are. And many people have been burned by "buy cheap, buy twice", so they are trying to help people avoid the same traps they fell into.

You have more faith in tape than I do, but the general idea is sound. It is more reliable for long term cold storage, if you keep the tapes in a climate controlled environment. And you seem to be adding redundancy to the tape storage as well, so that's a bonus.
 
Yes, you can swap backplanes out, and many people do.

Quality of all consumer level PSUs has been going down of late. Even the big names. And even those could fail in a way that would pass a surge into the case. For consumer stuff, I don't expect NEW units to last 5 years.
So that's part of why people keep recommending something like the SM864 (or whatever it was) case: the cheap ones I saw that are SAS1 can just have the backplane swapped to SAS2 or SAS3 and it's no big deal (and it's far cheaper to ship just a backplane, I'm sure). Any idea what I should expect to find SAS2 and SAS3 backplanes for right now on the used market? Is there any point buying the older case first and swapping the backplane, vs just waiting and going with SAS2 from the start?

To PSUs, that's very surprising to me. I'd assumed PSU quality had finally gone up; Seasonic now offers 12-year warranties, even BACKDATED to previous PSUs they've been making in their best line, which is a pretty good mark of faith: Seasonic PRIME Series Warranty Upgraded to 12 Years. I'd also think that if makers had design flaws that dumped line current to your drives, they would have developed a reputation by now (Chinese ones have, and that's how we know of them). I can't say I have total faith, just that I'm surprised is all. Nonetheless, I'm sure server gear would have to be more robust by definition.


If you intend to go faster than 1G, you should consider designing for it from the start. It's a lot harder to do later. Transferring data to/from more than one workstation at once complicates things as seek times will kill you. But if you don't mind doing the transfers in off hours etc, it could work out fine.
That's what I had in mind by going SAS2 to begin with. Though with four ports to the case, that's already 24Gb of bandwidth; the idea was that drives would just be used as single drives to start, and later used in software RAID as performance needs came about (possibly at the time of a change to FreeNAS ZFS). I'm aware of seek times, but the editing workload would be like: Workstation 1 is pulling video clips from Drives 1 and 2 and writing output to Drive 3; Workstation 2 is pulling video clips from Drives 4 and 5 and writing output to Drive 6; etc. I don't mind changing the workflow to work around hardware issues, since otherwise everything becomes all SSD RAID arrays on InfiniBand. A low-budget startup on anything has to go for the low-hanging fruit of where it's easier to design a system right now.

Whether used at 1Gig speed, 10Gig speed, or even 40-56Gig InfiniBand speed (along with going either dual-port on the SAS expander case, or switching to SAS3 backplanes later), I could still reuse the SAS expander case for all configurations, so I wasn't afraid of planning to eventually have that bought.


I know you've been discouraged by some of the comments, but keep in mind there are reasons for them. Servers are so specialized that there are reasons they are designed the way they are. And many people have been burned by "buy cheap, buy twice", so they are trying to help people avoid the same traps they fell into.

You have more faith in tape than I do, but the general idea is sound. It is more reliable for long term cold storage, if you keep the tapes in a climate controlled environment. And you seem to be adding redundancy to the tape storage as well, so that's a bonus.
I understand that, but negative tone can be wearing, as can people assuming I'm saying something negative that was in no way meant that way (like when I said I wanted to do it a certain way just to show people you could).

For tape, it's not that I'm a believer, it's that the entire non-consumer industry is a believer. Everyone has standardized around Ultrium for good reasons: it has solved every problem that previous, more proprietary or repurposed systems (4mm, 8mm, DLT, etc.) had, and despite analysts saying every few years that "the death of tape" is imminent, research has shown even 200TB tape to be lab-achievable, so it's probably going to still be with us for another decade. Although the "cost per gig" vs spinning hard disks is not continuing to improve as fast as hard drives have been (still cheaper, but a narrowing gap each generation, with 1 cent/gig for LTO6 right now the value leader and less than LTO7), that's only the first purchase of hard drives, which with a 5 year design life (and no extensions of this ever coming http://www.digitalpreservation.gov/...ngs/5-4_Anderson-seagate-v3_archive_study.pdf ) means I'm buying two or more hard drives to get into the future.
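(To make that concrete, a rough 10-year cost sketch using the 1 cent/gig LTO6 media figure and the 5-year disk design life mentioned above; the disk $/GB number is just my assumption, and the cost of the tape drive itself is deliberately left out:)

```python
# Rough archive-cost comparison over 10 years, using the post's ~$0.01/GB LTO6
# media figure and 5-year HDD design life; the $0.03/GB disk price is an assumption,
# and the tape drive itself is left out of the comparison.
archive_tb      = 100
years           = 10
tape_per_gb     = 0.01                 # from the post (LTO6 media)
disk_per_gb     = 0.03                 # assumed HDD $/GB
disk_life_years = 5                    # from the linked archive study claim

tape_cost = archive_tb * 1000 * tape_per_gb
disk_cost = archive_tb * 1000 * disk_per_gb * (years / disk_life_years)
print(f"{archive_tb} TB for {years} yrs: tape media ~${tape_cost:,.0f}, "
      f"disk (replaced every {disk_life_years} yrs) ~${disk_cost:,.0f}")
```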

The electrical usage of stored tapes is nonexistent, whereas hard drives are meant to be spun up regularly, and to avoid bit rot they need regular scrubbing and rewriting; stiction is always a risk after any time parked; and regular handling is needed and increasingly burdensome when you're dealing with dozens of drives. Robustness means I can easily mail LTO tapes to someone, yet I'm leery of mailing hard drives (though I'm aware it's done all the time, I'm always worried about a single drop either shortening or ending a drive's life). The one and only failure mode of tape is tape breakage, and part of my way of working around that is creating the RAIT arrays using parity tapes, so that, for instance, if any one tape in a set of ten breaks, it can be rebuilt from a single parity tape if another mirror set is not available. (And for the same reason that RAID5/6 is usually worth the financial efficiency over RAID1, as well as increased survival, since RAID6 lets any two drives fail whereas with RAID1 sets if the wrong two fail you lose everything, using a few parity tapes makes both financial and recovery-strategy sense.) There's just no off-the-shelf RAIT software, so I'm working around it with SnapRAID.
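(The parity-tape idea is the same math as single-parity RAID: the parity member is the XOR of the data members, so any one lost member can be rebuilt from the rest. A toy sketch of that reconstruction, purely as an illustration of the concept; it is not SnapRAID's actual on-disk parity format:)

```python
# Toy illustration of single-parity reconstruction (the RAID5/RAIT idea):
# parity = XOR of all data members, so any one lost member can be rebuilt.
# Illustration only; not SnapRAID's actual parity format.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

data_tapes = [b"tape-0 payload..", b"tape-1 payload..", b"tape-2 payload.."]
parity     = xor_blocks(data_tapes)             # written to the parity tape

lost_index = 1                                  # pretend tape-1 snapped
survivors  = [t for i, t in enumerate(data_tapes) if i != lost_index]
rebuilt    = xor_blocks(survivors + [parity])   # XOR of the rest + parity

assert rebuilt == data_tapes[lost_index]
print("Rebuilt lost member:", rebuilt)
```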
 

Patrick

Administrator
Staff member
Dec 21, 2010
Thanks for the single positive word of enthusiasm. ^O_O^ I was hoping people would be happier at my out-of-the-box thinking, or consider it something worth cloning, and it's a bit of a downer so far. That said, I still think I was right in my original plan to have separate servers, which lets me postpone the need to consolidate 20-30 drives for longer, and thus not have to buy the case until later. A 36TB tape prep server and a 64-80TB primary NAS are plenty to buffer even massive opportunistic video shoots, right up to Red Weapon 8K footage and 16-camera mocap stage data, if I keep the tapes swapped and writing out nonstop from the tape prep server, which I can probably conveniently do three times per day (before school, after getting home, and before bed) without much annoyance.
Just trying to save you a lot of time/ money in the near future.

The other option in all of this is to go the complete other direction. Get some inexpensive 2U storage servers (say 3) and go Ceph or another scale-out storage solution. There are a lot of downsides to this. The one major upside is that you can add storage in chunks, e.g. plan to add one extra server every 60 days so you are not buying more machine than hard drives. Then, at some point, you can start rotating systems out after you have a decent number.

I do know a few 3D video firms using Ceph for this reason.

The major downside is that complexity is going up and you are going to have to plan for more machines. You are also going to spend a lot more if you use replicas versus parity. If you could find a few E3 V3 machines with 8 bays each and at least 10GbE that would be a decent place to start.

Likely a bad suggestion on Ceph, just thinking that gets you incremental expansion.
 

ttabbal

Active Member
Mar 10, 2016
I wasn't aware of Seasonic upgrading their warranty. That's good to hear. Hopefully the product is able to back that up. I haven't evaluated Seasonic personally, so I can't comment on them in particular. I doubt there are many that would put line current on the low-voltage side; it's more an issue of out-of-spec voltages under load when hit with a surge or similar. You don't need to go to line voltage to kill things; putting 8V on the 5V rail might do just fine. Going low can cause problems as well. Perhaps not destroying-electronics problems, but random crashing and other odd issues are commonly caused by PSU problems.

Don't get hung up on the SAS version or speed. You will never achieve it with current HDDs. Even SAS1 is fast enough. The only reason to avoid it is for the 2TB limitation. Swapping my system from SAS1 to SAS2 controllers resulted in the same performance. The HDDs are the bottleneck, not the interface. For a big expander box, sure, it matters there, but for direct attached drives, don't worry about it. You can direct attach 8 drives to a cheap HBA.
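(Rough per-lane numbers behind that; the HDD rate below is an assumed typical sequential figure, not a measurement from my system:)

```python
# Why the SAS generation rarely matters for direct-attached spinning disks:
# per-lane usable bandwidth vs. a typical HDD's sequential rate (assumed ~180 MB/s).
hdd_mb_s = 180                       # assumed large 7200rpm drive, outer tracks
links = {"SAS1 (3Gb/s)": 3, "SAS2 (6Gb/s)": 6, "SAS3 (12Gb/s)": 12}

for name, gbit in links.items():
    usable = gbit * 1000 / 10        # 8b/10b encoding -> ~gbit*100 MB/s per lane
    print(f"{name}: ~{usable:4.0f} MB/s per lane -> one drive uses {hdd_mb_s / usable:.0%}")
```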

For tape, my comment was more about older gen stuff and 30 years in a vault. I don't have much experience with LTO, but older tapes I've seen have problems with less time. To be fair, they were probably not stored in ideal conditions either. All that said, for offline storage it's hard to beat. And I suspect the issues I saw could have been handled with the parity setup you are using.

My only complaint about modern tape is that it's just too expensive for home users to get into it. Sure the tape itself might be 1cent/gig, but the drive is thousands. And it's hard enough to get people to backup when I offer them free space to store it. :)
 

mervincm

Active Member
Jun 18, 2014
I don't mean to be critical here but over time I have learned that your stated problem and your solution are not a good fit.

Be creative and innovate on the go with your spare money and spare data. This sounds like your livelihood here. It's important. Pick a proven solution. No one is suggesting you buy new at retail. The solutions proposed are the way to get a proven, stable, supportable solution for a small fraction of what a company would pay new. The more creative and unique your solution, the more hours you will burn on design, build, and maintenance. A large percentage of your costs will be unknown; you will be learning as you go, burning hours and $$$. Also, it will work for a smaller percentage of the time, for reasons that are impossible to know completely ahead of time. You will delay up front as build and design take longer. You will have more outages (complex systems have more dependencies and fail more often) and each outage will last longer (complex systems have more variables, and rare solutions have a smaller community that can help). You are WAY more likely to lose everything. No offense, but I doubt you are an electrical and mechanical engineer. Take advantage of the engineering done by SuperMicro; you will likely make mistakes, and one of them may prove to be a critical one.

It is an incredibly bad use of your time to perform all this research/test/design/build work for a single implementation. Backblaze did it because they planned on scaling to hundreds of implementations. They also did it because they don't really care if they lose a pod completely; they have N+2 if I recall.

Tape is cheap in $/TB, but tape is not simple to spin up at home. It often involves FC, and that's another complete skill set (and gear to buy) that's useless to you otherwise. Also, older used tape drives are not reliable and are expensive to replace. Ask anyone who is forced to use old tape drives at work: lots of issues, lots of hours burned maintaining them and running/rerunning/testing backup jobs.

I would be 100% behind you if this was all spare money, in your spare time, with unimportant data, and this was your passion / hobby.

Put this energy into your passion, and you will see your billable rates rise more quickly, and you can hire someone who has tech as a passion to manage it for you :)
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
An intermediate step that will give you the build quality you need for holding and powering a lot of hard drives, while keeping the computer side open for experimentation, would be to get a JBOD chassis from any major vendor and then attach that to one or more computers.
Set up some eBay searches, and with some patience you will find some excellent deals that net you a solution with far higher build quality than you could achieve on your own without free materials and/or labor at a very well-stocked machine shop.

2 recent examples:
Supermicro 4U 90 bay SAS3 JBOD - $525
4U Supermicro Storage Expander 3.5" 45 Bay Server JBOD CSE-PTJBOD-CB2

Advantages being:
- It just works.
- You have a reliable platform to mix your current collection of drives + room to expand, and you can swap out the small/old drives at anytime
- Can attach it to better servers in the future with minimal hassle
- If something goes wrong, you can get spare parts new or used quickly.

Don't underestimate the total cost of a bespoke solution. As others mentioned above, sure, you can toss stuff together and it will work for a while, but when something goes wrong you end up spending a lot to get it operational again, and not just $ for physical parts but also valuable time. And as a rule, things always break at the worst possible time. If this is holding anything of production value, you don't want to be down messing with a home-brew solution for 2 days to get back in action. Take advantage of the design know-how of the professionals and reap the benefits at an uber discount price, and spend your time on the side that will make more of a difference and be more fun to play with: getting the filesystems and overall system set up. With enough drives to play with, you can try a few different scenarios with modest hardware quickly, find what works, and build up from there.
 

mervincm

Active Member
Jun 18, 2014
Another comment about tape: you need VERY fast disk and networks to support it. If you can't stream data fast enough to the tape drive, it has to stop, rewind, wait for data, then start up again. This will kill your tape performance and lead to damaged drives and tapes before you know it. This shoe-shining effect just destroys tape usability and reliability. Ultrium 6 natively accepts data at 150+ MB/sec, and it can go higher with compressible data. You can't move data that fast without a 10 gigabit Ethernet path from the NAS to the tape drive server.
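(Putting rough numbers on it; LTO6's native figures are roughly 2.5TB and ~160 MB/s, and the network efficiency factors below are assumptions:)

```python
# Can the network keep an LTO6 drive streaming, and how long does one tape take?
# LTO6 native figures are roughly 2.5 TB and ~160 MB/s; efficiency factors assumed.
lto6_native_mb_s = 160
lto6_native_tb   = 2.5

gbe_usable    = 1000 / 8 * 0.9    # ~112 MB/s best case on 1GbE (assumed 90% efficiency)
tengbe_usable = 10000 / 8 * 0.7   # ~875 MB/s assumed usable on 10GbE

print(f"1GbE  ~{gbe_usable:.0f} MB/s -> "
      f"{'cannot' if gbe_usable < lto6_native_mb_s else 'can'} keep LTO6 streaming")
print(f"10GbE ~{tengbe_usable:.0f} MB/s -> "
      f"{'cannot' if tengbe_usable < lto6_native_mb_s else 'can'} keep LTO6 streaming")

hours_per_tape = lto6_native_tb * 1e6 / lto6_native_mb_s / 3600
print(f"One full native tape at {lto6_native_mb_s} MB/s: ~{hours_per_tape:.1f} hours")
```

So a full native tape is roughly a four-and-a-half-hour write when the drive is kept streaming, which only works if the data is local to the tape server or arrives over something faster than gigabit.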
 

Evan

Well-Known Member
Jan 6, 2016
If he said he was building a poor man's RDX tape library then this would be interesting and much more suited to do on a budget.
 

natelabo

Member
Jun 29, 2016
This post is a hoot...
To reiterate what everyone has said but has been nice about...
You won't find a cheaper way; there is no amount of sweat equity that will make what you're talking about work, and there is no reason for you or anyone to waste time talking about it.

SuperMicro chassis, SAS2 backplane, your software of choice... Go to the classifieds section here... One goes for sale every couple of weeks.

Or, because it's fun and I think you need 6 more avenues to obsess over... a Lenovo SA120, Windows Storage Spaces, and an LSI 9200-8e in one of those parts desktops you have...



