Help me build a FreeNAS machine


hinch

New Member
Jul 22, 2016
Hi all. My current storage NAS (a QNAP thingie) is running out of space, so I want to build a new server or two (or one with an additional disk chassis hanging off it) to replace it and give me a little more flexibility.

I've already got a 600mm x 600mm rack with all my music production interfaces, switches etc. in there, so I'd like the servers to fit into that rack, which means I'm limited to half-depth/shallow cases.

Where I'm struggling is working out a) the best cases to use, b) the best motherboard, and c) what the best overall solution is. It doesn't help that I'm in the UK, so we don't have half the options you guys in the States have for a lot of this stuff.

B and C are where I'm confused more than anything else, but the choice of C kinda affects B and A I think, so I'm after a little advice.
Am I better off just buying one case, filling it with a motherboard, hanging a load of disks off the internal interfaces and being done with it? Or am I better off having one case for the motherboard etc. and then an HBA card over to a disk case with a hot-swap backplane? What are the pros/cons of either solution? Hardware suggestions would also be appreciated.

If it helps at all, disk IO is going to be important to me as it's mostly used for music mastering and video rendering/editing. There's no real budget assigned to this, but I'd like to keep it below £1k for the base system excluding drives, so I'm fine with eBay / slightly older parts if they give me options that the latest and greatest don't offer.
 

i386

Well-Known Member
Mar 18, 2016
Germany
B) Depends on what you need (SATA ports, PCIe slots, RAM slots, max RAM, CPU).
C) I like my systems to be as simple as possible, so my advice is to put it all in one chassis.

Is the 600mm x 600mm a hard requirement?
 

hinch

New Member
Jul 22, 2016
B) All entirely flexible. The priority is 12TB+ of fixed storage, but because of what I use it for, IO is an issue. Getting that capacity in pure SSD is expensive, so I'm thinking some form of SSD read/write cache and then spinning rust for long-term storage.
C) I'm open to either; it doesn't have to be simple.

Yeah, the cab size is a hard requirement as it's full of stuff already. I've got about 6U spare, and moving everything over to a new rack would be a nightmare.
 

Rand__

Well-Known Member
Mar 6, 2014
Well, with that size limitation I'd start looking for a chassis that fits first. Then, once you've found one or two that take the number of drives you want, you can see what size motherboard they accept. Then you decide on the OS you want to run, the speed you want, and the price you're willing to pay. At that point, what to buy will pretty much answer itself ;) [more or less, of course - still a bunch of options, but it sounded better that way ;)]
 
  • Like
Reactions: Patrick

hinch

New Member
Jul 22, 2016
Well, with that size limitation I'd start looking for a chassis that fits first. Then, once you've found one or two that take the number of drives you want, you can see what size motherboard they accept. Then you decide on the OS you want to run, the speed you want, and the price you're willing to pay. At that point, what to buy will pretty much answer itself ;) [more or less, of course - still a bunch of options, but it sounded better that way ;)]
Well, as I think I said at first, I'm pretty much limited to half-depth chassis, mostly the empty ones you can buy. This is why I wasn't sure whether I should look for a chassis with built-in drive bays, backplanes etc. and use something like an mITX motherboard, or look at a fully open, empty case for the motherboard/RAM etc. and then a second or third case for the drives connected simply via an HBA. I don't know what works better/best considering the storage capacity and, I guess, the throughput/IO I need too.

Now, which motherboard should I look for: a smaller Bronze/E3 board, or is a C3000 Atom series / onboard CPU better? Should I go with SAS connectors onboard and extend them, or just look for minimal onboard IO for boot and get a couple of PCIe cards for disk shelves?

Size is the last issue on the plate as far as I see it; it's more about what the better approach is and which hardware is more sensible these days, since a lot of the STH FreeNAS guides are at least a year out of date hardware-wise. The OS is easy: it'll be FreeNAS. So I go back to my original questions :)
 

Rand__

Well-Known Member
Mar 6, 2014
Size is still your single hard limitation; that makes it the first hurdle ;)
But for the sake of it, let's move to #2 - the 12 TB size.
Now that's easy if you don't care about performance (4 drives in a 2x2 mirror). That will not perform so well, of course - but you haven't quantified what you need. I for one cannot translate "disk IO is going to be important to me as it's mostly used for music mastering and video rendering / editing" into IOPS or MB/s (or even tell whether IOPS is more important than throughput).

Now if you need IOPS, you need to identify whether the software you use can make use of a cache device (limited working set) or whether all 12 TB needs to be high-IOPS capable (which means a lot of disk drives, or many SSDs for the capacity). Throughput, on the other hand, can be achieved with relatively few disks.
Based on that you can deduce how many drives you need, which in turn tells you whether you can use one chassis or need two or maybe even three (and 2.5" or 3.5")... That then decides whether you can go Xeon-D/C3000, E3 or E5/EPYC (based on the motherboard sizes available / supported features).
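To make that trade-off concrete, here's a rough back-of-the-envelope sketch (illustrative Python only, assuming 6 TB drives at ~180 MB/s sequential each; real ZFS numbers will differ):

```python
# Rough capacity / streaming-throughput estimates for a few pool layouts.
# All per-drive numbers are assumptions for illustration, not benchmarks.

DRIVE_TB = 6      # assumed drive size (4 drives in a 2x2 mirror -> ~12 TB usable)
DRIVE_MBS = 180   # assumed sequential throughput per spinner, MB/s

def mirror_pool(pairs):
    """Striped 2-way mirror vdevs."""
    usable_tb = pairs * DRIVE_TB          # one drive's capacity per pair
    read_mbs = pairs * 2 * DRIVE_MBS      # reads can be served from both sides
    write_mbs = pairs * DRIVE_MBS         # writes land on every side of a pair
    return usable_tb, read_mbs, write_mbs

def raidz2_pool(drives):
    """Single RAIDZ2 vdev (two drives' worth of parity)."""
    usable_tb = (drives - 2) * DRIVE_TB
    stream_mbs = (drives - 2) * DRIVE_MBS  # very rough streaming estimate
    return usable_tb, stream_mbs, stream_mbs

layouts = {
    "2x2 mirror (4 drives)": mirror_pool(2),
    "3x2 mirror (6 drives)": mirror_pool(3),
    "6-drive RAIDZ2": raidz2_pool(6),
}
for name, (tb, rd, wr) in layouts.items():
    print(f"{name}: ~{tb} TB usable, ~{rd} MB/s read, ~{wr} MB/s write")
```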
 

hinch

New Member
Jul 22, 2016
As a rough benchmark, throughput will probably be more important than IOPS, I'd have thought, since it's only a few files, but large ones: streaming them out across the network, processing them on the desktop, then streaming them back. That was the thought behind having some SSD as cache, so the files in use would be served mostly from the SSD and only committed back to the spinning rust after I'd finished working on them.

My initial thought was perhaps lots of smaller 2 or 3 TB disks with a couple of cache disks in front of them, but I'd like to leave room for expansion. As an example, the current NAS is 4x 4TB drives in RAID 5 and it's at about 80% capacity at the moment, and I'm only using WD Reds. There are two issues here. One is that I'm running out of space: after one trip out I may have 100-200 GB of video files to edit, though this may only be 5-6 separate files, and the final result set may only be 200 GB after all the editing and compression is done. At the moment I'm actually deleting a lot of my source files, but I'd like to start keeping them going forward. The other issue is speed. The current NAS has 2x 1GbE aggregated; I'd like to either move up to 10GbE (the switches are 10GbE already) or lots of 1GbE ports so I could aggregate 4+ of them for more bandwidth. As you can imagine, trying to read source files of that size and chop/slice/trim them down takes a while across the network.
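To put rough numbers on the network side of that (just illustrative Python; I'm assuming ~110 MB/s usable per 1GbE link, ~1 GB/s usable on 10GbE, and that an aggregate could actually be used in full):

```python
# Rough copy times for one batch of source footage at assumed usable rates.
BATCH_GB = 200
links_mbs = {"1x 1GbE": 110, "4x 1GbE (ideal aggregate)": 440, "10GbE": 1000}

for name, mbs in links_mbs.items():
    minutes = BATCH_GB * 1000 / mbs / 60
    print(f"{name}: ~{minutes:.0f} min to move {BATCH_GB} GB")
```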

My initial thought was one half-depth 1U case with the motherboard + SSDs in it, then an HBA card or two in the PCIe slots out to expansion: for now just one disk chassis, but leaving me capacity to bolt in another disk chassis in future. In a 1U half-depth case, without having to deal with a motherboard, I should be able to fit at least 8 drives per chassis even using 3.5", judging from the type of cases I've seen available. But I don't know if an HBA is the best option, or whether a motherboard with some SAS ports on it, routing the cable out of one case into another to a simple SAS/SATA expander backplane, would be better.

Don't get too hung up on the size issue; I'll make it fit. I just need to know which hardware I need to make it work, then making it fit is the easy part :)
 
  • Like
Reactions: MiniKnight

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
Could you do a half-depth with an mITX mobo, a boot drive like a SATA DOM, then two hard drives for data? You could then build extra disk shelves by converting other 1Us to JBODs.
 

Rand__

Well-Known Member
Mar 6, 2014
Honestly, to me it sounds as if the best you could do would be to get a larger NVMe drive locally to edit on, and then something with decent throughput (4+ drives) on the remote end to copy and back up your files. This would enable you to be fairly conservative in your server choice (Xeon-D/C3000 for the 10G interface) and still support the primary use case.
Don't play around with LACP (multiple 1G interfaces); that will not help you at all.
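For what it's worth, the reason is that with LACP the member link is picked per flow (typically by hashing addresses/ports), so a single SMB/NFS copy never gets more than one 1G link. A toy sketch of the principle (not any particular switch's hash, and the IPs/ports are made up):

```python
# Toy illustration: per-flow link selection in a 4x 1GbE LAG.
# Real switches hash various header fields; the principle is the same.

def member_link(src_ip, dst_ip, src_port, dst_port, links=4):
    return hash((src_ip, dst_ip, src_port, dst_port)) % links

# One big file copy = one flow = one link, regardless of member count.
flow = ("10.0.0.10", "10.0.0.20", 50432, 445)   # hypothetical SMB session
print("every packet of this flow uses link", member_link(*flow))

# Only many parallel flows (i.e. several clients) spread across members.
for port in range(50432, 50436):
    print("flow with src port", port, "-> link",
          member_link("10.0.0.10", "10.0.0.20", port, 445))
```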
 

hinch

New Member
Jul 22, 2016
Honestly, to me it sounds as if the best you could do would be to get a larger NVMe drive locally to edit on, and then something with decent throughput (4+ drives) on the remote end to copy and back up your files. This would enable you to be fairly conservative in your server choice (Xeon-D/C3000 for the 10G interface) and still support the primary use case.
Don't play around with LACP (multiple 1G interfaces); that will not help you at all.
Yeah, I did consider this, however I kinda shot myself in the foot and didn't think far enough ahead when I built the new editing PC last year. I thought I was being smart going for a small-footprint desktop device, as all the storage was network based :( So I built an mITX system with only one PCIe slot, which obviously has an Nvidia graphics card in it....... see where I'm going here? :(

I like the idea of the Xeon-D or C3000 specifically. I'm still struggling to get my head around external disk shelves, though, and getting enough SATA/SAS/whatever interfaces for storage. This is the bit I'm really struggling with, which is why I kinda wanted some input from experts.
 

hinch

New Member
Jul 22, 2016
Could you do a half-depth with an mITX mobo, a boot drive like a SATA DOM, then two hard drives for data? You could then build extra disk shelves by converting other 1Us to JBODs.

And how do I connect those JBODs to the motherboard? SATA or SAS expanders? If so, which ones are good? An HBA? Fibre Channel and do it old-school SAN? This is the bit I'm really stuck on.
And I guess the answer will dictate which motherboard/CPU/RAM I end up needing, as a lot will depend on how many PCIe slots I need for interfaces to the external shelves?

Unless I just get a board with like 10 SATA3/SAS ports on it and run really long cables from one case to another :)
 

Rand__

Well-Known Member
Mar 6, 2014
The usual way to run a JBOD is using an external SAS connector (HBA or expander based).
Depending on the number of disks you need and the money you want to spend (on a 16i4e card, for example), you'll need 2 to 3 PCIe slots for internal and external drives.

If you go the expander route, then you can attach both internal and external drives to it, provided you're not getting bandwidth limited.
For example, you can get an M1215 + Intel 12Gb RES3TV360 (or a 6Gb one with reduced bandwidth, sufficient for spinners); that will allow you to add internal and external drives (external via an int-to-ext adapter, an external SAS cable, another ext-to-int adapter and then connectivity to the backplane).
Or you get a card with an external SAS connector directly and then don't need the first int-to-ext adapter.
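Rough way to budget ports/slots for that (illustrative Python; the per-port and per-card numbers are generic assumptions, not specs of the exact cards above):

```python
# Rough SAS port/slot budgeting. Assumption: one SAS wide port = 4 lanes,
# and a backplane without an expander serves 4 drives per wide port.
import math

def ports_direct(drives):
    """Wide ports needed if every drive hangs directly off an HBA port."""
    return math.ceil(drives / 4)

def hbas_needed(drives, ports_per_hba=2):
    """PCIe slots (HBAs) needed for direct attach with e.g. 8i-style cards."""
    return math.ceil(ports_direct(drives) / ports_per_hba)

internal, external = 6, 8          # hypothetical drive counts
total = internal + external
print("direct attach:", ports_direct(total), "wide ports ->",
      hbas_needed(total), "HBA slot(s)")
print("expander route: 1 HBA wide port uplinks the expander, which fans",
      "out to all", total, "drives (they share that uplink's bandwidth)")
```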
 

hinch

New Member
Jul 22, 2016
OK cool, and what kind of backplane would I need for a secondary case then?

Assuming I'd have to jump to something like an E3 to give me the PCIe slot count I'd need?
 

Rand__

Well-Known Member
Mar 6, 2014
A backplane with an expander, or an extra expander - whether you want to stack multiple ones depends on the bandwidth you need.
And you can use a 16i4e card on a Xeon-D, but that card is $500 I'd say. Or you go with a lower port count and need expanders (reducing bandwidth) or multiple HBAs (meaning more PCIe slots) - how many depends again on the drives you need/want :)
 

hinch

New Member
Jul 22, 2016
OK, I think I'm keeping up. So let's assume there will be two disk chassis, one full of rust and one full of SSDs, let's say for argument's sake 6 drives in each. The theory being that I'd configure FreeNAS to use the SSDs for fast cache / commonly accessed data and then tier down to the spinners for long-term storage, plus one drive in each chassis would be a hot spare anyway, I guess, to be on the safe side.

If I understand it right, I'd need 2x M1215 + Intel 12Gb RES3TV360, one going to each chassis, then some kind of SAS/SATA backplane to connect to. But again, if I've got it right, there would effectively be 6 cables going from the parent machine to each case, one for each drive? Is there not a way to just have a single cable going to each disk chassis (I think this is what led me to the HBA style rather than expanders etc.)? I'd prefer not to end up with a rat's nest of wires, but if it's the best option I can live with it.
 

Rand__

Well-Known Member
Mar 6, 2014
Each cable from the HBA covers 4 drives (externally; it can differ internally with a TQ-type backplane).
For 6 spinners a single 6Gb/s connection is sufficient (max bandwidth), so only one cable to the secondary box :)
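Quick back-of-the-envelope check of that (illustrative Python; the per-drive figure is an assumption):

```python
# Can one 4-lane 6Gb/s SAS wide port feed 6 spinners?
LANES = 4
LANE_GBPS = 6            # SAS2 per-lane line rate
EFFICIENCY = 0.8         # rough 8b/10b encoding + protocol overhead

cable_mbs = LANES * LANE_GBPS * 1000 / 8 * EFFICIENCY   # ~2400 MB/s usable
spinners_mbs = 6 * 200                                  # 6 drives @ ~200 MB/s assumed

print(f"cable budget ~{cable_mbs:.0f} MB/s vs 6 spinners ~{spinners_mbs} MB/s")
print("plenty of headroom" if cable_mbs > spinners_mbs else "would bottleneck")
```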