The next step: advice on home file server specs


AntFurn

New Member
Hi - yes, I'm yet another noob here hoping I can get some advice. I've tried to do some research, but now I just feel overwhelmed by too many options. My current system is at the bottom of this post.

I think I've found the case that suits sliding under my desk (it's the only location I have for it - blame small UK houses):
In-Win IW-MS08 In-Win IW-MS08 - Mini Tower Server Case w/ 8x Hot-Swap Bays (12Gbps MiniSAS HD Connection) - Server Case UK
That limits me to a micro-ATX motherboard - I'd like to have more than one PCIe slot for future use, so ITX etc. are ruled out.

Q: What motherboard/CPU combo meets these needs?
Does it really need ECC RAM?

Note I'm in the UK and surprisingly (to me anyway) a lot of motherboard/CPU/case combos I read about are really hard/expensive to get here.

Thanks for any suggestion,
Antony

Main file server requirements are:
Serving files! Mainly to Windows PCs
Providing a backup solution for those PCs
Source for media player content
Min 8x hot-swap HDD bays (and therefore SAS?)
Easy addition of HDDs to the storage pool (and therefore not RAID 5)
Tolerance of 1x HDD failure (thinking OMV + SnapRAID - see the sketch after this list)
A couple of point-to-point 10GbE connections (and therefore 2x SFP+?)
External connection to a backup enclosure (USB 3/eSATA?)
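For concreteness, the sort of SnapRAID setup I have in mind is roughly the sketch below - minimal and hypothetical, the drive paths are made up, and OMV's SnapRAID plugin would generate the real file (pooling itself would come from something like mergerfs on top):

    # Minimal snapraid.conf sketch - one parity drive tolerates one failed HDD
    parity /srv/disk-parity1/snapraid.parity
    # a second parity file would give the two-drive tolerance mentioned below:
    # 2-parity /srv/disk-parity2/snapraid.parity

    # content files hold the array metadata - keep copies on several drives
    content /var/snapraid.content
    content /srv/disk-data1/snapraid.content

    # data drives in the pool; adding a drive is just another line here
    data d1 /srv/disk-data1
    data d2 /srv/disk-data2
    data d3 /srv/disk-data3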

Significant considerations: (The three L's)
Low noise
Low power
Low maintenance
Will run 24/7 and only shut down for long vacations.

Additional nice-to-have options:
Tolerance of 2x HDD failures for personal data files (photos (~8TB)/source code/3D designs etc.)
Being able to run a couple of Docker containers
A write-cache SSD

Does not need to do:
Transcoding
Mining
Being directly accessible from the internet

Current system:
OMV doing SMB shares to 6+ PCs plus media players
HP N40L, 8GB (non-ECC) (getting very old now, and I've run out of room to squash more HDDs into it!)
5x 4TB RAID 5 (with less than 200GB free - hence the need for a new server)
eSATA 4x 2TB RAID 5 external enclosure (for overnight backups - see the sketch below)
UPS connected by USB
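(The overnight backup is essentially just a scheduled rsync mirror, along the lines of the hypothetical sketch below - the mount points are made up, and OMV's rsync jobs do the equivalent through the web UI:)

    # Hypothetical cron entry: mirror the main array to the eSATA enclosure at 3am
    0 3 * * * rsync -aH --delete /srv/raid/ /srv/esata-backup/ >> /var/log/backup.log 2>&1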
 

EffrafaxOfWug

Radioactive Member
Also a UK STHer, probably in a smaller house than you, and also using the InWin MS08 (I'm currently fitting mine out with a Ryzen 3000 system as I'll be doing batch transcoding on it, and the Ryzens are frankly awesome at that for the money).

I'm using Debian with a big fat RAID array, which doesn't really gel with your drive-pooling approach, but hardware-wise you can probably get away with a relatively low-power system. However, I think ECC and IPMI are benefits worth paying for: even if the box is under your desk, it's still a PITA to hook up a monitor when you could just manage it from a web browser, and ECC helps ensure that all the stuff you've got cached in RAM stays true. The drive caddies on the MS08 use SFF-8643 connectors, so you'll likely want an HBA or motherboard that can connect to those directly.

Basically it boils down to cost though. What's your budget and do you already have an HBA or similar to provide 8+ SAS/SATA connectors?
 

AntFurn

New Member
Thanks for the replies/suggestions.

Quick answer first: yes, the CS380 would fit; I'm guessing that with the vent holes on the left-hand side I wouldn't be able to push it snug up against the wall? I think it's also a case I've read has some airflow issues, though maybe I'm mixing it up with another one. It would be a better choice in some ways: standard ATX PSU, room for more drives in the 5.25" bays, full ATX motherboard etc.
A question though: how does one plug 8+ SATA drives into a motherboard that (most anyway) has only 6 SATA ports? I'm guessing that's where the HBA comes in - will reverse SAS-to-SATA breakout cables work with motherboards that have SAS connectors built in? (This is definitely an area I need to educate myself on.) I was trying to avoid HBA(s) if I can, because from what I've heard they use a lot of power and add to the cooling requirements.

Good to know of other people using the MS08. What power supply did you go for? I have a PSU out of an old Shuttle case that's Flex ATX sized and was hoping to use that (though now I look at it I see it's 250W, and I wonder if that will be enough for 8 HDDs + 2 SSDs). Hmmm, it's scary when you start doing the maths on that sort of power usage 24/7 - even at 250W that's getting on for ~£350 a year (rough working below) - ouch. Though of course you'd hope the drives spend most of that time spun down.
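Rough working (the ~16p/kWh tariff is my assumption for a typical UK rate):

    0.25 kW x 24 h x 365 days ≈ 2190 kWh/year
    2190 kWh x £0.16/kWh ≈ £350/year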
Budget-wise, I'm happy to spend up to £800+ on the motherboard/CPU combo if it's got all the connections I need on it and keeps the power usage down. I'd hope it would last me another 10 years like the N40L has. I think I've had to plug a VGA cable/keyboard into the N40L maybe twice in the last 5 years; I've read about IPMI and it doesn't seem to be something I need, but it seems to be on most of the motherboards that have the other things I want.

I guess one of my more serious questions is: where does a consumer buy these sorts of motherboards? Amazon/SCAN/ebuyer/Novatech just don't have much of a range to look at. Or am I just looking for a combination of features that doesn't exist?

Thanks, Antony
 

EffrafaxOfWug

Radioactive Member
My thread on my initial experiences with the MS08 is here and my current experiences with my Ryzen 3700X build start here.

To answer more directly, I'm using the Seasonic SS-350-M1U in both builds (although I could easily get away with a 250 or 300W model - peak draw has never exceeded 200W even with all cores maxed out and all HDDs active). If your old Shuttle PSU is a good one it should still suffice, provided you don't take the piss. Load should never actually sustain at 250W - my current system idles at ~70W, and half of that is due to the hard drives being spun up all the time (something you might be able to avoid using a drive pool). A great way to spend £15 is on a cheap inline power meter that'll give you a rough indication of how many watts a given plug is pulling; as a skinflint I like to try and be efficient where possible.

There are vanishingly few mATX motherboards that come with 8 SATA ports, let alone the extra two you'll probably want for the 2.5" SSDs; an HBA does add to the expense, both in initial outlay and operating power, but I think at this stage it can't be avoided unless you can live with* something like the Atom-powered A2SDi-H-TP4F. The ubiquitous LSI HBAs will chew about 8-10W in regular use.

My existing Haswell setup has done me proud since 2013, and if it wasn't for my expanding video-encoding requirements combined with the performance degradation on Intel due to Spectre, Meltdown et al., it'd have done me fine until 2023. But the lure of an 8C/16T setup for less than a grand was too much to resist...

I usually end up buying most of my kit from LambdaTek these days - they're one of the few disties in the UK who stock most of the Supermicro and ASRockRack ranges, as well as a whole gamut of enterprise kit besides. They also don't try to send me opened-and-twice-returned-already broken hard drives, as Amazon (not a third-party reseller, but Amazon themselves) have done to me twice in the last year.

* I have an A2SDi-8C-HLN4F I use myself - it's brilliant for low-power file-serving duties but too slow CPU-wise for many other duties like desktop-style virtualisation or time-critical encoding.
 

AntFurn

New Member
Well, the A2SDi-H-TP4F does meet exactly what I want - but wow, £1K, ouch (before adding RAM), and there I was thinking I was being generous with my budget...
 

EffrafaxOfWug

Radioactive Member
AntFurn said:
"Well, the A2SDi-H-TP4F does meet exactly what I want - but wow, £1K, ouch (before adding RAM), and there I was thinking I was being generous with my budget..."
Yeah, the C3000s get expensive real quick once you go beyond the 8-core models or the ones with additional SATA ports. The various Xeon D options that AntFurn mentions are also good choices for lightweight home server platforms as long as you stick to the 4-core models; personally I'd go for the X10SDV-4C-7TP4F, as I think 2 cores is too limiting these days and the 4-core model is only an extra £50 or so. Be warned that you'll need decent airflow inside the case to keep the CPU and LSI chips relatively cool.

I own an X10SDV-4C+-TP4F myself (it doesn't have the 16 SAS ports so it's much cheaper; I don't think I'd buy one today given its age and comparative slowness, but paired with a cheap HBA it might be a good middle ground). I purposely bought the version with a fan since the board needs active cooling - I couldn't find anyone in the EU selling an aftermarket HSF that'd fit it.
 

AntFurn

New Member
After more contemplating (read: getting over the budget-related shock...) I think I can leave 10GbE to a future upgrade. With that in mind, I'm seriously looking at the A2SDi-8C+-HLN4F | Motherboards | Products | Super Micro Computer, Inc.
That CPU should be fast enough for my needs for a long while, but the question is: can the PCIe x4 slot take a dual-port 10G SFP+ card in the future? A quick look suggested all the SFP+ cards are PCIe x8. I just want to make sure the single PCIe x4 slot isn't going to make me feel like I've shot myself in the foot.

Another question, which might be for a different forum: why is SFP+ preferred over 10GBase-T? (I assume I could go P2P with Base-T as you would with SFP+.)

Thanks again for everyone's input, Antony
 

EffrafaxOfWug

Radioactive Member
Assuming you only need one 10Gb/s link, that's only about 1.25GB/s; as long as you use a PCIe 3.0 NIC, an x4 connection will get you about 4GB/s of throughput, so you should be able to run a dual-port 10GbE card with ease (rough numbers below). Even a PCIe 2.0 NIC in an x4 slot would get you 2GB/s, enough for most people not to notice the bottleneck.
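Rough numbers, for reference:

    PCIe 3.0: 8 GT/s per lane x (128/130 encoding) ≈ 0.985 GB/s per lane -> x4 ≈ 3.9 GB/s
    PCIe 2.0: 5 GT/s per lane x (8/10 encoding)    = 0.5 GB/s per lane   -> x4 = 2.0 GB/s
    Dual 10GbE maxed out: 2 x 1.25 GB/s = 2.5 GB/s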

Word of warning though: depending on what you're doing, you might be CPU-limited trying to run 10GbE on the Atom, at least with a single connection that can't scale across cores (Samba is notorious for this).
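If you do hit that wall, SMB multi-channel lets a single client spread its traffic over several TCP connections (and hence cores). A minimal smb.conf sketch, assuming a reasonably recent Samba - bear in mind the feature was still flagged experimental on Linux at the time of writing:

    [global]
        # allow one client to open multiple channels to the server
        server multi channel support = yes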

Boards with a lot of integrated features like 10GbE or SAS/SATA ports tend to run the price up pretty quickly, and of course those features aren't portable to later upgrades and leave you up shit creek if the motherboard goes pop, so I prefer to use PCIe cards where possible.

The lure of SFP+ as opposed to 10GBase-T is that a) it's much more common, b) there's lots of s/h enterprise gear knocking around if you want to get stuff cheap, and c) it generally uses less power than 10GBase-T. I waited forever for affordable, low-power 10GBase-T switches and cards to appear; it still hasn't happened, so I moved to SFP+ myself.
 