Home storage server


Kei-0070

This started out as a budget project to reuse old equipment that I had stashed around the house.

This was the spec I started with:

Gigabyte MA-790FXT-UD5P
AMD Phenom II X4 955
Corsair Dominator 4GB 1600
128GB Samsung OEM SSD
Nvidia Quadro FX3500
LSI MegaRaid SAS 8888ELP
Belkin USB 3.0 card
Prolimatech Megahalems
Coolermaster CM690-II



I had to order a few bits, namely a new power supply, backplanes, breakout cables, disks and fans.
4x 2TB WD SE drives
2x LSI CBL-SFF8087OCF-06M forward breakout cables
2x ICYBOX 553SK SAS/SATA backplanes
1x Noctua NF-F12 IndustrialPPC 3000RPM PWM
5x Scythe Kama Flow2 1900RPM Fans
550W Superflower Golden Green HX PSU




I went with openSUSE (Bottle) at the time, which worked well for a few years. Six disks in RAID 5, benchmarked using the GNOME disk utility.
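If anyone wants to repeat that sort of benchmark from the command line rather than the GNOME disk utility, a read-only sequential run with fio is a handy alternative; /dev/sdX below is just a placeholder for whatever block device the array appears as:

Code:
# Read-only sequential test against the array's block device; --readonly
# stops fio from ever issuing writes to it.
sudo fio --name=seq-read --filename=/dev/sdX --readonly \
  --rw=read --bs=1M --ioengine=libaio --iodepth=32 \
  --direct=1 --runtime=30 --time_based --group_reporting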


I then decided that I needed to increase the potential drive pool, so I managed to pick up a brand new Intel RES2SV240 expander card for a reasonable sum.


That went in along with another WD 2TB SE disk. This is the config the server has run since October 2014.


A year later, I swapped out the Quadro FX card for an NVS 310 that I was given, in order to save some power. In 2017, the original openSUSE Bottle install got corrupted by a botched update, so I upgraded it to Leap 42.2 and subsequently 42.3, which it ran until this year. I planned some big network upgrades this year to improve the bandwidth available to my main workstations and the server: I swapped out my dinky Netgear GS108 for a Juniper EX3300 24P (dirt cheap, new open box), and the Cisco 1921 also got a VA-DSL-A EHWIC added so I could remove the separate ISP modem.


I had planned on upgrading the server mainboard to a spare AMD 990FX/FX-8320 combo that I had been given, but sadly, having done all the work to swap the board over, it only worked for about 24 hours before the board seemed to fail. (This was two weeks ago.)


The main reason behind upgrading the board was that I needed more PCIe lanes in order to run a GPU, RAID card and 10GbE NIC.
 

kapone

I love that "rack"... :) Believe it or not, at one point, I had built something similar for about 20U worth of space. This was probably ~2008 or so, and back then, even basic four post racks were > $400 and 2x4s were cheap... :)
 

Kei-0070

That little wooden rack came from work. I could have had a whole 24U or 47U rack but lacked the space for it.

The 990FX motherboard failure triggered a rather extreme OTT server upgrade:
Intel Core i5-9600K
Asus Prime Z390-A
16GB Team Group Vulcan 3000 C16



The only issue I had was that my original Megahalems cooler (775/1366 mounting only) needed some DIY work in order to fit on socket 1151. I decided that the best approach would be to swipe a backplate from one of my other coolers and make it work with a mixture of mounting hardware: a Noctua backplate, some bolts with the same thread as the original Prolimatech parts, and the Noctua black plastic spacers filed down to match the thickness of the Prolimatech metal spacers. The only bolts I had of the correct thread and sufficient length had countersunk heads, so I found some suitable washers for them. The upper mounting plates then needed the 775 holes filing out towards the 1366 mounting holes, as 775 has 72mm hole spacing, 115x is 75mm and 1366 is 80mm.



Backplate fitted.


Mounting plates fitted.


Test run with paste to see how the spread looked. IMO, pretty much as good as any stock mounting setup.


Built. (Except the Mellanox ConnectX-3 in the top slot is a dud.)


I had a hell of a job installing a new version of Linux. Some kind of issue with the integrated Intel GPU driver (i915) resulted in complete system freezes as soon as you selected an install option from the USB boot menu. After many hours of trying various kernel options via GRUB, I gave up and put the Quadro back in, which immediately fixed the installation issues. Once I had Fedora installed and set up using the Quadro, I shut the machine down, took the card out and re-enabled the Intel iGPU, and I haven't had a single issue since doing so. I had some fun setting up one Samba share, which turned out to be SELinux. The only minor annoyance left to solve is that it keeps booting to the emergency console, telling me to log in or press Ctrl+D to boot into the default target. (Pressing Ctrl+D does work perfectly each time.)
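In case it helps anyone with the same leftovers, these are the sort of commands I'd use to chase both of those gremlins; the /srv/share path is only an example, not my actual share:

Code:
# Find out why boot falls back to the emergency target - usually a failed
# unit, often a stale entry in /etc/fstab (adding 'nofail' to a
# non-critical mount stops it blocking the default target).
journalctl -b -p err
systemctl list-units --failed

# The Samba side: label the exported directory so SELinux lets smbd at it.
sudo semanage fcontext -a -t samba_share_t "/srv/share(/.*)?"
sudo restorecon -Rv /srv/share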

I also took the opportunity to rerun the disk benchmark on the array, as I've not run it since I built it nearly 5 years ago. Performance seems pretty decent considering the age of the RAID controller (2007).
 

Kei-0070

These arrived in work today. The end is in sight now; I just need to get another NIC for the server and some fibre.


At present I have only got one OM3 MM fibre patch cable, so I can't test it properly, but it was extremely pleasing to see this once I connected my machine up.
 

Jenfil82

Maybe a stupid question, but did you consider FreeNAS at some point?
Just wondering, as I am evaluating how to build my own NAS, so I'm only gathering data at the moment :)
 

Kei-0070

I never really considered it at the time, mainly because I didn't realise it could do more than be a NAS. I wanted my server to be usable for background tasks overnight when my desktop is off. I've gotten used to dealing with Linux in that time, so it felt easier to stick with what I know.
 

acquacow

I went all low-power Xeon-D and FreeNAS specifically because I wanted a 24x7 box that would run all my background utility VMs etc. and wouldn't require my desktop.

It runs a few Linux VMs, runs Plex in a jail, rsyncs files to a remote location, etc.

Then you get the benefits of ECC memory and ZFS's bit-rot protection on all of the data, something you don't really get with other systems.

The onboard 10GbE was also a bonus, because my remote access speeds are the same as my local access speeds.
 

Kei-0070

Finally finished sorting out the OS. I had a hell of a job setting up vncserver due to PolicyKit issues, which made executing GUI applications with elevated permissions difficult. The simple solution was to switch over to x0vncserver instead, which made a lot more sense as it means I'm not running a separate desktop. Still suffering from some Intel i915 driver issues; weirdly, the problems returned when I changed screen. Turning off window compositing in Xfce stopped the shedload of errors, and it makes little difference to the usability of the OS.
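For anyone wanting to copy the setup, it boils down to two commands; this assumes TigerVNC's x0vncserver and Xfce's xfwm4, and the display number, password file and port are only examples:

Code:
# Share the existing :0 desktop rather than spawning a second one.
# The password file is created beforehand with 'vncpasswd ~/.vnc/passwd'.
x0vncserver -display :0 -PasswordFile ~/.vnc/passwd -rfbport 5900

# Turn off xfwm4's compositor, which was what flooded the logs with
# i915 errors after the screen change.
xfconf-query -c xfwm4 -p /general/use_compositing -s false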

Memory usage seems a tad high, but from what I can tell, most of it is the LSI MegaRAID Storage Manager server (1.7GB being used by Java).
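For anyone curious, checking that is just a matter of sorting processes by resident memory (nothing exotic needed, just procps):

Code:
# Top ten processes by resident memory (RSS is in kB).
ps -eo rss,pid,user,comm --sort=-rss | head -n 11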



All that I need to sort now is finding a replacement SFP+ NIC for 10GbE and re-cabling the house. I've been looking at Solarflare NICs, as Mellanox ConnectX-3s are still pretty expensive, and having had one DOA makes me less keen on them. Another Intel X710 is a potential option, albeit the most expensive.
 

Kei-0070

Replacement NIC bought. After some lengthy research, I decided to try Solarflare and picked up an SFN7122F card nice and cheap to replace the DOA Mellanox CX3.


Things have been running well so far. I ironed out the Vivaldi Framework RAM issues, but the Plex DLNA server seems hell-bent on slowly chewing through RAM.
 

Kei-0070

Finally picked up a second OM3 fibre so I can do a test run at 10Gb. Performance in one direction looks pretty good, but not so good in the other direction.

Running my Threadripper workstation (Intel X710) as the iperf server netted 7Gbps with the 9600K/Solarflare NAS as the client; the other way around only got 3.5Gbps. So far I've adjusted the Tx/Rx buffers on the Intel card to their maximums, but I've not made any changes to the Solarflare, as I'm less familiar with its Linux driver tweaks.
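For reference, the test and the buffer tweak boil down to a few commands; the interface name and address below are placeholders, and -R is iperf3's reverse mode, which lets you test both directions from one end:

Code:
# Server end:
iperf3 -s

# Client end: normal direction, then reversed with -R.
iperf3 -c 192.168.1.10
iperf3 -c 192.168.1.10 -R

# Check the current NIC ring buffers, then raise them to the maximum
# reported (the same ethtool commands work for the Intel i40e and
# Solarflare sfc drivers).
ethtool -g enp1s0
sudo ethtool -G enp1s0 rx 4096 tx 4096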
 

Kei-0070

I bought some OM3 fibres from FS, along with some LC-LC links for the wall boxes I got from RS a few months back. The wall boxes should be pretty resilient, as the fibres enter at the base of the boxes, keeping the connections virtually flush with the wall to avoid breaking them.


Since I now have sufficient fibres for a proper test run, it'd be rude not to. As a quick test to make sure all the ports and fibres work OK, I connected the server with two links using the three Avago transceivers I have. All working great; the Juniper doesn't seem to mind the Avago transceivers.


Quick file transfer test from my PC to the server. This is off an NVMe SSD to avoid SATA bottlenecks. Normal transfers will be a fair bit slower sadly, but at least now I can have two PCs copying to the server and two TVs streaming from it without interruption.
 

Kei-0070

Bought some new fans for the juniper switch to try and quieten it down a bit without triggering a fan failure warning.

2x 40x40x28mm 12,500rpm San Ace 109P0412J3013 fans instead of the 18,000rpm originals. The idle fan voltage is 4.5V, so something like a Noctua that maxes out at 5,000rpm would run too slowly at idle.
A pack of molex plug bodies and pins for fan headers (as the fans above come with bare ends)
A pack of molex ATX pins and sockets for making up custom PSU cables in the future
A molex pin extraction tool for the ATX pins
Crimping dies for insulated and un-insulated terminals


Fitted and now tested. All working OK: significantly quieter both at idle and at full speed, and no fan failure warnings. Best of all, it is still idling at 44 degrees. It does make me wonder if I could have gotten away with the 9,500rpm Delta FFB0412VHN or the 11,000rpm San Ace 109P0412G3013.


Server cpu temps looking rather chilly at this time of year.
 

Kei-0070

The RAID controller alarm went off last month. Looking at it, a disk had possibly died, as it had simply disappeared.



I ordered two 2TB Ultrastars to replace the dead drive and provide a hot spare. Can't moan too much, as it's been running pretty much 24/7 for nearly 6 years.


I discovered when attempting to replace the "dodgy" disk that the problem wasn't actually the disk, but rather the slot on the Icy Box backplane. This led to finding a replacement case that has no need for a backplane.

After narrowing it down to the Fractal Design Define XL or a Corsair 750D, I found the drive cages for the Fractal were almost impossible to buy, so I bought a 750D and 3 extra hard drive cages. All fitted snugly, giving me capacity for 18x 3.5" disks, 4x 2.5" disks and 3x 5.25" bays that can fit another 4x 3.5" drives with an adaptor. There's not a lot of point in more than that, as the 550W supply doesn't have anywhere near enough connectors for it, and at full stretch I could only run 24 disks in an array without needing either an extra expander, a different RAID card or a bigger expander.

I'm trying to work out whether to replace the windowed side panel with a solid one by buying another metal one from Corsair (pretty cheap at £9.99). I have no need nor want for a window on a server, but I could do with a side panel fan to blow cool air across the expansion cards, as they are all passively cooled server cards that get quite hot.


In other news, the replacement San Ace fans that I put into the Juniper switch have been working brilliantly and temperatures have remained sensible. It got a bit noisy during the hot patch in the summer, but still stayed cool enough.
 

Kei-0070

On to the case swap. The 750D is quite a bit bigger than the CM690. I put a new 8TB drive in already, as that is going to be used as an rsync target for certain files from the main array.
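The rsync job itself is nothing clever; something along these lines, with placeholder paths rather than my real layout:

Code:
# Mirror selected directories from the main array onto the 8TB disk.
# -a keeps permissions and times, --delete keeps the copy exact.
# Do a --dry-run first to sanity-check what would change.
rsync -a --delete --dry-run /mnt/array/photos/ /mnt/backup8tb/photos/
rsync -a --delete /mnt/array/photos/ /mnt/backup8tb/photos/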


Deconstructing the 690 was simple enough, but the last 6 years in the loft haven't been kind. That green cloth was clean to start with. All of the fans and filters were caked in a thick, fine, clingy black dust.


The add-in cards removed. I really should upgrade the RAID controller as it's very, very old, but I can't be bothered finding a replacement.


All the drives now transferred across. Just the PSU, motherboard and fans left to put in.


The board still looked pretty clean considering the black dust caked over most other parts.


Installed with all of the add-in cards. I had to swap the positions of the RAID controller and the 10GbE NIC, as the SAS cables wouldn't fit around the drive cage. Without changing the controller for a shorter one, I can't fit the 6th drive cage in. (A good incentive to replace it.)


Some of the wiring nightmare dealt with. Those Silverstone 4x SATA power extensions were a godsend and have made things far neater than I was expecting.


With the SSD installed and power connected. I was using wire ties instead of cable ties to test out routes before committing to single-use cable ties; I've fitted cable ties and then had to cut and replace them too many times before. I realised that I had forgotten to connect the power connector to the last drive in the left stack, which I sorted today. The SATA connector on the SSD is missing, as I needed to find another cable.


I had to re-pin a fan, as the bodged connector broke during dismantling. I've already cut the cable where it was splice-crimped; it had broken at the barrier strip (the black wire broke off).


Crimped new Molex Mini-Fit Jr pins on.


Shoved in the case temporarily to blow cool air directly over the add-in cards. Currently the array is rebuilding, adding an additional disk whilst retaining one of the new disks as a hot spare. It's been going for 15 hours and is at 30%. I've ordered some Arctic P14s to replace the old Scythe fans, as the 120mm fans look remarkably small next to the 140mm ones and certainly won't flow as much air.
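If anyone wants to watch the same sort of expansion from the shell rather than MegaRAID Storage Manager, the MegaCli equivalents are roughly as below; adapter 0 and logical drive 0 are assumptions, and the binary may be MegaCli or MegaCli64 depending on the package:

Code:
# State of the logical drives on adapter 0.
MegaCli64 -LDInfo -Lall -a0

# Progress of the ongoing reconstruction (the add-a-disk expansion).
MegaCli64 -LDRecon -ShowProg -L0 -a0

# Physical drive states, which should show the new hot spare.
MegaCli64 -PDList -a0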
 

Kei-0070

I bought some Arctic P14s to replace the old Scythe Kama Flow 120mm fans; the case has provision for the same number of 140mm fans as 120mm, so it makes sense to use the larger diameter.


I swapped the front intakes out as well as the top exhaust. They seem to shift a hefty quantity of air at full pelt.


I just need to "swiss cheese" the side panel so I can fit some fans in that to cool the cards.


Tested the array. I think the RAID controller is starting to become the limiting factor as it's ancient.


Copying the data back onto the array was a lot quicker this time around, although the disks in my desktop were the limiting factor rather than the network. It's nice to be able to run concurrent copies from multiple sources.


I've taken the Icy Box backplanes apart to try to determine the cause of the one slot failing.

It looks like corrosion of the contacts may have played its part. Spot which slot is the one that failed.


Not looking particularly pretty, but most of that is dust stuck to the contact cleaner I sprayed in a month or so ago to see if it was simply a mucky contact.


Having cleaned the PCB, it appears that the pads have corroded around the pins. What's weird is that they aren't soldered. I'm wondering if that was done to try and prevent cracked joints from disk hot swapping.
 

Spartacus

Damn, that back side looks a lot cleaner than mine did (though I did have SSDs in all 4 back-side slots).

I took a similar path, but I opted for a Drobo first, then a larger QNAP, and then the 750D, which I laid out the exact same way.
I didn't like how much I paid for subpar resource amounts and limited drive growth on the prebuilt units (though they were easy to use).
On the flip side, when I moved to the 750D I was happy with the resources and expansion, but was annoyed with not having hot-swap.

I ended up migrating to an Antec Twelve Hundred V3 with Supermicro 3x 5.25" to 5x 3.5" enclosures (I trust those backplanes much more than Icy Dock or the other generics and have had 0 issues even with used ones; I'm currently using about 8 of Supermicro's enclosures).

As a recommendation for your cooling issue on the add-in cards, I used one of the PCIe fan brackets for my cooling (with a 140mm fan it can cool my 10G NIC, SAS controller and NVMe drives, and blows on the mobo SAS controller chipset). ~ This is the bracket I bought
It keeps you from having to poke holes in the side, depending on your add-on card height; all mine are half height, so the fan was recessed enough that it got plenty of flow.

For a visual for both my setups:

Previous 750D setup:


Antec:

It's a little busy in the Antec, but airflow is still great since it passes directly through all the HDD enclosures.

At one point in time I actually had one of the 5x hot-swap enclosures in the Corsair's 3x 5.25" slots (I had to bend all the guides to get it to fit; I can't seem to find a picture though).
 

Kei-0070

Very useful info there, thank you.

I actually managed to source the second, solid side panel direct from Corsair this week. I will be modifying it to fit a pair of 120mm fans to provide airflow across the expansion cards. Good as that bracket would be, it would still benefit from some breathing holes in the side, so I may as well fit the fans into the side panel.
 

Kei-0070

I finally tackled the side panel, having finally bought the 4.5" hole saw required to create the cut-outs for the fans. Not the easiest job, but they seem to have come out OK. There are a couple of marks on the black finish, but nothing too obvious.


Fans fitted with silverstone filters


New solid side panel fitted to the reverse


All fitted and running.