Build critique for my new home server.


tcpluess

Member
Jan 22, 2024
Good day

So, I have already made a couple of posts here asking about various things.
In the meantime, I have built my new home server. I have been running an HPE MicroServer for about 8 years; it has become too weak, and its memory and so on is already maxed out. So I decided to build something new.

I collected the components over a couple of months from different auction platforms and other sites, and now I have the following:

- Supermicro CSE-743AC-668B case. Unfortunately a bit loud; I am looking for better fans. I have also decided that a single power supply is sufficient: I can't really make use of hot-swappable power supplies anyway, and another reason to go for the larger single supply is that the hot-swap units usually have jet-engine fans, which is not suitable for a server operated in my apartment.
- Supermicro X12SPL-F board. I wanted this one because it has a good number of PCIe slots. Many other boards I looked at have fewer slots, but I want the option to add more stuff later. It is also more cost-efficient than the newer boards.
- 256 GB DDR4 ECC RAM. "Only" 2666 MHz, but I think that is sufficient; the board could do up to 3200. I think more RAM is better than faster RAM, as I want to dedicate most of it to ZFS.
- Xeon Silver 4310 with 12 cores.
- LSI SAS3008 card that lets me connect 16 drives at up to 12 Gb/s.
- The tower case itself can hold eight 3.5" disks. I added a hot-swap bay that fits into the 5.25" area and takes another 8x 2.5" disks.
- I added an Intel X520 network card for 10 GbE, as I recently installed a fibre network in my apartment.
- I run Proxmox off a Samsung 970 Pro SSD in the single NVMe slot on the main board.
- Over a couple of months I collected some HGST enterprise SSDs from different sources. I now have 4 SSDs for my ZFS special device and 2 SSDs for my SLOG. Even though the SSDs are used and quite old (some from 2014!), they have an insane endurance rating. The worst SSD I got had ~100 TB written on it, out of an allowed maximum of 35 PB, so the health of every SSD I got is still at 100%, with 0 defects logged (see the SMART sketch right after this list). Seems very good to me.
- For the hard disks I was lucky to get a couple of WD Gold 8 TB drives for free, with "only" roughly 30,000 power-on hours, so technically they are even still within the warranty period.
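This is roughly how I check the wear and defect counters, by the way: a quick sketch using smartmontools' JSON output. The device paths are placeholders, and the exact attribute keys differ between drive models, so treat those as assumptions rather than the real field names for these HGST drives.

```python
import json
import subprocess

# Rough sketch: dump SMART health for each SSD via smartmontools' JSON output
# (requires smartctl >= 7.0, i.e. the -j flag). Device paths are placeholders.
DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-j", "-x", dev],
                         capture_output=True, text=True, check=False)
    data = json.loads(out.stdout)
    model = data.get("model_name", "unknown")
    passed = data.get("smart_status", {}).get("passed")
    print(f"{dev}: {model}, SMART overall health passed: {passed}")
    # Wear and defect counters live under vendor-specific keys; for SAS drives
    # the grown defect list is usually reported separately. Inspect the raw
    # JSON to find the right fields for your particular drives.
    if "scsi_grown_defect_list" in data:
        print(f"  grown defects: {data['scsi_grown_defect_list']}")
```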

I have no idea what the hardware cost me in total, as I traded some parts for others and got some stuff for free.
I currently run a VPS which hosts my Nextcloud and Git. I want to move these two to my home server now. For this I will need to somehow reach my home network from outside; I have no idea yet how I will achieve this, as I don't have a fixed IP. I do, however, get both IPv4 and IPv6.
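One idea for dealing with the missing fixed IP is a small dynamic-DNS updater running on the server from a cron job or systemd timer. Just a rough sketch: the update URL, hostname and token are placeholders for whatever DDNS provider I end up using, so the actual API call will differ.

```python
import urllib.request

# Rough sketch of a dynamic-DNS updater for a connection without a fixed IP.
# UPDATE_URL, HOSTNAME and TOKEN are placeholders for the chosen DDNS provider.
UPDATE_URL = "https://ddns.example.com/update"
HOSTNAME = "home.example.com"
TOKEN = "changeme"

def current_public_ip() -> str:
    # api.ipify.org returns the caller's public IPv4 address as plain text
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def update_dns(ip: str) -> None:
    url = f"{UPDATE_URL}?hostname={HOSTNAME}&myip={ip}&token={TOKEN}"
    with urllib.request.urlopen(url) as resp:
        print("DDNS update response:", resp.status)

if __name__ == "__main__":
    ip = current_public_ip()
    print("current public IP:", ip)
    update_dns(ip)
```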
The new server shall also run OPNsense to replace the crappy modem I got from the ISP. I will also install MinIO as the storage backend for Nextcloud. My VPS is very limited in storage, so I need something better: I take lots of photos and videos, especially during holidays (diving trips and so on), and I want to upload and store all of that. The VPS is not good enough, as it is always a hassle with only 80 GB (!) of disk space. And I increasingly dislike the idea of my private data not being on my own hardware.
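Before pointing Nextcloud at MinIO I will probably sanity-check the S3 endpoint with something like the following. This is only a sketch: the endpoint, credentials and bucket name are placeholders, and it assumes MinIO's default S3 API port 9000 with path-style addressing.

```python
import boto3
from botocore.client import Config

# Rough sketch: verify the MinIO S3 endpoint is reachable and that the bucket
# intended for Nextcloud exists. Endpoint, credentials and bucket name are
# placeholders for the actual setup.
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.home.lan:9000",
    aws_access_key_id="nextcloud",
    aws_secret_access_key="changeme",
    config=Config(s3={"addressing_style": "path"}),  # MinIO prefers path-style URLs
)

bucket = "nextcloud-data"
existing = [b["Name"] for b in s3.list_buckets().get("Buckets", [])]
if bucket in existing:
    print(f"bucket {bucket} already exists")
else:
    s3.create_bucket(Bucket=bucket)
    print(f"created bucket {bucket}")
```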
Furthermore, WireGuard will be installed to access the home network.
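I haven't set WireGuard up yet; the rough plan is one key pair per peer and a minimal config along these lines. The sketch below just wraps the wg CLI to generate a key pair and print a server-side config; the VPN subnet, port and client key are placeholders.

```python
import subprocess

# Rough sketch: generate a WireGuard key pair with the wg CLI and print a
# minimal server-side config. Subnet, port and peer entries are placeholders.
def wg_keypair() -> tuple[str, str]:
    priv = subprocess.run(["wg", "genkey"], capture_output=True,
                          text=True, check=True).stdout.strip()
    pub = subprocess.run(["wg", "pubkey"], input=priv, capture_output=True,
                         text=True, check=True).stdout.strip()
    return priv, pub

server_priv, server_pub = wg_keypair()

config = f"""[Interface]
Address = 10.10.10.1/24
ListenPort = 51820
PrivateKey = {server_priv}

[Peer]
# public key of the laptop/phone goes here (placeholder)
PublicKey = <client-public-key>
AllowedIPs = 10.10.10.2/32
"""
print(config)
print("server public key for the client configs:", server_pub)
```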
I recently set up a Samba server, which works fine, with my ZFS datasets being compressed and encrypted. The encryption is mainly because I store my private data on this server, and when I one day swap out the hard disks, I hope I can simply discard the encryption key. :D
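For reference, this is roughly how such a dataset can be created. A sketch wrapping the zfs CLI; the pool/dataset names and the key file path are placeholders, and a passphrase prompt (keyformat=passphrase) would work just as well as a raw key file.

```python
import subprocess

# Rough sketch: create a compressed, encrypted ZFS dataset via the zfs CLI.
# Dataset name and key file are placeholders; keyformat=raw expects a 32-byte
# key file, keyformat=passphrase would prompt instead.
def create_private_dataset(dataset: str, keyfile: str) -> None:
    subprocess.run(
        [
            "zfs", "create",
            "-o", "compression=lz4",
            "-o", "encryption=aes-256-gcm",
            "-o", "keyformat=raw",
            "-o", f"keylocation=file://{keyfile}",
            dataset,
        ],
        check=True,
    )

create_private_dataset("tank/private", "/root/keys/private.key")  # placeholder names
```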
I will also probably add a graphics card, so that I can run the CAD software (which currently lives on my laptop) in a VM.
I have divided the ZFS pool into several datasets: some of them live on the hard disks only (like the music, movie, and photo collections), while other data (like my personal files and documents, and also the VM disks) lives on the SSDs only, for ultra-fast access and searching.
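For the SSD-only datasets, one way to do this with a special vdev is the special_small_blocks property: set it to at least the dataset's recordsize and every block of that dataset lands on the special device. A sketch with placeholder dataset names:

```python
import subprocess

# Rough sketch: pin whole datasets to the SSD special vdev by setting
# special_small_blocks to (at least) the dataset's recordsize, so all of its
# blocks are allocated on the special device. Dataset names are placeholders.
def pin_to_special(dataset: str, recordsize: str = "128K") -> None:
    subprocess.run(["zfs", "set", f"recordsize={recordsize}", dataset], check=True)
    subprocess.run(["zfs", "set", f"special_small_blocks={recordsize}", dataset], check=True)

pin_to_special("tank/documents")                  # personal files on the SSDs
pin_to_special("tank/vmdisks", recordsize="64K")  # VM disks on the SSDs
# tank/media gets no special_small_blocks, so it stays on the hard disks
```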

Certainly this home server is quite a bit of overkill, but on the other hand, in contrast to the HPE MicroServer, it will allow for future expansion if necessary, and because it is dimensioned generously it will remain useful for the next couple of years. Part of the goal was also simply to enjoy the fun of assembling everything, as this is a homelab environment. :D
 

CyklonDX

Well-Known Member
Nov 8, 2022
As a critique,
It's still a very expensive platform; prices haven't come down at all since its release. I wouldn't do it, but maybe as time goes on you'll be able to enjoy massive upgrades quite cheaply.

I'd recommend an LSI 9400, and using one of the 5.25" bays for an Icy Dock enclosure for U.2 disks (unless you plan on using a PCIe slot for that).
I'd disable IPv6 internally. You should keep that VPS and create a VPN connection between it and your box at home: either mount your home ZFS onto the VPS, or create a task that offloads the VPS data to your ZFS at home. That is, unless you have a static IP with NordVPN or something; still, I think the VPS offers better protection.
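For the offload task, something along these lines would do. Just a sketch: the host name, paths and dataset are placeholders, and it assumes rsync over SSH through the VPN plus a snapshot afterwards.

```python
import subprocess
from datetime import datetime

# Rough sketch of the offload task: pull the VPS data over the VPN via
# rsync+SSH into a dataset on the home pool, then snapshot it. Host name,
# paths and dataset are placeholders.
VPS_HOST = "vps.example.com"
SRC = "/var/www/nextcloud/data/"
DEST = "/tank/offload/nextcloud/"

def offload() -> None:
    subprocess.run(["rsync", "-a", "--delete", f"{VPS_HOST}:{SRC}", DEST],
                   check=True)
    # snapshot so older states stay recoverable
    snap = datetime.now().strftime("tank/offload/nextcloud@offload-%Y%m%d-%H%M")
    subprocess.run(["zfs", "snapshot", snap], check=True)

if __name__ == "__main__":
    offload()
```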

If the aim is just storage, I'd go with the 1st-gen Scalable series or even Broadwell v4s. The only use case for the newer one would be good KVM gaming performance and/or top-of-the-line AVX-512 compute performance.
 

rtech

Active Member
Jun 2, 2021
- Can you verify that the 8x 2.5" bay can take 15 mm thick 2.5" U.2 SSDs?
- Do you actually need PCIe 4.0?
- I am not sure about that chassis; I think it would be better to get something consumer-grade, since the server is supposed to be quiet and the performance/noise delta between 120 mm and 140 mm fans is significant.
- Why do you want a SAS HBA for 8 HDDs?
 

nabsltd

Well-Known Member
Jan 26, 2022
tcpluess said:
- Supermicro CSE-743AC-668B case. Unfortunately a bit loud; I am looking for better fans. I have also decided that a single power supply is sufficient: I can't really make use of hot-swappable power supplies anyway, and another reason to go for the larger single supply is that the hot-swap units usually have jet-engine fans, which is not suitable for a server operated in my apartment.
If you are using a CPU heatsink with a fan (which would work well in that case; there are lots of choices, like the Noctua NH-D9 DX-4189), then replace the 4x fans in the fan wall with FAN-0104L4 units; an eBay search turns them up for less than $10 each. This will cut the full-speed fan noise from about 50 dBA to about 30 dBA. It's easy to pop the fans out of the hot-swap carriers and replace them. You can also add an SQ rear fan if you want.