Good day
I have already made a couple of posts here asking about different things.
In the meantime, I built my new home server. I had been running an HPE MicroServer for about 8 years, but it has become too weak, and its memory and so on is already maxed out. So I decided to build something new.
I collected the components over a couple of months on different auction platforms and other sites, and ended up with the following:
- Supermicro CSE-743AC-668B case. Unfortunately a bit loud; I am looking for quieter fans. I also decided that a single power supply is sufficient: I have no way to hot-swap power supplies anyway, and one more reason to go for the larger single supply was that the redundant hot-swap units usually have jet-engine fans, which is not suitable for a server operated in my apartment.
- Supermicro X12SPL-F board. I chose it because, in my opinion, it has a good number of PCIe slots. Many other boards I looked at have fewer, and I want the option to add more hardware later. It is also more cost-efficient than the newer boards.
- 256 GB of DDR4 ECC RAM. "Only" 2666 MHz, but I think that is sufficient; the board supports up to 3200 MHz, but more RAM beats faster RAM here, as I want to donate most of it to the ZFS ARC.
- Xeon Silver 4310 with 12 cores.
- LSI SAS3008 card that lets me connect 16 drives at up to 12 Gb/s.
- The tower case itself holds 8x 3.5" disks. I added a hot-swap bay in the 5.25" area that takes another 8x 2.5" disks.
- I added an Intel X520 network card for 10 GbE, as I recently had fibre installed in my apartment.
- I run Proxmox off a Samsung SSD 970 Pro in the board's single NVMe slot.
- Over a couple of months I collected some used HGST enterprise SSDs from different sources: 4 for my ZFS special device and 2 for my SLOG. Even though they are used and quite old (some from 2014!), they have an insane endurance rating. The worst one had ~100 TB written against an allowed maximum of 35 PB, so all of the SSDs I got still report 100% health with 0 defects logged. Seems very good to me.
- For the hard disks I was lucky to get a couple of WD Gold 8 TB drives for free, with "only" roughly 30,000 power-on hours, so technically they are even still within the warranty period.
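To give an idea of how the drives above fit together, here is a rough sketch of the pool layout and the ARC sizing. The device names, pool name, and the 128 GiB ARC cap are placeholders, not my actual setup; adjust to your own disks and RAM budget:

```shell
# Sketch only - pool name and device paths are placeholders.
# HDD mirror for bulk data, the HGST SSDs as special (metadata) and SLOG vdevs.
zpool create tank \
  mirror /dev/disk/by-id/wd-gold-1 /dev/disk/by-id/wd-gold-2 \
  special mirror /dev/disk/by-id/hgst-1 /dev/disk/by-id/hgst-2 \
          mirror /dev/disk/by-id/hgst-3 /dev/disk/by-id/hgst-4 \
  log mirror /dev/disk/by-id/hgst-5 /dev/disk/by-id/hgst-6

# Cap the ARC so ZFS leaves headroom for the VMs
# (example value: 128 GiB out of the 256 GiB installed).
echo "options zfs zfs_arc_max=$((128 * 1024 * 1024 * 1024))" \
  > /etc/modprobe.d/zfs.conf
```

The special vdev is mirrored because losing it loses the whole pool; the SLOG mirror is optional but cheap insurance with drives this old.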
I have no idea what the entire build cost me, as I traded some parts for others and got some things for free.
I currently run a VPS that hosts my Nextcloud and Git; these two I now want to move to my home server. For that I will need some way to access my home network from outside. I have no idea yet how I will achieve this, since I don't have a fixed IP. I do, however, have both IPv4 and IPv6.
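One common way around a changing IP is a dynamic-DNS record kept fresh by a cron job. A minimal sketch, assuming a generic DDNS provider; the update URL, hostname, and token below are entirely hypothetical, as the exact format depends on the provider:

```shell
#!/bin/sh
# Hypothetical DDNS updater - hostname, token, and provider URL are placeholders.
HOSTNAME="home.example.org"
TOKEN="replace-me"

# Ask an external service for the current public IPv4 address.
IP=$(curl -fsS https://api.ipify.org)

# Push the address to the DDNS provider.
curl -fsS "https://dyndns.example.com/update?hostname=${HOSTNAME}&token=${TOKEN}&ip=${IP}"
```

Run it every few minutes from cron, and external services can then point at the hostname instead of the IP.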
The new server shall also run OPNsense to replace the crappy modem I got from the ISP, and I will install MinIO as the storage backend for Nextcloud. My VPS is very limited in storage, so I need something better: I take lots of photos and videos, especially on holidays (diving trips and so on), and I want to upload and store all of it. With only 80 GB (!) of disk space, the VPS is a constant hassle. And I increasingly dislike the idea of my private data not being on my own hardware.
Furthermore, WireGuard will be installed to access the home network.
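A minimal WireGuard server config for that could look like the sketch below. The keys, subnet, and port are placeholders; the peer entry would be a laptop or phone:

```ini
# /etc/wireguard/wg0.conf - sketch with placeholder keys and addresses
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# Road-warrior client, e.g. the laptop
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with `wg-quick up wg0` and forward UDP port 51820 on the firewall (OPNsense, in this case).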
I recently set up a Samba server, and it works fine, with my ZFS datasets compressed and encrypted. The encryption is mainly because I store private data on this server: if I ever swap out the hard disks, I hope I can simply discard the encryption key.
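Such a compressed and encrypted dataset can be created roughly like this; the pool and dataset names are examples, not my actual layout:

```shell
# Example: dataset with LZ4 compression and native ZFS encryption.
# "tank/private" is a placeholder name.
zfs create \
  -o compression=lz4 \
  -o encryption=aes-256-gcm \
  -o keyformat=passphrase \
  tank/private

# Without the passphrase the data on the raw disks is unreadable,
# which is exactly what makes retiring old drives painless.
```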
I will probably also add a graphics card to run some CAD software that I currently have in a VM on my laptop.
I have divided the ZFS storage into several datasets, some of them on the hard disks only (the music, movie, and photo collections), and others (my personal files and documents, plus the VM disks) on the SSDs only, for very fast access and searching.
Certainly this home server is quite a bit of overkill, but on the other hand, in contrast to the HPE MicroServer, it allows for future expansion if necessary, and because it is dimensioned generously it will serve me well for the next couple of years. Besides, the goal was also to enjoy the fun of putting it all together; this is a homelab, after all.