New homeserver build


NoClue

New Member
Jan 29, 2023
Hello everyone,

I've just registered here because I have the feeling there are quite a few users who could give me reasonable input on this.
I've been toying with the thought of getting my own personal homeserver for quite some time. It should be noted that I work as a consultant in the machine-building sector, so I would like to test a few things in some of the VMs (even if I likely won't do any real work there). I had a dedicated server at a hosting provider for a few years running Proxmox, but with a different VM structure.

I already have a relatively specific setup in mind, but before that I want to talk a bit about the planned purpose of this server.
What should be running on it:
  • As should be obvious from the thread title, the hypervisor I would like to use is likely going to be Proxmox (I already have experience with it, although only limited and non-professional).
    Likely their own VMs:
  • NextCloud - always running
  • PhotoPrism
  • EcoDms, which is basically a document archiving system - always running
  • Bitwarden/Vaultwarden - always running
  • PiHole (probably a separate device however)
  • Syncthing (for some selected stuff only)
  • Portainer
Not yet fixed on VMs/containers:
  • Heimdall (probably) or Homer
  • KitchenOwl - recipe/food tracking, etc.
  • Jellyfin (for movies/music)
  • Guacamole
In addition to the above
  • A Windows VM for general development stuff (Visual Studio, IntelliJ) - 32GB RAM, 8 cores - only started on request, and not expected to be needed alongside more than one of the PLC VMs below at the same time.
  • A Windows VM for Siemens TIA Portal - based on my personal experience this alone would require 8 threads and 32GB RAM. However, this is only started when needed.
  • A Windows VM for Allen Bradley - similar requirements as for Siemens, also only started on request. It might be necessary to run the Siemens and AB VMs at the same time.
  • Another Windows VM for Beckhoff PLC - requirements less than the other two (4 threads, 16GB should suffice), started on request
  • A Windows VM for all misc stuff related to machinery (the usual smaller configuration programs for different kinds of equipment and more lightweight PLC software)
Now, with the above in mind, I thought about hardware. I already have a case that I intend to use, basically a tower, so space won't be an issue. Also, the system will be placed in my workroom, so noise should be kept at reasonable levels.
Currently, I have the following initial build in mind.
Note: I'm living in Germany.
Motherboard: Supermicro MBD-H12SSL-CT-O
CPU: AMD Epyc 7443P, 24C/48T
RAM: 8x Samsung RDIMM 64GB, DDR4-3200, CL22-22-22, reg ECC
Cooler: be quiet! Dark Rock Pro TR4
System SSD: 2x Samsung OEM Datacenter SSD PM9A3 1.92TB, M.2; in mirror
Data SSD: 2x Samsung OEM Datacenter SSD PM9A3 3.84TB, M.2; in mirror
HDD Pool (later): 7x 14TB+ in RaidZ2 with an SSD cache drive of yet undefined size (likely Toshiba Enterprise) - rough layout sketch below
PSU: Corsair Professional Series 2022 HX1000i - a redundant PSU won't be meaningful in the foreseeable future due to the existing installation; a UPS is planned, however
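For illustration, the ZFS layout I have in mind would look roughly like this (just a sketch; the device names are placeholders for the actual drives):

Code:
# NVMe mirrors for system and data (placeholder device names)
zpool create syspool mirror /dev/disk/by-id/nvme-pm9a3-a /dev/disk/by-id/nvme-pm9a3-b
zpool create datapool mirror /dev/disk/by-id/nvme-pm9a3-c /dev/disk/by-id/nvme-pm9a3-d
# later: the 7-disk RaidZ2 HDD pool, plus an SSD as L2ARC read cache
zpool create hddpool raidz2 sda sdb sdc sdd sde sdf sdg
zpool add hddpool cache /dev/disk/by-id/nvme-cache-ssd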

Open for thoughts on the above.

Some relevant topics from my side currently:
  1. Epyc vs Threadripper Pro - assuming the former is the better choice, as I'm NOT planning to virtualize gaming on this system (I intend to have a separate one for that).
  2. Epyc 4th gen - CPUs are available, but apparently no mainboards yet? Also, is the performance gain really worth it?
Greetings

EDIT1: Changed 4x RAM to 8x 64GB for octa-channel (512GB); also swapped the Samsung consumer NVMe SSDs for PM9A3 enterprise drives, both pools as mirrors.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Instead of RaidZ1 with 2 drives you can do mirroring.
Instead of consumer Samsung drives, find used enterprise drives on eBay.

I don't have any thoughts on the CPU due to lack of experience.
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
That CPU is more than you will need. If you aren't going to add RAM later, getting 8x 32GB is a better option; you're going to run tight on RAM long before you run out of that EPYC. With Genoa you've got to pay more for the motherboard and more for the RAM, so it's not worth it.
 

aosudh

Member
Jan 25, 2023
Hello everyone,

I've just registered here because I have the feeling there are quite a few users who could give me reasonable input on this.
I've been toying with the thought of getting my own personal homeserver for quite some time. It should be noted that I work as a consultant in the machine-building sector, so I would like to test a few things in some of the VMs (even if I likely won't do any real work there). I had a dedicated server at a hosting provider for a few years running Proxmox, but with a different VM structure.

I already have a relatively specific setup in mind, but before that I want to talk a bit about the planned purpose of this server.
What should be running on it:
  • As should be obvious from the thread title, the hypervisor I would like to use is likely going to be Proxmox (I already have experience with it, although only limited and non-professional).
    Likely their own VMs:
  • NextCloud - always running
  • PhotoPrism
  • EcoDms, which is basically a document archiving system - always running
  • Bitwarden/Vaultwarden - always running
  • PiHole (probably a separate device however)
  • Syncthing (for some selected stuff only)
  • Portainer
Not yet fixed on VMs/containers:
  • Heimdall (probably) or Homer
  • KitchenOwl - recipe/food tracking, etc.
  • Jellyfin (for movies/music)
  • Guacamole
In addition to the above
  • A Windows VM for general development stuff (Visual Studio, IntelliJ) - 32GB RAM, 8 cores - only started on request, and not expected to be needed alongside more than one of the PLC VMs below at the same time.
  • A Windows VM for Siemens TIA Portal - based on my personal experience this alone would require 8 threads and 32GB RAM. However, this is only started when needed.
  • A Windows VM for Allen Bradley - similar requirements as for Siemens, also only started on request. It might be necessary to run the Siemens and AB VMs at the same time.
  • Another Windows VM for Beckhoff PLC - requirements less than the other two (4 threads, 16GB should suffice), started on request
  • A Windows VM for all misc stuff related to machinery (the usual smaller configuration programs for different kinds of equipment and more lightweight PLC software)
Now, with the above in mind, I thought about hardware. I already have a case that I intend to use, basically a tower, so space won't be an issue. Also, the system will be placed in my workroom, so noise should be kept at reasonable levels.
Currently, I have the following initial build in mind.
Note: I'm living in Germany.
Motherboard: Supermicro MBD-H12SSL-CT-O
CPU: AMD Epyc 7443P, 24C/48T
RAM: 4x Samsung RDIMM 64GB, DDR4-3200, CL22-22-22, reg ECC
Cooler: be quiet! Dark Rock Pro TR4
System SSD: 2x Samsung 970 Evo Plus 256GB in RaidZ1
Data SSD: 2x Samsung 980 Pro 2TB in RaidZ1
HDD Pool (later): 7x 14TB+ in RaidZ2 with an SSD cache drive of yet undefined size (likely Toshiba Enterprise)
PSU: Corsair Professional Series 2022 HX1000i - a redundant PSU won't be meaningful in the foreseeable future due to the existing installation; a UPS is planned, however

Open for thoughts on the above.

Some relevant topics from my side currently:
  1. Epyc vs Threadripper Pro - assuming the former is the better choice, as I'm NOT planning to virtualize gaming on this system (I intend to have a separate one for that).
  2. Epyc 4th gen - CPUs are available, but apparently no mainboards yet? Also, is the performance gain really worth it?
Greetings
Single-socket motherboards from Supermicro and Tyan are available for around $700, but they aren't as attractive price-wise as the previous generation. As you are not planning to play games on this, Milan or even Rome is enough.
As for memory, you'd better populate all of the channels on EPYC because of the CCD structure, so maybe you can switch to 8x32GB, 8x64GB, or even 8x16GB.
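(Quick numbers: eight channels of DDR4-3200 give about 8 x 25.6GB/s = 204.8GB/s of theoretical memory bandwidth, so with only four DIMMs populated you leave half of that on the table.)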
 

NoClue

New Member
Jan 29, 2023
Thanks for the replies!

I've upgraded the RAM to octa-channel. Originally, I wanted to leave room for upgrading if necessary, but I know that some of the PLC VMs in particular can take a good chunk of RAM and benefit from 64GB in some cases (large projects, especially when simulating). Therefore, I would prefer to have it from the start and not worry about it later.

Also, I've updated the SSDs for both mirrors to enterprise ones. Getting smaller ones doesn't really make a lot of sense price-wise.
One related question: I would prefer to separate out the boot SSDs. Is there a recommendation for an SSD that isn't ridiculously oversized if all that's stored on it is the hypervisor OS? Would even a SATA SSD be fine for that purpose (more of those are available to me than M.2 PCIe ones)?

> PCIe: Micron 7400 PRO - 1DWPD Read Intensive 480GB?
> SATA: Samsung OEM Datacenter SSD PM893 480GB
 

i386

Well-Known Member
Mar 18, 2016
Germany
I'm using a SATA DOM on my H12SSL to boot Windows Server. (Even with "slow" SATA, the POST process is the longest part of the boot process.)
So yes, SATA SSDs are fine for that.
 

aosudh

Member
Jan 25, 2023
Thanks for the replies!

I've upgraded the RAM to octa-channel. Originally, I wanted to leave room for upgrading if necessary, but I know that some of the PLC VMs in particular can take a good chunk of RAM and benefit from 64GB in some cases (large projects, especially when simulating). Therefore, I would prefer to have it from the start and not worry about it later.

Also, I've updated the SSDs for both mirrors to enterprise ones. Getting smaller ones doesn't really make a lot of sense price-wise.
One related question: I would prefer to separate out the boot SSDs. Is there a recommendation for an SSD that isn't ridiculously oversized if all that's stored on it is the hypervisor OS? Would even a SATA SSD be fine for that purpose (more of those are available to me than M.2 PCIe ones)?

> PCIe: Micron 7400 PRO - 1DWPD Read Intensive 480GB?
> SATA: Samsung OEM Datacenter SSD PM893 480GB
The boot SSDs I've used are all M.2 enterprise drives, such as the PM983a 900GB or something like that. Their stability is good and the price is fair, and they can use up those extra M.2 slots on your motherboard (if there are one or two left).
SATA or U.2 drives are OK, but you need to consider cable management and mounting space.
 

CyklonDX

Well-Known Member
Nov 8, 2022
Why not just go with SAS SSDs? They have good endurance, between 20-40PBW - better than common enterprise U.2 disks.
 

heromode

Active Member
May 25, 2020
Instead of RaidZ1 with 2 drives you can do mirroring.
Instead of consumer Samsung drives, find used enterprise drives on eBay.

I don't have any thoughts on the CPU due to lack of experience.
I think you misread; nowhere is RaidZ1 mentioned. RaidZ2 with 7x14TB drives is the correct choice.

Toshiba's helium-filled enterprise drives are SATA, and that's perfectly fine. I find modern large second-hand 8TB+ helium-filled SAS HDDs risky; I've had horrible experiences with them. Internally they are the exact same drives, and performance is exactly the same. Unless your backplane or something demands it, I'd go for SATA enterprise spinners (Toshiba or HGST/WD) and no Seagate, ever.

Edit: instead of 7x14TB in RaidZ2 I would recommend 2x 4x14TB in RaidZ1 if possible (or one 3x14TB RaidZ1 plus another 4x14TB RaidZ1).

That way you have almost the same redundancy, but you can take 4 or 3 disks offline, even choose a different FS for them (bcachefs, wink wink), while still staying operational.

Two storage pools give a lot more flexibility for the future, as they allow you to change things on one pool, copy things over to it from the other pool, then change things on the other pool, step by step.

With a single large pool you will almost always be forced to shut everything down completely for any major operation.

And say you want to change the filesystem on your single large pool in the future? Well, then you will need another 7x14TB just for that. Very expensive, plus cabling and the need for at least 14 SAS or SATA ports, etc.
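To make the step-by-step part concrete, roughly like this (pool and disk names are placeholders):

Code:
# two independent 4-disk RaidZ1 pools
zpool create tank1 raidz1 sda sdb sdc sdd
zpool create tank2 raidz1 sde sdf sdg sdh
# rework one pool while the other stays online, then copy data across
zfs snapshot -r tank1@migrate
zfs send -R tank1@migrate | zfs recv -F tank2/copy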
 

CyklonDX

Well-Known Member
Nov 8, 2022
8TB+ helium-filled SAS HDDs risky; I've had horrible experiences with them
Not in my experience. I've had plenty so far, and none of the HGST, Hitachi, or Seagate SAS helium 8TB+ disks have failed on me yet (at home or at work).
In my experience, SATA disks die much faster due to spin-downs and spin-ups.

~
Better to run parity over a proper number of disks - 4 disks per pool in RaidZ1, and you can run a mirror if it's very important data; in this config you can suffer one disk loss while still having decent space left.
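(Quick math: 4x14TB in RaidZ1 comes out to roughly 3 x 14TB = 42TB usable per pool, with one disk loss tolerated per pool.)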
 

heromode

Active Member
May 25, 2020
~
Better to run parity over a proper number of disks - 4 disks per pool in RaidZ1, and you can run a mirror if it's very important data; in this config you can suffer one disk loss while still having decent space left.
Agreed there, 4 disks in RaidZ1 is the minimum for space/cost, and mirrored Z1s for important data + convenience.

But 2x2 disks in a mirror is a great way to maximize your expenses and risk while minimizing your capacity.

Edit: I've yet to figure out why there can't be a way to run parity RAID on two disks instead of a mirror, so you could have two disks with redundancy and 66% of combined capacity instead of a mirror's 50%.
 

CyklonDX

Well-Known Member
Nov 8, 2022
You would lose more than 50% of disk capacity if you wanted to do that.
You would need repair data to recreate the partition table index and the files of each disk. I'm not willing to calculate it, but I presume you would lose around 65-70% of all disk space (so 2x 1TB would likely give you only ~700GB to write to), and you would suffer a penalty in write performance.
 

heromode

Active Member
May 25, 2020
You would lose more than 50% of disk capacity if you wanted to do that.
You would need repair data to recreate the partition table index and the files of each disk. I'm not willing to calculate it, but I presume you would lose around 65-70% of all disk space (so 2x 1TB would likely give you only ~700GB to write to), and you would suffer a penalty in write performance.
Right.. my simple mind just figured 2 bits + 1 parity bit = 66%.. but one can always dream :D
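(For the record, the arithmetic only starts working at three disks: single parity on n disks leaves (n-1)/n usable, so n=3 gives the 66% and n=4 gives 75%, but at n=2 the parity block is just a copy of the data block - which is exactly a mirror at 50%.)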
 

CyklonDX

Well-Known Member
Nov 8, 2022
Going back to the OP, I would recommend Unraid instead of Proxmox.
*(You would be running most of your stuff in Docker anyway - much better than VMs for everything.)
(Unless you want to play with vGPU, which makes little sense at home - but I may be biased, I hate on Debian every chance I get.)

Overall, that box is overkill for the described use cases.
For transcoding I would recommend not using the CPU but a GPU - since you are going quite 'current', why not get an A4000 or A2000 for that.
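To show what GPU transcoding looks like in practice, something along these lines (just a sketch; the file names are placeholders, and Jellyfin exposes the same thing through its hardware acceleration settings):

Code:
# decode and encode on the GPU (NVENC) instead of burning CPU cores
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv -c:v h264_nvenc -b:v 8M -c:a copy output.mp4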
 

aosudh

Member
Jan 25, 2023
Why not just go with SAS SSDs? They have good endurance, between 20-40PBW - better than common enterprise U.2 disks.
There are two significant drawbacks to SAS SSDs. The first is that they need to be attached to an extra controller, which is costly if you want one of the newest RAID controllers. The second is that the performance of those cheap SAS SSDs is incredibly bad compared to some gen4 or even gen5 NVMe SSDs, and the x8 bus also limits the gross performance of the entire RAID system.
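To put rough numbers on that bus limit: a PCIe 3.0 x8 HBA tops out around 7.9GB/s for the whole array (about what a single PCIe 4.0 x4 NVMe drive delivers on its own), while one 12Gb/s SAS lane carries roughly 1.2GB/s.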
 

CyklonDX

Well-Known Member
Nov 8, 2022
Well, comparing them apples to apples (enterprise ones):

You get what you pay for.
*Same gen.

(Both tested in an R640 at 30% fan speed in a controlled environment.)

Type                         U.2                SAS SSD
Model                        HUSPR3280ADP301    HUSMM1680ASS200
Capacity                     800GB              800GB
Max Write                    1400MB/s           800MB/s
Max Read                     2600MB/s           1150MB/s
Avg Write                    1100MB/s           780MB/s
Avg Read                     1400MB/s           1100MB/s
Endurance (TBW)              4.38PB             36.5PB
Temp on load (100%, 30min)   59°C               42°C
Temp median (mixed, 24h)     40°C               29°C
 

aosudh

Member
Jan 25, 2023
Well, comparing them apples to apples (enterprise ones):

You get what you pay for.
*Same gen.

(Both tested in an R640 at 30% fan speed in a controlled environment.)

Type                         U.2                SAS SSD
Model                        HUSPR3280ADP301    HUSMM1680ASS200
Capacity                     800GB              800GB
Max Write                    1400MB/s           800MB/s
Max Read                     2600MB/s           1150MB/s
Avg Write                    1100MB/s           780MB/s
Avg Read                     1400MB/s           1100MB/s
Endurance (TBW)              4.38PB             36.5PB
Temp on load (100%, 30min)   59°C               42°C
Temp median (mixed, 24h)     40°C               29°C
Your NVMe SSDs seem too old. Nowadays, gen4 or top-level gen3 SSDs are worth it for their price and performance. Another thing is that the most significant improvement of NVMe SSDs is multi-queue 4K performance rather than sequential read and write.
And at common homelab workloads you would never run out of TBW, unless you are running a speed-test program all day long.
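If you want to see the multi-queue 4K difference yourself, a quick fio run along these lines will show it (assuming fio is installed; the device path is a placeholder, and randread keeps it non-destructive):

Code:
# 4K random reads at high queue depth - where NVMe pulls far ahead of SAS
fio --name=randread --filename=/dev/nvme0n1 --rw=randread --bs=4k \
    --iodepth=32 --numjobs=4 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting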
 

aosudh

Member
Jan 25, 2023
Well, comparing them apples to apples (enterprise ones):

You get what you pay for.
*Same gen.

(Both tested in an R640 at 30% fan speed in a controlled environment.)

Type                         U.2                SAS SSD
Model                        HUSPR3280ADP301    HUSMM1680ASS200
Capacity                     800GB              800GB
Max Write                    1400MB/s           800MB/s
Max Read                     2600MB/s           1150MB/s
Avg Write                    1100MB/s           780MB/s
Avg Read                     1400MB/s           1100MB/s
Endurance (TBW)              4.38PB             36.5PB
Temp on load (100%, 30min)   59°C               42°C
Temp median (mixed, 24h)     40°C               29°C
But you are right, you get what you pay for.
 

CyklonDX

Well-Known Member
Nov 8, 2022
you would never run out of TBW
If you need endurance, there's a reason.

For me, I use NVMe as cache for ZFS or tempdb.
After a year on ZFS with an 80TB media-center dataset for the household (movies, books, KVMs for games, AI Spark training, etc.), I had put 800TB of writes on the NVMes at home. There was a time when I used one as memory for KVMs and ate through a desktop NVMe in 2 months. (For those who wonder if that was bad: it wasn't, it was good enough for most things, especially since I wanted to run applications that can take over 768GB of RAM and RAM prices were crazy at the time. And nothing beats showing off to people on the net and confusing them with a Sandy Bridge running over 2TB of 'RAM'.)
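If anyone wants to replicate the cache part, it's a one-liner, and the wear is easy to keep an eye on (device names are placeholders):

Code:
# add an NVMe as L2ARC read cache to an existing pool
zpool add tank cache /dev/disk/by-id/nvme-cache-drive
# watch wear via "Percentage Used" / "Data Units Written"
smartctl -a /dev/nvme0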
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I think you misread; nowhere is RaidZ1 mentioned. RaidZ2 with 7x14TB drives is the correct choice.

Toshiba's helium-filled enterprise drives are SATA, and that's perfectly fine. I find modern large second-hand 8TB+ helium-filled SAS HDDs risky; I've had horrible experiences with them. Internally they are the exact same drives, and performance is exactly the same. Unless your backplane or something demands it, I'd go for SATA enterprise spinners (Toshiba or HGST/WD) and no Seagate, ever.

Edit: instead of 7x14TB in RaidZ2 I would recommend 2x 4x14TB in RaidZ1 if possible (or one 3x14TB RaidZ1 plus another 4x14TB RaidZ1).

That way you have almost the same redundancy, but you can take 4 or 3 disks offline, even choose a different FS for them (bcachefs, wink wink), while still staying operational.

Two storage pools give a lot more flexibility for the future, as they allow you to change things on one pool, copy things over to it from the other pool, then change things on the other pool, step by step.

With a single large pool you will almost always be forced to shut everything down completely for any major operation.

And say you want to change the filesystem on your single large pool in the future? Well, then you will need another 7x14TB just for that. Very expensive, plus cabling and the need for at least 14 SAS or SATA ports, etc.
No, I did not misread.

It currently states:
System SSD: 2x Samsung OEM Datacenter SSD PM9A3 1.92TB, M.2; in mirror
Data SSD: 2x Samsung OEM Datacenter SSD PM9A3 3.84TB, M.2; in mirror


When I posted, it said "RaidZ1" instead of mirror. Look at the quote from user aosudh above; it shows RaidZ1.


That's all, and that's why I made my comment.