Hi There,
I'm posting this message in the hope of finding some opinions, tips, and warnings on building a new server. It's a loooong message, so please bear with me; I really need input to make an educated choice.
My intention is to build a server for a (very) small company, to replace two old HP servers and move to virtualization, with all the extra benefits we know and love.
That box will have the following VMs:
- 1x Windows Server 2019 as domain controller (40 GB disk, 2 GB RAM, 1 vCPU);
- 1x Windows Server 2019 for SQL Server (40 GB disk, 2 GB RAM, 1 vCPU);
- 1x Windows Server 2019 for SQL Server (40 GB disk, 4 GB RAM, 2 vCPU);
- 1x Windows Server 2019 for SQL Server (40 GB disk, 2 GB RAM, 1 vCPU) **future**;
- 1x Ubuntu Server for a reverse proxy (20 GB disk, 1 GB RAM, 1 vCPU) **future**;
- 1x Ubuntu Server for some webapps (40 GB disk, 4 GB RAM, 2 vCPU) **future**;
- 1x FreeNAS server (*GB RAM, 2 vCPU?) **future**.
Why so many SQL Server machines? Well, each machine will host its own business application, which will be accessed from the desktops. I opted for segregation because updating one application sometimes causes conflicts with the others that then have to be resolved (proven by past experience), so separating them gives much more flexibility and peace of mind during major software updates. The SQL Servers will hold around, say, 200 small databases (roughly 50-200 MB each).
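To sanity-check that database footprint against the 40 GB system disks, here is a back-of-envelope sketch (the per-database sizes are my rough estimates from above, not measurements):

```python
# Rough storage estimate for ~200 small SQL Server databases
# at an assumed 50-200 MB each.
n_dbs = 200
low_mb, high_mb = 50, 200

low_gb = n_dbs * low_mb / 1024    # if they are all on the small side
high_gb = n_dbs * high_mb / 1024  # if they are all on the large side

print(f"Total DB data: {low_gb:.0f}-{high_gb:.0f} GB")
```

So the data alone could approach 40 GB, which suggests the DB files will want a virtual disk of their own rather than living on the 40 GB OS disks.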
The ones marked **future** are just nice-to-have planned migrations from existing physical servers (except for FreeNAS, which doesn't exist yet). For all machines I plan to start with as few resources allocated as possible (hence the 2 GB, 1 vCPU), and then take it from there, increasing resources as needed.
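Adding up the planned allocations shows how much headroom the 128 GB / 12-core builds below would leave even with every **future** VM online (the FreeNAS figure is my own guess, since I left it open above):

```python
# Planned per-VM allocations from the list above: (GB RAM, vCPUs).
vms = {
    "DC":             (2, 1),
    "SQL-1":          (2, 1),
    "SQL-2":          (4, 2),
    "SQL-3 (fut.)":   (2, 1),
    "proxy (fut.)":   (1, 1),
    "webapps (fut.)": (4, 2),
    "FreeNAS (fut.)": (8, 2),  # assumed; ZFS inside a VM likes RAM
}
ram = sum(r for r, _ in vms.values())
vcpu = sum(c for _, c in vms.values())
print(ram, vcpu)  # 23 GB RAM, 10 vCPUs allocated in total
```

That is well under 128 GB of RAM and 12 physical cores, so there is plenty of room to grow individual VMs later.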
Since I believe the world should recycle more, and there is a lot of good refurbished hardware around these days, my idea is to build a decent server on a limited budget from refurbished parts, instead of buying a better, more efficient, but also much more expensive new one.
So after this long story: I'm thinking about going Supermicro. I don't have any experience with them, but I keep reading very good feedback and good experiences, so the world can't all be wrong, and the hardware specs look very good anyway.
The builds I'm currently considering are summarized as follows:
Build-1 (already built)
mobo: Supermicro X9DRD-7LN4F
cpu: 2x Intel Xeon E5-2620 v2, 2.1 GHz, 6 cores (12 cores total)
heatsink: included, with fan
ram: 128 GB DDR3 ECC (speed rating unknown)
chassis: Supermicro 2U (I believe it's the SC826BE26-R920LPB)
rack kit: included
caddies: 12x 3.5" HDD (front), 2x 2.5" SSD (rear)
psu: 2x Supermicro 920 W 1U redundant power supplies
This build comes complete, sold with 1x 120 GB SSD and 2x 6 TB SATA III HGST 7200 rpm drives.
The price is around €900.
- I like: the price, it's already working, some disks are included, lots of caddies.
- I'm worried: I don't know the memory specs yet, and the 2x 920 W PSUs make me fear high electricity costs.
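On the electricity worry: redundant 920 W PSUs don't mean 920 W of draw, since the server only pulls what the load needs. A rough yearly cost sketch, where both the average draw and the tariff are pure assumptions on my part:

```python
# Rough yearly electricity cost; both inputs are assumptions,
# not measurements (a plug-in power meter would give real numbers).
avg_watts = 150          # guessed average draw for dual E5-2620 v2 + a few disks
eur_per_kwh = 0.20       # assumed tariff; varies by country and contract
hours_per_year = 24 * 365

kwh_year = avg_watts * hours_per_year / 1000
cost_year = kwh_year * eur_per_kwh
print(f"{kwh_year:.0f} kWh/yr -> ~{cost_year:.0f} EUR/yr")
```

So the PSU rating matters less than the actual load; measuring the old HP servers' draw would give a fair comparison.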
Build-2 (by components, build the server myself)
mobo: Supermicro X9DRH-7TF
cpu: 2x Intel Xeon E5-2620 v2, 2.1 GHz, 6 cores (12 cores total)
heatsink: Supermicro SNK-P0048AP4 (heatsink with fan)
ram: 128 GB - Samsung M393B2G70BH0-CK0 (8x 16 GB RDIMM, 1600 MHz, 1.5 V; listed on Supermicro's tested-DIMMs list for this board)
chassis: Supermicro CSE-825
rack kit: Supermicro MCP-290-00053-0N
caddies: 8x HDD 3.5"
psu: 1x Supermicro 560 W, 80 Plus
I'll build this one myself; no disks included, and the price will put me at around €1100.
- I like: I pick my own components, (hopefully) all compatible (especially mobo and RAM).
- I'm worried: the power supply may not be enough, the electricity costs of such a build, the limited caddies for what I may plan in the future (below), and of course the higher price.
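On whether 560 W is enough: a very rough worst-case sum for Build-2's parts, where every per-component figure except the CPU TDP is my own guess:

```python
# Rough worst-case draw for Build-2 against the 560 W PSU.
# Per-part numbers are assumptions, not measurements.
cpu_tdp = 2 * 80        # E5-2620 v2 is an 80 W TDP part
hdds = 8 * 10           # ~10 W per spinning 3.5" disk (guess)
ssds = 2 * 3            # ~3 W per SATA SSD (guess)
board_ram_fans = 60     # board + 8 RDIMMs + fans: a guess
total = cpu_tdp + hdds + ssds + board_ram_fans
print(total)  # worst-case watts
```

That lands around 300 W even with every bay full, so the 560 W unit looks like it has reasonable headroom; spin-up surge of many HDDs at once is the main thing to watch.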
Now for the disks: the first build includes some, but I'd rather not trust those with critical data, so I'd prefer to buy new ones, especially since I'm set on my fixed idea of raidz2.
So, additionally, I'm thinking about:
- 2x Intel 120 GB SSDs (consumer level) for the hypervisor boot/OS (Proxmox in my case) in a ZFS mirror (with two disks that's a mirror rather than raidz1);
- 4x Ultrastar 1 TB SATA III 7200 rpm drives for the VM pool (holding the VM images) in ZFS raidz2;
- **future**: an additional 4x Ultrastar 1 TB, passed through to the FreeNAS VM to create a NAS (still far more space than needed, as today I see no more than 5 GB of files to store).
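For reference, the usable space this layout would give (raw vendor GB, before ZFS metadata overhead and the usual advice to keep pools below ~80% full, so treat these as upper bounds):

```python
def raidz_usable(n_disks, disk_gb, parity):
    # raidz{p} loses p disks' worth of capacity to parity
    return (n_disks - parity) * disk_gb

boot = 120                            # 2x 120 GB mirrored -> one disk's worth
vm_pool = raidz_usable(4, 1000, 2)    # 4x 1 TB in raidz2
nas_pool = raidz_usable(4, 1000, 2)   # the planned future 4x 1 TB raidz2
print(boot, vm_pool, nas_pool)        # GB, upper bounds
```

So the VM pool nets about 2 TB, which comfortably covers the ~300 GB of virtual disks listed at the top.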
As a bottom line, I'm going rack-mount because of the mobo sizes and chassis organization, and because I'd like to get a rack cabinet to better organize the hardware (server, switch, patch panel, etc.). I don't have a rack cabinet yet, but I have to start somewhere, and the switch I'll use is rack-mountable too.
So please share your opinions and tell me what you think: the good, the bad, what you would change, any warnings about these choices, anything at all.
Thank you.