New build for a VM server

zecas

New Member
Dec 6, 2019
Hi There,

I'm posting this message in the hope of finding some opinions, tips, and warnings on building a new server. It's a long message, so please bear with me; I really need opinions to make an educated choice.

My intention is to build a server for a (very) small company, to replace 2 old HP servers and move to virtualization, with all the extra benefits we know and love.

That box will have the following VMs:
- 1 Windows Server 2019 for domain controller (40GB disk, 2GB RAM, 1 vCPU);
- 1 Windows Server 2019 for SQL Server (40GB disk, 2GB RAM, 1 vCPU);
- 1 Windows Server 2019 for SQL Server (40GB disk, 4GB RAM, 2 vCPU);
- 1 Windows Server 2019 for SQL Server (40GB disk, 2GB RAM, 1 vCPU) **future**;
- 1 Ubuntu Linux server for reverse proxy (20GB disk, 1GB RAM, 1 vCPU) **future**;
- 1 Ubuntu Linux server for some webapps (40GB disk, 4GB RAM, 2 vCPU) **future**;
- 1 FreeNAS server (*GB RAM, 2 vCPU?) **future**.

Why so many SQL Server machines? Well, each machine will have its own business software installed, which will be accessed from desktops. I opted for segregation because updating the software sometimes causes conflicts among those applications that must be resolved (proven by past experience), so separating them will give much more flexibility and peace of mind during major software updates. The SQL Servers will hold around 200 small databases (around 50-200MB each).

The ones marked **future** are just nice-to-have planned migrations from existing physical servers (except for FreeNAS, which doesn't exist yet). For all machines I'm planning to start with as few resources allocated as possible (hence the 2GB, 1 vCPU), and then increase resources as needed.

Since I believe the world should recycle more, and there is a lot of good refurbished hardware around nowadays, my idea was to build a nice server on a limited budget with refurbished parts, instead of buying an excellent, more efficient, but also much more expensive new server.

So after this long story: I'm thinking about going with Supermicro. I don't have any experience with them, but I keep reading very good feedback and good experiences, so the whole world can't be that wrong, and the hardware specs seem very good anyway.



Now, the builds I've been considering are summarized as follows:

Build-1 (already built)
mobo: Supermicro X9DRD-7LN4F
cpu: 2x Intel Xeon E5-2620 v2, 2.1 GHz, 6 cores (12 cores total)
heatsink: included, with fan
ram: 128GB DDR3 ECC (don't know speed ratings)
chassis: Supermicro 2U (I believe it's SC826BE26-R920LPB)
rack kit: included
caddies: 12x 3.5" HDD (front), 2x 2.5" SSD (back)
psu: 2x Supermicro 920W 1U redundant power supply

This build is complete, selling with 1x 120GB SSD and 2x 6TB SATA III HGST 7200 rpm drives.
The price is around €900.

  • I like: price, already working, some disks included, lots of caddies.
  • I'm worried: don't know the memory specs yet, and 2x 920W PSUs may weigh heavily on the electricity bill.



Build-2 (by components, build the server myself)
mobo: Supermicro X9DRH-7TF
cpu: 2x Intel Xeon E5-2620 v2, 2.1 GHz, 6 cores (12 cores total)
heatsink: Supermicro SNK-P0048AP4 (heatsink with fan)
ram: 128GB - Samsung M393B2G70BH0-CK0 (8x 16GB RDIMM 1600MHz 1.5V, a model on Supermicro's list of tested DIMMs for this mobo)
chassis: Supermicro CSE-825
rack kit: Supermicro MCP-290-00053-0N
caddies: 8x 3.5" HDD
psu: 1x Supermicro 560W, 80 Plus

This one will be built by me, no disks included; the price will put me at around €1100.

  • I like: I pick my components, (hopefully) all compatible (especially mobo and RAM).
  • I'm worried: the power supply may not be enough, the electricity cost of such a build, the limited caddies for what I may plan for the future (below), and of course the price, which is higher.



Now for the disks: the first build includes some, but I would prefer not to trust them with critical data, and I would rather buy new ones, since I'm also set on raidz2.

So additionally, I would think about:
- 2x Intel 120GB SSDs (consumer level) for booting the VM software (Proxmox in my case) in ZFS raidz1 (with two disks, effectively a mirror);
- 4x Ultrastar 1TB SATA III 7200 rpm drives for the VM pool in ZFS raidz2 (containing the VM images);
- **future** an additional 4x 1TB Ultrastars, passed through to the FreeNAS VM to create a NAS (it's still far more space than needed, as nowadays I see no more than 5GB of files to store).
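For reference, a minimal sketch of how those two pools might be created under Proxmox/ZFS. The pool and device names here are placeholders, not the actual build; in practice /dev/disk/by-id/ paths are safer than sdX names.

```shell
# Boot pool: two SSDs mirrored (the Proxmox installer can set this up
# itself as "ZFS RAID1"). Device names are hypothetical examples.
zpool create rpool mirror /dev/sda /dev/sdb

# VM pool: four 1TB drives in raidz2 -- survives any two drive
# failures, leaving roughly 2TB usable out of 4TB raw.
zpool create -o ashift=12 vmpool raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Verify layout and health.
zpool status vmpool
```

Note that `ashift=12` pins the pool to 4K sectors, which is the usual safe choice for modern drives.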


As a bottom line, I'm going rack because of mobo sizes and chassis organization, and because I would like to get a rack cabinet to better organize the hardware (server, switch, patch panel, etc.). I don't have a rack cabinet yet, but I must start somewhere, and the switch I'll use is also rack mountable.


So please throw in your opinions: tell me what you think, the good, the bad, what you would change, any warnings about these choices, anything at all.

Thank you.
 

amalurk

Active Member
Dec 16, 2016
Since most of your VMs have SQL databases inside them that are accessed from desktops, you want the VMs all on SSDs. The total size of your VMs isn't that large, so SSDs won't be overly expensive. Do a RAID 10 of four 400GB SSDs like the Intel S3610, S3700, S3710, or S4610. Consumer drives are not for many VMs with SQL databases in them. RAID 10 is going to perform a lot better than ZFS for these databases.
 

j_h_o

Active Member
Apr 21, 2015
California, US
  • Why not just use Hyper-V Server since you're mostly running Windows guests?
  • You have very little data. I'd probably just run the VMs on pre-allocated VHDX on a single S3700 or S3610, then schedule periodic (15 min?) snapshots onto a second spinner/disk for backup. You haven't mentioned anything about SLO/what is acceptable downtime and recovery, or anything about IOPS but given the power constraints and small-business nature, I think a single disk (or RAID10 as amalurk said) might be easier/simpler for recovery, when something goes wrong. (Having an entirely separate 2nd copy of your VHDXs is better than having redundancy in your storage.)
  • You can probably just use the Community edition of Veeam Backup/Replication to accomplish most of this, for free.
 

zecas

New Member
Dec 6, 2019
Since most of your VMs have SQL databases inside them that are accessed from desktops, you want the VMs all on SSDs. The total size of your VMs isn't that large, so SSDs won't be overly expensive. Do a RAID 10 of four 400GB SSDs like the Intel S3610, S3700, S3710, or S4610. Consumer drives are not for many VMs with SQL databases in them. RAID 10 is going to perform a lot better than ZFS for these databases.
I've calculated that going for the initial set of SSDs for VM OS boot plus 4x 1TB HDDs for ZFS would put the total cost at around €400. I know that isn't much by today's enterprise storage standards, and I believe SSDs for everything would put me well above that value, but I'll check it out, as I could opt for lower-capacity SSDs; I believe 1TB of total space would be enough for my needs.

I have another old server, an HP ML350 G5 with 3x 75GB 10K SAS drives in a RAID 5 configuration. What I love about that setup is that recently one of the drives just died; I took the caddy out, put in another one with a healthy drive, and it rebuilt the RAID to a healthy status without any service interruption, which was a very pleasant experience. What I don't like about HP? The simple fact that to do a BIOS update I need a support account, which seems very wrong to me, especially with old servers.

On the other hand, it feels like hardware RAID can be a tricky solution, because if the server's controller dies, one can have big problems accessing the data on the RAID disks. RAIDZ feels like a more robust and more up-to-date solution, as ZFS has nice features and the pool can be accessed on another machine if necessary; you only need all the drives connected and an OS that supports ZFS.
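That portability boils down to `zpool import`. A sketch, assuming a hypothetical pool named `vmpool` whose drives have been moved to another ZFS-capable machine:

```shell
# With all of the pool's drives attached to the new machine:
zpool import            # scans attached disks and lists importable pools
zpool import vmpool     # imports the named pool

# If the pool was not cleanly exported on the old (dead) server,
# the import must be forced:
zpool import -f vmpool
```

No controller-specific metadata is involved; everything ZFS needs to reassemble the pool lives on the drives themselves.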

But going ZFS with SSD drives can also be tricky if there is a performance penalty when the drives start cleaning up freed blocks ahead of write operations. I'm searching for and reading more info about that, so I haven't concluded anything yet, and most probably there are safe ways to overcome this.
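One commonly suggested mitigation, worth verifying for the ZFS version in use (the `autotrim` property arrived in ZFS on Linux 0.8), is letting ZFS issue TRIM to the SSDs so the drives can erase freed blocks in the background. Sketched here with an assumed pool name:

```shell
# Have ZFS send TRIM commands to the SSDs automatically as blocks
# are freed (pool name "vmpool" is just an example).
zpool set autotrim=on vmpool

# Or run a one-off manual trim (e.g. from a periodic job):
zpool trim vmpool
zpool status -t vmpool   # shows per-device trim state and progress
```

Either way the drives get advance notice of reusable blocks, instead of discovering them during a write.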



  • Why not just use Hyper-V Server since you're mostly running Windows guests?
  • You have very little data. I'd probably just run the VMs on pre-allocated VHDX on a single S3700 or S3610, then schedule periodic (15 min?) snapshots onto a second spinner/disk for backup. You haven't mentioned anything about SLO/what is acceptable downtime and recovery, or anything about IOPS but given the power constraints and small-business nature, I think a single disk (or RAID10 as amalurk said) might be easier/simpler for recovery, when something goes wrong. (Having an entirely separate 2nd copy of your VHDXs is better than having redundancy in your storage.)
  • You can probably just use the Community edition of Veeam Backup/Replication to accomplish most of this, for free.
I thought about Hyper-V, ESXi, XCP-ng, and Proxmox. Because of license costs (in the long run, updating Hyper-V would require a Windows Server upgrade and its associated costs, and while ESXi may be free, at least the vCenter console requires a license), I narrowed it down to the last two, and then chose Proxmox because I liked the way it worked out for me.

To cut a long story short, the company had a small server: 4GB RAM, 1x HDD with all the info on it. It was configured with daily backups to an external drive, and all the zipped data was around 10GB total (and I'm allowing a margin here). The server stopped working, and I'm helping to put a solution back up.

I planned to go virtual to get the benefits of such a solution and also to make upgrading the server easier in the future. A virtual OS is easier to migrate to a new server machine than an OS installed on bare metal; reinstalling an OS and software/services is a pain and very stressful when recovering a system.

I cannot say this is a critical system. Obviously the company needs to work every day, but there is no problem waiting a couple of days in case of a disastrous failure, which in any case I want to make less likely with a virtualization solution (I plan to have at least daily VM backups for quick recovery). In the case of a server failure, work continues, as there is a lot more to do in the company than using the software that depends on those servers and SQL Servers.

Nowadays they are using the VMs I set up as a temporary solution (until a proper server arrives and I migrate them). I've been asking since day one whether the new system is behaving properly, and there have been no complaints about speed. SQL Server is serving data from regular SATA III drives (Western Digital HGST), so I'll take a look at the current IOPS to use as a baseline.

So looking at all this, I'd say the requirements are not high, or at least not as high as typical enterprise requirements today. Anything better than what they have now will be great, but I also have to decide whether refurbished hardware can give me a good environment instead of new but much more expensive hardware, and the energy costs are also making me think.


So what are your thoughts? Keep your opinions coming; I would really like to read and learn from them.

Thanks
 

zecas

New Member
Dec 6, 2019
Now, the builds I've been considering are summarized as follows:

Build-1 (already built)
mobo: Supermicro X9DRD-7LN4F
cpu: 2x Intel Xeon E5-2620 v2, 2.1 GHz, 6 cores (12 cores total)
heatsink: included, with fan
ram: 128GB DDR3 ECC (don't know speed ratings)
chassis: Supermicro 2U (I believe it's SC826BE26-R920LPB)
rack kit: included
caddies: 12x 3.5" HDD (front), 2x 2.5" SSD (back)
psu: 2x Supermicro 920W 1U redundant power supply

This build is complete, selling with 1x 120GB SSD and 2x 6TB SATA III HGST 7200 rpm drives.
The price is around €900.

  • I like: price, already working, some disks included, lots of caddies.
  • I'm worried: don't know the memory specs yet, and 2x 920W PSUs may weigh heavily on the electricity bill.


Build-2 (by components, build the server myself)
mobo: Supermicro X9DRH-7TF
cpu: 2x Intel Xeon E5-2620 v2, 2.1 GHz, 6 cores (12 cores total)
heatsink: Supermicro SNK-P0048AP4 (heatsink with fan)
ram: 128GB - Samsung M393B2G70BH0-CK0 (8x 16GB RDIMM 1600MHz 1.5V, a model on Supermicro's list of tested DIMMs for this mobo)
chassis: Supermicro CSE-825
rack kit: Supermicro MCP-290-00053-0N
caddies: 8x 3.5" HDD
psu: 1x Supermicro 560W, 80 Plus

This one will be built by me, no disks included; the price will put me at around €1100.

  • I like: I pick my components, (hopefully) all compatible (especially mobo and RAM).
  • I'm worried: the power supply may not be enough, the electricity cost of such a build, the limited caddies for what I may plan for the future (below), and of course the price, which is higher.
About the price of these 2 builds: do you think they are reasonable? I mean, the hardware is what, 4-5 years old, but is it worth the price?

Also, should I expect a big difference in the electricity bill with such a server? Having 2x 920W PSUs does not mean it will pull that much power continuously, but I have no idea of the expected consumption of such hardware.
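As a rough sanity check, cost scales with average draw, not the PSU's nameplate rating. The 150 W average and €0.20/kWh below are assumptions for illustration, not measurements:

```shell
# Estimate yearly consumption and cost from an assumed average draw.
watts=150          # hypothetical average power draw of the server
eur_per_kwh=0.20   # hypothetical electricity price
awk -v w="$watts" -v p="$eur_per_kwh" \
    'BEGIN { kwh = w / 1000 * 24 * 365; printf "%.0f kWh/year, ~%.0f EUR/year\n", kwh, kwh * p }'
# prints: 1314 kWh/year, ~263 EUR/year
```

Measuring the real draw at the wall (e.g. with a plug-in power meter) would replace the guess with a number worth budgeting on.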

Thanks.