2nd Domain Controller, or vMotion / HA


gb00s

Well-Known Member
Jul 25, 2018
Poland
I'm not convinced. I've seen bugs that were only recognized once a certain workload was running, and hosts that failed some time after 'successful' upgrades. Hosts failed altogether. All virtualized domain controllers down. Clients in the dark. Good night. Worst case, the backup systems are on the same host OS :cool: Seen it all ... 'and Mom starts sweating'.

If all DCs are virtualized, at least put one of them on a different host OS ...
 

Rand__

Well-Known Member
Mar 6, 2014
I agree, but it's still a matter of preparedness.

In the end, everything that can happen to VMs can also happen on physical boxes (of course you have an extra layer, but that's the price for increased flexibility and utilization of the hardware).

So it's not about 'if everything is virtualized, then do this or that', but rather about making sure you have as much redundancy or rebuild capability as your operational/business model or use case requires, and that if all else fails you can work around it. Planning and preparedness are the keywords here, regardless of physical or virtual implementation :)
 

gb00s

Well-Known Member
Jul 25, 2018
Poland
I get the point, and before we totally hijack the thread, I'll leave it at that. I just read 'virtualized environment, VMs with DC + backups' and thoughts of HA, and thought I might point my finger in a different direction to lower the probability of a total failure in a 'fully' virtualized environment.
 
  • Like
Reactions: Rand__

StevenDTX

Active Member
Aug 17, 2016
I can tell you that of the hundreds of domain controllers in my company, only a handful are physical, and those are in environments where we don't have full redundant VM environments.
 
  • Like
Reactions: gb00s

gb00s

Well-Known Member
Jul 25, 2018
Poland
StevenDTX said: I can tell you that of the hundreds of domain controllers in my company, only a handful are physical, and those are in environments where we don't have full redundant VM environments.
I was just referring to the opening thread and the information provided ... small net, 2 hosts ... NOT(!!!) hundreds. Btw, I love the term 'full redundant' ... sounds awesome.
 

Dev_Mgr

Active Member
Sep 20, 2014
Texas
I think you need to think of the 2nd AD server not as something for your appliance, but for your Active Directory.

If your only (single) AD server croaks on a Windows update, a worm (e.g. some crypto worm), or something else that won't allow you to rebuild said VM, you're going to have to rejoin every domain-joined system, and any trust setups will also need to be redone.

A second DC makes this much easier: from the 2nd DC, bump the old DC out of the domain and seize all FSMO roles, then build a new server with the same name and IP, join it to the domain, and promote it to a DC again.
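
For anyone who wants to script that recovery path, here's a minimal Python sketch, assuming a surviving DC (called DC2 below, a placeholder) and an admin machine with PowerShell and the RSAT ActiveDirectory module installed. The cmdlet and role numbers are Microsoft's own; the wrapper, names, and flow are illustrative only.

```python
# Minimal sketch of the "seize roles, rebuild DC" recovery described above.
# Assumes: a healthy surviving DC (placeholder name DC2) and an admin box
# with PowerShell + the RSAT ActiveDirectory module available.
import subprocess

SURVIVING_DC = "DC2"  # placeholder: name of the DC that is still alive

def run_ps(command: str) -> None:
    """Run a PowerShell command and raise if it fails."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# 1. Seize all five FSMO roles onto the surviving DC.
#    Roles 0-4 = PDCEmulator, RIDMaster, InfrastructureMaster,
#    SchemaMaster, DomainNamingMaster.
run_ps(
    f"Move-ADDirectoryServerOperationMasterRole -Identity {SURVIVING_DC} "
    "-OperationMasterRole 0,1,2,3,4 -Force"
)

# 2. Clean out the dead DC's metadata so a replacement with the same
#    name/IP can be promoted cleanly. On current Windows Server, deleting
#    the failed DC's computer object in ADUC/ADAC triggers this cleanup;
#    'ntdsutil' metadata cleanup is the classic manual route.
```

After that, the replacement server is joined and promoted as usual (e.g. with Install-ADDSDomainController), exactly as described above.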

Now, if you're only running this DC for that one appliance and nothing else, then you'd only have to reconnect that appliance to a rebuilt DC/domain, which doesn't take all that long.
 
  • Like
Reactions: StevenDTX

vangoose

Active Member
May 21, 2019
Canada
Having at least 2 DCs will save you hours in some situations.

Keep it simple, stupid: when an app has built-in HA capability, use that instead of lower-level infrastructure HA.
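
To make the AD case concrete: the app-level HA here is multi-DC replication, and it only saves you hours if it's actually healthy. A minimal sketch, assuming a DC or RSAT machine where Microsoft's repadmin tool is on the PATH; the Python wrapper itself is purely illustrative.

```python
# Tiny health check for AD's built-in HA (multi-DC replication).
# repadmin /replsummary is a standard Microsoft tool; wrapping it in
# Python is just for illustration.
import subprocess

def replication_summary() -> str:
    """Return repadmin's domain-wide replication summary as text."""
    result = subprocess.run(
        ["repadmin", "/replsummary"],  # per-DC fail/total counts
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # The 'Fails' column should be 0 for every DC; anything else means
    # the second DC you are counting on may not have current data.
    print(replication_summary())
```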
 
  • Like
Reactions: StevenDTX

ARNiTECT

Member
Jan 14, 2020
I've decided to keep primary & secondary DCs; as you have all discussed, there are causes other than virtual/physical hardware faults that could bring a DC down. Fault Tolerance, HA, etc. would still be useful if they were more accessible.

As my ESXi All-in-One hosts have VMs stored on a local SSD that must start before the 'ZFS secure' NFS shares are available, I'm considering these options:
- accept the risk; the local SSD is a durable Optane 900P, so keep backups and another ready-to-go SSD
- deploy a 2nd OmniOS/napp-it VM with a mirror of SSDs; the OmniOS VM is a quick rebuild from a template
- a SATA RAID card with a mirror of SSDs (PCI only, no PCIe slots left)
- some sort of VM-level software mirror, where ESXi can use storage across 2 SSDs?
 

vangoose

Active Member
May 21, 2019
Canada
ARNiTECT said: As my ESXi All-in-One hosts have VMs stored on a local SSD that must start before the 'ZFS secure' NFS shares are available, I'm considering these options: ...
I have a similar ESXi host running the storage server, a DC and vCenter, plus another 3-node vSphere cluster for everything else, including 2 additional DCs. There's another vCenter in the 3-node cluster to manage this standalone ESXi, which just makes patching, backups, etc. easier.
- DC1 - boot order 1
- storage server - boot order 2
- vCenter - boot order 3
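
For a standalone host like that, the autostart order can be scripted with pyVmomi (VMware's official Python SDK); a minimal sketch follows. The host address, credentials, VM names, and delays are placeholders/assumptions, not the actual values from this setup.

```python
# Sketch: set ESXi autostart order DC1 -> storage -> vCenter via pyVmomi.
# All connection details and VM names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.lab.local", user="root", pwd="***",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
# Standalone ESXi: the inventory contains exactly one HostSystem.
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

# DC1 first so AD/DNS is up, then the storage VM that serves NFS/iSCSI,
# then vCenter on top of both.
boot_order = {"DC1": 1, "storage": 2, "vCenter": 3}

power_info = []
for vm in host.vm:
    if vm.name in boot_order:
        power_info.append(vim.host.AutoStartManager.AutoPowerInfo(
            key=vm,
            startOrder=boot_order[vm.name],
            startDelay=120,                # seconds between VMs (a guess)
            startAction="powerOn",
            stopAction="guestShutdown",
            stopDelay=-1,                  # -1 = use the system default
            waitForHeartbeat="systemDefault",
        ))

spec = vim.host.AutoStartManager.Config(
    defaults=vim.host.AutoStartManager.SystemDefaults(enabled=True),
    powerInfo=power_info,
)
host.configManager.autoStartManager.ReconfigureAutostart(spec=spec)
Disconnect(si)
```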

Local storage is a 512GB Samsung 970 Pro M.2. I don't have an extra disk slot for SATA disks; the boot disk is a SATADOM, and 8x10TB SAS disks are in the case, connected to the onboard SAS 3008 HBA, which is passed through to the storage VM along with an HGST SN260, an Optane 900P, and 2 dual-port NICs.

Why do I need 4 NICs in the storage server? 2 in an aggregate for the front end (MGMT/CIFS/NFS), and 2 for iSCSI using MPIO.

All of the critical data (AD, vCenter, storage config, etc.) is backed up to another physical backup server. That is the only dedicated physical server I have now; everything else is consolidated and virtualized, except workstations.