LGA3647 ESXi build to host my Oracle Apps/Databases


BennyT

Active Member
My new Eaton 5PX1500RTNG2 should arrive tomorrow. I ordered it from Provantage.

While I'm waiting for it to arrive, I downloaded the free IPM v1.7.255. The release notes were dated Dec 2021, and EOL support for IPM v1 is Dec 31, 2023 according to the Eaton IPM FAQ.

To implement IPM in a VMware vSphere environment, you install IPM as a preconfigured appliance VM in the form of an .ova file, deployed as a guest VM on the ESXi host.
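If you'd rather script the .ova deployment than click through the host client's deploy wizard, VMware's ovftool can push the appliance straight to a host. A minimal sketch, assuming hypothetical host, datastore, and port group names:

```python
import subprocess

# Hypothetical target; adjust the ESXi host, datastore and port group
# to whatever your environment uses.
ESXI = "vi://root@esxi01.example.lan/"

subprocess.run(
    [
        "ovftool",
        "--acceptAllEulas",
        "--name=IPM",              # display name of the new guest VM
        "--datastore=datastore1",  # target datastore on the host
        "--network=VM Network",    # port group for the appliance NIC
        "--diskMode=thin",         # thin-provision the appliance disk
        "ipm.ova",                 # the downloaded IPM appliance
        ESXI,
    ],
    check=True,  # raise if ovftool exits non-zero
)
```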

IPM v1.7 works great (as far as I can tell without a UPS to test with yet), but one issue is that the vCenter plug-in it comes with is not HTML5, so the plug-in is incompatible with the HTML5 variants of the vCenter vSphere Client. Without a compatible plug-in you cannot fully use IPM connected to an HTML5 vCenter. But you can create connections directly to the ESXi hosts without going through vCenter (no plug-in needed) for graceful VM and host shutdowns.

Eaton also has the newer IPM v2.4, which does come with an HTML5 plug-in compatible with the HTML5 vCenter. But IPM v2+ is not free: you must pay a license fee for each "node" being managed, plus a maintenance fee.

*A NODE is defined as a UPS, an ESXi host, or a non-hypervisor computer. I may go to v2 later, but for now I'm going to try the free software.

v1.7 is downloadable for now, but soon (I think Dec 2022) Eaton plans to retire the website hosting the .ova download and merge it into the newer Eaton website. I'm unsure whether the IPM v1.7 download will still be accessible after that cutover occurs, so get it now while you can (the download URL is in my previous post).

As for IPM v1.7, I'm still experimenting with it and can't do much until the UPS arrives. But I can create connections to the ESXi host, set up SMTP email to test alerts from IPM, etc. I'll know more tomorrow, and I'll probably post a follow-up here sometime next Monday.

Have a good day
 

BennyT

Active Member
The Eaton 5PX1500RTNG2 arrived yesterday. We installed it, and while I'm still configuring, I've completed most of the setup.

Here are the three Eaton software packages involved in my setup:

- Installed and configured the Eaton IPM v1.7 .ova as a VM on the ESXi host.
https://powerquality.eaton.com/supp...adg_Q_QRequired=&site=&menu=&cx=179&x=10&y=14
IPM is the primary software for managing alerts, events, etc. from the UPS. It can connect to and see the UPS, the hypervisors, and the physical servers (if IPP is installed on those physical servers).

- Installed and configured the IPP .rpm package on the physical Linux servers (not virtualized):
https://powerquality.eaton.com/supp...adg_Q_QRequired=&site=&menu=&cx=179&x=15&y=12
IPP is only needed on standalone physical servers (servers that are not hypervisor hosts or VMs). It has its own web login for tying that server to a UPS on the network.

- Configured the optional Eaton "Network-M2" adapter card via the adapter's web page.
*Without the optional network card you can still use the included USB or serial cable to connect one server directly to the UPS for communicating actions and events between the UPS and that server. If that server is networked, I think other servers on the same LAN can receive trickle-down shutdown instructions from the first server... but I'm unsure of that, which is why I went with the optional network card installed in the UPS.

I'll post some screenshots with detailed instructions a little later, but for now I just wanted to give a quick success report: I was able to test automated shutdown of the ESXi host and guests from the UPS during a power outage.

I'm still working on shutting down the standalone physical servers and the network devices (router, switch, modem). I'll post more after I get this all figured out and finalized. Very happy with the purchase.
 

BennyT

Active Member
I was unable to upload my Word doc with my detailed notes on how I set up IPM, IPP, and the UPS Network-M2 web interface configurations (the file was too large). But here are a few setup screenshots showing how I configured IPM v1.7.

The screenshots show how to configure IPM to gracefully shut down the ESXi host when the UPS switches to battery.

- Timing: this config allows the ESXi host to run for 15 minutes (900 seconds) on UPS battery. After those 15 minutes, IPM puts ESXi into maintenance mode and shuts down the guest VMs, and then ESXi itself shuts down. A 10-minute window (600 seconds) begins after the 15 minutes on battery: if the ESXi host has not been able to shut down the guest VMs within those 10 minutes, I assume a VM is unresponsive and the VMs get powered off. Then the ESXi host is gracefully shut down. You will see the 900s and 600s in the screenshots below, which may better explain what those timers represent.
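To make those two timers concrete, here is the sequence they drive, written out as rough Python. This is my paraphrase of the behavior, not Eaton's implementation, and the `host` object is a hypothetical wrapper around the ESXi APIs:

```python
import time

ON_BATTERY_SHUTDOWN_TIMER = 900  # IPM: ride the battery 15 min before acting
GUEST_SHUTDOWN_TIMEOUT = 600     # IPM: give guests 10 min to stop gracefully

def on_battery_event(host):
    """Rough paraphrase of the sequence the two IPM timers drive."""
    time.sleep(ON_BATTERY_SHUTDOWN_TIMER)    # 1) stay up 15 min on battery
    host.enter_maintenance_mode()            # 2) begin evacuating the host
    host.shutdown_all_guests()               # 3) graceful guest OS shutdown
    deadline = time.monotonic() + GUEST_SHUTDOWN_TIMEOUT
    while host.guests_still_running() and time.monotonic() < deadline:
        time.sleep(10)                       # 4) poll for up to 10 more min
    for vm in host.guests_still_running():
        vm.power_off()                       # 5) hard power-off stragglers
    host.shutdown()                          # 6) finally shut the host down
```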

(IPM configuration screenshots attached)

I've also downloaded and installed the free IPP v1.69 (IPP as opposed to the IPM shown above). It works in a similar fashion to IPM, but instead of configuring an ESXi host for maintenance mode and shutdown, you simply tell IPP when to shut down the Linux host it is installed on. I may upload screenshots of that later too, but it's pretty self-explanatory once you get it installed. IPM, shown in the screenshots above, was the more complex part to configure.

It's all working great. I've tested outages by unplugging the UPS from the wall.
- After 15 minutes on battery:
- the ESXi host goes into maintenance mode, shuts down the VMs, and then shuts itself down (configured in IPM)
- the physical servers with IPP installed also run on battery for 15 minutes, then shut down (configured in IPP)
- After 25 minutes, ALL power output receptacles have their power turned off (configured in the UPS Network-M2 web interface; that software piece doesn't have a fancy acronym or name)

I may reconfigure the threshold timers to trigger shutdown based on the percentage or time remaining on battery rather than how long it's been on battery. But so far I like the way it's set up: run on battery for 15 minutes and THEN trigger shutdowns.
 

BennyT

Active Member
Upgraded all storage in the ESXi host from HDD to flash. No more HDDs. I've spent the last two days moving all VMs from the HDDs to the SSD datastores.

According to my UPS I'm drawing at least 120W less now at idle. And the VMs are so much snappier; Oracle DB I/O is great (no contention, and performance is really good).
 

BennyT

Active Member
Just ordered a little unmanaged five-port 10GBASE-T switch, the TRENDnet TEG-S750.

Now I'll be able to use those 10GBASE-T NICs that came on the Supermicro mobo. The reason I'm doing this now is that I also just set up an additional backup repository server using a Supermicro 5028D-TN4T mini-tower (pictured below, lower right). That has a Xeon D-1541 8-core on an X10SDV-TLN4F mobo, which also has two 10Gb copper NICs (plus two 1Gb NICs). I loaded it with SSDs. It's overkill as a backup server.

The 10Gb switch hasn't arrived yet, but when it does I'll maybe post how it worked out. (photo attached)
 

Rand__

Well-Known Member
Nice - what drives did you use?
Still couldn't bring myself to spend a fortune on SSDs for backup purposes... O/c one or two high-capacity drives is no problem, but 24+? That hurts ;)
 

BennyT

Active Member
Hi Rand__

Here's a summary of my current storage devices:

In the NORCO chassis with the X11DPI-NT mobo and two Xeon Gold 6130s:
- Ten 4TB Crucial MX500 SSDs, direct attached (ESXi datastores). I set up each drive as its own datastore.
- One 80GB Intel SSD for the ESXi OS.
- An ASUS Hyper M.2 x16 V2 PCIe card using mobo bifurcation, with two Sabrent Rocket 2TB NVMe SSDs on the card. I may move this PCIe card to the Supermicro minitower (if it fits). ESXi 7 has an extremely limited compatibility list, with only a few PCIe NVMe M.2 SSDs on it, and none of them are affordable or consumer grade. Most consumer PCIe M.2 SSDs don't carry the firmware feature required by the ESXi v7 driver. Fortunately the Sabrents did (I was lucky in that selection), but then I added two more from a different brand that wouldn't work, so I moved those into laptops for now.

In the Supermicro minitower with the Xeon D-1541: I was able to use two 32GB ECC RDIMM sticks I had in my closet, which was a nice surprise because I'd forgotten I had them.
The minitower has:
- One 1TB Samsung 870 EVO internal as the OS drive (Oracle Linux 7.9)
- Four 4TB Samsung 870 EVOs in the four hotswap trays (using four Felink 2.5-to-3.5 adapters [metal construction]) - backup repositories

And finally, a little HP minitower that is 12-13+ years old (I only had to replace its PSU a couple of months ago) with a dual-core 2.5GHz Pentium E5200:
- One 1TB HDD (RHEL 6.3)
- One 4TB HDD - backup repository
- Two 2TB HDD - backup repositories

I'll continue to use the HP minitower and rotate HDDs in it as a backup repository, but I'll use it for backing up the physical servers and use the Supermicro for the VMs.

Here is my pile of HDDs that I use to rotate backups in the old HP. I keep a spreadsheet with disk labels, dates of rotation, and which VM backup jobs or physical machines are on each disk. I should write an Oracle APEX app to keep track of that, but the spreadsheet works.
(photo attached)


Here's some photos of the Supermicro minitower.


(photos attached)



I bought the Supermicro minitower 5028D-TN4T from MITXPC and they were great to purchase from.

Most other US online retailers (Wiredzone, Thinkmate, NextWarehouse, etc.) will dropship from Supermicro. Unfortunately, because of supply chain issues, Supermicro says they will no longer ship barebones systems unless they're bundled with CPU and memory or disks; that restriction includes anything with a chassis and PSU that doesn't have RAM installed. And because a dropship from Supermicro is a bottleneck, it takes 4-8 weeks before it even begins to ship!

Fortunately, MITXPC has a lot of Supermicro stock in their own California warehouse. They gave me the latest version of the minitower chassis with a 350W 80 PLUS Gold PSU and front USB 3 ports (the normal bundle uses the older chassis with a 250W 80 PLUS Bronze PSU and USB 2 ports).

MITXPC shipped it out the day I ordered it on a Friday and it arrived the following Tuesday.

It's a pretty cool Mini-ITX minitower setup. The CPU and mobo are circa 2016, but the specs are excellent and overkill for a backup repository. I love it.

*Also a little funny mistake... I was checking my Amazon order for the 10Gb 5-port BASE-T switch. I wasn't wearing my glasses and was on the porcelain throne (if you understand what I mean), looking at the Amazon tracking info because the switch was to arrive this afternoon. I thought I clicked "see detailed tracking", but without my glasses I wasn't sure. I tapped "CANCEL ORDER DELIVERY" by mistake, and there was no confirmation! Amazon instantly cancelled the order just before it was to be put on the truck "out for delivery". I screamed, then accepted the fact that it couldn't be reversed. So now I'm waiting until Monday, and then I'll order it again. Good grief. LOL. Now I'm researching other possible switches. I might rethink which switch to order: with the MikroTik with four SFP+ and one RJ45, I'd need a couple of copper SFP+ 10Gb transceivers, and it would probably be a little less expensive than the unmanaged TRENDnet TEG-S750 I was expecting.
 

Rand__

Well-Known Member
Ah ok, older consumer drives are an option o/c...
They'd prolly suffice for me too, but I simply prefer enterprise drives... trying to buy 863a 3.84s whenever I see one cheap, but haven't gotten far yet ;)

Else it seems your dataset is small enough to not need an excessive amount of storage; that's definitely easier then :)
 

BennyT

Active Member
I figure with the new UPS set up for gracefully shutting down all my devices to protect from power loss, and with nightly backups of everything, I can skip the expense of enterprise drives with power-loss capacitors. *Famous last words. Worst case, if there were SSD corruption from a power loss, I'd restore vCenter and any or all VMs from my backups, depending on which datastore disk became corrupted (each 4TB disk is its own datastore). The only enterprise SSD I currently have is the $20 80GB Intel SSD I got from eBay, which I use for the ESXi OS.

I don't do a lot of writes constantly. I'm more about proof-of-concept development testing: Oracle DB and application installations and upgrades, POCs of my dev ideas, etc.
 

Rand__

Well-Known Member
Re backup & restore -
How long is your restore time? Have you tested actually restoring things? What are the prereqs (are your backup tools installed on VMs)?

In the end it will depend on
1. What your acceptable downtime is
2. If you can live with losing a day's worth of data/progress/whatever

my main issue is the expectation re #1, not actual need, I guess ;)
 

BennyT

Active Member
Re backup & restore -
How long is your restore time? Have you tested actually restoring things? What are the prereqs (are your backup tools installed on VMs)?

In the end it will depend on
1. What your acceptable downtime is
2. If you can live with losing a day's worth of data/progress/whatever

my main issue is the expectation re #1, not actual need, I guess ;)
Here are the backups and restores I've tested, all fully documented so I can repeat recovery easily. It takes as long as it takes, but here's what I do...

- ESXi config restore after a fresh installation has been tested very thoroughly, especially during the last 6.7-to-7U3 upgrade. I had to roll a messed-up ESXi 7 upgrade back to v6.7 about 20 times in one day until I figured out how to fix the v7 upgrade problem I was having. Restoring from an ESXi config backup takes only as long as a fresh install plus running the restore command. Rebuilding ESXi from scratch is more time consuming, obviously, but I have screenshots of EVERY ESXi and vCenter setup page, including network vSwitches and attribute/parameter settings. I've actually had to rebuild it the long way once, because I had a bad config backup one time.
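For reference, the ESXi config backup/restore I'm describing boils down to two vim-cmd calls on the host. A minimal sketch driving them over SSH (the hostname is a placeholder; the restore requires the same ESXi build and reboots the host):

```python
import subprocess

HOST = "root@esxi01.example.lan"  # placeholder hostname

def esxi(cmd: str) -> str:
    """Run one command on the ESXi host over SSH and return its stdout."""
    out = subprocess.run(["ssh", HOST, cmd],
                         check=True, capture_output=True, text=True)
    return out.stdout

# Back up the running config; the host replies with a one-time URL
# from which configBundle.tgz can be downloaded.
print(esxi("vim-cmd hostsvc/firmware/backup_config"))

# Restore flow after a fresh reinstall (same ESXi build required):
# esxi("vim-cmd hostsvc/maintenance_mode_enter")
# ...copy the saved bundle to /tmp/configBundle.tgz on the host...
# esxi("vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz")
# The host reboots itself as part of the restore.
```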

- The vCenter Server Appliance (VCSA) is backed up nightly to an SMB/Samba fileshare on the minitower backup server. Before I scheduled those VCSA backups, it actually crashed one night (before I had a UPS) and its database was corrupted. In that case I completely reinstalled VCSA from scratch. There wasn't really a lot to do to rebuild that VM, but it did take me a whole afternoon. Now, with nightly VCSA backups, I'd redeploy the vCenter appliance, and during that rebuild it asks whether I'd like to restore from the backup config. I've tested this a few times since upgrading to VCSA 7U3, and it's not a long process.

- ESXi guest VMs are backed up nightly using Veeam. I restore VMs pretty regularly via Veeam; it's not uncommon for me to need to restore a VM once or twice a month because of a failed Oracle DB or application upgrade step that I missed, and it's sometimes just easier to roll back to the previous night using a Veeam VM restore.

- Physical machines, such as the physical Linux servers (the HP and Supermicro minitowers), are backed up bi-monthly using Clonezilla: booting from a Clonezilla USB disk, I back up the boot drive partitions to rotated backup HDDs that I put into an HDD "toaster" over eSATA or USB 3. These servers don't change very much; the only things that change on them are the Veeam backup repositories on the hotswap disks, and obviously I don't back up the backups. I hotswap the backup repository filesystems on those physical machines, and they are rotated out (manually, by me) on a schedule. I'd like to switch to scheduled physical-machine backups using Veeam eventually.

- The Veeam Backup & Replication software that drives all of the ESXi VM backups itself runs in a Microsoft Windows VM on the ESXi server.
This is the bigger potential problem, and it takes the longest to recover from. The issue: if the ESXi datastore SSD that VM resides on should die or corrupt, we cannot restore the other VMs, and how do we recover Veeam when it itself is NOT backed up?

The solution I've tested: I have nightly Veeam configuration backups going to the minitower backup repositories. This is not a full VM backup, just the config (much like an ESXi configuration backup); it contains the backup jobs and schedules and awareness of the backup repository servers, etc. Restoring that config is conceptually not much different from restoring an ESXi config, but it's more time consuming, because I have to reinstall Windows in a VM first, then fresh-install Veeam v11 in that VM, then restore the Veeam configuration from the nightly config backup, rebuilding my backup job schedules. Then I tell the freshly installed and reconfigured Veeam to rescan the backup repositories on the minitowers and restore VMs to ESXi as needed.

*I tested this Veeam VM rebuild process early on, when I first began using Veeam v9 with ESXi. Then I tested it again just recently while upgrading from Veeam v9.5 to v11: I created a new Windows VM, installed Veeam v11 in it, restored the Veeam configuration, and test-restored a few VMs. This takes a full day if the Veeam VM itself crashes. A better solution would be a standalone physical Windows machine, but who wants to do that? And then I'd still have to back that up too. I may go that route eventually, though.

Performance of a Veeam backup of a 600GB VM...
- A full backup of a 600GB Oracle Linux VM with Oracle EBS R12.2.11 and DB 19c takes about 45 minutes to an hour. I'll shorten this with the new 10Gb switch.
- An incremental nightly backup of that same VM takes about 3-5 minutes.

Performance of a Veeam restore of a 600GB VM...
- A full restore can take about 2 hours. I'll shorten this with the new 10Gb switch.
- A full restore using the "Quick Restore" feature from the most recent backup takes only 3 seconds. I don't know exactly how Veeam Quick Restore works, but it's awesome; I guess it compares changed blocks, not sure. It's super sweet.
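As a sanity check on those numbers, the effective source-side rate of the full backup sits right around what a 1Gb link can carry (Veeam's compression would explain any overshoot on the wire), which is why the 10Gb switch should shorten it. My back-of-the-envelope math, nothing from Veeam:

```python
# 600 GB moved in ~1 h (full backup) vs ~2 h (full restore)
def effective_gbit_per_s(gigabytes: float, hours: float) -> float:
    return gigabytes * 8 / (hours * 3600)

print(f"full backup : ~{effective_gbit_per_s(600, 1.0):.2f} Gbit/s")  # ~1.33
print(f"full restore: ~{effective_gbit_per_s(600, 2.0):.2f} Gbit/s")  # ~0.67
```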

My source code, development, and documentation all reside on my Windows workstations. My development code gets installed into the VMs (that's how I test it in my Oracle VMs), but I keep the code, programs, documents, etc. on the workstations. This is what I'd consider my most valuable data and the most critical to be able to recover if it is damaged. I use an SVN Subversion service installed on a 200GB Oracle Linux VM. That Subversion server VM is the repository: it contains my version-controlled source code, programs, and documentation, and helps me keep track of other users' revision changes, branches, etc. as they edit their working copies. The Windows workstations use TortoiseSVN, which connects to the SVN repository and handles syncing, checking out, updating, and committing the local working copies against the repository. This keeps all of our workstations (the other users' working copies) in sync and under version control. *It's a small family business, really just my dad, mom, sister, and myself, but we need to keep our work in sync across our workstations, and we do that with SVN and TortoiseSVN.

The SVN repository in the VM is synced nightly to a mirrored SVN repository on the HP minitower. If my SVN repository VM should die, my first restore attempt would be from the previous night's Veeam backup. If that doesn't work, I'd cut over to the mirrored SVN repository on the HP minitower; we'd redirect our TortoiseSVN working copies to the HP's IP address and port, and it's pretty seamless. Then I'd rebuild the Subversion VM and sync it back from the HP.
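For anyone curious, a mirror like that can be kept current with the stock svnsync tool. A minimal sketch of the nightly job, with placeholder repo URLs (the mirror must start empty and needs the usual pre-revprop-change hook so svnsync can write revision properties):

```python
import subprocess

SOURCE = "svn://svn-vm.example.lan/repo"        # SVN VM (placeholder URL)
MIRROR = "svn://hp-minitower.example.lan/repo"  # mirror on the HP (placeholder)

# One-time setup: point the empty mirror at the source repository.
# subprocess.run(["svnsync", "initialize", MIRROR, SOURCE], check=True)

# Nightly job: replay any new revisions from the source into the mirror.
subprocess.run(["svnsync", "synchronize", MIRROR], check=True)
```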

Workstations are backed up using Acronis. Not on a schedule, though; not much changes on the workstations other than our development work, which is already backed up to the SVN repository, covered by the Veeam backups, and mirrored to other servers.

*Notice I don't use the "cloud" for any backups. I did use Dropbox for a number of years to sync my SVN working copies from the workstations in case all other forms of local recovery failed, but I no longer do that. I don't like the "cloud" very much; the concept means trusting another entity, and the "cloud" is just fancy code for a stranger you pay on the internet. I do need to work on a remote backup solution, though; I just haven't figured it out yet. Our SVN working copies (development code, documentation, etc.), our most valuable asset, are synced to each user's workstation, and each workstation is in a different home. If my home flooded and everything were damaged, I'd lose all the server rack data (VMs, ESXi, backups, everything), but I could restore the most valuable assets from another remote workstation's SVN working copy. I need to give remote backup more thought.
 

BennyT

Active Member
I'll also mention that I'm not using any RAID, so I don't have to worry about a drive error corrupting an array.
 

BennyT

Active Member
This week I set up the little TRENDnet TEG-S750 five-port 10GBASE-T switch.

I ran some tests using Veeam to back up VMs from the big chassis to the minitower backup repository over the new switch and the 10Gb NICs (two 10Gb NICs on the ESXi host and two on the minitower).
(screenshots attached)

The TEG-S750 is an unmanaged switch, so I couldn't bond two NICs together. What I did instead was set up an additional vSwitch in ESXi tied to the second 10Gb physical NIC in the host, then set up two Veeam proxy VMs, one on each vSwitch (and thus each physical NIC), giving two lanes of backup traffic toward the target minitower.
(screenshot attached)
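For the record, the extra vSwitch amounts to a few esxcli calls. A minimal sketch over SSH, with placeholder host, uplink, and port group names:

```python
import subprocess

HOST = "root@esxi01.example.lan"  # placeholder

def esxi(cmd: str) -> None:
    subprocess.run(["ssh", HOST, cmd], check=True)

# Second standard vSwitch, backed only by the second 10Gb uplink,
# plus a port group for the second Veeam proxy to attach to.
esxi("esxcli network vswitch standard add --vswitch-name=vSwitch1")
esxi("esxcli network vswitch standard uplink add "
     "--vswitch-name=vSwitch1 --uplink-name=vmnic1")
esxi("esxcli network vswitch standard portgroup add "
     "--vswitch-name=vSwitch1 --portgroup-name=Backup-Lane-2")
```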

The problem I then faced was how to set up the target minitower running Oracle Linux 7.9 (a physical machine, not virtualized under ESXi) to accept those two lanes of traffic through its two physical NICs connected to the TRENDnet switch. I can't bond them, since it's not a managed switch, and I can't set up virtual switches, since it's not ESXi. So for now the traffic is funneled through a single 10Gb NIC on the minitower.

But how was I seeing 11Gbps in ESXi if the traffic can only fit through a single 10Gb NIC at the target? I assume the vCenter chart wasn't totally accurate.


You can see the new switch on top of the mini tower. It's not the cleanest cable management. I think I'd need conduit or cable management arms or hooks to clean it up. I'm very happy though as you can see.

(photos attached)
 

Rand__

Well-Known Member
If you have 2 proxies on 2 subnets, why won't the backup target run on two NICs?
 

BennyT

Active Member
If you have 2 proxies on 2 subnets, why won't the backup target run on two NICs?
That might be my problem: the two proxies are on the same subnet. I'll try putting the second proxy on a separate subnet from the first. Then I can set up the backup target with its two NICs on those same two subnets. I'm learning networking as I go.
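In case it helps anyone following along, splitting the target's two NICs across the two subnets on Oracle Linux 7.9 can be done with nmcli. A minimal sketch with made-up connection names and addresses:

```python
import subprocess

# Placeholder connection names and subnets; match them to your own NICs.
LANES = {
    "10gbe-1": "10.10.10.20/24",  # backup lane 1
    "10gbe-2": "10.10.20.20/24",  # backup lane 2
}

for con, addr in LANES.items():
    # Give each connection a static address on its own subnet...
    subprocess.run(["nmcli", "con", "mod", con,
                    "ipv4.method", "manual",
                    "ipv4.addresses", addr], check=True)
    # ...then bring it up so the change takes effect.
    subprocess.run(["nmcli", "con", "up", con], check=True)
```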
 

BennyT

Active Member
I've corrected the network issues. I had a few problems, but I think I've worked them out.

I may post an image of my vCenter network vSwitches later (I'm on my phone now), but basically I set up two subnets, and the Veeam proxies can see the target backup minitower's two IPs on those same two subnets.

I ran 9 VM full backup jobs simultaneously from Veeam. The source VM disks are hot-added to the Veeam proxies for reading and compression, then sent to the target backup minitower.

5 of the VM backup jobs went through vmnic0 to the first NIC IP of the target minitower; the other 4 went through vmnic1 to the second NIC IP.

The max throughput was 13.2 Gbps. The switch was warm but not hot; it has good heatsinks inside and no fans.

I wanted to push it even more by adding more simultaneous backup jobs, but at nine jobs I was close to the 10-VM license limit of the Veeam free edition. I could've added VeeamZIP backups to try to saturate the network even more (there's no limit on the # of VeeamZIP backups), but I didn't think of that till just now.

Those were just proof-of-concept tests, and I'm satisfied with the little switch and the network performance. Next time I might get a managed switch to do link aggregation / NIC bonding, although that would mean setting up a vSphere Distributed Switch (VDS) for link aggregation. I think I've read that VDS can be difficult or risky, but I can't remember why.

Very happy with my 10Gbps. Now I need to find more uses for it.
 

Rand__

Well-Known Member
Nice.
Sounds like I should look into setting up proxies too... I never really dug into Veeam, just set up a single POC full backup job that I ran whenever I remembered to. At least I've scheduled it now ;) Prolly leaving a lot of optimization opportunities on the table...
 

BennyT

Active Member
It's pretty cool. I'm liking Veeam a lot and learning more about it with the recent proxy setups and testing I'm doing.

I also downloaded and installed another Veeam Backup Server Community Edition so I could get an additional 10 VMs to back up. It's a pretty sweet setup, and I also figured out how to back up physical servers using Veeam. Previously I was using Clonezilla, which is great, but very "hands-on" every time I wanted to make a clone image or restore: I had to drag out the external HDD toaster, connect it via USB or eSATA, and boot from the Clonezilla USB thumb drive. With Veeam I can schedule physical server backups and restore everything from my keyboard. I did have to install a few Veeam agent .rpm packages on the physical Linux servers so Veeam can communicate with and back up those servers.
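The Linux-side agent install amounted to adding Veeam's yum repo and pulling the agent package. Roughly this (the release rpm filename/version is illustrative, not exact):

```python
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# Veeam ships a small release rpm that registers their yum repo;
# the exact filename/version below is illustrative.
run("rpm", "-ivh", "veeam-release-el7-1.0.8-1.x86_64.rpm")

# The 'veeam' package pulls in the agent plus the veeamsnap kernel
# module used for snapshot-based backups of a live filesystem.
run("yum", "-y", "install", "veeam")
```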
 

BennyT

Active Member
Installed another Veeam server and gained another 10 allotments for VMs, physical servers, and laptops or workstations to back up (3 workstations only take up 1 of the 10 allotments).

I'm finally free from having to use Clonezilla and Acronis for backing up physical servers and workstations. Those are great tools; they just weren't automated enough and required too much manual intervention. I'd sometimes go a long time without backing up physical servers or laptops because it wasn't convenient (although my data partitions were always backed up regularly). I now have automated Veeam backups of three workstations/laptops and two physical Linux servers in my home, going to one of my 7 backup repositories. So nice.


I'm also researching the VMware compatibility guide for M.2 NVMe SSDs that work with ESXi 7U3. The Samsung PM983 with power-loss capacitors is now within my grasp price-wise, and it's on the compatibility list, unlike the Sabrent consumer SSDs I had. I'd like to load up the bifurcation card with four PM983 M.2 drives.

Also, ESXi v8 is arriving soon, I think Oct 11th. Not going to jump on that for a while, because I just went to v7 this year. That's it for now.