lga3647 esxi build to host my Oracle Apps/Databases


itronin

Well-Known Member
Nov 24, 2018
1,284
852
113
Denver, Colorado
Hi itronin,

I just noticed that the smaller version of that switch, the Mokerlink 8-port SFP+ managed switch, has a review on their website saying the reviewer is using twinax (DAC) to uplink to their router. The 12-port version can probably do the same. Really excited to hear your impressions after you test it. I'm putting the 12-port SFP+ switch part # here for my future reference: 10G120GSM. I can't believe nobody has reviewed it yet.
ohhh - nice pickup! I'm coming back from this trip a day early, so hopefully Friday I can do this. I've got some R86S's and DACs, as well as an X10SDV to test (10G-BaseT transceiver).

This does appear to be an actively cooled device.

I'll post some GUI screenshots too. Do you want them in your thread, or should I start a new one in the networking section? I don't want to crap on your thread.
 
  • Like
Reactions: BennyT

BennyT

Active Member
Dec 1, 2018
169
49
28
Hello,

Yesterday and today I spent my time upgrading from vSphere ESXi 7.0u3k to 8.0u2b. Very exciting stuff, especially now that Broadcom has taken over (since May 2024). Broadcom's support portals took me a full day to learn: how to log in using my old VMware support account, and how to navigate and find things across a mish-mash of VMware KB web pages and Broadcom support portal pages.

Fortunately, being a VMUG user and not having purchased vSphere from VMware directly means I don't really need to download anything from the Broadcom portal. I only need the ESXi .iso and VCSA .iso, which I get from VMUG.

First I upgraded my vCenter Server Appliance (VCSA) to v8.0.2.00200 using the VCSA .iso. That endeavor took me a whole day to complete. Then I had to configure my Veeam Backup servers to communicate with the new VCSA server. I ran into a few certificate issues, but those were resolved with easy solutions from KB articles.
1724442686155.png

The ESXi upgrade was the easiest part. I shut down all the guest VMs on that host, then put it into Maintenance Mode. Then I ssh'd into the ESXi host command line and ran this:

[root@esxi01:~] esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

1724441539049.png

I didn't use the .iso from VMUG for the ESXi upgrade because I used VMware's online build repo instead.
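If the depot's profile listing is long, it can be narrowed to the target release; a sketch using the same esxcli subcommand and depot URL as above (the grep pattern is just the release string I was after):

```shell
# List only the 8.0U2 image profiles from VMware's online depot
esxcli software sources profile list \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  | grep ESXi-8.0U2
```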

Then I ran the actual upgrade command:
[root@esxi01:~] esxcli software profile update -p ESXi-8.0U2b-23305546-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

[HardwareError]

Hardware precheck of profile ESXi-8.0U2b-23305546-standard failed with warnings: <CPU_SUPPORT WARNING: The CPU on this host may not be supported in future ESXi releases. Please plan accordingly. Please refer to KB 82794 for more details.>

Apply --no-hardware-warning option to ignore the warnings and proceed with the transaction.

Please refer to the log file for more details.

[root@esxi01:~]

Seems my hardware is getting old? Nahh, that's crazy. Just ignore that.

We'll proceed by rerunning the command with the --no-hardware-warning option.


[root@esxi01:~] esxcli software profile update -p ESXi-8.0U2b-23305546-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml --no-hardware-warning

Update Result

Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.

Reboot Required: true

Success!

ESXi Host version:
1724442939262.png
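For reference, the same version check can be done from the shell instead of the GUI (a sketch; run on the host after it comes back up from the reboot):

```shell
# Both are stock ESXi shell commands
vmware -vl                 # prints the full product/version/build string
esxcli system version get  # prints version, build, update, and patch fields
```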

I only need to apply my new license keys from VMUG. That's my next step. Then I'm good for another year. :D
 
  • Like
Reactions: itronin

BennyT

Active Member
Dec 1, 2018
169
49
28
you're welcome. I should note I did not check compatibility nor roadmap for those parts with ESXI as I've dropped my vmug and am in the process of moving to xcpng (for my lab, and my clients). Please do check though for ESXI.

I'm making a move that is not for everyone (really use case dependent) so please don't take my comment as an endorsement to move off ESXI.
Hi Itronin,

How has your XCP-ng lab been working out? I'm considering a future move to either Proxmox VE or XCP-ng. I love ESXi, but the VMware compatibility list means my existing hardware (specifically my 1st-gen Xeon Skylake-SP CPUs) will not be supported in the next major ESXi 9+ release. Although I could upgrade to Cascade Lake-SP (2nd-gen Intel Xeon Scalable processors), I want to consider hypervisors other than VMware.

Did you evaluate Proxmox before deciding on XCP-ng? Were there any specific strong points or drawbacks with either hypervisor that you discovered?

I use Veeam Backup and Replication to backup my existing VMware VMs. And Veeam says they can work with Proxmox and XCP-ng too.

Thanks,

benny
 
  • Like
Reactions: itronin

itronin

Well-Known Member
Nov 24, 2018
1,284
852
113
Denver, Colorado
Hi Itronin,

How has your XCP-ng lab been working out? I'm considering a future move to either Proxmox VE or XCP-ng. I love ESXi, but the VMware compatibility list means my existing hardware (specifically my 1st-gen Xeon Skylake-SP CPUs) will not be supported in the next major ESXi 9+ release.
Hallo BennyT - so glad your journey is continuing apace and that you seem to be enjoying it (even the systems side of things)!

The XCP-ng lab has been fine for compute. Hardware passthrough is not as "easy" or reliable as VMware in my experience, especially GPUs. I'm running and supporting XCP-ng at 3 sites (including my home lab) and about to flip a fourth from VMware to XCP-ng. I am NOT an expert on XCP-ng.

I have not messed with native ZFS storage on XCP-ng, but if you still have a single physical server I'd be looking at that on Xen or Proxmox, where it is also supported.

I have not played with Proxmox, just helped a couple of folks with their network configurations. I am likely to stand up a Proxmox cluster in the next 3-4 months. Proxmox seems a bit more bleeding edge (so is it sacrificing stability for new features? I don't know), and there is certainly a much larger community, so it clearly has more time in the saddle for things like passthrough of GPUs, HBAs, etc. For Proxmox, I'm looking to use shared GPUs with some of the 5-7 year old shared-GPU hardware that has become much less expensive.

Although I could upgrade to Cascade Lake-SP (2nd-gen Intel Xeon Scalable processors), I want to consider hypervisors other than VMware.
I'd probably upgrade to Cascade Lake anyway... when prices are right... lol. For me, I'm just starting to migrate off Broadwell-EP to Skylake and Skylake-SP.

If you were going to migrate to XCP-ng, you might set up an XCP-ng "test server" and see how that plays out. If it works okay, you could use it as a holding area for a full migration of the VMs. By test server I'm thinking basically a desktop with a bit more memory, and maybe hardware-RAID storage; nothing horribly expensive, and to be blunt it might be throw-away or sell-cheap-when-done kind of hardware. I'm not suggesting you build a second copy of your first build (I shudder to think of the cost of the 846 chassis alone today). Reinstall the OS on your original host and then migrate the VMs back off the test server. I don't see why that process wouldn't also work for Proxmox.

Did you evaluate proxmox before deciding on XCPng? Were there any specific strong points or draw backs with either hypervisor that you discovered?
No, see above.

The biggest limiter with XCP-ng, early on and trying to move fast, was the reliance on Xen Orchestra (XO) for a GUI, since each compute node is (was?) GUI-less. My understanding is that a limited GUI is coming (if not here now) in a simpler form on every XCP-ng node install. Bootstrapping XO required a bit of thinking/planning depending on whether you used the pre-packaged, feature-limited XO or compiled your own...

One thing I think people miss with Xen and XO, though, is that XO does not have to live on the cluster (or node) it's managing... you can, for instance, run a hypervisor on your daily driver and install XO there to bootstrap the process, and/or leave it there if it's a homelab... LOL, IIRC Broadcom gave away the desktop versions of VMware (Fusion, Workstation, etc.).

XO provides the GUI for most things; configurations live on the nodes. Backups, though, are a different story. There are lots of good docs on this and other XCP-ng topics (including by @fohdeesha) over on the XCP-ng site. Really, their docs are very good and seem to get refreshed, and while I primarily lurk on the forums, honestly I have not yet needed to ask for help. For my needs it has been relatively simple and straightforward...

While XCP-ng does not seem too picky about hardware, I have run into a case with some Dell NICs (Broadcom silicon, virtual NIC hardware) that perform about 25% slower than generic Intel 520/ConnectX-3/4 at 10GbE. Doing some research, I read that that NIC family has performance issues on Linux distributions and that the drivers were really optimized for Windows Server and VMware.

Observation: XCP-ng seems more API/scripting friendly than VMware. But as with everything, it takes time to learn and understand the commands, the CLI, etc., so XO is much easier (IMO) to use than the CLI and gets you running faster.

XCP-ng is UUID-centric. I feel that can be a bit intimidating to folks when seeing those in a GUI. The development effort has come a long way (again, IMO) in making names easy to apply, and therefore remember, rather than long strings of characters. There are still scenarios where copying and pasting a UUID is required. Then again, in the ESXi CLI you have to do a bit of that as well.
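For a flavor of what that UUID-centric workflow looks like day to day, here's a minimal sketch of the xe CLI on an XCP-ng host (the VM name `myvm` is a made-up example):

```shell
# Look up a VM's UUID by its name-label, then act on it by UUID
uuid=$(xe vm-list name-label=myvm params=uuid --minimal)
xe vm-start uuid="$uuid"
xe vm-param-get uuid="$uuid" param-name=power-state
```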

Subjectively XCP-ng feels like it might be a teeny bit slower than VMware on the same hardware. I have nothing objective to substantiate that statement. Just feels that way.

I use Veeam Backup and Replication to backup my existing VMware VMs. And Veeam says they can work with Proxmox and XCP-ng too.

Thanks,

benny
From what I've seen, Veeam is nice. One site has it as their backup solution. I briefly looked at it in 2018/2019, so long ago. But I have to say I have not played with it at all.

XCP-ng has what appears to be a very robust backup methodology baked in. I have not used it either. All the sites I work on, including my home, have two baremetal TNE/C used for shared storage, and that's leveraged for creating snapshots, backups, pushing storage offsite, etc.

I have by and large moved away from AIO deployments, even in my lab environments.
The recipe I'm using at sites tends to be 1 or 2 baremetal TNE (HA) or TNC (soon Scalable, still testing). When there are 2, the second is a replication node which also pushes offsite (Azure CS or other cloud | third-party appliance | TNC). I don't rely on ZFS snaps alone: I also use XCP-ng or VMware periodic snapshots, flat-file storage in NFS shares, and DB exports; but I do rely on ZFS for data integrity and transport as much as possible.

I realize I wrote a lot but probably conveyed much less. Sorry 'bout that.
 

BennyT

Active Member
Dec 1, 2018
169
49
28
I logged into my vCenter Server vSphere Client and it showed a triggered alarm for a certificate expiring soon. Usually I manage and regenerate certificates via the Manage Certificates screen in the vSphere Client GUI, but the certificates that were expiring are not handled via the GUI.

The certificate stores on the VCSA Linux server (which in turn contain the certs) are managed with this utility: /usr/lib/vmware-vmafd/bin/vecs-cli


Log in to the VCSA Linux VM as root to access the shell command line. Then run this for loop (paste the whole one-liner into your command line and press Enter):

for store in $(/usr/lib/vmware-vmafd/bin/vecs-cli store list | grep -v TRUSTED_ROOT_CRLS); do echo "[*] Store :" $store; /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store $store --text | grep -ie "Alias" -ie "Not After";done;

1727972953211.png
That lists all certificate stores and their certs, along with their expiration dates.
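To turn one of those "Not After" strings into an at-a-glance days-remaining number, a small follow-on sketch (GNU date(1) is assumed; the date value here is a generated stand-in, and the 60-day threshold is an arbitrary choice):

```shell
# Parse a "Not After" date (the vecs-cli/openssl text format) and count days left.
# Stand-in value generated 120 days out, just for demonstration.
not_after=$(date -u -d "+120 days" "+%b %d %H:%M:%S %Y GMT")
exp_epoch=$(date -u -d "$not_after" +%s)
now_epoch=$(date -u +%s)
days_left=$(( (exp_epoch - now_epoch) / 86400 ))
if [ "$days_left" -lt 60 ]; then
  echo "WARNING: certificate expires in $days_left days"
else
  echo "OK: $days_left days remaining"
fi
```

Swap the generated `not_after` for a real value from the loop output to check an actual cert.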

I found a very useful shell script named vCert that was apparently developed by VMware themselves. It's over 8,000 lines, and it's quite excellent.


I reviewed the script to get an idea of how it works and what it does. Then I took a snapshot of the VCSA VM, so in case I damage anything I can restore to the snapshot.

Then I uploaded the vCert script to my vcsa linux server and ran it:
1727973391474.png
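For anyone repeating this, the upload-and-run step is roughly the following (a sketch: the VCSA hostname and the script filename are assumptions, and the VCSA shell must be enabled for root):

```shell
# Copy the script to the VCSA, then execute it over ssh
scp vCert.sh root@vcsa.example.local:/root/
ssh root@vcsa.example.local "chmod +x /root/vCert.sh && /root/vCert.sh"
```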


excerpt of some of the output:
1727973552003.png

Using the menus, I navigated to where I could replace the Solution User certs with VMCA-signed certs:
1727973632879.png

I checked the certs after the regen and they look good:
1727973736825.png

Rerunning the original for loop to report all cert expiration dates; they look good and extend far into the future:
1727973889100.png

I highly recommend that vCert script. It seems way better than the bundled Certificate Manager scripts or GUI. I doubt VMware supports it if it makes a mess, but I don't have support anyway, so I wasn't concerned. Plus I had my backups as a fallback.

I don't know if vCert will continue to work on future releases, but it works great on 8.
 
  • Like
Reactions: TRACKER