LGA3647 ESXi build *Nov 2024 --> Began Switching from vSphere to Proxmox


itronin

Well-Known Member
Nov 24, 2018
1,318
874
113
Denver, Colorado
Hi itronin,

I just noticed that the smaller version of that switch, the Mokerlink 8-port SFP+ managed switch, has a review on their website saying the buyer is using twinax (DAC) to uplink to their router. The 12-port version can probably do the same. Really excited to hear what your impression is after you test with it. I'm putting the 12-port SFP+ switch part number here for my future reference: 10G120GSM. I can't believe nobody has reviewed it yet.
ohhh - nice pickup! I'm coming back from this trip a day early, so hopefully Friday I can do this. I've got some R86S's and DACs, as well as an X10SDV to test (10GBase-T transceiver).

This does appear to be an actively cooled device.

I'll post some GUI screenshots too. Do you want them in your thread, or should I start a new one in the networking section? I don't want to crap on your thread.
 
  • Like
Reactions: BennyT

BennyT

Active Member
Dec 1, 2018
198
72
28
Hello,

Yesterday and today I spent my time upgrading from vSphere ESXi 7.0u3k to 8.0u2b. Very exciting stuff, especially now that Broadcom has taken over (as of May 2024). The Broadcom support portals took me a full day to learn: how to log in using my old VMware support account, and how to navigate and find things across a mishmash of VMware KB web pages and Broadcom support portal web pages.

Fortunately, being a VMUG user and not purchasing vSphere from VMware directly means I don't really need to download anything from the Broadcom portal website. I only need the ESXi .iso and VCSA .iso I get from VMUG.

First I upgraded my vCenter Server Appliance (VCSA) to v8.0.2.00200 using the VCSA .iso. That endeavor took me a whole day to complete. Then I had to configure my Veeam Backup servers to communicate with the new VCSA server. I ran into a few certificate issues, but those were resolved with help from KB articles with easy solutions.
1724442686155.png

The ESXi upgrade was the easiest part. I shut down all my guest VMs on that host, then put it into Maintenance Mode. Then I SSH'd into the ESXi host command line and ran this:

[root@esxi01:~] esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

1724441539049.png
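The profile list that command returns is long. To just confirm the target image profile exists in the depot, a grep filter like this works (a minimal sketch, using the same depot URL; grep is available in the ESXi shell):

[root@esxi01:~] esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-8.0U2b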

I didn't use the .iso from VMUG for the ESXi upgrade because I used the VMware online build repo instead.

Then I ran the actual upgrade command:
[root@esxi01:~] esxcli software profile update -p ESXi-8.0U2b-23305546-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

[HardwareError]

Hardware precheck of profile ESXi-8.0U2b-23305546-standard failed with warnings: <CPU_SUPPORT WARNING: The CPU on this host may not be supported in future ESXi releases. Please plan accordingly. Please refer to KB 82794 for more details.>

Apply --no-hardware-warning option to ignore the warnings and proceed with the transaction.

Please refer to the log file for more details.

[root@esxi01:~]

Seems my hardware is getting old? nahh, that's crazy. just ignore that.

We’ll proceed by issuing the --no-hardware-warning option when we run that command.


[root@esxi01:~] esxcli software profile update -p ESXi-8.0U2b-23305546-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml --no-hardware-warning

Update Result

Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.

Reboot Required: true

Success!

ESXi Host version:
1724442939262.png

I only need to apply my new license keys I get from VMUG. That's my next step. Then I'm good for another year. :D
 

BennyT

Active Member
Dec 1, 2018
198
72
28
You're welcome. I should note I did not check compatibility or the roadmap for those parts with ESXi, as I've dropped my VMUG and am in the process of moving to XCP-ng (for my lab and my clients). Please do check for ESXi, though.

I'm making a move that is not for everyone (really use-case dependent), so please don't take my comment as an endorsement to move off ESXi.
Hi Itronin,

How has your XCP-ng lab been working out? I'm considering a future move to either Proxmox VE or XCP-ng. I love ESXi, but the VMware compatibility list means my existing hardware (specifically my 1st gen Xeon Skylake-SP CPUs) will not be supported in the next major ESXi 9+ release. Although I could upgrade to Cascade Lake-SP (2nd gen Intel Xeon Scalable Processors), I want to consider hypervisors other than VMware.

Did you evaluate Proxmox before deciding on XCP-ng? Were there any specific strong points or drawbacks with either hypervisor that you discovered?

I use Veeam Backup and Replication to backup my existing VMware VMs. And Veeam says they can work with Proxmox and XCP-ng too.

Thanks,

benny
 
  • Like
Reactions: itronin

itronin

Well-Known Member
Nov 24, 2018
1,318
874
113
Denver, Colorado
Hi Itronin,

How has your XCP-ng lab been working out? I'm considering a future move to either Proxmox VE or XCP-ng. I love ESXi, but the VMware compatibility list means my existing hardware (specifically my 1st gen Xeon Skylake-SP CPUs) will not be supported in the next major ESXi 9+ release.
Hallo BennyT - so glad your journey is continuing apace and that you seem to be enjoying it (even the systems side of things)!

The XCP-ng lab has been fine for compute. Hardware passthrough is not as "easy" or reliable as VMware in my experience, especially GPUs. I'm running and supporting XCP-ng at 3 sites (including my home lab) and am about to flip a fourth from VMware to XCP-ng - I am NOT an expert on XCP-ng.

I have not messed with native ZFS storage on XCP-ng, but if you still have a single physical server I'd be looking at that on Xen or Prox, where it is also supported.

I have not played with Prox, just helped a couple of folks out with their network configurations. I am likely to stand up a Prox cluster in the next 3-4 months. Prox seems a bit more bleeding edge (so is it sacrificing stability for new stuff? IDK), and there is certainly a much larger community, so it clearly has much more time in the saddle for things like passing through GPUs, HBAs, etc. For Prox, I'm looking at using shared GPUs with some of the 5-7 year old shared-GPU hardware that has become much less expensive.

Although I could upgrade to Cascade Lake-SP (2nd gen Intel Xeon Scalable Processors), I want to consider hypervisors other than VMware.
I'd probably upgrade to Cascade Lake anyway... when prices are right... lol. For me, I'm just starting to migrate off Broadwell-EP to Skylake and Skylake-SP.

If you were going to migrate to XCP-ng, you might set up an XCP-ng "test server" and just see how that plays out. If it worked okay, you could use it as a holding area for a full migration of the VMs. By test server I'm thinking basically a desktop with a bit more memory, and maybe hardware RAID storage; nothing horribly expensive, and to be blunt it might be throw-away or sell-cheap-when-done kind of hardware. I'm not suggesting you build a second copy of your first build (the 846 chassis alone - shudder to think of the cost today). Re-install the OS on your original host and then migrate the VMs back off the test server. I don't see why that process would not also work for Prox.

Did you evaluate proxmox before deciding on XCPng? Were there any specific strong points or draw backs with either hypervisor that you discovered?
No, see above.

The biggest limiter with XCP-ng early on, trying to move fast, was the reliance on Xen Orchestra (XO) for a GUI, since each compute node is (was?) GUI-less. My understanding is that a limited GUI is coming (if not here now) in a simpler form on every XCP-ng node install. Bootstrapping XO required a bit of thinking/planning depending on whether you used the pre-packaged crippled XO or compiled your own...

One thing I think people miss with Xen and XO, though, is that XO does not have to live on the cluster (or node) it's managing... you can, for instance, run a hypervisor on your daily driver and install XO there to bootstrap the process, and/or leave it there if it's a homelab... LOL - IIRC Broadcom gave away the desktop version of VMware (Fusion, Workstation, etc.).

XO provides the GUI for most things; configurations live on the nodes. Backups, though, are a different story. There are lots of good docs on this and other XCP-ng topics (including by @fohdeesha) over on the XCP-ng site. Really, their docs are very good and seem to get refreshed, and while I primarily lurk on the forums, honestly I have not had a need to ask for help to date. For my needs it has been relatively simple and straightforward...

While XCP-ng does not seem to be too picky about hardware, I have run into a case with some Dell NICs (Broadcom, converged NIC hardware) that perform about 25% slower than generic Intel X520/ConnectX-3/4 at 10GbE. Doing some research, I read that this NIC family has performance issues on Linux distributions and that the drivers were really optimized for Windows Server and VMware.

Observation: XCP-ng seems more API/scripting friendly than VMware. But as with everything, it takes time to learn and understand the commands, CLI, etc... so XO is much easier (IMO) to use than the CLI and gets you running faster.

XCP-ng is UUID-centric. I feel that can be a bit intimidating to folks when seeing those in a GUI. The development effort has come a long way (again IMO) toward making names easy to apply, and therefore remember, rather than long strings of characters. There are still scenarios where copying and pasting a UUID is required. Then again, in the ESXi CLI you have to do a bit of that as well.

Subjectively, XCP-ng feels like it might be a teeny bit slower than VMware on the same hardware. I have nothing objective to substantiate that statement; it just feels that way.

I use Veeam Backup and Replication to backup my existing VMware VMs. And Veeam says they can work with Proxmox and XCP-ng too.

Thanks,

benny
From what I've seen, Veeam is nice. One site has that as their backup solution. I briefly looked at it in 2018/2019 - so long ago. But I have to say I have not played with it at all.

XCP-ng has what appears to be a very robust backup methodology baked in. I have not used it either. All the sites I work on, including my home, have two bare-metal TrueNAS Enterprise/Core boxes used for shared storage, and that's leveraged for creating snapshots, backups, pushing storage offsite, etc.

I have by and large moved away from AIO deployments, even in my lab environments.
The recipe I'm using at sites tends to be 1 or 2 bare-metal TNE (HA) or TNC (soon SCALE - still testing). When there are 2, the second is a replication node which also pushes offsite (Azure CS or other cloud | third-party appliance | TNC). I don't rely only on ZFS snaps; there are also XCP-ng or VMware periodic snapshots, flat-file storage in NFS shares, and DB exports, but I do rely on ZFS for data integrity and transport as much as possible.

I realize I wrote a bit but probably conveyed much less. sorry 'bout that.
 

BennyT

Active Member
Dec 1, 2018
198
72
28
I logged into my vCenter Server vSphere Client and it showed a triggered alarm for a certificate expiring soon. Usually I manage and regenerate certificates via the Manage Certificates screen in the vSphere Client GUI, but the certificates that were expiring are not handled via the GUI.

The certificate stores on the VCSA Linux server (which in turn contain the certs) are managed with this binary ==> /usr/lib/vmware-vmafd/bin/vecs-cli


Log in to the VCSA Linux VM as root to access the shell command line. Then run this for loop (paste this long one-liner into your command line and press Enter):

for store in $(/usr/lib/vmware-vmafd/bin/vecs-cli store list | grep -v TRUSTED_ROOT_CRLS); do echo "[*] Store :" $store; /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store $store --text | grep -ie "Alias" -ie "Not After";done;

1727972953211.png
That lists all certificate stores and their certs, along with their expiration dates
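For readability, here is the same loop spread over multiple lines (functionally identical to the one-liner above):

for store in $(/usr/lib/vmware-vmafd/bin/vecs-cli store list | grep -v TRUSTED_ROOT_CRLS); do
    echo "[*] Store :" $store
    /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store $store --text | grep -ie "Alias" -ie "Not After"
done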

I found a very useful shell script named vCert that apparently was developed by VMware themselves. It's over 8000 lines and it's quite excellent.


I reviewed the script to get an idea of how it works and what it does. Then I took a snapshot of the VCSA VM, so in case I damage anything I can restore to the snapshot.

Then I uploaded the vCert script to my VCSA Linux server and ran it:
1727973391474.png
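The upload-and-run part was nothing special, just scp plus a shell invocation. A minimal sketch, assuming the script is saved locally as vCert.sh, the VCSA hostname is a placeholder, and the Bash shell is enabled for root on the appliance:

# from a workstation: copy the script to the appliance (hostname and paths assumed)
scp vCert.sh root@vcsa.example.com:/tmp/

# then on the VCSA shell as root
chmod +x /tmp/vCert.sh
/tmp/vCert.sh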


excerpt of some of the output:
1727973552003.png

Using the menus, I navigated to where I could replace the Solution User certs with VMCA-signed certs:
1727973632879.png

Checked the certs after regen and they look good:
1727973736825.png

Rerunning the original for loop to report all cert expiration dates, they now look good and are way into the future:
1727973889100.png

Highly recommend that vCert script. It seems way better than the bundled Certificate Manager scripts or GUI. I doubt VMware would support it if it makes a mess, but I don't have support anyway, so I wasn't concerned. Plus I had my backups as a fallback.

I don't know if this vCert script will continue to work on future releases, but it works great for 8.
 

BennyT

Active Member
Dec 1, 2018
198
72
28
Today I received an email from VMUG Advantage saying that after November 30 2024 the Kivuto/OnTheHub online store will no longer be available for downloading VMware software products or the yearly license keys.

Instead, Broadcom will allow VMUG Advantage members access to the Broadcom Support Portal, and somehow associate that access with our VMUG Advantage membership account. From there we should have access to Broadcom VVF (VMware vSphere Foundation, i.e. ESXi and vSphere products) downloads and license keys. This will be funneled through the Broadcom VCP (VMware Certified Professional) program. The VCP program would give us access, as long as we use the products only for personal, non-production use, such as in a home lab.

1730827665621.png

I've emailed advantage@vmug.com for more details and to see how assured they are that Broadcom will honor VMUG Advantage memberships and allow me to download license keys, product software, etc. This would be awesome actually, because I was getting REALLY worried that Broadcom would close all access and I'd be forced to switch to Proxmox VE or some other hypervisor.

I'm still waiting to hear back from VMUG, but I think they are still figuring out the details with Broadcom on portal access etc.

In the meantime I decided to inquire with ChatGPT, which is a pretty awesome tool in itself.
1730827890951.png

1730828653000.png

After I learn more I'll post here again.

Thanks,

Benny
 
Last edited:

BennyT

Active Member
Dec 1, 2018
198
72
28
It sounds like VMware certification is going to be REQUIRED for VMUG Advantage users to obtain licenses going forward.

Looks like I'll be researching how to migrate to Proxmox or the like. Ouch!!! I'm also going to check how much the least expensive VMware vSphere Foundation (ESXi + vCenter) subscriptions cost. I guess I don't require vCenter, but I'm so used to having it through VMUG.

Here is the reply from VMUG which I just received:

Hello Benny,

Thank you for reaching out with your questions about the recent changes to VMUG Advantage and the license access for your home lab. I’ll do my best to clarify based on the information we have from Broadcom so far.



Starting December 1, 2024, VMUG Advantage will no longer provide automatic access to EvalExperience license downloads through Kivuto/OnTheHub. Instead, Broadcom will offer a new pathway for VMUG Advantage members to access VMware licenses, expected to roll out in 2025. This new program will require VMUG Advantage members to hold a VMware Certified Professional (VCP) certification in either VCF or VVF to qualify for access to the licenses, which will be provided for personal, non-production use. However, specific details about accessing these licenses, including the login process for Broadcom's Customer Support Portal, are still being finalized. We anticipate sharing more information about the access and login process as the 2025 rollout approaches.


If you renew your VMUG Advantage membership now, you will continue to receive all other VMUG Advantage benefits, including exclusive webinars, training discounts, and more. While the new Broadcom program will not be immediately accessible, keeping an active VMUG Advantage membership may be beneficial, as it will be required for license access through Broadcom in the future.


If you extend your VMUG Advantage membership now, you will maintain eligibility for the new Broadcom program in 2025, provided you also meet the certification requirement.


We appreciate your patience and understanding as we work with Broadcom to ensure a smooth transition. If you have any further questions or would like additional assistance, please feel free to reach out.



Thank you!
Autumn Smith
 

itronin

Well-Known Member
Nov 24, 2018
1,318
874
113
Denver, Colorado
This is SOOO STOOOOPID. chicken and the egg and really cuts out the users who want to be knowledgeable but their livelihood is not dependent on certs...

I am (a) not surprised (b) was expecting something - and this fits the bill.

Glad I've moved two clients to XCPNG and I'll be spinning up prox so I can recommend that if the tool fits the use case.
 
  • Like
Reactions: BennyT

BennyT

Active Member
Dec 1, 2018
198
72
28
This is SOOO STOOOOPID. chicken and the egg and really cuts out the users who want to be knowledgeable but their livelihood is not dependent on certs...

I am (a) not surprised (b) was expecting something - and this fits the bill.

Glad I've moved two clients to XCPNG and I'll be spinning up prox so I can recommend that if the tool fits the use case.
Let me know your thoughts about the reliability of XCP vs Proxmox. Such as, does one or the other ever just shut down a guest unexpectedly, that kind of thing.

I like the free aspect of Proxmox. Every feature and product from Proxmox, including the web-based management GUI, the backup services, etc., is free. Nothing in Proxmox requires a subscription to be unlocked, except for support and their enterprise "fully tested" repository.

Whereas XCP is mostly free unless you want the fully featured, unlocked Xen Orchestra GUI. That's my understanding anyway.

Proxmox sits on top of an independently administered Debian Linux. That's not an issue until it comes time to upgrade to future major releases of Debian with applications (Proxmox) on that OS. I'm not sure how XCP compares in that respect when it comes to upgrading the OS and hypervisor. I'm wondering which one, Prox or XCP, is better or friendlier when performing in-place major release upgrades of the OS and the hypervisor.

I'm thinking about test-installing and configuring both XCP and Proxmox using a Supermicro 5028 mini tower with UEFI, 10GbE, a single-socket 8-core CPU, 64GB RAM, and a few direct-attached SATA SSDs.

I'm thinking I'd start with Proxmox, install it and test for a month, then wipe it out, install XCP, and use it for a month.

I would love to purchase another mini tower to have them both up, XCP and Prox, if budget allows.
 

itronin

Well-Known Member
Nov 24, 2018
1,318
874
113
Denver, Colorado
Let me know your thoughts about the reliability of XCP vs Proxmox. Such as, does one or the other ever just shut down a guest unexpectedly, that kind of thing.
There's lots of good (and some not so good) commentary in the other thread - but this one's yours, so I'll reply here.
Consider me a journeyman - not an expert.

I've performed 3 deployments in production environments and am spinning up a 4th now.

My general XCP production recipe:
3 compute nodes running XCP-ng, all CPUs the same
1 TrueNAS HA server with SSD storage, or if on a budget, TrueNAS Core on SM server hardware
1 bare-metal TrueNAS Core for ZFS replication pull and to push backups offsite (3-2-1)
Minimum 2-switch stack and full 10GbE LAG balanced across switches for servers/nodes
2 FOSS XO VMs on Alma 8 or 9 (depending on age) running on 2 of the nodes; I run the FOSS XO VM on my laptop

I'd like to see three of the deployments move to paid support but they're not ready to do that. Not going to comment on why.

0 unexpected guest shutdowns.
0 hardware incompatibilities with (2+ year old) mainstream server hardware
1 hardware deficiency using Dell/Broadcom converged NICs on Dell servers - research said Linux has a performance issue with this particular NIC family; Windows and ESXi do not. Loss of about 5% of 10GbE performance. I'd expect that Prox probably has the same issue.
0 issues with Chelsio T520-based or Mellanox ConnectX-3/4 NICs

Nothing exotic in the configurations, no pass-through etc.

Tom Lawrence has a lot of YT vids about XCP-NG.
One word of caution for your environment: IIRC you run a TrueNAS guest and pass the storage back to your ESXi host. TL said he had issues with that a couple of years back; I don't know if he has re-tested that configuration.

I will say the sysadmins at one of the deployments don't like XCP-ng - they don't like the GUI or the operational concepts. They're ESXi propeller-heads and they aren't going to like much else. That company has decided to stay with ESXi, but so far they haven't been willing to cough up the money to replace the E5-26xx servers running XCP so they can move to ESXi.

I like the free aspect of Proxmox. Every feature and product from Proxmox, including the web-based management GUI, the backup services, etc., is free. Nothing in Proxmox requires a subscription to be unlocked, except for support and their enterprise "fully tested" repository.

Whereas XCP is mostly free unless you want the fully featured, unlocked Xen Orchestra GUI. That's my understanding anyway.
the other thread has a description about free XO vs. paid. Vates gotta eat ya know?

Proxmox sits on top of an independently administered Debian Linux. That's not an issue until it comes time to upgrade to future major releases of Debian with applications (Proxmox) on that OS. I'm not sure how XCP compares in that respect when it comes to upgrading the OS and hypervisor. I'm wondering which one, Prox or XCP, is better or friendlier when performing in-place major release upgrades of the OS and the hypervisor.
Honestly, I have not got there yet. I'm about to for one site, as I'd like to move it to 8.3 (just released). I need to practice in the lab first, but...
I'd expect it to be no different than any other ESXi production deployment: evacuate VMs to a running node, upgrade the just-evacuated node, rinse, wash, repeat. I won't know till I test. If you are talking about a single node, then that's a little bit of a different question.

I'm thinking about test-installing and configuring both XCP and Proxmox using a Supermicro 5028 mini tower with UEFI, 10GbE, a single-socket 8-core CPU, 64GB RAM, and a few direct-attached SATA SSDs.

I'm thinking I'd start with Proxmox, install it and test for a month, then wipe it out, install XCP, and use it for a month.

I would love to purchase another mini tower to have them both up, XCP and Prox, if budget allows.
$500 or so, build your own? X10SDV (only 4 cores, less $$$), 8-bay ITX chassis, 128GB, storage? It doesn't need to be a super performer, does it?

I do wish that Vates would create some sort of homelabber/UG model to get production XO. Maybe that will be part of the value of their yet to be released certification program, IDK. Even though I do this professionally, $3K / year minimum buy-in to play with production XO is a bit steep. Like I said though they gotta eat so I understand that.

I personally view Prox as a well-developed "lab" solution with tons of people contributing, so it's had a lot of development cycles, but I also think it has that and the kitchen sink thrown in. That is based on my reading about it, etc. I have not spun it up - yet.

FWIW: I don't consider Prox a type 1 hypervisor. I do consider XCPNG a type 1.
 
  • Like
Reactions: BennyT

BennyT

Active Member
Dec 1, 2018
198
72
28
Thanks itronin. I'm going to begin installing Prox 8.2 next weekend on a small SM mini tower with an Intel Xeon D-1541 CPU, 64GB RAM (I might increase that), 4x 4TB SSDs for storage (each will be an LVM thin pool to hold the VMs), and 1x 500GB SSD for boot (OS + Proxmox).

My single ESXi host (X11DPI-NT in a Norco 24-bay chassis) has 40TB of direct-attached local datastore disks. I'm not using TrueNAS at all; although I thought about it, I never went through with it.

I plan to export a few of my critical VMware guest VMs, copy them as VMDK files to the new Prox server's LVM pool disks, then convert them from VMDK to Prox VMs and test.
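For the convert step, the manual path on the Proxmox side looks roughly like this. This is only a sketch: the VM ID, name, sizes, source path, and target storage below are placeholders, and newer Proxmox releases also accept the qm disk import spelling.

# create an empty VM shell to receive the disk (ID, name, and sizes are placeholders)
qm create 101 --name oradb01 --memory 16384 --cores 4 --net0 virtio,bridge=vmbr0 --ostype l26

# import the copied vmdk into a Proxmox storage (path and storage name are placeholders)
qm importdisk 101 /path/to/exported/oradb01.vmdk local-lvm

# attach the imported disk and make it the boot device
qm set 101 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-101-disk-0
qm set 101 --boot order=scsi0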

After I've converted all the VMs and tested them in Prox, I'll wipe out the ESXi host and install Prox onto it, then migrate the VMs from the smaller Prox box to the bigger one. I'll use the smaller Prox mini tower for future projects, or maybe for Proxmox VM backups, etc.

That's my plan for now. I'm going to be learning as I go; I've got until Sept 2025. Time goes by fast though. If I run into major roadblocks that I cannot overcome, then I'll try XCP.
 
  • Like
Reactions: Marsh and itronin

BennyT

Active Member
Dec 1, 2018
198
72
28
Supermicro mini tower with a Xeon D-series 8-core, 2x 10GBase-T, 4x 4TB SSDs, 1x 500GB boot SSD, and 128GB RAM

Tomorrow I'll install Proxmox onto it.
 

itronin

Well-Known Member
Nov 24, 2018
1,318
874
113
Denver, Colorado
Supermicro mini tower with a Xeon D-series 8-core, 2x 10GBase-T, 4x 4TB SSDs, 1x 500GB boot SSD, and 128GB RAM
99.99999% sure that's 4 cores with HT, so 8 threads... if it is a 1521, that is. ;)

It's not real unless you post pictures (of the guts) :p

just curious which SSD's for the 4TB?

Tomorrow I'll install Proxmox onto it.
Looking forward to how you fare!

Happy gobble gobble day!
 
  • Like
Reactions: BennyT

BennyT

Active Member
Dec 1, 2018
198
72
28
I probably said 1521 in error earlier, but it is a D-1541, 8 cores. The system I got is from MITXPC; the model is SYS-5028D-TN4T and it has a 1541. I have a twin server just like it that I've been using for a while and I really liked it, so I got another one for this Proxmox project.

The SSDs are Samsung consumer SSDs, nothing fancy: 2.5" SATA Samsung 870 EVOs. I had them, still in their packaging, under my bed for over a year, almost forgotten. They were meant for a different project that didn't happen, so I'm finally using them now. I did have to buy the extra 500GB SSD for the boot drive, though.

3D printed the 2.5" adapters for the SSDs to fit in the Samsung sleds

The RAM I purchased was from ServerSupply, spanning different orders, so three of the sticks were pulled from an HP server and are Hynix; the fourth stick is a Samsung.

Photos of the guts :D - Happy Thanksgiving Day!
PXL_20241129_010246265.jpgPXL_20241129_003651913.MP~2.jpgPXL_20241129_003234904.MP.jpgPXL_20241129_003230549.MP.jpgPXL_20241116_231313487.MP.jpgPXL_20241121_201450720-CROPPED.jpg
 
Last edited:

BennyT

Active Member
Dec 1, 2018
198
72
28
1732933664090.png
1732933743071.png
That updates the /etc/resolv.conf file in Linux:
1732933829997.png
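For reference, the resulting file is tiny. Mine ends up looking roughly like this; the search domain and nameserver address below are placeholders for whatever was entered in the GUI:

root@miniprox1:~# cat /etc/resolv.conf
search example.lan
nameserver 192.168.1.10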

Next I need to edit a few Proxmox repository list files so that apt won't try to connect to the paid subscription repositories:
1732933491752.png
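For anyone following along, on a Proxmox VE 8 (Debian bookworm) install the edits are roughly the following. This is a sketch; the exact Ceph line depends on which Ceph release the install shipped with (quincy or reef):

# /etc/apt/sources.list.d/pve-enterprise.list - comment out the enterprise repo
#deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

# /etc/apt/sources.list.d/ceph.list - comment out the enterprise Ceph repo
#deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise

# /etc/apt/sources.list - add the free no-subscription repo
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription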

Now running apt commands to upgrade to the latest package patch level:
1732933543563.png
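The commands themselves are the standard Debian pair; Proxmox recommends a full upgrade rather than a plain apt upgrade, since a plain upgrade can hold back packages that need new dependencies:

apt update
apt full-upgrade -y    # "apt dist-upgrade" is equivalent here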

Next I'll set up a Volume Group (VG), which I'll name something like "brtaminiprox1_vg1" (BRTA is my company), and Logical Volumes "brtaminiprox1_lvm1" through "brtaminiprox1_lvm4", with each 4TB SSD having its own LV within that VG. Or something like that. I'm trying to name them smartly, so if I detach them or rotate in a different SSD (other LVs), I have a good naming convention telling me where they originated.

Note: I'm using ext4 because on my Oracle Linux servers I always used ext4, and I'm familiar with Physical Volumes, Volume Groups, and Logical Volumes. I might experiment with ZFS later; I've just never used it before, so I'm sticking with ext4 on LVM for now.

I'll continue with setting up the VG and LVMs tomorrow. I also might need to rename this thread so that it references Proxmox in some way.
 
Last edited:
  • Like
Reactions: Rand__ and itronin

BennyT

Active Member
Dec 1, 2018
198
72
28
Found an excellent tutorial on how to install a custom SSL CA cert for proxmox to avoid the "unsafe" security warning.


The default certs from the Proxmox installation are self-signed and not trusted by the browser, even if we install them into the Windows trusted certificate store. I followed the tutorial and it worked perfectly. It also extends the cert expiration from 2026 to 2032. Nice.
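For my own notes, the mechanics boil down to dropping the CA-signed certificate chain and its key where pveproxy looks for them and restarting the service. A sketch with my node name and placeholder filenames (the documented pvenode cert set command does the same thing):

# filenames on the left are placeholders for the CA-signed chain and its private key
cp my-cert-chain.pem /etc/pve/nodes/miniprox1/pveproxy-ssl.pem
cp my-cert-key.pem   /etc/pve/nodes/miniprox1/pveproxy-ssl.key
systemctl restart pveproxy   # the web GUI serves the new cert after pveproxy restarts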


I've also reinstalled Proxmox about 5 times, until I found a good balance for the 500GB SSD allocation.

So I allocated the 500GB boot disk as so:
  • HDSIZE: 465GB (total usable size of the 500GB Samsung 870 EVO)
  • SWAPSIZE: 32GB
  • MAXROOT: 418GB (root filesystem size)
  • MINFREE: 15GB (free space to leave unallocated in the LVM volume group)
It's overkill for a proxmox OS, boot and root, etc. But I like it this way.

I didn't want to install VMs on the 500GB disk, even though there is a lot of room for that if I wanted to. I want to keep the VMs on only the 4TB disks, so I can detach them from their Volume Groups when they become full and swap in a new SSD as needed.


*Actually, I take that back... even though I set the above values in the Proxmox GUI installer, it mostly ignored my settings and still allocated a Volume Group and LVs for VM storage (the pve-data stuff):

1733007498049.png

I could resize these partitions on the /dev/sde 500GB disk, but this is okay. I want to proceed to the actual ESXi-to-Proxmox VM conversion, so I'm going forward with it as-is. I can resize the boot disk later if I want to.
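If I ever do want that space back, the usual approach (a sketch only; it is destructive and only safe while the default local-lvm storage holds no VM disks) would be to drop the installer-created thin pool and grow the root LV into the freed space:

# remove the default 'local-lvm' storage entry and the pve/data thin pool behind it
pvesm remove local-lvm
lvremove /dev/pve/data

# hand the freed space to the root LV and grow its ext4 filesystem
lvresize -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root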
 
Last edited:
  • Like
Reactions: itronin

BennyT

Active Member
Dec 1, 2018
198
72
28
===========================================================================
*edit: DEC 1, 2024 - After sleeping and thinking about it, I've decided to use LVM Thin Pools and thin Logical Volumes for more efficient usage of the disks, instead of regular LVM storage. I'm leaving my instructions below in case anyone else wants to use regular LVs. I'll update this thread after I've created and converted the storage from regular LVM to LVM-Thin Pools.
===========================================================================

I've decided to use regular Logical Volumes (LVs) rather than LVM-Thin Pools with thin LVs inside them (thin LVs are one-to-one with a guest VM). I'm using regular LVs because, in my mind, a regular LV is more like a Datastore (as used in my ESXi setup) than a container of thin LVs. I'm still learning the pros and cons of each, but that's my decision for now: regular LVM.

That said, I'm allocating the Volume Groups (VGs) one-to-one with each physical SSD (sda, sdb, sdc, sdd), the idea being that it is much easier to detach and rotate out physical SSDs as they fill up with guest VMs if each SSD has its own unique VG. Likewise, each LV will be one-to-one with its VG (and therefore with its physical SSD), for the same reason when it comes time to detach and rotate out hot-swap SSDs. Multiple guest VMs live in each LV.



*here is what this does:
- list the devices, partitions, logical volumes and their mountpoints, using lsblk (shows how it looks before running the following).
- create Physical Volumes of each SSD device
- create Volume Groups, one to one with each Physical Volume (SSD)
- create Logical Volumes, one to one with each Volume Group (and therefore a one to one with each Physical Volume, i.e. SSD)
- define those Logical Volumes as EXT4
- make the directory paths of where the Logical Volumes will be mounted
- mount the Logical Volumes to those directory paths
- edit the /etc/fstab so that when the proxmox server reboots it will remount those volumes again
- test the edited fstab by forcing it to run and remount everything, using 'mount -a'
- list the volume groups and mountpoints again using lsblk

root@miniprox1:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3.6T 0 disk
sdb 8:16 0 3.6T 0 disk
sdc 8:32 0 3.6T 0 disk
sdd 8:48 0 3.6T 0 disk
sde 8:64 0 465.8G 0 disk
├─sde1 8:65 0 1007K 0 part
├─sde2 8:66 0 1G 0 part /boot/efi
└─sde3 8:67 0 464.8G 0 part
├─pve-swap 252:0 0 32G 0 lvm [SWAP]
├─pve-root 252:1 0 120.2G 0 lvm /
├─pve-data_tmeta 252:2 0 3G 0 lvm
│ └─pve-data 252:4 0 291.6G 0 lvm
└─pve-data_tdata 252:3 0 291.6G 0 lvm
└─pve-data 252:4 0 291.6G 0 lvm
sdf 8:80 1 29.9G 0 disk
├─sdf1 8:81 1 242K 0 part
├─sdf2 8:82 1 8M 0 part
├─sdf3 8:83 1 1.3G 0 part
└─sdf4 8:84 1 300K 0 part


root@miniprox1:~# pvcreate /dev/sda

pvcreate /dev/sdb
pvcreate /dev/sdc

pvcreate /dev/sdd
Physical volume "/dev/sda" successfully created.
Physical volume "/dev/sdb" successfully created.
Physical volume "/dev/sdc" successfully created.
Physical volume "/dev/sdd" successfully created.


root@miniprox1:~# vgcreate miniprox1_VG1 /dev/sda

vgcreate miniprox1_VG2 /dev/sdb
vgcreate miniprox1_VG3 /dev/sdc

vgcreate miniprox1_VG4 /dev/sdd
Volume group "miniprox1_VG1" successfully created
Volume group "miniprox1_VG2" successfully created
Volume group "miniprox1_VG3" successfully created
Volume group "miniprox1_VG4" successfully created


root@miniprox1:~# lvcreate -L 3.5T -n lv_vmdata001 miniprox1_VG1

lvcreate -L 3.5T -n lv_vmdata002 miniprox1_VG2
lvcreate -L 3.5T -n lv_vmdata003 miniprox1_VG3

lvcreate -L 3.5T -n lv_vmdata004 miniprox1_VG4
Logical volume "lv_vmdata001" created.
Logical volume "lv_vmdata002" created.
Logical volume "lv_vmdata003" created.
Logical volume "lv_vmdata004" created.


root@miniprox1:~# mkfs.ext4 /dev/miniprox1_VG1/lv_vmdata001

mkfs.ext4 /dev/miniprox1_VG2/lv_vmdata002
mkfs.ext4 /dev/miniprox1_VG3/lv_vmdata003

mkfs.ext4 /dev/miniprox1_VG4/lv_vmdata004
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 939524096 4k blocks and 234881024 inodes
Filesystem UUID: 3368a770-7100-4757-9e4f-6b6c3160b0a3
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 939524096 4k blocks and 234881024 inodes
Filesystem UUID: 8b4e51dc-71cb-4d2d-b47a-5a18d7e0ec53
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 939524096 4k blocks and 234881024 inodes
Filesystem UUID: 0b847866-c2f5-4b87-9967-5027d578cdbe
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 939524096 4k blocks and 234881024 inodes
Filesystem UUID: 5745582c-c96b-48ad-9d50-13ee11fd3dbd
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

root@miniprox1:~# mkdir /storage/vmdata001

mkdir /storage/vmdata002
mkdir /storage/vmdata003

mkdir /storage/vmdata004


root@miniprox1:~# mount /dev/miniprox1_VG1/lv_vmdata001 /storage/vmdata001

mount /dev/miniprox1_VG2/lv_vmdata002 /storage/vmdata002
mount /dev/miniprox1_VG3/lv_vmdata003 /storage/vmdata003

mount /dev/miniprox1_VG4/lv_vmdata004 /storage/vmdata004


root@miniprox1:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=D3E2-40E2 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

#########################################################

# BRTA - Benny R. Tate II - November 30, 2024
# Adding the following lines for the SSD disk LVMs.
#########################################################
/dev/miniprox1_VG1/lv_vmdata001 /storage/vmdata001 ext4 defaults,nofail 0 2
/dev/miniprox1_VG2/lv_vmdata002 /storage/vmdata002 ext4 defaults,nofail 0 2
/dev/miniprox1_VG3/lv_vmdata003 /storage/vmdata003 ext4 defaults,nofail 0 2

/dev/miniprox1_VG4/lv_vmdata004 /storage/vmdata004 ext4 defaults,nofail 0 2


root@miniprox1:~# mount -a


root@miniprox1:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3.6T 0 disk

└─miniprox1_VG1-lv_vmdata001 252:5 0 3.5T 0 lvm /storage/vmdata001
sdb 8:16 0 3.6T 0 disk
└─miniprox1_VG2-lv_vmdata002 252:6 0 3.5T 0 lvm /storage/vmdata002
sdc 8:32 0 3.6T 0 disk
└─miniprox1_VG3-lv_vmdata003 252:7 0 3.5T 0 lvm /storage/vmdata003
sdd 8:48 0 3.6T 0 disk

└─miniprox1_VG4-lv_vmdata004 252:8 0 3.5T 0 lvm /storage/vmdata004
sde 8:64 0 465.8G 0 disk
├─sde1 8:65 0 1007K 0 part
├─sde2 8:66 0 1G 0 part /boot/efi
└─sde3 8:67 0 464.8G 0 part
├─pve-swap 252:0 0 32G 0 lvm [SWAP]
├─pve-root 252:1 0 120.2G 0 lvm /
├─pve-data_tmeta 252:2 0 3G 0 lvm
│ └─pve-data 252:4 0 291.6G 0 lvm
└─pve-data_tdata 252:3 0 291.6G 0 lvm
└─pve-data 252:4 0 291.6G 0 lvm
sdf 8:80 1 29.9G 0 disk
├─sdf1 8:81 1 242K 0 part
├─sdf2 8:82 1 8M 0 part
├─sdf3 8:83 1 1.3G 0 part
└─sdf4 8:84 1 300K 0 part


root@miniprox1:~# df -H
Filesystem Size Used Avail Use% Mounted on
udev 68G 0 68G 0% /dev
tmpfs 14G 1.7M 14G 1% /run
/dev/mapper/pve-root 127G 3.0G 117G 3% /
tmpfs 68G 45M 68G 1% /dev/shm
tmpfs 5.3M 0 5.3M 0% /run/lock
efivarfs 525k 388k 132k 75% /sys/firmware/efi/efivars
/dev/sde2 1.1G 13M 1.1G 2% /boot/efi
/dev/fuse 135M 25k 135M 1% /etc/pve
tmpfs 14G 0 14G 0% /run/user/0
/dev/mapper/miniprox1_VG1-lv_vmdata001 3.8T 29k 3.6T 1% /storage/vmdata001

/dev/mapper/miniprox1_VG2-lv_vmdata002 3.8T 29k 3.6T 1% /storage/vmdata002
/dev/mapper/miniprox1_VG3-lv_vmdata003 3.8T 29k 3.6T 1% /storage/vmdata003

/dev/mapper/miniprox1_VG4-lv_vmdata004 3.8T 29k 3.6T 1% /storage/vmdata004
root@miniprox1:~#
 
Last edited:
  • Like
Reactions: Marsh

BennyT

Active Member
Dec 1, 2018
198
72
28
I decided to change direction and use LVM Thin Pools rather than regular (non-thin) LVs. The idea is that if I have a 500GB virtual machine with an Oracle DB and such on it, but it is really only using about 350GB of that 500, then the thin-provisioned LV only expands to the 350GB actually used. It's a little slower than dedicating the entire size of the VM up front; writes may be a little slower with thin provisioning. By contrast, in ESXi I was using thick eager-zeroed disks for the fastest possible write speeds to my SSD datastores. We'll see how it goes. The main reason I'm switching from regular LVM to LVM-Thin is that snapshots are only possible on thin.

My other option was to use ZFS, but for now I'm sticking with LVM Thin Pools, and maybe I'll experiment with ZFS later.

I'm using command-line syntax because I'm discovering it gives me greater flexibility when defining my Physical Volumes, Volume Group names, and LVM Thin Pools. I'll still use the GUI to create the actual VMs, which will also create the thin LVs; a "thin LV" is the block storage and is one-to-one with the VM. Multiple VMs (thin LVs) can exist in a single LVM Thin Pool.

I'm setting up my SSD devices so they each have their own Volume Group and LVM Thin Pool. This lets me detach a single VG, slide that SSD out of the chassis, and replace it with a fresh new one, which will get its own new VG name and thin pool.

Here are my new commands for setting up for LVM Thin Pools.

# Wipe the SSDs to ensure a clean slate
wipefs -a /dev/sda
wipefs -a /dev/sdb
wipefs -a /dev/sdc
wipefs -a /dev/sdd

# choose option 'o' and then 'w', to setup each SSD as GPT with an empty partition table
gdisk /dev/sda
gdisk /dev/sdb
gdisk /dev/sdc
gdisk /dev/sdd

# choose option 'n' and then accept all defaults, then option 'w', to setup each SSD with a single partition having full size of the SSD
gdisk /dev/sda
gdisk /dev/sdb
gdisk /dev/sdc
gdisk /dev/sdd

# Create Physical Volumes (PVs) for each SSD
pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Create Volume Groups (VGs), one per SSD
vgcreate miniprox1_VG001 /dev/sda1
vgcreate miniprox1_VG002 /dev/sdb1
vgcreate miniprox1_VG003 /dev/sdc1
vgcreate miniprox1_VG004 /dev/sdd1

# set the Thin Pool chunk size to 128k or 256k, else it may default to large chunks (up to 2M)
# note: thin_pool_chunk_size belongs inside the allocation { } section of /etc/lvm/lvm.conf,
# so set it there (or pass --chunksize 128k to each lvcreate below); appending a bare line
# to the end of lvm.conf is not picked up

# Create LVM Thin Pools within each VG (one thin-pool per SSD)
lvcreate -L 3.5T -n miniprox1_thinpool001 --type thin-pool miniprox1_VG001
lvcreate -L 3.5T -n miniprox1_thinpool002 --type thin-pool miniprox1_VG002
lvcreate -L 3.5T -n miniprox1_thinpool003 --type thin-pool miniprox1_VG003
lvcreate -L 3.5T -n miniprox1_thinpool004 --type thin-pool miniprox1_VG004

1733088266721.png

1733088177895.png
1733088416887.png
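One more step before creating VMs: each thin pool has to be registered as a Proxmox storage entry so the GUI can place VM disks on it (Datacenter -> Storage -> Add -> LVM-Thin does the same thing). A sketch using the pvesm CLI; the storage IDs are just names I picked:

pvesm add lvmthin vmdata001 --vgname miniprox1_VG001 --thinpool miniprox1_thinpool001 --content images,rootdir
pvesm add lvmthin vmdata002 --vgname miniprox1_VG002 --thinpool miniprox1_thinpool002 --content images,rootdir
pvesm add lvmthin vmdata003 --vgname miniprox1_VG003 --thinpool miniprox1_thinpool003 --content images,rootdir
pvesm add lvmthin vmdata004 --vgname miniprox1_VG004 --thinpool miniprox1_thinpool004 --content images,rootdir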

Next up will be to create actual guest VMs in those thin pools. I want to try the ESXi importer plugin wizard, which is supposed to connect from Proxmox to an ESXi host and let me select datastores and VMs to be converted onto this Proxmox server. That will be the real test.
 

BennyT

Active Member
Dec 1, 2018
198
72
28
The latest release of Proxmox is 8.3.0, which I'm using. In release 8.2 they introduced a cool and very convenient ESXi importer utility. You set up a connection from inside Proxmox to your ESXi host, Proxmox shows you a list of all the datastores on that host, and you can import any of the VMs using this new ESXi importer plugin:


Automatic ESXi Import: Step by Step - Migrate to Proxmox VE - Proxmox VE
To import VMs from an ESXi instance, you can follow these steps:

  1. Make sure that your Proxmox VE is on version 8 (or above) and has the latest available system updates applied.
  2. Add an "ESXi" import-source storage, through the Datacenter -> Storage -> Add menu.Enter the domain or IP-address and the credentials of an admin account here.If your ESXi instance has a self-signed certificate you need to add the CA to your system trust store or check the Skip Certificate Verification checkbox.Note: While one can also import through a vCenter instance, doing so will dramatically reduce performance.
  3. Select the storage in the resource tree, which is located on the left.
  4. Check the content of the import panel to verify that you can see all available VMs.
  5. Select a VM you want to import and click on the Import button at the top.
  6. Select at least the target storage for the VMs disks and network bridge that the VMs network device should connect to.
  7. Use the Advanced tab for more fine-grained selection, for example:
    • choose a new ISO image for CD-ROM drives
    • select different storage targets to use for each disk if there are multiple ones
    • configure different network hardware models or bridges for multiple network devices
    • disable importing some disks, CD-ROM drives or network devices
  8. Please note that you can edit and extend even more options and details of the VM hardware after creation.
  9. Optionally check the Resulting Config tab for the full list of key value pairs that will be used to create the VM.
  10. Make sure you have completed the preparations for the VM, then power down the VM on the source side to ensure a consistent state.
  11. Start the actual import on the Proxmox VE side.
  12. Boot the VM and then check if any post-migration changes are required.

Here are screenshots from my Proxmox 8.3 environment after I've added the connection in Proxmox to my ESXi host named esxi01.brtassoc.com:

1733105573999.png

Here is how you add that connection:
1733105693010.png

1733105856861.png

I'm going to test the import next.