lga3647 esxi build to host my Oracle Apps/Databases

BennyT

Active Member
Dec 1, 2018
156
44
28
Thank you @EffrafaxOfWug

I downloaded ipmitool 1.8.18 from SourceForge, built it with `make install` on one of my other Linux boxes, and used the tool as a client to the BMC.
At the time I didn't have any OS on the new server at all. I ran the tool from my other Linux box and, using your commands, it connected to the BMC when I supplied its IP address. Very cool.
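In case anyone else wants to try it, the invocations looked roughly like this (a sketch only: the 192.168.1.50 address and ADMIN credentials are placeholders for your BMC's actual IP and login, not my real ones):

```shell
# Query the BMC over the network using the lanplus interface.
# -H is the BMC's IP, -U/-P are the IPMI credentials.
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sdr type fan

# Dump all sensor readings and thresholds (temps, fans, voltages).
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sensor list

# Check chassis power state before doing anything drastic.
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN chassis power status
```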

The new server is whisper quiet now. In fact I can't hear it at all. There is no load on it though. I just installed a small distro for grins and it's alive!

This weekend I'll get ESXi installed and see where that takes me.

Have a great weekend

By the way, as a side note: I eventually did purchase a good torque screwdriver for torquing down the CPU. It's nice having the right tools for a job. This one by Wheeler was only $50 and is made for gunsmiths, but of course it works great for anything.
IMG_20190207_092715_01.jpg
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,395
509
113
The new server is whisper quiet now. In fact I can't hear it at all.
This is just going to make the moment where my voodoo skillz make screaming vampire banshee hornets fly out of your server all the sweeter.

Heh, heh, heh. Hah, hah hah, hah hah! HA HA HA HA HA!!!
 

BennyT

Active Member
Dec 1, 2018
Boot drive installed. It's an older-gen enterprise Intel SSD, $20 on eBay. I attached the Norco shelf that can fit a single 3.5" or two 2.5" drives. Unfortunately it hangs over the NVMe ports, SAS, SATA, FAN3/4/A/B headers, and DIMM slots, making those impossible to reach without unscrewing the shelf first. If it becomes a nuisance I might velcro the drive to an interior wall instead. It just clears the CPU1 heatsink.

Notice I switched to a SM heatsink instead of the Noctua. The Noctua fit under this shelf as well, but its fins contacted the shelf and flexed downward, and that affected my OCD. When I ordered the replacement CPU I also ordered this SM heatsink.

I tried keeping the cable management clean for good airflow from the fan wall.
IMG_20190208_155144.jpg
IMG_20190208_155304.jpg

here's how it looks under the bootdrive tray
IMG_20190208_155414.jpg
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I also had the option of the tray but I ended up picking up one of these nifty brackets to hold two SSDs over two empty PCIe slots which might be an option for you depending on what else you might put in the system. Not that it really matters with an S3500 but it keeps some air going over them also.

The 3U version of this case on sale in the UK also has a couple of slots atop the hot-swap bays where SSDs can be mounted although it requires some gymnastics with the cables.
 

BennyT

Active Member
Dec 1, 2018
I've been nonstop the past few days learning how to setup ESXi, navigating my way through it, setting up my datastores, experimenting with network, creating VMs etc.

Here's my ESXi 6.7 page/console.

upload_2019-2-9_21-22-30.png


I've completed the install of the vCenter Server 6.7 appliance. It runs on a Linux distro by VMware called Photon OS, which installs into a VM on ESXi. VMware says they are going this direction: after 6.7, vCenter will no longer be offered as a Windows install, only as a Linux appliance.

As I'm navigating the vSphere Client it reminds me a lot of MS Azure, or maybe Microsoft is copying VMware... there's a million and one ways to navigate.


upload_2019-2-9_22-54-22.png


*edit: At first I liked managing ESXi directly from the ESXi host UI. But after using vCenter Server (via the vSphere Client) for a day, I'm liking vCenter for managing the ESXi host. The downside is that it eats up 2 logical cores and 10GB RAM, since it runs in a VM on the ESXi host. As nice as vCenter is, I'm not sure it's worth dedicating 2 cores and 10GB just to manage a single ESXi host. I need to learn more about vCenter to understand whether there are standout advantages to using it.


I'll see if I can import some of my physical machines into a few VMs. Or perhaps what I'll do is use Oracle rapid clone to create clones of the Oracle EBS Apps environments into a few VMs. That will be interesting. I don't have it all figured out yet but it's been fun.

Next up I need to decide on a backup/recovery strategy. I'm considering Veeam. I'd like to hear what other ESXi users are doing for backups.
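Until I settle on something, I might script a poor man's backup from the ESXi shell with vim-cmd: snapshot the VM, copy its files to another datastore, then drop the snapshot. A rough sketch (the vmid 12 and the datastore paths are made-up examples; proper tools like Veeam or ghettoVCB do this much more carefully):

```shell
# List registered VMs to find the target's vmid (first column).
vim-cmd vmsvc/getallvms

# Take a snapshot so the base disks stay consistent while we copy.
# Args: vmid, name, description, includeMemory (0), quiesce (1).
vim-cmd vmsvc/snapshot.create 12 backup "pre-copy snapshot" 0 1

# Copy the VM's directory to a backup datastore while the snapshot
# is absorbing new writes.
cp -r /vmfs/volumes/datastore1/oracle-vm /vmfs/volumes/backup-ds/oracle-vm-$(date +%Y%m%d)

# Remove all snapshots, merging changes back into the base disks.
vim-cmd vmsvc/snapshot.removeall 12
```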

Thanks
 

BennyT

Active Member
Dec 1, 2018
I have a question about Supermicro BIOS CPU performance settings for the hypervisor. I would like to permit ESXi to turbo cores above the 2.1 GHz base clock. Is there anything I need to change in the BIOS or on the ESXi host to allow that?

Here are the changes I made in the BIOS, but I'm unsure.


Advanced >> CPU Configuration >> Advanced Power Management
Power Technology >> Custom (changed from Energy Efficient)
Power Performance Tuning >> BIOS Controls EPB (changed from OS Controls EPB)
Energy performance BIAS setting >> performance (changed from Balanced Performance)

*there is also a "Max Performance" option, but I was concerned about it maxing a single core at 3.7 GHz
Advanced >> CPU Configuration >> Advanced Power Management >> CPU P State Control
Turbo Mode >> Enable (no change made to this as this was already enabled)
Advanced >> CPU Configuration >> Advanced Power Management >> Package C State Control
Package C State >> No Limit (changed from Auto)


The ESXi host setting is currently "Balanced Performance", but I think that may be ignored now that I've set the BIOS to control energy performance.

What do you think? Any recommendations? I'm still trying to wrap my brain around virtualization and how it hands CPU resources to the guests. I don't think the guests will ever show above 2.1 GHz, but I would expect the ESXi host to turbo the physical cores as it needs.
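For what it's worth, I've read that the host's power policy can also be read and set from the ESXi shell through the Power.CpuPolicy advanced option. A sketch (I haven't verified the option path on my build, so please correct me if it's wrong):

```shell
# Show the current CPU power policy ("High Performance", "Balanced", etc.).
esxcli system settings advanced list -o /Power/CpuPolicy

# Hand power management to ESXi's balanced policy. This only has effect
# if the BIOS exposes OS-controlled P-states/C-states ("OS Controls EPB").
esxcli system settings advanced set -o /Power/CpuPolicy -s "Balanced"
```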
 

Rand__

Well-Known Member
Mar 6, 2014
6,331
1,598
113
upload_2019-2-11_23-39-3.png
That's what I set for max performance...

If you want ESXi to be in control, I guess it's this one, plus enabling the other stuff
upload_2019-2-11_23-39-56.png
 

BennyT

Active Member
Dec 1, 2018
Hi @Rand__

Thanks for taking time to enter your BIOS and making screenshots. I'll experiment with those settings.


Also, I found the VMware Performance Best Practices doc (click the link below), which I'm going to give a shot. Screenshots below, mostly for my own benefit in case I need to reference them again later.


Following their documentation in Chapter 1 regarding hardware BIOS...

*I had to change the Power Technology setting to "Custom", else it locks me out of the other options.
  • In order to allow ESXi to control CPU power-saving features, set power management in the BIOS to “OS Controlled Mode” or equivalent. Even if you don’t intend to use these power-saving features, ESXi provides a convenient way to manage them.
upload_2019-2-11_20-11-44.png


  • C1E is a hardware-managed state; when ESXi puts the CPU into the C1 state, the CPU hardware can determine, based on its own criteria, to deepen the state to C1E. Availability of the C1E halt state typically provides a reduction in power consumption with little or no impact on performance.
upload_2019-2-11_20-13-53.png


  • C-states deeper than C1/C1E (typically C3 and/or C6 on Intel and AMD) are managed by software and enable further power savings. In order to get the best performance per watt, you should enable all C-states in BIOS. This gives you the flexibility to use vSphere host power management to control their use.
upload_2019-2-11_20-15-47.png


  • When “Turbo Boost” or “Turbo Core” is enabled, C1E and deep halt states (for example, C3 and C6 on Intel) can sometimes even increase the performance of certain lightly-threaded workloads (workloads that leave some hardware threads idle). However, for a very few multithreaded workloads that are highly sensitive to I/O latency, C-states can reduce performance. In these cases you might obtain better performance by disabling them in the BIOS. Because C1E and deep C-state implementation can be different for different processor vendors and generations, your results might vary.
upload_2019-2-11_20-25-15.png

and then, in ESXi...
upload_2019-2-11_20-35-31.png
I figured that since I'm new to this I should at least follow these best practices before I get experimental. I'll try the BIOS settings I show above and also use ESXi policy "Balanced". Then depending how that works out I may change that policy to be "High Performance".

I'll be sure to try other BIOS settings perhaps later this week after I spend some time using these.

Thanks again,

Benny
 

BennyT

Active Member
Dec 1, 2018
Removed the Norco SSD tray/shelf and moved my boot SSD into a vacant hot-swap bay. That really cleaned up the interior. No more SATA and SATA power cables needed for the boot drive. It also meant I could reinstall the Noctua cooler that collided with the shelf. The two Noctua cooler fans keep temps about the same as the Supermicro cooler, but the Noctua fans idle at about 500-700 RPM compared to the Supermicro cooler fan, which idled at 1300 RPM. I may still order one of those plastic trays that @EffrafaxOfWug posted earlier if I later decide to move the boot drive back inside, but right now I like how clean it is without cables everywhere.

IMG_20190215_142215.jpg

Also, I'm thinking of moving to hardware RAID 10, which means I'll be researching LSI disk controllers and possibly a SAS expander too (the Norco SAS/SATA backplane is direct attach, with no built-in expander). I still plan to invest in Samsung 883 DCT SSDs. Not sure whether it makes more sense to upgrade to SSDs first or to acquire the RAID controller card(s) first. Right now I have a wild mix of 1TB, 2TB, and 4TB HDDs, each its own datastore, with no RAID array.

The reason I'm considering SSDs and RAID now is that during a few big I/O jobs my new Linux VMs have not been quick to respond. During intensive I/O they practically freeze (not really, but response is very slow). Conversely, my bare-metal Linux boxes have their home, root, and Oracle product filesystems in different logical volumes spread across different physical devices. Those weren't even performance drives, but they never had response issues like this. Now, having a VM entirely on one physical HDD causes response issues: when a big I/O-intensive job is running, responsiveness is terrible. Or maybe it's something else, who knows.

I know I have a lot to learn about virtualization, but I'll need to start planning my datastore devices better than I have so far.

So if you have any success stories on a particular LSI disk controller (doesn't have to be the fastest), let me know.
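Before I buy anything I should probably confirm the single HDD really is the bottleneck. If I understand esxtop correctly, its disk device view shows per-device latency:

```shell
# Run from the ESXi shell. Press 'u' for the disk device view; the
# DAVG/cmd column is average device latency in ms. Sustained values
# above roughly 20-30ms on a datastore HDD suggest the spindle itself
# is saturated rather than anything in the VM.
esxtop
```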
 

Gadgetguru

Member
Dec 17, 2018
41
13
8
Removed the Norco ssd tray/shelf and moved my boot ssd into a vacant hotswap bay. ...

So if you have any success stories on a particular LSI disk controller (doesn't have to be the fastest), let me know.
So I went and installed ESXi after reading your adventures. You're telling me I can have multiple OSs without having to install an OS and then VirtualBox on top to create VMs? I don't have to have a multi-boot system, shut down, and reboot into another OS? I can manage all of this remotely?

I know nothing about it, but it's pretty cool. I'm on the free trial, but from what I gather the free license allows multiple CPUs but only 32GB RAM per CPU. I have two CPUs and 96GB of RAM. I'm guessing that means I'll be limited to using 64GB? I'm in the remote client and it does show 96GB.

Should I install ESXi on a small boot SSD and not on one of my two Micron SSDs? I think that's what you did. If I want to RAID 0 my Micron SSDs, I'm guessing I'll need to use one of my RAID controllers like you're talking about.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Technically speaking ESX(i) is an OS itself, but yes it's a dedicated type 1 hypervisor and you install your OS into the ESXi OS. Hardware and resources permitting you can run dozens of machines on a single box.

If you've got a standalone tiny SSD, use that. If you're using a dedicated RAID card, make a RAID1 and install ESXi onto a little bit of that; failing that, it's quite happy on a USB or SD card (but you'll need "proper" storage for logs and dumps, as well as for your VMs). ESX is quite finicky about what hardware it supports (or sort-of-supports, or kinda-works-for-now-if-you-install-some-third-party-stuff).
 

Gadgetguru

Member
Dec 17, 2018
Technically speaking ESX(i) is an OS itself, but yes it's a dedicated type 1 hypervisor and you install your OS into the ESXi OS. Hardware and resources permitting you can run dozens of machines on a single box.

If you've got a standalone tiny SSD, use that. If you're using a dedicated RAID card, make a RAID1 and install ESXi onto a little bit of that; failing that, it's quite happy on a USB or SD card (but you'll need "proper" storage for logs and dumps, as well as for your VMs). ESX is quite finicky about what hardware it supports (or sort-of-supports, or kinda-works-for-now-if-you-install-some-third-party-stuff).
Dude, this stuff is awesome! I can steal a 64GB SSD from my main computer and install it in the server to run ESXi. I have two RAID controller cards, both SAS3; one is currently in IT mode. I see there are still some limitations with the VMs in VMware, just like in VirtualBox.

I created a VM and installed Windows Server 2019 on it. Now I'm trying to access my other physical drives: I'd like to have my data drives, and then various OSs on datastores able to access those data drives. Apparently that's not so easy, at least not on ESXi 6.7.

How to passthrough SATA drives directly on VMWare ESXI 6.5 as RDMs

If I can have one VM, with whatever OS on it (Windows, Unraid, ???) and running Plex, and then create a couple other VMs with Windows, Linux, etc. and it all run on one server (set of hardware) while being able to access the data drives, I'd be ecstatic!

So: put ESXi on the 64GB SSD. RAID 0 the two Micron SSDs for a VM datastore. RAID 1 the two 10TB drives for data. That still leaves my 5TB, two 3TB, and a 2TB drive to play with. I also have a 1TB external I can use for backup. I have almost 12TB of media, but the 1TB can hold the files that are irreplaceable. I can also trim that 12TB down quite a bit if I take the time to do so.

@BennyT sorry, don't mean to hijack your thread, my apologies! Exciting stuff!
 

Rand__

Well-Known Member
Mar 6, 2014
If I can have one VM, with whatever OS on it (Windows, Unraid, ???) and running Plex, and then create a couple other VMs with Windows, Linux, etc. and it all run on one server (set of hardware) while being able to access the data drives, I'd be ecstatic!
You won't be able to have full access to the physical disk from multiple VMs at the same time.
You can, however, attach it to one VM and then share it out via the (ESXi-internal) network (SMB or whatever you prefer)
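e.g. something as small as this on the VM that owns the disk (the /tank/media path, share name, and user are made-up examples; on RHEL-family distros the service is smb rather than smbd):

```shell
# Append a share definition on the VM that owns the physical disk,
# then restart Samba so the other VMs can mount it over the vSwitch.
cat >> /etc/samba/smb.conf <<'EOF'
[media]
    path = /tank/media
    read only = no
    valid users = benny
EOF
systemctl restart smbd
```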
 

BennyT

Active Member
Dec 1, 2018
...

@BennyT sorry, don't mean to hijack your thread, my apologies! Exciting stuff!
Hi gadget,
No worries, I'm learning too, and I'm following your thread, questions, etc. and seeing the feedback from others. It's all good and helps me better understand as well. I'm currently trying to understand the many different ways to set up datastores for my VMs too.

Enjoy the build and learning experience and post questions/comments here anytime. Take care
 

BennyT

Active Member
Dec 1, 2018
156
44
28
Question: does vCenter Server/Client come with "Converter" for importing physical to virtual (P2V), or do I download the standalone Converter?

I can't seem to find it in the menus in vCenter, so I assume the way to go is the standalone Converter. It will be interesting to see if/how the conversion handles the logical volume groups from the physical Linux machine on the virtual machine.
 

itronin

Well-Known Member
Nov 24, 2018
976
622
93
Denver, Colorado
Question. Does vCenter Server/Client come with "Converter" for importing Physical to Virtual (P2V) or do I download the "standalone Converter"? ...
Standalone converter. Since you have vCenter, that'll be your destination. You'll get provisioning options as you go through the wizard.

here's an older linky
and another one

the procedure should be pretty simple and painless - so said the dentist.

itr
 

BennyT

Active Member
Dec 1, 2018
I was checking BIOS updates. I don't plan to update, but I was curious what it entailed.

I see that Supermicro is preparing for Xeon Scalable Cascade Lake CPUs. Does that imply the Cascade Lake Xeons may be able to use the LGA 3647 socket? It was speculated in November that the Cascade Lake Scalable CPUs might not use the 3647 socket: https://www.servethehome.com/intel-cascade-lake-ap-is-this-4p-cascade-lake-xeon-in-2p/

upload_2019-2-22_12-20-34.png


*ah, never mind. The article from Patrick is referencing Cascade Lake-AP (Scalable "Advanced Performance"). The BIOS is referencing Cascade Lake-SP (the non-"AP" variant).
 

BennyT

Active Member
Dec 1, 2018
View attachment 10395
Thats what I set for max performance...

If you want ESXi to be in control I guess thats this one and enable the other stuff
View attachment 10396
I'm going to try these BIOS settings this week and compare with last week. Maybe I'll see a little boost.

All of the VMs are currently experimental and I consider them throwaway, as I'm still learning and constantly editing configuration.


*edit: T states; PW states; C states... I've forgotten where I started. :) Autonomous; enable; auto; disable, blah... Experimenting with these to see what happens.


edit 2: testing ESXi CPU performance with various BIOS settings on the X11DPi-NT, BIOS version 2.1


1st test - disable all CPU Advanced Power Management in the BIOS to force the CPUs to stay at base clock. (Very easy to do: set Power Technology to "Disable".)
I have a Windows 10 VM with Prime95 running.
I allocated 4 logical CPU cores.
- In the vSphere Client, monitoring the VM CPU Usage in MHz showed the 4 allocated CPUs never went over 2.1 GHz, while inside the VM they were at 100% usage. This is expected, as the base clock for the Xeon Gold 6130 is 2.1 GHz. Max turbo all-core is 2.7 GHz. Max turbo with AVX2 is 2.4 GHz. Max turbo single-core is 3.7 GHz.​


2nd test -
Changed the Power Technology option from "Disabled" to "Custom". There is a 3rd option called "Energy Efficient", but that sounded worse than being locked at base clock. I'm not concerned about energy efficiency at the moment.

I set Hardware PW State to "Out of Band Mode" (this lets the hardware choose the P-state). That sounds the same as selecting BIOS-controlled power management, but it's a little different: setting "Out of Band" greyed out the option to select OS or BIOS control, since the hardware handles it itself. It also disables the Power Management Tuning level (Performance, Max Performance, etc.; those are greyed out too, and are only available with Hardware PW State disabled and the BIOS controlling power management). *I'll need to experiment more with that HW PW State mode; it has me a little confused.

I enabled P-states: SpeedStep enabled, Turbo Mode enabled, EIST set to HW_ALL. I enabled C0/C1 states, disabled the C1E halt state, enabled C6 reporting, and disabled Core C-State (default).
Rebooted and ran Prime95 in the Windows 10 VM -
- In the vSphere Client, monitoring the VM CPU Usage in MHz showed each of the 4 allocated CPUs went between 2.2 GHz and 2.5 GHz, then hovered at 2.4 GHz
upload_2019-2-23_20-56-19.png
3rd test - I kept everything the same as before except for the following:
I changed Hardware PW State to "Disabled" instead of "Out of Band Mode". (Disabled lets the hardware choose the P-state, but based on requests from the OS.) This made the option to select OS or BIOS control of power management no longer greyed out.

I changed Power Management Control to OS (this option was greyed out under "Out of Band", which forced the hardware to control power management). ESXi will now control power management.

On the ESXi host I set the Hardware Power Management level to "High Performance".
- Similar results to test 2 above. The CPU cores didn't turbo above 2.4 GHz, but they stayed near 2.4 GHz more consistently than in test 2.
upload_2019-2-23_20-53-24.png
4th test - everything the same as the 3rd test, but with ESXi Hardware Power Management set to "Balanced"
- The vSphere Client now shows each of the 4 allocated CPUs going to 3.1 GHz
upload_2019-2-23_21-51-6.png

It seems the best performance for me has been with ESXi controlling power management of the system, set to "Balanced" in the ESXi Hardware Power Management setting. But these have been very basic tests.

I'll need to experiment with various CPU loads, multiple VMs, and the other BIOS options. For example, I've yet to test with the BIOS controlling power management (not to be confused with the 2nd test above using Hardware PW "Out of Band Mode"... there were about 4 other selections to try)
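For the next round I'll also try watching the achieved clocks from the host side instead of the vSphere Client; if I understand esxtop correctly, its power screen shows this per physical core:

```shell
# Run from the ESXi shell. Press 'p' for the power management screen;
# a %A/MPERF value above 100 on a PCPU means it is running above base
# clock, i.e. turbo is actually engaging on that core.
esxtop
```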
 

BennyT

Active Member
Dec 1, 2018
I see that prices on the Samsung datacenter 883 DCT have dropped.

upload_2019-2-24_17-32-41.png

The 960GB 883 was $275 back in December when these first came out. I don't need crazy-fast NVMe storage, but SATA SSDs would be a huge improvement over the mix of HDDs I have now.


I apologize for my random thoughts, but I have to log my ideas somewhere and this seems as good a place as any :)

I've been using Logical Volume Manager (LVM) volume groups inside my experimental Linux VMs to distribute the filesystems across multiple physical drives. I do this on my physical bare-metal boxes too, with decent performance (as long as I keep a strict backup/recovery plan, because spreading a volume group across drives is not much different from RAID 0: if one drive fails, the filesystem is lost).

To do this in a virtual environment I've been assigning each physical drive its own datastore. Then I set up the VMs with three virtual HDDs, each with its own vSCSI controller, and each coming from a different datastore (different physical drives). Essentially I'm hoping my filesystems will spread across three physical drives, allocating about 200GB from each physical drive/datastore for a total of 600GB for the VM (an Oracle Apps/Database is about 500GB, so I leave 100GB of headroom for a decent /home, /tmp, and swap). The logical volumes act "kind of" like striping in RAID 0, with the filesystems spread across three physical drives.
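Inside the guest, that layout looks something like this (the /dev/sdb-/dev/sdd device names and the oravg/oralv names are just examples from my notes; --stripes 3 makes LVM truly stripe across all three virtual disks rather than simply concatenating them):

```shell
# Each of these virtual disks is backed by a different datastore/drive.
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate oravg /dev/sdb /dev/sdc /dev/sdd

# Stripe the logical volume across all three PVs with a 64KB stripe
# size, so sequential I/O is spread over three spindles like RAID 0.
lvcreate --stripes 3 --stripesize 64 -L 500G -n oralv oravg

mkfs.xfs /dev/oravg/oralv
mkdir -p /u01
mount /dev/oravg/oralv /u01
```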

My other alternative, which I'll experiment with in a few weeks, is to build one large NAS VM and then use SMB(?) to share the storage to the other VMs. I've read that some people have gone that route. I don't like the idea of the Samba overhead, but maybe that's not a big deal.

Or I can just start investing in SSDs like the ones shown above and eventually put them into RAID 10, or again use LVM with them.


Have a great week!
 

BennyT

Active Member
Dec 1, 2018
When my ESXi host is not under much load, exhaust fans 5/6 trigger a sensor alert in the vSphere Client even though they are above the threshold in IPMI.

upload_2019-2-25_9-58-24.png

To counter this I've changed my IPMI fan profile from "Optimal" to "Standard", which keeps the fans running a few hundred RPM faster than "Optimal". No noticeable noise difference.

The other option would be to disable these events, but I don't like that idea.

upload_2019-2-25_10-5-59.png
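A third option would be adjusting the BMC's lower fan thresholds themselves so slow idle speeds stop tripping events. The FAN5 sensor name and RPM values below are examples; check your actual sensor names and current thresholds first:

```shell
# List sensors with their current readings and thresholds.
ipmitool sensor

# Set FAN5's lower thresholds: non-recoverable, critical, non-critical.
# Pick values below the fans' idle RPM so "Optimal" mode no longer alerts.
ipmitool sensor thresh FAN5 lower 100 200 300
```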
 