LGA3647 ESXi build to host my Oracle Apps/Databases


Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
7 repositories due to the 3 Veeam instances, or is there additional reasoning behind this?
 

BennyT

Active Member
Dec 1, 2018
166
46
28
It's just the way I've configured my backup target server disks (bkp repos) and how I keep my disks organized.

I've two Veeam backup server installations, each in its own Windows VM.

I've three Veeam backup proxy servers (one is a standalone Linux VM; the other two are the same Windows Veeam server VMs mentioned above, since Veeam servers act as proxies by default). These proxies' job is to attach virtual VM disks, read, compress, then transmit to the target backup servers where the backup repositories reside.

I've two physical Linux servers mostly dedicated for backup targets.

I've a total of 7 hotswap drive trays across those two physical servers dedicated to holding backups.

Each drive tray is a separate filesystem mount point/partition.

For example
For example, server 1 has four hotswap trays, so I've set each one up as a separate mount point:

/dev/sda --> /backup_drivebay_1
/dev/sdb --> /backup_drivebay_2
/dev/sdc --> /backup_drivebay_3
/dev/sdd --> /backup_drivebay_4

/dev/sde is the internal boot disk and contains root and swap. It isn't used as a backup repo, just for the OS and other services.

Server 2 is similar but has fewer disks:
/backup_drivebay_1
/backup_drivebay_2
/backup_drivebay_3

Each of those hotswap bays has been configured as a separate Veeam repository when viewed from the Veeam servers.

For example:
"smicro1-backup_drivebay_2" would be the backup repository name for the 2nd hotswap tray in server #1.

"hpex495-backup_drivebay_2" would be the repo name for the 2nd hotswap tray in server #2.

I've set up the target physical servers' /etc/fstab so that if I need to, I can unmount a drive, swap in a new disk, and remount it, and it will be seen as its respective /backup_drivebay_# corresponding to whichever /dev device I swapped out.
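For illustration, a mount-by-label variant of that idea (the labels, device names, and filesystem type here are hypothetical, not the author's actual fstab): labeling each filesystem means a swapped-in replacement disk comes back at the same /backup_drivebay_# path regardless of which /dev/sdX it enumerates as.

```shell
# Label the filesystem on a backup disk once (ext4 shown; XFS would
# use xfs_admin -L instead):
e2label /dev/sdb1 drivebay_2

# /etc/fstab entry mounts by label, not by /dev name; nofail lets the
# server boot even when the bay is empty:
#   LABEL=drivebay_2  /backup_drivebay_2  ext4  defaults,nofail  0 2

# Hot-swap procedure:
umount /backup_drivebay_2      # detach the full disk
# ...pull it, insert the fresh pre-labeled disk...
mount /backup_drivebay_2       # fstab finds the new disk by its label
```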
 
Last edited:
  • Like
Reactions: Rand__

Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
Ah I see, I didn't consider you'd actually physically swap drives, but of course that makes sense
 

BennyT

Active Member
Dec 1, 2018
166
46
28
Yeah, I have PowerShell scripts calling Veeam APIs, scheduled through Windows Task Scheduler (Windows has a more advanced scheduler than what's built into Veeam), and the various scripts rotate each month through the different trays; after a while I can swap them for fresh disks.

Also, the repos aren't just for the workstation backups but for all the physical servers' and VMs' full, incremental and VeeamZip backups too.
 

BennyT

Active Member
Dec 1, 2018
166
46
28
vSphere v8 GA is available today. Woo hoo! I checked the VMware Compatibility Guide and it says all of my hardware is still compatible:

Xeon scalable gen 1
intel x722 10BASE-T NICs
LSI SAS3008 PCIe HBA

I was hoping the NVMe M.2 compatibility list would grow, but it's the same list as v7 where NVMe is concerned.

I won't be going to it any time soon, but maybe when it reaches 8u1 or 8u3.

Currently I'm using a VMUG Advantage membership, but I need to download new license keys from VMUG every year. I may switch to the vSphere Essentials Kit for $686 USD: a perpetual license for up to 3 ESXi hosts with 2 sockets each, a vCenter Server Appliance license to manage the ESXi hosts, and three years of vSphere upgrades. It would never expire. The Essentials Kit doesn't come with High Availability or vMotion (those are available with the VMUG license keys I have now, but I don't need those features for my home lab).

Once my current VMUG Advantage membership expires I may go that route with the vSphere Essentials Kit. I like the idea of having a perpetual license: I could stay at v7u3, or maybe upgrade to v8 eventually, and stay locked in at a version without worrying that VMUG drops older versions and I find out newer licenses and products aren't compatible with my hardware.
 

BennyT

Active Member
Dec 1, 2018
166
46
28
I'm sharing my bash shell script that generates ESXi configuration backups and archives them to my backup server.

I scheduled this script to run nightly via cron from a Linux server used for holding backups:

0 0 * * * /BRTA_scripts/brta_esxi01_config_backups.sh >> /BRTA_scripts/brta_esxi01_config_backups.log 2>&1

The script is below and it's pretty self-explanatory. I was previously making ESXi configuration backups whenever I remembered to; now they run nightly. I know many others already know how to do this, but having a full script with inline comments may help people just getting into ESXi.

I'd like to add error handling and email notifications, but this is the basic script starting point.
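As a starting point for that error handling, one common pattern is to wrap each step and abort with a message on the first failure. This is only a sketch with stand-in step functions (the real ssh/scp commands would go inside them), not the author's script:

```shell
#!/bin/bash
# Fail-fast sketch: each backup step aborts the run with a message if it fails.
set -uo pipefail

fail() { echo "ERROR: $1" >&2; exit 1; }

step_generate_backup() {
    # stand-in for: ssh root@${ESXI_HOST} "vim-cmd hostsvc/firmware/backup_config"
    true
}

step_copy_backup() {
    # stand-in for: scp root@${ESXI_HOST}:${BACKUP_FILE_ON_ESXI} ${TARGET_PATH}/
    true
}

step_generate_backup || fail "could not generate backup on ESXi host"
step_copy_backup     || fail "could not copy backup to local server"
echo "backup completed"
```

A later email notification could hook into the same `fail` helper by calling mailx there before exiting.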


Bash:
#!/bin/bash
###############################################################################
# Filename:  /BRTA_scripts/brta_esxi01_config_backups.sh                      #
#                                                                             #
# Written On: October 10, 2022                                                #
#                                                                             #
# Purpose:                                                                    #
#  Connects to the ESXi Host linux command line and runs vim-cmd commands     #
#  to generate a new ESXi configuration backup file.  Then we use find and    #
#  scp to copy the backup to the local server for safe keeping.               #
#                                                                             #
#  We'll purge the backups older than 365 days                                #
#                                                                             #
#  Scheduling this via cron to run nightly at midnight.                       #
#  Log file is kept in /BRTA_scripts/brta_esxi01_config_backups.log           #
#                                                                             #
# Connection to ESXi Host using ssh and key-pair without using password.      #
#   To use ssh, setup key pair using ssh-keygen from the local host and then  #
#   copy the authorized key string to the ESXi /root/.ssh/authorized_keys     #
#   and also to the ESXi /etc/ssh/keys-root/authorized_keys                   #
#                                                                             #
#   key-pair example:                                                         #
#    on local server...                                                       #
#                                                                             #
#        ssh-keygen -t rsa -b 4096  <-- press ENTER to all questions,         #
#                                       accepting defaults and skip passphrase#
#                                                                             #
#        ssh-copy-id root@<esxi_hostname> <-- this will ask for the password. #
#                                             Copies authorized_keys to       #
#                                             ESXi's /root/.ssh/              #
#                                                                             #
#        ssh root@<esxi_hostname> <-- this will ask for password still.       #
#                                    we are now at the ESXi host command line #
#                                                                             #
#    on ESXi command line...                                                  #
#                                                                             #
#        cat /root/.ssh/authorized_keys <-- then copy resulting contents to   #
#                                           /etc/ssh/keys-root/authorized_keys#
#                                                                             #
#    we should now be able to ssh from the local server to ESXi host without  #
#    interactively entering the password.                                     #
#                                                                             #
# HISTORY                                                                     #
# Date           Version  Updated By                                          #
# -------------- -------- --------------------------------------------------  #
# OCT 10, 2022   1.00     BRTA                                                #
# - initial version                                                           #
###############################################################################

###############################################################################
# define variables
###############################################################################
PROGRAM_NAME=$0                    # <-- filename we are looking at right now.
SYSDATE=`date`                     #example: Thu Oct 13 10:18:12 CDT 2022
TIMESTAMP=$(date "+%Y%m%d_%H%M%S") #example: 20221013_101812
HOST=`hostname`                    # <--local server
ESXI_HOST="blah.blah.com"    # <-- ESXi Host
TARGET_PATH="/backup_drivebay_1/esxi_configuration_backups" # <-- local path

echo "~~~~~~~~~~~~~"
echo "${SYSDATE} - Beginning ${PROGRAM_NAME}"

###############################################################################
# connect to the ESXi Host and gather the "uname" info about the ESXi version.
# We'll use part of this info in naming the backup so that we know which
# version of ESXi the backup pertains to.
###############################################################################
echo "..Connecting to ${ESXI_HOST} to get ESXi Version and Build Number:"
ESXI_VERSION=`ssh root@${ESXI_HOST} "uname -r"`                 #example: 7.0.3
ESXI_BUILD=`ssh root@${ESXI_HOST} "uname -v"|cut -d " " -f4`    #example: build-19193900
echo "...ESXI_VERSION=${ESXI_VERSION}"
echo "...ESXI_BUILD=${ESXI_BUILD}"

###############################################################################
# connect to the ESXi Host and run vim-cmd commands to generate the backup.
###############################################################################
echo "..Connecting to ${ESXI_HOST} and generate backup file."
ssh root@${ESXI_HOST} "vim-cmd hostsvc/firmware/sync_config; vim-cmd hostsvc/firmware/backup_config"

###############################################################################
# get the backup filename (with full path) from the ESXi Host
###############################################################################
BACKUP_FILE_ON_ESXI=`ssh root@${ESXI_HOST} "find /scratch/downloads -name 'configBundle*.tgz'"`

###############################################################################
# Using scp to copy the backup file from ESXi Host to the local server.
# Renaming the file as it lands on the local server with version and build and
# timestamp in filename.
###############################################################################
echo "..Copying the ${ESXI_HOST} configuration backup file to ${HOST}"
scp root@${ESXI_HOST}:${BACKUP_FILE_ON_ESXI} ${TARGET_PATH}/configBundle-${ESXI_HOST}-${ESXI_VERSION}-${ESXI_BUILD}-${TIMESTAMP}.tgz

###############################################################################
# although the backup file is cleaned up automatically by the ESXi host after
# a few minutes, we are going to force it to be removed immediately using rm.
# The reason we want to remove it immediately is if we ran multiple
# iterations of this script one after another, before the ESXi host would
# have time to automatically clear out the previous backups.  In which case
# when we try to copy the backups from the ESXi to the Host we'd find a bunch
# of files when we really only want to grab the last one.  That is why we
# forcefully clean up each time we run this script.                           #
###############################################################################
echo "..Removing backup file from the ESXi host ${ESXI_HOST}:${BACKUP_FILE_ON_ESXI}"
ssh root@${ESXI_HOST} "rm -rf /scratch/downloads/*/configBundle*.tgz"

echo "..Listing the 5 most recent backups (latest are listed at the end):"
ls -tr ${TARGET_PATH}/configBundle*.tgz | tail -n 5

###############################################################################
# Using find with mtime and rm commands to purge backup files older than 365
# days from the local server.
# -maxdepth 0 --> find files in the specified dir, not recursive dirs
# -type f     --> find files, not directories
# -mtime +365 --> find files with modified date older than 365 days
# -exec rm {} --> execute the remove command on files found
# \;          --> ends the -exec option section
# -print      --> display to standard output the files purged
###############################################################################
echo "..Purging backups older than 365 days:"
find ${TARGET_PATH}/configBundle*.tgz -maxdepth 0 -type f -mtime +365 -exec rm {} \; -print

echo "Exiting ${PROGRAM_NAME}"
exit 0
Resulting output from the script:

~~~~~~~~~~~~~
Thu Oct 13 11:46:12 CDT 2022 - Beginning ./brta_esxi01_config_backups.sh
..Connecting to blah.blah.com to get ESXi Version and Build Number:
...ESXI_VERSION=7.0.3
...ESXI_BUILD=build-19193900
..Connecting to blah.blah.com and generate backup file.
Bundle can be downloaded at : http://*/downloads/525cc1cc-ddf5-b88c-0690-4f3da313a681/configBundle-blah.blah.com.tgz
..Copying the blah.blah.com configuration backup file to bkpsrv.blah.com
configBundle-blah.blah.com.tgz 100% 93KB 14.3MB/s 00:00
..Removing backup file from the ESXi host blah.blah.com:/scratch/downloads/525cc1cc-ddf5-b88c-0690-4f3da313a681/configBundle-blah.blah.com.tgz
..Listing the 5 most recent backups (latest are listed at the end):
/backup_drivebay_1/esxi_configuration_backups/configBundle-blah.blah.com-7.0.3-build-19193900-20221013_101200.tgz
/backup_drivebay_1/esxi_configuration_backups/configBundle-blah.blah.com-7.0.3-build-19193900-20221013_102606.tgz
/backup_drivebay_1/esxi_configuration_backups/configBundle-blah.blah.com-7.0.3-build-19193900-20221013_104348.tgz
/backup_drivebay_1/esxi_configuration_backups/configBundle-blah.blah.com-7.0.3-build-19193900-20221013_113216.tgz
/backup_drivebay_1/esxi_configuration_backups/configBundle-blah.blah.com-7.0.3-build-19193900-20221013_114612.tgz
..Purging backups older than 365 days:
Exiting ./brta_esxi01_config_backups.sh
Next I'd like to write a PowerShell script using VMware PowerCLI cmdlets to send email notifications after each scheduled VCSA file-based backup completes. I'm already scheduling nightly backups of the VCSA using its own GUI (vCenter appliance port 5480) backup scheduling tool, but I don't see a provision there for sending email notifications. If I write my own PowerShell script I think I can not only schedule the VCSA backups but also figure out how to call an email handler for notifications.
 
Last edited:
  • Like
Reactions: Rand__ and Marsh

BennyT

Active Member
Dec 1, 2018
166
46
28
Added email notifications to the esxi configuration backup script.

Basically, it uses the Linux mailx command, with Postfix as the MTA, configured to transmit the TLS-encrypted email body to a remote SMTP server. You can view the resulting email's headers to verify that it was transmitted to the SMTP server using ESMTPS and TLSv1.3.

The email body comes from a /tmp file we append to as the script progresses. We do that by piping through tee -a ${EMAIL_FILENAME} for the echo and Linux commands we want to see in the email. We use the process ID in the /tmp filename so that we can identify it later, near the end of the script, and remove it as part of the cleanup.

Instructions for the Postfix and TLS setup are in the comments of the script's header. The call to mailx is near the very end of the script.
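The body-building pattern, isolated from the full script (a sketch; the filename scheme mirrors the script's, and the mailx call is replaced with a cat for illustration):

```shell
#!/bin/bash
# Every status line is printed to the console AND appended to a per-process
# temp file that later becomes the mail body.
PID=$$
EMAIL_FILENAME="/tmp/brta-email_body-${PID}.txt"

echo "..step one done" | tee -a "${EMAIL_FILENAME}"
echo "..step two done" | tee -a "${EMAIL_FILENAME}"

# in the real script: cat "${EMAIL_FILENAME}" | mailx -r "${SENDERS_EMAIL}" -s "${EMAIL_SUBJECT}" ${RECIPIENTS_EMAIL}
echo "--- email body would contain: ---"
cat "${EMAIL_FILENAME}"

rm -f "${EMAIL_FILENAME}"   # cleanup is keyed by the PID embedded in the filename
```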

Bash:
#!/bin/bash
###############################################################################
# Filename:  /BRTA_scripts/brta_esxi01_config_backups-TLS.sh                  #
#                                                                             #
# Written On: October 10, 2022                                                #
#                                                                             #
# Purpose:                                                                    #
#  Connects to the ESXi Host linux command line and runs vim-cmd commands     #
#  to generate a new ESXi configuration backup file.  Then we use find and    #
#  scp to copy the backup to the local server for safe keeping.               #
#                                                                             #
#  We'll purge the backups older than 365 days                                #
#                                                                             #
#  Scheduling this via cron to run nightly                                    #
#  Log file is kept in /BRTA_scripts/brta_esxi01_config_backups.log           #
#                                                                             #
# Connection to ESXi Host using ssh and key-pair without using password.      #
#   To use ssh, setup key pair using ssh-keygen from the local host and then  #
#   copy the authorized key string to the ESXi /root/.ssh/authorized_keys     #
#   and also to the ESXi /etc/ssh/keys-root/authorized_keys                   #
#                                                                             #
#   key-pair example:                                                         #
#    on local server...                                                       #
#                                                                             #
#        ssh-keygen -t rsa -b 4096  <-- press ENTER to all questions,         #
#                                       accepting defaults and skip passphrase#
#                                                                             #
#        ssh-copy-id root@<esxi_hostname> <-- this will ask for the password. #
#                                             Copies authorized_keys to       #
#                                             ESXi's /root/.ssh/              #
#                                                                             #
#        ssh root@<esxi_hostname> <-- this will ask for password still.       #
#                                    we are now at the ESXi host command line #
#                                                                             #
#    on ESXi command line...                                                  #
#                                                                             #
#        cat /root/.ssh/authorized_keys <-- then copy resulting contents to   #
#                                           /etc/ssh/keys-root/authorized_keys#
#                                                                             #
#    we should now be able to ssh from the local server to ESXi host without  #
#    interactively entering the password.                                     #
#                                                                             #
# HISTORY                                                                     #
# Date           Version  Updated By                                          #
# -------------- -------- --------------------------------------------------  #
# OCT 10, 2022   1.00     BRTA                                                #
# - initial version                                                           #
#                                                                             #
# NOV 15, 2022   2.00     BRTA                                                #
# - Now sending TLS encrypted notifications using mailx and postfix config.   #
#   postfix configuration is in /etc/postfix/main.cf (owned by root)          #
# - /etc/postfix/main.cf configuration file contains the following:           #
#        smtp_use_tls = yes                                                   #
#        smtp_tls_security_level = encrypt                                    #
#        relayhost = [your_smtp.host.com]:587                                 #
#        smtp_sasl_auth_enable = yes                                          #
#        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd              #
#        smtp_sasl_security_options= noanonymous                              #
#        inet_protocols = ipv4                                                #
#        home_mailbox = mail/                                                 #
# - /etc/postfix/sasl_passwd contains the following:                          #
#        [your_smtp.hostname.com]:587 donotreply@your_domain.com:passwd       #
# - run the following to generate the postfix lookup table:                   #
#        postmap /etc/postfix/sasl_passwd                                     #
# - secure the SMTP account credentials password file:                        #
#        chmod 600 /etc/postfix/sasl_passwd                                   #
# - startup the postfix service:                                              #
#        systemctl restart postfix                                             #
# - test outbound email from command line:                                    #
#    echo "email body...testing" | mailx -r sender_addr@domain.com -s "Testing - subject" recip_addr@domain.com
# - examine postfix status:                                                   #
#        systemctl -l status postfix                                           #
# - examine mail log in:                                                      #
#        /var/log/maillog                                                     #
#                                                                             #
###############################################################################

###############################################################################
# define variables
###############################################################################
PROGRAM_NAME=$0                    # <-- filename we are looking at right now.
SYSDATE=`date`                     #example: Thu Oct 13 10:18:12 CDT 2022
TIMESTAMP=$(date "+%Y%m%d_%H%M%S") #example: 20221013_101812
HOST=`hostname`                    #example: oel7u9.domain.com <--local server
ESXI_HOST="esxi01.domain.com"    # <-- ESXi Host
TARGET_PATH="/backup_drivebay_1/esxi_configuration_backups" # <-- local path

###############################################################################
# email handler variables
###############################################################################
PID=$$                             # <-- processID used for email body filename
EMAIL_FILENAME="/tmp/brta-email_body-${PID}.txt"
SENDERS_EMAIL="sender_addr@domain.com"
RECIPIENTS_EMAIL="recip1_addr@domain.com recip2_addr@domain.com"
EMAIL_SUBJECT="[Success] BRTA Custom Backup: ${ESXI_HOST} Configuration to ${HOST}"

echo "~~~~~~~~~~~~~"
echo "${SYSDATE} - Beginning ${PROGRAM_NAME} [v2.00 Nov 15, 2022]"|tee -a ${EMAIL_FILENAME}
echo -e ""|tee -a ${EMAIL_FILENAME}

###############################################################################
# connect to the ESXi Host and gather the "uname" info about the ESXi version.
# We'll use part of this info in naming the backup so that we know which
# version of ESXi the backup pertains to.
###############################################################################
echo "..Connecting to ${ESXI_HOST} to get ESXi Version and Build Number:"|tee -a ${EMAIL_FILENAME}
ESXI_VERSION=`ssh root@${ESXI_HOST} "uname -r"`                 #example: 7.0.3
ESXI_BUILD=`ssh root@${ESXI_HOST} "uname -v"|cut -d " " -f4`    #example: build-19193900
echo "ESXI_VERSION=${ESXI_VERSION}"|tee -a ${EMAIL_FILENAME}
echo "ESXI_BUILD=${ESXI_BUILD}"|tee -a ${EMAIL_FILENAME}

###############################################################################
# connect to the ESXi Host and run vim-cmd commands to generate the backup.
###############################################################################
echo -e ""|tee -a ${EMAIL_FILENAME}
echo "..Connecting to ${ESXI_HOST} to generate backup file."|tee -a ${EMAIL_FILENAME}
ssh root@${ESXI_HOST} "vim-cmd hostsvc/firmware/sync_config; vim-cmd hostsvc/firmware/backup_config"

###############################################################################
# get the backup filename (with full path) from the ESXi Host
###############################################################################
BACKUP_FILE_ON_ESXI=`ssh root@${ESXI_HOST} "find /scratch/downloads -name 'configBundle*.tgz'"`

###############################################################################
# Using scp to copy the backup file from ESXi Host to the local server.
# Renaming the file as it lands on the local server with version and build and
# timestamp in filename.
###############################################################################
echo "..Copying the ${ESXI_HOST} configuration backup file to ${HOST}"|tee -a ${EMAIL_FILENAME}
scp root@${ESXI_HOST}:${BACKUP_FILE_ON_ESXI} ${TARGET_PATH}/configBundle-${ESXI_HOST}-${ESXI_VERSION}-${ESXI_BUILD}-${TIMESTAMP}.tgz
echo -e ""|tee -a ${EMAIL_FILENAME}

###############################################################################
# although the backup file is cleaned up automatically by the ESXi host after
# a few minutes, we are going to force it to be removed immediately using rm.
# The reason we want to remove it immediately is if we ran multiple
# iterations of this script one after another, before the ESXi host would
# have time to automatically clear out the previous backups.  In which case
# when we try to copy the backups from the ESXi to the Host we'd find a bunch
# of files when we really only want to grab the last one.  That is why we
# forcefully clean up each time we run this script.                           #
###############################################################################
echo "..Cleaning up files from the ESXi host ${ESXI_HOST}"|tee -a ${EMAIL_FILENAME}
ssh root@${ESXI_HOST} "rm -rf /scratch/downloads/*/configBundle*.tgz"

echo -e ""|tee -a ${EMAIL_FILENAME}
echo -e "..Listing the three most recent backups: "|tee -a ${EMAIL_FILENAME}
ls -t ${TARGET_PATH}/configBundle*.tgz | head -n 3|tee -a ${EMAIL_FILENAME}

###############################################################################
# Using find with mtime and rm commands to purge backup files older than 365
# days from the local server.
# -maxdepth 0 --> find files in the specified dir, not recursive dirs
# -type f     --> find files, not directories
# -mtime +365 --> find files with modified date older than 365 days
# -exec rm {} --> execute the remove command on files found
# \;          --> ends the -exec option section
# -print      --> display to standard output the files purged
###############################################################################
echo -e ""|tee -a ${EMAIL_FILENAME}
echo "..Purging backups older than 365 days:"|tee -a ${EMAIL_FILENAME}
find ${TARGET_PATH}/configBundle*.tgz -maxdepth 0 -type f -mtime +365 -exec rm {} \; -print|tee -a ${EMAIL_FILENAME}
echo -e ""|tee -a ${EMAIL_FILENAME}
echo -e ""|tee -a ${EMAIL_FILENAME}

###############################################################################
# using mailx along with our custom postfix configuration /etc/postfix/main.cf
# to send TLS encrypted email to remote SMTP server.
###############################################################################
cat ${EMAIL_FILENAME}|mailx -r ${SENDERS_EMAIL} -s "${EMAIL_SUBJECT}" ${RECIPIENTS_EMAIL}

echo "cleanup and remove the tmp email body file"
rm -rf /tmp/brta-email_body-${PID}.txt
##########################################################################
######################## END OF EMAIL BLOCK ##############################
##########################################################################
echo "Exiting ${PROGRAM_NAME}"
exit 0

example of received email:
2022-11-15_12-14-23.png
 
Last edited:
  • Like
Reactions: itronin

BennyT

Active Member
Dec 1, 2018
166
46
28
Updated my yearly VMUG license key over the weekend. I've been using vSphere products and VMUG for over 4 yrs now. I'm very happy with this setup; it has allowed me to run so many more Oracle experiment servers compared to when I had only physical servers.

Also, upgraded ESXi to 7.0u3k over the weekend. Did it primarily because they said there were some bug fixes relating to guests using UEFI. Now that I'm using UEFI more, I thought this was a worthwhile patch.

The BIGGEST thing I noticed with this patch was that the ESXi GUI changed a little bit: different colors and fonts.
2023-03-11_19-10-13.png

Was tempted to upgrade to ESXi 8 but I'm still going to hold out longer before that jump.

*Also upgraded to the latest vCenter Server.

Things have been working fine; no noticeable differences really.
 

BennyT

Active Member
Dec 1, 2018
166
46
28
Hello,

My ESXi host is a Supermicro X11DPi-N(T). That mobo has two Intel X722 10GBASE-T RJ45 network adapters and one IPMI RJ45 port. Today I tried connecting to guests on my ESXi host (7.0u3k) and discovered that the motherboard's two 10GBASE-T physical network adapters are disconnected/down. These connections were working as of 6am this morning, but when I tried connecting to guest VMs at 9am they were not.

I've not done any ESXi or vCenter upgrades since May 2023, so I don't believe it is a software or ESXi configuration issue. I've not made ANY changes to my ESXi host or vCenter for at least a few months.

Could it be the motherboard's physical hardware adapters are simply "broken"?

Here are some screenshots:
2023-09-07_09-57-29.png



*EDIT: We can almost certainly rule out ESXi configuration, because IPMI on the first of the two 10GBASE-T ports should work even before ESXi boots, indeed even with the system shut down, AND IT ISN'T WORKING. That tells me something is wrong with that network port at the hardware level (or firmware, but we haven't changed anything in the firmware).

I'm looking now at a few inexpensive Intel X540 or X550-T2 (2x 10GBASE-T) cards for about $90-$220 USD on Amazon. The X722 10GBASE-T cards are A LOT more money, so I don't think I'll get one of those. I'm really tempted to go SFP+, but I just want to get this working again first with the existing hardware, switch, etc. I may move to SFP+ later, after it's working again or on my next server build.

ANY IDEAS or SUGGESTIONS are welcome :)

Thanks,

Benny
 
Last edited:

BennyT

Active Member
Dec 1, 2018
166
46
28
Ordered an X550-T2 two-port 10GBASE-T card for $216.99 on Amazon. It's silkscreened with "10GeTek" on the PCB, so I don't think it's an official Intel card, but I'm expecting it has an authentic Intel chipset and firmware :cool: We'll see how it goes. It should arrive tomorrow, and I'm hoping it's plug-and-play after a reboot. It's in the VMware Compatibility Guide from 6.7 up to the latest 8.0u1. I might have to set the new adapters as the ESXi management adapter in the ESXi console, and I might need to redirect my virtual switches to the new adapters. I'm hoping that's all I'll need to do :)
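If it isn't fully plug-and-play, the rework from the ESXi shell is usually a couple of esxcli calls (the vmnic and vSwitch names below are illustrative; check what `esxcli network nic list` actually reports for the new card):

```shell
# See which vmnic names the new X550 ports received:
esxcli network nic list

# Add a new-card port as an uplink on the existing standard vSwitch:
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch0

# Remove the dead onboard uplink:
esxcli network vswitch standard uplink remove --uplink-name=vmnic0 --vswitch-name=vSwitch0
```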

Weird that this happened today after 4.5 yrs. Maybe it was the heat, because those 10GbE copper adapters are always hot even with a fan pointed right at them. I may be moving to SFP+ sooner rather than later if this card dies too.
 
Last edited:
  • Like
Reactions: itronin

itronin

Well-Known Member
Nov 24, 2018
1,242
804
113
Denver, Colorado
The card is probably an Intel reference design X550 and will most likely show up as an Intel.

SFP+ if you can. IMHO it's always better and less stressful on the components. Maybe a Mellanox ConnectX-4 10/25GbE card? I believe that is still supported in ESXi 8.

You didn't patch ESXi the night before it died, did you? If so, I'd be looking to roll back and see if the ports come back.

FWIW I'm going to start retiring my X10 E5 boards in favor of X11DPH-Ts, which I have started acquiring... only 5+ years after your original build!

I'm still installing SFP+ cards even though 10GBASE-T is onboard. I'm looking at the Mellanox and Chelsio 10/25GbE NICs but have not fully researched nor picked one.

ITR
 

BennyT

Active Member
Dec 1, 2018
166
46
28
Very cool on the X11DPH you are moving to.

I didn't patch ESXi recently. The last time I changed anything in our configuration was back in May, when I went from 7.0u3h to 7.0u3k.

I'm going to price out the sfp+ parts list. Thanks for those sfp brand recommendations.

I'd need an SFP+ switch too. 25GbE sounds awesome.
 

itronin

Well-Known Member
Nov 24, 2018
1,242
804
113
Denver, Colorado
You're welcome. I should note I did not check compatibility nor roadmap for those parts with ESXi, as I've dropped my VMUG subscription and am in the process of moving to XCP-ng (for my lab and my clients). Please do check them against the compatibility guide for ESXi, though.

I'm making a move that is not for everyone (it's really use-case dependent), so please don't take my comment as an endorsement to move off ESXi.
 

BennyT

Active Member
Dec 1, 2018
166
46
28
I'm pretty happy with ESXi and plan to stay with it. I'll be sure to check against the compatibility list for the cards, etc. Thanks again, and have fun with your new planned lab setup and with XCP-ng :)
 

BennyT

Active Member
Dec 1, 2018
166
46
28
Good news: the new X550-T2 (two ports) arrived from Amazon about 45 minutes ago and it is working great so far. It was mostly plug-n-play. The only thing I had to do was go into the ESXi console (the yellow ASCII UI screens) and assign the new physical adapter "vmnic2" as the new ESXi management adapter, which automatically assigned it to the main virtual switch vSwitch0. Then in vCenter (I could've done this in ESXi too) I set the 2nd adapter port "vmnic3" to one of my other virtual switches, vSwitch2.

ESXi still lists the bad onboard physical adapter ports vmnic0 and vmnic1, but shows them as disconnected/down. They are still getting power and report a temp of 50°C even though they don't work. It would be nice if I could turn them off completely and disable them altogether since they are broken, but that's not a big issue. Maybe I can do that in the SM X11 mobo BIOS; I'll look at that tomorrow.
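For anyone following along, the same uplink reassignment (and administratively downing the dead onboard ports) can also be done from the ESXi shell instead of the DCUI. This is just a sketch assuming my vmnic/vSwitch names; adjust for your own host:

```shell
# List physical NICs with their link state and driver
esxcli network nic list

# Administratively bring the dead onboard ports down
esxcli network nic down -n vmnic0
esxcli network nic down -n vmnic1

# Attach the new X550 ports as uplinks to the standard virtual switches
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch2
```

Note `nic down` only disables the link in ESXi; it doesn't power off the PHY, so the chip will still run warm.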

But I'm already thinking I should add a few SFP+ adapter ports. They should run a lot cooler than the 10GBASE-T twisted-pair adapters. I've made a parts list for a MikroTik 8-port 10Gb SFP+ switch, four DAC cables, a couple of SFP+ NICs for my servers, and a couple of RJ45 transceivers for the uplink from the MikroTik switch to my main router. I may not need the RJ45 transceivers (because the SFP+ switch has one RJ45 port, I think), but I'll get a couple anyway in case I need them later.

2023-09-08_20-19-56.png
 

itronin

Well-Known Member
Nov 24, 2018
1,242
804
113
Denver, Colorado
Check the manual, but I believe you're looking for jumper JPL1 to disable LAN1 and LAN2 on the X11DPH-T. I sure hope that would cut the power to the controller; my plan is to kill the onboard ports and use my dual SFP+ cards.

I picked up an inexpensive MokerLink 12-port SFP+ managed L3 switch on Amazon Prime Day for about $230 with a sale and coupon; it's $270 now. I have not had time to test it for DAC or RJ45 10GBASE-T operation, though. It's described as "fiber", and some of the low-cost Shenzhen switches don't support DACs in the SFP+ ports. I got it to see how it might do for low-cost deployments where the staff isn't CLI-savvy or really prefers a GUI. I may have time to test it in about 10 days. Not sure what your timeline is.
 

BennyT

Active Member
Dec 1, 2018
166
46
28
That looks like a good switch at that price. It says it can handle SFP and SFP+, and it has a CLI over SSH and also a GUI. The only question is whether it can use DACs or requires transceivers. Even so, that's no big deal to me.

Let me know what you think about it after you get a chance to try it and test with it. I didn't see any reviews for it yet, but on paper it looks like a win. Thanks!
 

BennyT

Active Member
Dec 1, 2018
166
46
28
I was looking into how to completely turn off the onboard LAN so that it stops drawing power and producing heat.

On the X11DPi, for both the -N and -NT editions [the 1GbE and 10GbE versions of the board], there aren't any jumper pins for the LAN. For example, there isn't a JPL1 or JPL2.

There is a BIOS setting to disable it though.

2023-09-09_15-29-49.png

But after disabling the onboard LAN via BIOS, the LAN controller chip is still sensing and reporting temps of 50°C+.

2023-09-09_15-45-44.png

It seems that I cannot turn off the LAN controller on my X11DPI-NT. Disabling the LAN ports basically just removes the devices from the device lists. It's not a huge deal for me. I'll open a ticket with Supermicro; that's the best I can do. They are usually quick to respond.
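One way to watch whether a BIOS or jumper change actually cools the controller down is to poll the BMC sensors with ipmitool instead of eyeballing the web UI. A sketch; the `<bmc-ip>`/`<user>`/`<password>` placeholders and the "lan" sensor name are assumptions, since sensor naming varies by board:

```shell
# Query all sensor readings from the BMC over the network
# and filter for LAN/temperature sensors
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> sensor | grep -i -E 'lan|temp'

# Or run locally on the host if the IPMI kernel driver is loaded
ipmitool sensor | grep -i -E 'lan|temp'
```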
 

Stephan

Well-Known Member
Apr 21, 2017
942
711
93
Germany
AFAIK both X722 ports are part of the Intel C622 chipset, and that one is a cooker, like 5-10 watts continuous. Check power consumption at idle, not temperature. Chances are you can only disable the PCIe device but never really bring the Ethernet PHYs into powerdown. Try looping a cable to the X550 card and setting the link speed manually to 1/10G without autonegotiation. Some X722 chips will only do 1G and that is it, nothing else. See if the link comes up. Try booting a recent Linux ISO from USB to double-check without ESXi involvement.
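Stephan's loopback test could look something like this from a live Linux ISO, using ethtool. A sketch only: `eno1` is a hypothetical interface name (check `ip link` for the real one), and note that many 10GBASE-T PHYs refuse a forced speed with autonegotiation off, so the 1G case is the more reliable probe:

```shell
# Show the port's supported/advertised link modes and current state
ethtool eno1

# Force 1G full duplex with autonegotiation off, bring the link up,
# then see whether a link is actually detected on the looped cable
ethtool -s eno1 speed 1000 duplex full autoneg off
ip link set eno1 up
sleep 5
ethtool eno1 | grep -E 'Speed|Link detected'
```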
 

BennyT

Active Member
Dec 1, 2018
166
46
28
Hi itronin,

I just noticed that on the smaller version of that switch, the MokerLink 8-port SFP+ managed switch, someone left a review on their website saying they are using Twinax (DAC) to uplink to their router. The 12-port version can probably do the same. Really excited to hear what your impression is after you test with it. I'm putting the 12-port SFP+ switch part# here for my future reference... 10G120GSM. I can't believe nobody has reviewed it yet.