Zeus


hsmeets

New Member
Feb 18, 2011
Build’s Name: Zeus
Operating System: ESXi & FreeBSD
CPU: Xeon X3440
Motherboard: Asus P7F-E
Chassis: Norco RPC-270
Drives: 2x 500GB + 2x 2TB (to start with)
RAM: 8GB
Add-in Cards: none
Power Supply: Seasonic 500W
Other Bits:

Usage Profile: to host a personal website, mail, fileserver, Squeezebox, SABnzbd

At the time of posting the system is not yet live; I'm awaiting delivery of parts.

"Zeus" will be the hostname, it will be the successor of a system called "Chaos" which is an AMD based system (which replaced a Via Mini-ITX called MicroCosmos) as the current hardware gives troubles with 64b kernels. Want to start using virtualisation to be able to clone the live system to test/prepare bigger updates like FreeBSD major version upgrades and experiments without compromising the live system.
 

hsmeets

New Member
Feb 18, 2011
Picture(s) will follow, but there is not much to see other than some ugly cabling..... :D

Parts arrived yesterday. After getting back home from the office, having dinner and doing some other stuff, I started the build around 8:30 pm. By midnight FreeBSD was booting as a guest under ESXi.

Some stuff I bumped into:

With the P7F-E BIOS set to boot from USB so I could install ESXi, the board failed to do so; it only booted from the hard disk, which had an old FreeBSD install on it. After mucking around I disabled the OEM logo boot screen so I could see the POST messages, and spotted a prompt to press F8 for the BSS menu. BSS????? Well, I pressed F8 during boot and got two options: the drive name of my hard disk and something referring to what looked like the memstick. I chose the memstick and a few moments later the ESXi installer was running. BSS = Boot Sector/System Selector???

Booting ESXi takes very long (1 to 2 minutes) because of the IPMI module, probably since I have not installed the BMC daughterboard ASUS offers for the P7F-E. I found some hints on how to disable loading the IPMI module, but my change to the 72.ipmi file did not survive a reboot; it was back to the original after booting. (I did manage to find 72.ipmi, but not before figuring out how to log in at the service console under Alt-F1; I'm a total ESXi greenhorn.) Another reference suggests making the file sticky with chmod so it survives a boot; I suspect these files live on a memory drive and get reloaded at boot from the image. I'm going to try the sticky bit tonight. If that does not work I'll dig into ddimage.bz2 on the install memstick and alter the file there; hopefully the MD5 check will only nag and not abort the install.
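
For the record, this is the plan for tonight; a minimal sketch from the Alt-F1 console, under the (untested) assumption that ESXi carries sticky-bit files over a reboot:

    # keep a backup of the original init script
    cp /etc/vmware/init/init.d/72.ipmi /etc/vmware/init/init.d/72.ipmi.orig
    # neuter the script by hand (e.g. put an early 'exit 0' in it)
    # so the IPMI module is never loaded
    vi /etc/vmware/init/init.d/72.ipmi
    # the sticky bit supposedly marks the file to be preserved across reboots
    chmod +t /etc/vmware/init/init.d/72.ipmi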

I'm on an iMac running OS X, so to use the vSphere client I had to fire up my Windows XP VM under Parallels. After making a VM and installing FreeBSD into it, I ran into a conflict with the mouse when opening the console. Normally mouse focus under Parallels is handled very gracefully; there is no need to release the mouse from a window via the keyboard. But when I start the console in the vSphere client I lose the mouse. Ctrl-Alt releases it, but it is Parallels that catches the keystroke, and I still can't operate the vSphere client afterwards, because VMware uses the same Ctrl-Alt to release the mouse from the console. Google is your friend: someone had the same issue. The solution (not yet tested by me) is to change the key sequence in Parallels to something else.

Todo:

Today an LSI 3081 card will arrive and I'll have to decide how to use it: let the card do the RAID, install everything on it including ESXi, and have FreeBSD reside within VMFS using its plain UFS filesystem; or use it in passthrough mode, give FreeBSD the card for its own and use the hardware RAID; or reflash it to IT mode, use ZFS, and keep VMware plus the datastore/inventory on a disk connected to the ICH10. The first FreeBSD VM will be the fileserver, serving the other VMs and the other computers at home. Some testing will be done in the upcoming weekend; at the moment I favour passthrough with the LSI as an 8-port HBA, letting FreeBSD do its thing with ZFS, as sketched below..... :cool:
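
If the passthrough + IT-mode option wins, the FreeBSD side should be pleasantly short; a sketch, where the pool name and the da0/da1 device names for the two 2TB drives are my assumptions:

    # mirrored pool on the two 2TB drives behind the passed-through HBA
    zpool create tank mirror da0 da1
    # a filesystem to export from the fileserver VM
    zfs create tank/share
    # sanity check
    zpool status tank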

I also have to figure out how to shut down all VMs (and ESXi itself) gracefully when the UPS tells the system to do so. FreeBSD and the APC UPS can communicate, and I found some hints that FreeBSD (or another *nix) can propagate the shutdown to ESXi via a SOAP call.
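
As a first sketch of a simpler, non-SOAP route, assuming apcupsd on the FreeBSD VM, SSH (Tech Support Mode) enabled on the host, VMware Tools in the guests, and a made-up management address: apccontrol runs a doshutdown script when the battery gets low, which could look roughly like this (hypothetical, untested):

    #!/bin/sh
    # /usr/local/etc/apcupsd/doshutdown
    ESX=root@192.168.1.10    # assumed ESXi management address

    # ask every registered VM to shut down gracefully (needs VMware Tools);
    # the first column of getallvms is the Vmid
    ssh $ESX 'for id in $(vim-cmd vmsvc/getallvms | awk "NR>1 {print \$1}"); do
        vim-cmd vmsvc/power.shutdown $id
    done'

    sleep 120   # give the guests time to go down

    # caveat: this FreeBSD VM is itself one of the guests, so in real life
    # it must be excluded from the loop or the line below is never reached
    # finally power off the host itself (path assumed for ESXi 4.x)
    ssh $ESX '/sbin/shutdown.sh && /sbin/poweroff'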
 

No1451

New Member
Jan 1, 2011
Going to be watching this with some interest; it looks like your build-out is going to be quite similar to my own.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Cool! BTW I have been doing a lot more NAS/OS X and iOS integration over the past few months, so it might be cool to share your thoughts on that too.
 

hsmeets

New Member
Feb 18, 2011
Some more hardware p*rn: motherboard, LSI controller installed; don't pay attention to the cabling mess.



And the system up and running:

 

hsmeets

New Member
Feb 18, 2011
And the saga continues:

The OS X/Parallels and WinXP/VMware mouse release conflict: yup, setting a different key sequence in Parallels solved the issue. I can now release the mouse from the console and navigate the WinXP/vSphere client again. Solved.

IPMI module taking long to load: the hack I found via a Google search, "disabling" the module by altering the content of /etc/vmware/init/init.d/72.ipmi and setting the sticky bit with 'chmod +t 72.ipmi' so it survives later boots, worked: I rebooted and there was no module loading. BUT read on......

I went on and installed the LSI card that arrived today and flashed it with the IT firmware. I also had to flash the two Samsung 2TB drives (F4 HD204UI) with new firmware for that SMART-related issue.
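
To check that the new drive firmware actually took, smartctl from sysutils/smartmontools can read it back once FreeBSD sees the discs (the device name is an assumption):

    # the identity block includes the firmware revision
    smartctl -i /dev/da0 | grep -i firmware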

Stupid me misread the LSI 'what is delivered' list and did not get the two SAS-to-SATA cables I was expecting. I need to order them quickly, otherwise there will be no testing with the LSI card this weekend.

I booted the server again: mischief! The IPMI module load had returned!?! I could not use F2 or F12 or Alt-F1 to get into VMware right at the server..... and I could no longer connect to ESXi via vSphere either.

Long story cut short: I took out the stick, reinstalled ESXi, and re-did the few settings like the static network address, the NTP settings and the power management settings.

While peeking around at all the settings I went into the advanced settings, Kernel --> Boot, and found this tickbox:



Enable IPMI: I unticked the setting and rebooted: yup, the IPMI module was not loaded (or even attempted).
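
For reference, the same kernel boot option should also be reachable from the Tech Support console with esxcfg-advcfg; the -j/-k switches read and set VMkernel load-time options, but the option name 'ipmiEnabled' here is purely my guess, not something I verified:

    # read the current value of the (assumed) IPMI boot option
    esxcfg-advcfg -j ipmiEnabled
    # the CLI equivalent of unticking the box, then reboot
    esxcfg-advcfg -k FALSE ipmiEnabled
    reboot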



It still annoys me that somehow ESXi acted up after adding the LSI card and the two discs (temporarily connected to the ICH10)....... I need to do some more testing of how robust ESXi is with hardware changes...

To be continued.
 

hsmeets

New Member
Feb 18, 2011
Small update:

After two days of "always on" the Norcotek case started to emit a lot of noise, beyond the point of four fans and moving air in a hollow box. I opened the case and started pulling the power leads off the fans; as was to be expected, the noise was caused by one of the four fans.

On the software side of things:

I prepped one VM with a basic FreeBSD install to be used as a "template" for other VMs. I have only the vSphere client in use, so no real cloning, but within the datastore browser of VMware I copied the original directory with all the VM files to a new directory. While creating the 2nd VM I chose not to create a virtual disk but to use the existing (copied) one. This 2nd VM also got PCI passthrough access to the LSI controller. The new VM booted without trouble, but at some point the MPT driver for the LSI card started to act up: lots of entries in /var/log/messages. A Google search gave a grim outlook..... but not all is doom and gloom. Earlier today I did a completely fresh install into an empty VM, and it now looks like the MPT driver is no longer complaining about disc connectivity. I need to stress the driver some more to be sure everything is 100% fine now.
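
For what it's worth, the same poor man's clone can also be done from the Tech Support console; a sketch, with the datastore and VM names made up for illustration:

    cd /vmfs/volumes/datastore1                  # hypothetical datastore
    mkdir newvm
    # vmkfstools clones the vmdk descriptor + flat file pair properly,
    # unlike a plain cp of the directory
    vmkfstools -i template/template.vmdk newvm/newvm.vmdk
    # copy the .vmx, fix displayName and the disk path by hand, then register
    cp template/template.vmx newvm/newvm.vmx
    vim-cmd solo/registervm /vmfs/volumes/datastore1/newvm/newvm.vmx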
 

john4200

New Member
Jan 1, 2011
Did you reverse either the fan on the PSU or the 4 case fans? Otherwise, I think they will be blowing in opposite directions. I doubt that would cause a fan to fail, but it certainly would make the air flow in your case more turbulent and less efficient.