vSphere (ESXi) 5.1 is out.

RimBlock

Member
Sep 18, 2011
788
8
18
Singapore
vSphere 5.1 downloads:
vSphere 5.1 is available for download here (60-day trial or paid versions).

vSphere 5.1 Hypervisor (the free one) is available here (click the bottom-left download link, sign in, then download).

First impressions
After a quick download and upgrade, I have to say the resulting server was not what I had hoped for. There were a number of issues and I decided to do a completely new install. To be fair, a fresh install is the default option.

I have a 4-port Intel ET network card and an HP 1810-24G switch; setting up the LACP was very easy and it seems to be working fine. I have not load tested to confirm speeds, but all 4 lights are flashing on the switch, and checking the performance graphs, all 4 NICs are reporting figures while the remaining non-teamed NICs are not.
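For reference, on the free hypervisor the standard vSwitch only does static link aggregation with IP-hash load balancing (true dynamic LACP needs the distributed switch, which means vCenter). The CLI equivalent of what I clicked through is roughly this; the vmnic and vSwitch names are placeholders for my setup, so check yours first:

```shell
# See which vmnic names belong to the quad ET card
esxcli network nic list

# Add all four ports as uplinks on the vSwitch (names are placeholders)
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0

# Route based on IP hash, to match the static trunk configured on the 1810-24G
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
```

The switch side has to be a matching static trunk across the same four ports, or traffic will flap.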

Upgrading the VMware Tools was the same as usual for Linux and Windows. My CentOS installs went smoothly, but my WHS 2011 install does not seem to like the VMXNET3 adapter; changing it back to an E1000 sorted the issues out. I found a post on the VMware Communities regarding the issues I am seeing. Previously the WHS box was running with a VT-d passthrough NIC, but as I am now using teaming (LACP) I thought I would not bother. The NIC is picked up as a VMXNET3 adapter, and installing the updated VMware Tools does not fix the problem for me. According to the post, this affects WHS 2011 / Windows Server 2008 R2 with remote access turned on, and something is manipulating packets for traffic flow (QoS etc.). I did try removing the network device in Device Manager and then refreshing; it found the adapter and installed the VMXNET3 driver, but still no network.
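One workaround often suggested for this class of guest networking problem is disabling the TCP offload features inside the guest, since the reports point at something mangling packets. This is an assumption on my part, not something confirmed in the VMware thread, and I have not verified it fixes the VMXNET3 issue:

```shell
:: Run inside the WHS 2011 / Server 2008 R2 guest from an elevated prompt.
:: These turn off TCP offload features that can interact badly with the vNIC;
:: treat this as a workaround to try, not a confirmed fix.
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled
netsh int tcp set global autotuninglevel=disabled
```

A reboot of the guest afterwards is a good idea; the settings can be re-enabled the same way with `=normal` / `=enabled` if they make no difference.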

Copying a 1.5GB test file hit between 109MB/s and 122MB/s transferring from my PC (Hitachi HDD, Broadcom NetLink (TM) integrated network, Win 7 64-bit) to my server on the 4x1GbE trunk (Intel Quad ET, StableBit DrivePool storage, WHS 2011).

The vSphere client feels a fair bit faster, and there are a few extra predefined storage metrics.

This was only a very quick install and browse, so I am sure more info will come to light as more people get their hands on it and I have a bit more of a play around.

RB
 

RimBlock

Member
Sep 18, 2011
788
8
18
Singapore
Note: vSphere 5.1 has a 32GB hard limit on physical RAM per server for the free version.

The vRAM limit has been removed.

RB
 

RimBlock

Member
Sep 18, 2011
788
8
18
Singapore
Good luck.

Sure you don't want to do a fresh install and then import the VMs? ;)

I had some major issues over the last couple of days with storage. I put in a second M1015, flashed to 9211-8i IR firmware, and set up two 1.5TB Seagate Barracudas in RAID 1. I then went to copy a couple of VMs from another drive to this new datastore. It gave an ETA of 5 days for a 200GB vmdk file. I left it overnight and it did not improve. I finally got a chance to reboot it last night, and checking on the controller, it had the two drives listed as being out of sync and requiring a resync. The throughput in the vSphere monitoring GUI (not vCenter, the other one, as I am using the free hypervisor at home) gave an average write of 9MB/s on that array.
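For the record, when a datastore-browser copy crawls like that, cloning the disk with vmkfstools from the ESXi shell is another way to move it. Datastore and VM names here are placeholders, not my actual paths:

```shell
# Clone a VM's disk from one datastore to another from the ESXi shell.
# -i = import/clone; -d thin = make the destination thin provisioned.
# Paths are placeholders -- substitute the real datastore and VM names.
vmkfstools -i /vmfs/volumes/old-datastore/MyVM/MyVM.vmdk \
           /vmfs/volumes/new-datastore/MyVM/MyVM.vmdk -d thin
```

The VM has to be powered off, and the .vmx then needs pointing at the new location (or the VM re-registered from the new datastore).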

As of this morning the array is synced and I will try again. Hopefully it will be resolved.

The worrying part is that while it was failing to copy at any reasonable speed, the vSphere server would pause, both in the GUI and on the server console. I also noticed in the vmkernel logs that my 60GB Vertex 2 was throwing device errors, but I am unsure whether that was due to the array issues or the drive itself. I will look closer tonight.

RB
 

RimBlock

Member
Sep 18, 2011
788
8
18
Singapore
Well, the LACP (network link aggregation/bonding) seems to be working well.

I have only seen one issue: adding new hardware resulted in one port being shunted to a different virtual network adapter name, which caused big pauses and lost connectivity to the VM via SSH, no DNS resolution, etc. After tracking down the issue, removing the incorrect adapter and adding the original port back into the collection fixed it without a VM or host restart.
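For anyone who hits the same thing, the fix boils down to swapping the uplinks back, either in the client or from the shell. The vmnic names below are placeholders for whichever port got renumbered on your box:

```shell
# See the current vmnic names to spot which port got renumbered
esxcli network nic list

# Drop the wrongly-assigned uplink and re-add the original port
# (vmnic names are placeholders -- match them to your hardware)
esxcli network vswitch standard uplink remove --uplink-name=vmnic4 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
```

No host or VM restart needed, which matches what I saw.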

RB
 

sotech

New Member
Jul 13, 2011
303
0
0
Australia
Thanks for posting your experiences here - based on that I might move our servers across sometime this week. I'm keen to set up teaming but not keen to be an early adopter and find issues ;)
 

RimBlock

Member
Sep 18, 2011
788
8
18
Singapore
NP,

I was quite concerned when I saw it, but once I figured out what had happened it was very easy to fix, after confirming which virtual adapter ports belonged to which controller.

I would not expect this issue to happen if you are not changing hardware after setting up the trunk.

Regards
RB
 

sboesch

Active Member
Aug 3, 2012
419
40
28
Columbus, OH
I upgraded my Virtual Center Server from 4.1 to 5.1 this morning. Pretty painless. I am going to add an additional ESXi server to the cluster before upgrading the hosts. This should be done by EOD tomorrow.
I am going to perform clean installs on the hosts; upgrading the host OS always seems shaky to me.
 

Patrick

Administrator
Staff member
Dec 21, 2010
11,971
4,932
113
Wish I had the time to do a guide on this. I still have a 5.0 server so maybe when I do that one...
 

sboesch

Active Member
Aug 3, 2012
419
40
28
Columbus, OH
The madness has begun: 2 down, 3 to go on my production cluster! I did not bother doing a bunch of research; attaching software iSCSI storage has changed a bit in ESXi 5.x. I will have to find time to document setting up iSCSI connections to SANs.
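Until I write that up properly, the 5.x way is the `esxcli iscsi` namespace rather than the old esxcfg tools. A rough sketch; the adapter name and target address are placeholders for my environment:

```shell
# Enable the software iSCSI initiator (moved into esxcli in 5.x)
esxcli iscsi software set --enabled=true

# Find the software iSCSI adapter name (commonly vmhba33+, varies per host)
esxcli iscsi adapter list

# Point it at the SAN's discovery address (adapter and IP are placeholders)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260

# Rescan so the new LUNs show up
esxcli storage core adapter rescan --adapter=vmhba33
```

VMkernel port binding for multipathing is a separate step on top of this, done per vmknic in the adapter's network settings.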
 

sboesch

Active Member
Aug 3, 2012
419
40
28
Columbus, OH
I successfully installed ESXi 5.1 on all my hosts today. Everything is working well, and it was done in the nick of time: tomorrow morning I am expecting some heavy load on my web servers.
 

Ragious

New Member
Oct 12, 2012
4
0
1
Just so you know that not everything is all good...
Did a fresh install of 5.1 and ran into two problems:
1. Passthrough of the onboard ICH10 controller results in a purple screen (reproducible). Apparently I'm not the only one.
2. VM Converter does not work with 5.1; waiting for a fix. Pretty stupid. I create test VMs on Workstation and then convert to ESX. This means that with 5.1 I cannot add VMs until VMware comes out with a fix.
So I went back to 5.01, waiting for better times...
Cheers, R.

Fujitsu RX200 S5, Xeon E5540, 30GB ECC, LSI MegaRAID (4x146GB SAS 10k for VM storage), LSI SAS2008 (SATA disks)
 

sotech

New Member
Jul 13, 2011
303
0
0
Australia
Just so you know that not everything is all good...
Did a fresh install of 5.1 and ran into two problems:
1. Passthrough of the onboard ICH10 controller results in a purple screen (reproducible). Apparently I'm not the only one.
2. VM Converter does not work with 5.1; waiting for a fix. Pretty stupid. I create test VMs on Workstation and then convert to ESX. This means that with 5.1 I cannot add VMs until VMware comes out with a fix.
So I went back to 5.01, waiting for better times...
Cheers, R.

Fujitsu RX200 S5, Xeon E5540, 30GB ECC, LSI MegaRAID (4x146GB SAS 10k for VM storage), LSI SAS2008 (SATA disks)
Yikes. I wonder why #1 wasn't picked up in testing... I suppose enterprise environments may not bother with passing through the onboard ports so it may not be a priority. Not really helpful for the home/SMB user!
 

Ragious

New Member
Oct 12, 2012
4
0
1
It even seems to be the case for some non-onboard PCI/PCIe devices; see http://communities.vmware.com/thread/417736.

Conclusion from VMware so far:

We've isolated the problem and have an internal bug report open to track the fix.

The problem should (mostly?) only affect PCI devices as opposed to PCIe devices; I would expect that your onboard SATA controller should be PCIe, but we generally don't support PCI[e] passthrough of motherboard devices. You may have to wait for an update/patch with a fix for the issue and see if it allows you to pass-through your SATA controller again.


Well, onboard PCIe passthrough DID work in ESX 5.01, so something surely is going on.

I would be interested if anyone has succeeded in passing through an LSI MegaRAID or SAS2008 (IBM M1015) controller with 5.1.

Cheers R.
 

RimBlock

Member
Sep 18, 2011
788
8
18
Singapore
It even seems to be the case for some non-onboard PCI/PCIe devices; see http://communities.vmware.com/thread/417736.

Conclusion from VMware so far:

We've isolated the problem and have an internal bug report open to track the fix.

The problem should (mostly?) only affect PCI devices as opposed to PCIe devices; I would expect that your onboard SATA controller should be PCIe, but we generally don't support PCI[e] passthrough of motherboard devices. You may have to wait for an update/patch with a fix for the issue and see if it allows you to pass-through your SATA controller again.


Well, onboard PCIe passthrough DID work in ESX 5.01, so something surely is going on.

I would be interested if anyone has succeeded in passing through an LSI MegaRAID or SAS2008 (IBM M1015) controller with 5.1.

Cheers R.
Passthrough of my M1015 (cross-flashed to LSI 9211) works fine for me.

I did notice that the server took a long time coming up after the card was passed through, but this may be unconnected to vSphere.
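If it helps anyone compare notes, this is roughly how I check what the host sees before and after enabling passthrough; the "LSI" search string and config path are just what I look for on my box:

```shell
# List PCI devices to confirm the HBA is visible and note its address
esxcli hardware pci list | grep -i -A 2 LSI

# Passthrough assignments are recorded in /etc/vmware/esx.conf;
# devices enabled for passthrough show up with a "passthru" owner
grep -i passthru /etc/vmware/esx.conf
```

If the controller shows a passthru owner there but the VM still will not see it, a host reboot after toggling passthrough is usually the missing step.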

RB