ESXi v5.1 Tweaks/Optimizations/Recommended Settings?


ZzBloopzZ

Member
Jan 7, 2013
91
13
8
Hello,

I am in the process of building my first All-In-One server. Will be using ESXi v5.1 w/ OI + napp-it, primarily for ZFS use. I also want to have 2-3 other non-critical VMs to play around with, such as pfSense, Server 2008 R2, etc.

System Specs:

E3-1230v2
32GB 1600 ECC UDIMM
SuperMicro X9SCM-IIF
2x IBM M1015 + 10x Toshiba 3TB RAID-Z2
Crucial M4 256GB (will be on the motherboard controller; I want to load ESXi and all VMs on this drive)

Are there any particular settings/tweaks/optimizations I should set for VMware? I will be sure to check the BIOS to make sure VT-d and VT-x are enabled.

I also read somewhere something about pass-through of OI via NFS or iSCSI. What exactly does this mean, and which is better?

Thank You!
 

Scout255

Member
Feb 12, 2013
58
0
6
Passthrough generally means passing a device through the hypervisor directly to a VM. I.e. you would likely pass your M1015s through to your OI VM in order to give it direct access to the SAS cards. If you did not do this, performance would likely suffer. One disadvantage of doing this is that you can't share passed-through devices with other VMs (let's say you passed a network card through to your pfSense VM; if you did that, no other VM would have access to that NIC).
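Roughly what this looks like on the ESXi side (the PCI address and IDs below are just placeholders; you enable passthrough in the vSphere Client under Configuration > Advanced Settings and then add the PCI device to the OI VM):

# list PCI devices on the host to find the M1015s (LSI SAS2008 chips)
esxcli hardware pci list

# once the device is added to the VM, its .vmx file ends up with entries along these lines
pciPassthru0.present = "TRUE"
pciPassthru0.id = "00:02:00.0"       # hypothetical PCI address of the first M1015
pciPassthru0.deviceId = "0x0072"     # LSI SAS2008
pciPassthru0.vendorId = "0x1000"     # LSI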

NFS stands for Network File System and it is a distributed network file system protocol that permits multiple users to access files at the same time. This is done by mounting an NFS share on a client. Note that this does not give you block access to the device, only file-level access, and I believe some server-type software does not function properly without block-level access. From what I can tell you also cannot remote-boot Windows off of NFS shares (someone correct me if I'm wrong). (More reading: Network File System - Wikipedia, the free encyclopedia.) This is the same type of sharing that you would have on a home network (although the protocol would likely be CIFS rather than NFS).
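To illustrate the file-level access, mounting an NFS export on a Linux client is just (server address and export path made up):

# mount the export and work with it as ordinary files
mount -t nfs 192.168.1.10:/tank/media /mnt/media
ls /mnt/media
cp movie.mkv /mnt/media/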

iSCSI is basically Internet SCSI, where remote storage located on another device (like a server) is shared over the network as a SCSI device. This allows direct block access to the data stored within the iSCSI volume, and your computer sees it as if it were a locally attached hard drive. The issue with this protocol is that it is not a distributed network file system like NFS, and as such it does not allow multiple computers read and write access at the same time, as this will quickly lead to data corruption. If system A modifies a file, another system using the iSCSI volume will be unaware of the change and could then try to re-modify the same file (imagine splitting a hard drive out to two separate computers directly and all the problems that could lead to). I believe you can safely have multiple computers access the same iSCSI volume as long as it is mounted read-only, however. Because it is seen by your computer as a local hard drive, you can do diskless remote booting of Windows off of an iSCSI volume.
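For comparison, attaching an iSCSI volume on a Linux client with open-iscsi looks roughly like this (target name and portal address are made up):

# discover targets on the storage server and log in
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2010-09.org.example:tank-vol1 -p 192.168.1.10 --login
# the LUN now shows up as a local block device (e.g. /dev/sdb) and gets its own filesystem
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt/iscsi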
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
Hello,

I am in the process of building my first All-In-One server. Will be using ESXi v5.1 w/ OI + napp-it, primarily for ZFS use. I also want to have 2-3 other non-critical VMs to play around with, such as pfSense, Server 2008 R2, etc.
I would
- install ESXi onto the SSD (an alternative is an 8GB USB stick)
- use the SSD as the local datastore for your virtual SAN VM (OI; a 16 GB virtual disk is enough, and give it as much RAM as possible, e.g. 16 GB)
- optionally use this local datastore for other VMs (but remember: ZFS + NFS is far better than ESXi's local storage)

-> your SSD is bigger than needed unless you put other VMs on it;
think about a second mirrored high-speed SSD pool for VMs.

- pass the two IBM M1015s through to OI to have full control and full speed for ZFS and OI
- share all filesystems via CIFS
- share one filesystem via NFS to hold the other VMs for ESXi, so you get ZFS performance, file access via SMB, and ZFS snaps

- disable sync write for that NFS share for maximum performance (on a power failure, the last ~5 s of writes are lost); a command sketch follows below

- do not use iSCSI for ESXi (NFS is similar regarding performance but more flexible)
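If you prefer the command line over the napp-it GUI, the relevant bits look roughly like this (pool/filesystem names and the storage IP are only examples):

# on the OI storage VM: share via SMB/CIFS and NFS, relax sync on the VM filesystem
zfs set sharesmb=on tank/data
zfs set sharenfs=on tank/vmstore
zfs set sync=disabled tank/vmstore    # faster NFS writes; ~5s of data at risk on power loss

# on the ESXi host: mount the NFS share as a datastore
esxcli storage nfs add -H 192.168.1.10 -s /tank/vmstore -v zfs-vmstore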
 
Last edited:

hagak

Member
Oct 22, 2012
92
4
8
iSCSI is basically Internet SCSI, where remote storage located on another device (like a server) is shared over the network as a SCSI device. This allows direct block access to the data stored within the iSCSI volume, and your computer sees it as if it were a locally attached hard drive. The issue with this protocol is that it is not a distributed network file system like NFS, and as such it does not allow multiple computers read and write access at the same time, as this will quickly lead to data corruption. If system A modifies a file, another system using the iSCSI volume will be unaware of the change and could then try to re-modify the same file (imagine splitting a hard drive out to two separate computers directly and all the problems that could lead to). I believe you can safely have multiple computers access the same iSCSI volume as long as it is mounted read-only, however. Because it is seen by your computer as a local hard drive, you can do diskless remote booting of Windows off of an iSCSI volume.
Not exactly true about iSCSI and multiple devices writing to it. Note that iSCSI does not provide a filesystem, so you must place a filesystem on it. There are filesystems that support multiple devices writing to the same block device; OCFS is one such filesystem. This is usually done in cluster environments such as an Oracle RAC setup. It is not something you would use on many client machines for general online storage.
 

Scout255

Member
Feb 12, 2013
58
0
6
Thanks for the clarification Hagak, I didn't realize that by utilizing a cluster file system on iSCSI volumes you could have multiple writing connections. Good to know.
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
For an all-in-one, people generally:
1. Install ESXi to a USB stick.
2. Put in a drive as a datastore for VMs (possibly mirrored).
3. Create a VM.
4. Use VT-d (passthrough) to pass the SAS controller chipsets through to that VM so it controls them directly with its own drivers.
5. Install an OS which includes ZFS support.
6. Create ZFS pools based on the disks attached to the SAS controller(s).
7. Share the zpools out as iSCSI, or format and share as NFS (a rough command sketch follows after this list).
8. Mount one (or more) zpools onto ESXi to use as datastores for other VMs.
9. Mount one (or more) zpools directly onto any VMs you create.
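The sketch of steps 6-7 on the storage VM, assuming an OpenIndiana/COMSTAR setup (pool, disk, and volume names are placeholders; napp-it wraps most of this in its GUI):

# 6. create a RAID-Z2 pool from the passed-through disks
zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0

# 7a. NFS route: create a filesystem and export it
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# 7b. iSCSI route: create a zvol and publish it as a LUN via COMSTAR
svcadm enable stmf
svcadm enable svc:/network/iscsi/target:default
itadm create-target
zfs create -V 500G tank/vol1
stmfadm create-lu /dev/zvol/rdsk/tank/vol1     # prints the LU GUID
stmfadm add-view <LU-GUID-from-previous-step>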

I am currently using iSCSI as this allows the VM to format and control the file system. Doing this means that I can have a single 4TB zpool, share half to a Windows box and half to a Linux box, and they will be able to format them as NTFS / ext3 and use their own tools to manage them. Also, if I under-provision (i.e. give 1.5TB each to the two systems), I can allocate some of the unused 1TB if one VM starts to fill up more than the other, and then just resize with Windows Disk Management or LVM on Linux. VMFS also supports multiple connections to an iSCSI target, so you could have a VM on the shared storage, start it up on one server, then shut it down and start it up on a second server using the same shared storage. It may be an interesting test to see what happens if the VM is started on both vHosts. I would hope the second one would error ("already running" or something like that), but I have never tested it myself.
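With ZFS behind the iSCSI targets, that under-provision-and-grow approach is roughly (sparse zvols, made-up names):

# thin-provisioned zvols: each guest sees 1.5TB, but space is only consumed as it is written
zfs create -s -V 1.5T tank/win-lun
zfs create -s -V 1.5T tank/linux-lun
# later, grow whichever side is filling up, then extend NTFS / LVM inside the guest
zfs set volsize=2T tank/win-lun
stmfadm modify-lu -s 2T <LU-GUID>    # with COMSTAR, bump the LU size to match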

I also found out last night that reconnecting disconnected iSCSI shares is very fast. I pulled the wrong network cable from the storage server when connecting up some C6100 nodes, and a video I was playing on my desktop (shared from my Win Server 2012 machine, which had the file on an iSCSI share mounted from the storage server) stuttered, but within a second of plugging the cable back in the video resumed. That said, my iSCSI shares are not user-segregated, so there is no authentication to be managed on reconnection. It would be interesting to see how NFS and SMB would cope, or if the player (VLC) would error.

I was also very surprised that the boot time of my CentOS minimal install over iSCSI (a RAID 5 array of 3x 1.5TB Seagate 7200.11s) was not far off booting it from a local SSD (Agility 3). The storage server's P812 SAS card with 1GB FBWC may have something to do with that, though.

For NFS you have the big advantage of the filesystem being managed in a single place by a single OS. This means that when space is freed, it is free to all hosts connecting to the share. With iSCSI, it would typically be more difficult to free up that unused space and reallocate it to another target if needed.

For home use, it really is a case of making your choice, as both will work just fine.

Interesting, if a little old, discussion on it at Server Fault - NFS protocol vs iSCSI protocol

There is also a great comparison by Cormac Hogan on his VMware blog here.

RB
 

dswartz

Active Member
Jul 14, 2011
610
79
28
I also tend to prefer NFS to iSCSI because if you need to recover a VHD from a snapshot, it's a matter of cloning the snapshot, copying the VHD on top of the borked one and restarting the VM.
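On the ZFS/NFS side that recovery is roughly (snapshot, pool, and VM names made up):

# clone the snapshot read-write, copy the good virtual disk files over the broken ones, clean up
zfs clone tank/vmstore@daily-2013-02-01 tank/restore
cp /tank/restore/myvm/myvm*.vmdk /tank/vmstore/myvm/
zfs destroy tank/restore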
 

hagak

Member
Oct 22, 2012
92
4
8
I do not understand the advantage of installing ESXi to a USB drive, particularly in an all-in-one build. With an all-in-one you have to have a local datastore for the SAN VM regardless, so why not just install ESXi onto that same device? You cannot put the datastore on the USB stick. With a build where ESXi is not an all-in-one, a USB stick makes sense, since that machine would not need any local storage if its VMs are stored on a remote SAN.

For the all-in-one, the USB stick just seems to add another failure point. I used a single small SSD to install ESXi and my single SAN VM. I am thinking about getting a second drive and setting up a mirrored version of my OI SAN VM as gea describes.
 

dswartz

Active Member
Jul 14, 2011
610
79
28
I think the theory is that if you have to reinstall, you just pull the local HD and reinstall without worrying about somehow borking your local datastore. I've done it both ways myself...
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Segregation as dswartz says.

If ESXi gets corrupted or needs a reinstall for any reason, it is clear you will not be doing any damage to the datastore data if ESXi lives on a USB stick. It is also fairly easy to clone the sticks and have a second on hand should the first fail.

I have had drives end up with errors, but as ESXi was on a USB stick it was not affected, and I could use the CLI via SSH to fix the drives (usually lock-file issues, especially after a power outage).

That said, either way works, so it is personal preference.

RB