Haven


No1451

New Member
Jan 1, 2011
Build Name: Haven
Hypervisor: VMware ESXi
  • Guest OS 1: pfSense
  • Guest OS 2: FreeBSD w/ ZFSGuru
  • Guest OS 3: Windows Home Server 2011
CPU: Intel X3430
Motherboard: Supermicro X8SIL
Chassis: Norco 3116
RAM: 16GB KVR1333D3E9SK2/8G
HBA: 2x Intel SASUC8i w/ LSI IT Firmware
Power Supply: Corsair TX850 (Seasonic 380 for the moment)

The Mission: After dealing with lackluster performance from WHS on my current server, I have decided to consolidate my expanding storage needs with my desire for more robust routing capability (multi-WAN, rule-based routing, etc.). The intended setup for this machine is ESXi hosting a pfSense 2.0 installation for my routing needs, while FreeBSD w/ ZFSGuru acts as a storage backend for my network, sharing its ZFS pool over iSCSI to a Windows Home Server 2011 VM.

Before anyone goes and says it, yes, I am aware that this will likely result in significantly decreased performance when accessing my pool. Performance on par with what GbE can deliver is my best-case scenario; if it can deliver that or close to it, I will be pleased. Mostly I want ZFS for its robust parity, ease of use, and maturity compared to a newer solution such as FlexRAID Live. As new options open up this may very well change, but barring any catastrophic failures this is my preferred setup. WHS is preferred for the incredible ease of bringing new machines into the fold and the fantastic recovery options it provides for connected machines.

The final pieces of the puzzle (CPU and second HBA) arrive tomorrow, so setup will commence. Can anyone offer any insight into sharing between the two VMs? At the moment I am planning on using a vSwitch connected to one of the physical NICs to tie the two VMs both into the physical network and to each other. Is this the best option, or are there other methods that will better produce the results I am looking for? I'm a complete ESXi newbie with only a few hours clocked using it (in a VM, no less!), so any advice is greatly appreciated.

Edit: Something I'm not entirely certain about with regard to iSCSI: it presents to a connected machine as a standard block device, correct? So if I connect, format it, and write to it from one machine, then disconnect and connect from another, it retains that structure, without the target actually having any idea what that structure might be. Is this right or am I horribly confused?
 

john4200

New Member
Jan 1, 2011
Something I'm not entirely certain about with regard to iSCSI: it presents to a connected machine as a standard block device, correct? So if I connect, format it, and write to it from one machine, then disconnect and connect from another, it retains that structure, without the target actually having any idea what that structure might be. Is this right or am I horribly confused?
That is correct. iSCSI makes a remote device appear as a local block device. The server (aka the iSCSI target) is agnostic about the filesystem and/or partitioning of the device it is serving.
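
As a rough sketch of what that means for your setup (the pool name, zvol name, and size below are placeholders, and I'm assuming istgt as the target daemon, which was the usual choice on FreeBSD/ZFSGuru at the time):

    # Carve a 500GB block volume (zvol) out of the pool for WHS
    zfs create -V 500G tank/whs-iscsi

    # Point the iSCSI target (e.g. istgt) at the raw device
    # /dev/zvol/tank/whs-iscsi. The target only serves opaque blocks;
    # the WHS VM's initiator formats the LUN as NTFS, and that on-disk
    # structure persists across disconnects and reconnects.

Because the target never interprets the blocks, whatever filesystem the initiator lays down survives reconnecting from another machine unchanged.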
 

apnar

Member
Mar 5, 2011
The final pieces of the puzzle (CPU and second HBA) arrive tomorrow, so setup will commence. Can anyone offer any insight into sharing between the two VMs? At the moment I am planning on using a vSwitch connected to one of the physical NICs to tie the two VMs both into the physical network and to each other. Is this the best option, or are there other methods that will better produce the results I am looking for? I'm a complete ESXi newbie with only a few hours clocked using it (in a VM, no less!), so any advice is greatly appreciated.
I don't know if it's the best approach, but this is how I handled it. I did exactly what you mentioned for general access to my home network. I also created a second vSwitch, not tied to any physical adapters, that I use for internal storage traffic. I created it via the debug console on ESXi so that it'd support jumbo frames (the command is esxcfg-vswitch). I then added "backend" interfaces on a different IP range for each virtual machine that needed direct access to storage, using VMXNET3 adapters wherever possible. You'll also want to create a VMkernel (vmk) interface for ESXi so it can talk to the new virtual switch if you'll be sharing any storage back to your ESXi server (again using the command line for jumbo frame support; the command is esxcfg-vmknic). In the end it should work as if the machines were all connected with 10-gigabit Ethernet.
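
Roughly, the command-line side looks like this on ESXi 4.x (the vSwitch name, port group name, and IP below are just examples):

    # New vSwitch with no physical uplinks, set to jumbo frames
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -m 9000 vSwitch1

    # Port group that the VMs' backend VMXNET3 interfaces attach to
    esxcfg-vswitch -A "Storage" vSwitch1

    # VMkernel interface so ESXi itself can reach the storage network,
    # also at MTU 9000 (only needed if you share storage back to ESXi)
    esxcfg-vmknic -a -i 10.10.10.1 -n 255.255.255.0 -m 9000 "Storage"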

I ran into one issue: the VMXNET3 adapter for Solaris (my storage VM of choice at the moment) doesn't support jumbo frames, so I had to switch back to standard-size frames. Still, the performance is good (and there is no need to dive into the command line if you stick with standard frames).

You'd likely get pretty decent performance with them all on just the single shared vSwitch, but I also like the added security aspects of only allowing certain connections on the backend interfaces.
 

No1451

New Member
Jan 1, 2011
That is correct. iSCSI makes a remote device appear as a local block device. The server (aka the iSCSI target) is agnostic about the filesystem and/or partitioning of the device it is serving.
Alright, thanks! Once I actually get a working DVD drive and have three(!) midterm exams done tomorrow, I'm going to dive into this :)

I don't know if it's the best approach, but this is how I handled it. I did exactly what you mentioned for general access to my home network. I also created a second vSwitch, not tied to any physical adapters, that I use for internal storage traffic. I created it via the debug console on ESXi so that it'd support jumbo frames (the command is esxcfg-vswitch). I then added "backend" interfaces on a different IP range for each virtual machine that needed direct access to storage, using VMXNET3 adapters wherever possible. You'll also want to create a VMkernel (vmk) interface for ESXi so it can talk to the new virtual switch if you'll be sharing any storage back to your ESXi server (again using the command line for jumbo frame support; the command is esxcfg-vmknic). In the end it should work as if the machines were all connected with 10-gigabit Ethernet.

I ran into one issue: the VMXNET3 adapter for Solaris (my storage VM of choice at the moment) doesn't support jumbo frames, so I had to switch back to standard-size frames. Still, the performance is good (and there is no need to dive into the command line if you stick with standard frames).

You'd likely get pretty decent performance with them all on just the single shared vSwitch, but I also like the added security aspects of only allowing certain connections on the backend interfaces.
I like the idea of running it all over a backend connection; my preference is of course that nothing else in the house has access to the iSCSI target but the WHS VM, so I'm probably going to follow this advice. Sadly, getting the hardware up and running is the easiest part of this whole thing; I have a lot of learning and experimenting to do on the software side. Hopefully Newegg processes and ships my order quickly.