- Sep 17, 2011
If you have VT-d capable hardware and pass the entire PCIe controller to the VM, then yes, the expander ends up passed through too. The expander is just another SAS device connected to the controller. You pass the controller to the VM, and EVERYTHING on the SAS fabric goes with it.

I want to make sure I'm understanding this properly.
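To make the quoted point concrete: on the ESXi host you can confirm which PCI device to pass through from the host shell. A minimal sketch, assuming an LSI-based SAS controller (the vendor string and address are illustrative, not from this thread):

```shell
# List PCI devices on the ESXi host to find the SAS controller's address.
# The expander will NOT show up here -- it sits behind the controller on
# the SAS fabric, which is why passing the controller through carries it along.
esxcli hardware pci list | grep -i -B 2 -A 10 "LSI"

# Note the device address (e.g. 0000:03:00.0); passthrough is then enabled
# for that address in the vSphere client before attaching it to the NAS VM.
```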
I can pass through the entire Controller/Expander to the VM that is running NAS__X__ software.
However, I'm not 100% clear on:
- Can I have multiple arrays on the RAID controller/expander passed through to the NAS VM? It sounds like plain hardware passthrough, but there was also mention of not putting the SSDs on the array. I'm pretty sure that advice was for hosting ESXi guest OSes, not for the NAS VM with passthrough of the controller and expander, but I want to be 100% sure on this.
My goal is for the NAS to manage access/controls for the shares on the network/ESXi hosts. -- Would using VMware software be better/faster for this?
I have the VMUG Advantage license and 3 systems (each 2x CPU) for hosts. I don't want to run all 3 24/7 unless the power usage is rather low. One is an E5-2620 v1 (which I plan to run 24/7 if it can handle what I need); the other two hosts are dual E5-2683 v3s, so they should idle down a good bit more than the Sandy Bridge combo (hopefully the SM mobo allows this), but I haven't done any testing yet to see. All 3 use desktop SeaSonic Platinum or Gold 700W+ PSUs that are "Haswell" approved. I do NOT mind running them all the time if power consumption isn't too bad, so I will check this out. I think it's time to start my own build/advice thread; I fear I'm getting too much into "my situation" for this thread.
I'm not quite sure what your goal with these disks is, which is why I kept my previous posts pretty generic. It sounds like you want to use hardware RAID, which is fine, but it means you likely won't be using any advanced storage software (e.g. ZFS), so the SSDs will be their own array rather than a cache device. Such an SSD array could either be used directly by the ESX host, formatted as a VMFS filesystem, or be passed to the NAS and exported so it can be shared across multiple ESX hosts (NFS3 with any filesystem you like, or iSCSI with VMFS).
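If you go the export route, mounting the NAS share back as a shared datastore is a one-liner on each ESX host. A sketch assuming an NFS3 export; the NAS address, share path, and datastore name are placeholders, not values from this thread:

```shell
# Mount an NFS3 export from the NAS VM as a datastore on this ESXi host.
# 192.168.1.50 and /mnt/tank/vmstore are made-up placeholder values.
esxcli storage nfs add -H 192.168.1.50 -s /mnt/tank/vmstore -v nas-datastore

# Confirm the datastore is mounted and accessible.
esxcli storage nfs list
```

Run the same command on every host that should see the shared storage; that is what lets vMotion move VMs between them.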
As for your 3 hosts and power savings, you have 2 options.

- Each host has some local storage in it, and when that host is offline any VMs on that storage are inaccessible. This will likely give you the best IO performance for those VMs, as it sounds like you would be limited to 1Gb ethernet for shared storage access.
- Or, centralized shared storage holding all of the VMs. The host with the NAS VM has to be always-on, but you can use vMotion to move other VMs around live and only power on the extra hosts when you need additional CPU power. In fact, you can just configure VMware Distributed Power Management (DPM) and it will do all that for you, automatically migrating VMs around and turning host servers on/off as needed.

I suppose there's also nothing stopping you from combining both options - some VMs on local storage on each host, and some VMs on shared storage and able to migrate around - though DPM wouldn't be as effective, since it can't migrate a VM off a host's local storage in order to power that host down. You also have Storage vMotion, so you can live-migrate a VM from local storage to shared storage and back if you want.