Passthrough storage for virtualization?


Katfish

New Member
Aug 14, 2016
Hello,

I am looking for confirmation or clarification.

I know that in everything IT-related there are no absolutes, and the best practice answer is 'it depends'.

That aside, it seems that the general recommendation on STH is to use an HBA to pass through a number of drives to a guest VM. The guest VM then offers storage back to the host for consumption by the other guests.

Am I understanding this so far? I come from the enterprise side, where hardware controllers are the de facto standard for DASD.

I am building a new box with the c2600cp2j from Natex. If I were running multiple hosts (not planning on it), I could see where a software solution on a dedicated box would make sense. I did a setup with OpenFiler 6+ years ago, so it's not totally foreign.

Am I understanding correctly? Can you shed some more light on it for me?

Katfish
 

cesmith9999

Well-Known Member
Mar 26, 2013
Really, you are asking an "it depends" question.

If you need the data volumes to eke out all of the performance of a set of disks, pass those disks through to the guest.

My preference is not to pass through disks. That makes it easier to migrate guests to other hosts. I spent way too many hours migrating data...

Chris
 

gea

Well-Known Member
Dec 31, 2010
For ESXi you mainly want shared NAS or SAN storage (NFS or FC/iSCSI), as this offers the best flexibility: an easy and very fast way to move/clone/backup VMs at full network speed, plus SAN features like unlimited snapshots, SAN replication, and the security of newer copy-on-write filesystems like ZFS with its advanced RAM/SSD caching.

You can either use a dedicated storage server on a 10G network, or you can virtualize the storage server. In the latter case, transfer between ESXi and the storage happens in software at several GByte/s, regardless of the physical network.

If you virtualize the storage server, you virtualize only the OS, not the disks or the disk controller; this is why you use pass-through.
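
For illustration, here is a minimal sketch of identifying the HBA on the ESXi host before marking it for pass-through (the actual toggle is done in the vSphere client). It assumes esxcli is reachable, e.g. over SSH on the host; the "LSI"/"SAS" match strings are just placeholders for whatever controller you have.

Code:
# Sketch: list PCI devices on an ESXi host and pick out the storage HBA
# so it can be marked for passthrough. Purely illustrative; assumes the
# esxcli binary is on the PATH (e.g. run via SSH on the host itself).
import subprocess

out = subprocess.run(["esxcli", "hardware", "pci", "list"],
                     capture_output=True, text=True, check=True).stdout

# The HBA shows up as one device block with its PCI address, which is
# what you select for passthrough before assigning it to the storage VM.
for block in out.split("\n\n"):
    if "LSI" in block or "SAS" in block:
        print(block)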

With ZFS you must avoid hardware RAID or you will lose the self-healing feature. With newer CPUs and filesystems, hardware RAID is obsolete; software RAID is much safer (no write-hole problems) and faster.
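
To make that concrete, here is a minimal sketch of a ZFS software mirror in place of a RAID card; the pool name "tank" and the device paths are hypothetical placeholders, and it assumes the standard zpool tools are installed on the storage OS.

Code:
# Sketch: create a ZFS software mirror instead of using a RAID card.
# Pool name and devices are placeholders for your own setup.
import subprocess

def run(cmd):
    # Echo and execute a command, stopping on any failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# ZFS sees the raw disks, so it can verify every block against its
# checksum and repair from the mirror copy -- a RAID card would hide
# the individual disks and defeat exactly this self-healing.
run(["zpool", "create", "tank", "mirror", "/dev/sda", "/dev/sdb"])

# A scrub reads every block, compares it to its checksum and heals
# silent corruption from the good mirror side.
run(["zpool", "scrub", "tank"])
run(["zpool", "status", "tank"])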

I came up with this idea, called All-In-One, about seven years ago. Now it is quite common. You can read my HowTo with a Solaris-based ZFS storage VM: http://napp-it.de/doc/downloads/napp-in-one.pdf
 

Katfish

New Member
Aug 14, 2016
Really, you are asking an "it depends" question.
Agreed!

For ESXi you mainly want shared NAS or SAN storage (NFS or FC/iSCSI), as this offers the best flexibility: an easy and very fast way to move/clone/backup VMs at full network speed, plus SAN features like unlimited snapshots, SAN replication, and the security of newer copy-on-write filesystems like ZFS with its advanced RAM/SSD caching.
Thank you for the feedback. I've used iSCSI, FC, and NFS to deliver storage to hosts before, but from a non-virtualized platform.

I guess in an enterprise environment I'd stick with a fully physical/hardware setup. I can't imagine vMotion working well on a virtualized storage server?

Don't get me wrong, I can absolutely see the value in straying from the traditional HW RAID cards. And that... 'it depends'. I am looking at a single box for compute and storage. That negates some of the benefits of a dedicated storage server, and it lets me eke out more compute and RAM for the other guests by handing storage off to a dedicated card.

Thanks again for the comments. I was trying to wrap my head around things, and I think I've gotten there.
 

jacobwilliam

New Member
Aug 9, 2016
A VM normally uses virtual storage devices, such as a hard disk, removable drive or CD/DVD-ROM. However, desktop VMs can benefit from directly attaching to a physical device, known as a passthrough disk.

Passthrough disks connect to the virtual machine (VM) and serve as a storage source with an existing file system and disk file format. They can improve VM performance but also come with some challenges in a virtual desktop environment. For instance, servers are designed to support more physical disks than desktop machines by default. To run VMs on the machine I'm typing on, I had to spend the money to get a whopping 8 SATA ports.

If you're going to use passthrough disks for VMs that support virtual desktops, here are some pointers to get the most out of the technology.
 

gea

Well-Known Member
Dec 31, 2010
I can't imagine vMotion working well on a virtualized storage server?
From ESXi's point of view, you use NFS shares or iSCSI targets over IP.
ESXi is not aware whether your NAS/SAN server is physical or whether the SAN OS is virtualized,
with internal traffic over the vSwitch in software and external traffic via the regular LAN.
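
As a concrete sketch (the IP address, pool and share names are made-up placeholders): the ZFS side shares a filesystem, and ESXi mounts it exactly as it would mount any physical NAS.

Code:
# Sketch: export a ZFS filesystem over NFS and mount it as an ESXi
# datastore. The first command runs on the storage VM, the second on
# the ESXi host; names and the IP are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# On the storage VM: ZFS-native NFS sharing.
run(["zfs", "set", "sharenfs=on", "tank/vmstore"])

# On the ESXi host: ESXi only sees an NFS server at an IP address --
# it cannot tell a VM reached over the internal vSwitch from a
# barebone box on the LAN.
run(["esxcli", "storage", "nfs", "add",
     "-H", "192.168.1.10", "-s", "/tank/vmstore", "-v", "vmstore"])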

You only need to care about:
- the extra CPU and RAM needs of a storage VM
- booting the storage VM first, with the VM itself on a local datastore (see the autostart sketch after this list); other VMs or ESXi vdisks go on the shared ZFS storage via NFS/iSCSI, exactly as with a barebone storage server/SAN setup
- virtualizing only the storage OS, not the storage itself (HBA, disks)
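
For the boot-order point, a hedged sketch of the ESXi autostart setup; the VM ID 1 and the delay values are placeholders, and you should verify the vim-cmd argument order on your ESXi version before relying on it.

Code:
# Sketch: autostart the storage VM first so its NFS/iSCSI datastore is
# up before the remaining guests boot. VM ID, delays and order are
# placeholders; run on the ESXi host itself.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["vim-cmd", "hostsvc/autostartmanager/enable_autostart", "true"])

# Arguments: vmid, startAction, startDelay, startOrder, stopAction,
# stopDelay, waitForHeartbeat (check against your ESXi release).
run(["vim-cmd", "hostsvc/autostartmanager/update_autostartentry",
     "1", "powerOn", "120", "1", "systemDefault", "systemDefault",
     "systemDefault"])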