HBA vs. Disk passthrough, any problem?


jcl333

Active Member
May 28, 2011
Hello all,

Just a quick general question for you all. Is there any problem using ZFS (or similar) with passthrough (RDM) disks under VMware vs. passing through the whole HBA?

Aside from obvious things, like it possibly being more "messy" if you had a lot of them, ZFS should still have access to the full disk and thus be able to do all its checksum goodness.

I ask because, if I want to test multiple solutions on one server, it would be less physical clutter to use a single HBA or whatever and then just divvy out the disks to whichever guests need them.

Thoughts?

-JCL
 

gea

Well-Known Member
Dec 31, 2010
With RDM you can pass through single disks; you do not need an extra controller, and you do not need VT-d, which is available only on server-class hardware.

If you are looking for the best stability and performance, use HBA passthrough with the generic Solaris disk-controller driver, which gives ZFS real disk access including SMART info.

For me there is no question: if you have VT-d and a separate disk controller, you should use hardware passthrough instead of ESXi raw device mapping.
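For completeness, here is roughly what the RDM route looks like from the ESXi shell; the naa identifier, datastore name and VM folder below are placeholders, so adjust them to your own setup:

# list the raw devices ESXi can see (names and sizes will differ on your box)
ls -l /vmfs/devices/disks/

# create a physical-compatibility RDM pointer file on a VMFS datastore
# (-z = physical mode, closest to raw access; -r = virtual mode, supports snapshots)
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX /vmfs/volumes/datastore1/storagevm/storagevm_rdm1.vmdk

The resulting .vmdk is then attached to the storage VM as an existing disk.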
 

jcl333

Active Member
May 28, 2011
Of course I would agree that passing through the controller is better; I just wanted to see if anyone had tried it with just the disks.

-JCL
 

Mike

Member
May 29, 2012
In RAM-limited situations, remember that IOMMU passthrough reserves all guest memory, whilst RDM still allows for thin provisioning of memory.
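Roughly what that looks like in the VM's .vmx when a controller is passed through; the keys are quoted from memory, so treat this as an illustration rather than a template:

# memory must be fully reserved when a PCI device is handed to the guest
memsize = "8192"
sched.mem.min = "8192"
# the HBA passed through via VMDirectPath
pciPassthru0.present = "TRUE"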
 

gea

Well-Known Member
Dec 31, 2010
That does not matter with ZFS.
If you assign, for example, 8 GB, Solaris will use the whole 8 GB; if you assign 16 GB, it will use all of that too, because ZFS uses all available RAM for ARC caching.
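If you do want to leave RAM for other VMs, the usual answer is to cap the ARC in the guest rather than rely on ballooning; on a Solaris-family guest that is a one-line tunable followed by a reboot (the 4 GB value here is only an example):

# cap the ARC at 4 GB (0x100000000 bytes); size the value to your own VM
echo "set zfs:zfs_arc_max=0x100000000" >> /etc/system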
 

Mike

Member
May 29, 2012
I don't use ZFS so I can't really comment on that one, but I would want the balloon driver to be able to reclaim memory for other VMs when needed. I figure the full read cache isn't mandatory for ZFS stability, at least up to a point?
 

Thatguy

New Member
Dec 30, 2012
If you are using ZFS as the filesystem in the end, isn't it better to pass the controller through?

My understanding of RDMs is that 1) they are deprecated by VMware, and 2) the RDM mapping file has to live on a VMFS datastore, which adds another layer and may or may not cause issues for ZFS, given that it wants to see everything about the disk. My other concern with RDMs is that the disk may or may not be usable outside of VMware, so if you ever wanted to move that disk/array/whatever to another physical host, you might have an annoying time getting at the data.
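One way to test that portability concern is to attach one of the RDM-backed disks directly to a bare-metal box and see whether ZFS still finds its labels; the device name below is a placeholder:

# dump the ZFS labels on the disk, if any are visible
zdb -l /dev/rdsk/c2t0d0s0

# scan all attached disks for importable pools
zpool import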
 

Spartus

Active Member
Mar 28, 2012
I found RDM worked great when testing, but when I went to deploy I found out that it doesn't allow drives larger than 2 TB. My server is all 3 TB drives, so I had to scrap that plan entirely.

I can't say I have 100% tested it, but I found it a very good option for a server without IOMMU / a RAID card. That said, I think you should choose a plan that provides a >2 TB upgrade path; I would hate to back myself into a corner like that.
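Before committing to the RDM route it is worth checking what ESXi reports for your disk sizes from the shell; sizes are listed in MB, so anything over about 2,097,152 is a >2 TB disk:

# list all devices with their reported sizes (in MB)
esxcli storage core device list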
 

PigLover

Moderator
Jan 26, 2011
There is a performance hit using RDM vs. VT-d-based PCI passthrough. There was a thread on this at [H] about a year or so ago with detailed benchmarks. The hit was not small, but I don't remember exactly what they found.

Search in the 'virtualization' subforum at [H].
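Failing that, you can get your own numbers by running an identical fio job inside the guest against both setups; a rough sketch, assuming fio is installed in the guest and with /tank standing in for the pool's mountpoint:

# run once with the pool on RDM disks and once with the pool on a passed-through HBA
fio --name=randread --filename=/tank/fio.test --size=4G --rw=randread --bs=4k --runtime=60 --time_based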