NAS volume - RDM or datastore vmdk?

denisl

Member
Dec 20, 2014
Curious if there is a best practice on this. I have 12TB usable of NAS storage (24TB in RAID 10) behind an H310 HBA. I like the fact that I can provision less than the full 12TB (e.g. a 2TB vmdk) and attach the new disk to an existing VM, which will be running xpenology. However, is there a disadvantage to doing it this way?

Would passing the whole 12TB volume through to the VM as an RDM be a better option? I'm thinking that if I someday decided to drop the xpenology VM as my NAS, an RDM would make it easier to just pass the disk through to a different VM and have all my data available.

Thanks
 

Mike

Member
May 29, 2012
EU
Think of it this way: if one day your VMware box dies, recovering data from VMware's filesystem and container format is a drag and largely unsupported. A bare volume, however, can be mounted from almost any computer.
 

TuxDude

Well-Known Member
Sep 17, 2011
To move to a new VM, it's just as easy to re-attach the .vmdk file to the new VM as it is to move the RDM. If you ever want to move to a physical NAS box in the future, having the storage as an RDM does make it much easier to attach there and drop VMware completely. With an RDM it would also be easy to put xpenology directly on the box that has your HBA in it and use the disk locally.

It does seem kind of odd to me to have a NAS present storage to a hypervisor, just to present that to a VM and export it as NAS again.
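For the RDM route, the mechanics on ESXi come down to one vmkfstools invocation. A minimal dry-run sketch (it only prints the command rather than running it; both paths are hypothetical placeholders, not values from this thread):

```shell
#!/bin/sh
# Dry-run sketch: print (not execute) the ESXi command that creates a
# physical-mode RDM mapping file for a local disk. The data stays on the
# raw device; only a small pointer .vmdk lands on the datastore.
# Both paths below are hypothetical placeholders.
DEVICE="/vmfs/devices/disks/naa.placeholder"   # list real ones with: ls /vmfs/devices/disks/
VMDIR="/vmfs/volumes/datastore1/xpenology"     # directory of the target VM

RDM_CMD="vmkfstools -z $DEVICE $VMDIR/nas-rdm.vmdk"
echo "$RDM_CMD"
```

The resulting mapping file is then added to the VM as an existing disk; `-r` instead of `-z` would create a virtual-mode RDM.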
 

TuxDude

Well-Known Member
Sep 17, 2011
Mike said:
Think of it this way: if one day your VMware box dies, recovering data from VMware's filesystem and container format is a drag and largely unsupported. A bare volume, however, can be mounted from almost any computer.
Actually, if the source is a NAS then it is not running the VMFS filesystem. It could be ZFS, ext4, NTFS, etc., with NFS as the protocol between the box hosting the drives and the VMware box.

There is also a VMFS driver available for Linux that could be used for such a recovery if it were ever needed.
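The Linux driver mentioned here is the open-source vmfs-tools package, which provides a read-only FUSE mount via `vmfs-fuse`. A dry-run sketch of such a recovery (it only prints the commands; the device and paths are placeholders):

```shell
#!/bin/sh
# Dry-run sketch: print the commands a VMFS recovery from Linux might use,
# via the vmfs-tools package (Debian/Ubuntu: apt install vmfs-tools).
# /dev/sdb1 and the copy paths are hypothetical placeholders.
MOUNT_CMD="vmfs-fuse /dev/sdb1 /mnt/vmfs"   # read-only FUSE mount of the datastore
COPY_CMD="cp /mnt/vmfs/xpenology/nas-data.vmdk /recovery/"
echo "$MOUNT_CMD"
echo "$COPY_CMD"
```

Once copied off, the flat .vmdk can be attached to another hypervisor or loop-mounted for data extraction.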
 

denisl

Member
Dec 20, 2014
I can't imagine moving away from VMware on this physical server. I have an xpenology VM with the OS vmdk running on SSD and a second nas-data vmdk provisioned from the local SATA datastore. From the xpenology VM I have it NFS-mounted to another VM running owncloud. Without virtualization I wouldn't be doing any of this. The server is somewhat beefy, so dedicating it to xpenology (or any NAS OS) in the future is very unlikely. It looks like an RDM would be of little benefit to me, since I can always reattach the nas-data vmdk to a new VM if I ever wanted to swap out xpenology (for FreeNAS, for example).

Thanks for the input. Unless I'm missing something, I'll keep it as a vmdk on the VMFS volume, attach it to xpenology, and export it over NFS to other VMs as needed (like Plex, which I haven't installed yet).
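The NFS hop from xpenology to each consuming guest is a one-liner in that guest. A dry-run sketch (hostname and export path are assumed placeholders; the real export path comes from the shared-folder settings in DSM):

```shell
#!/bin/sh
# Dry-run sketch: print the commands to mount the xpenology NFS export
# inside another VM (e.g. the owncloud or future Plex guest).
# Host and export path are hypothetical placeholders.
NFS_MOUNT_CMD="mount -t nfs xpenology.local:/volume1/media /mnt/media"
echo "$NFS_MOUNT_CMD"
# Equivalent persistent /etc/fstab entry:
FSTAB_LINE="xpenology.local:/volume1/media /mnt/media nfs defaults,_netdev 0 0"
echo "$FSTAB_LINE"
```

The `_netdev` option just delays the mount until networking is up, which matters when the NFS server is itself a VM on the same host.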
 

Mike

Member
May 29, 2012
EU
TuxDude said:
Actually, if the source is a NAS then it is not running the VMFS filesystem. It could be ZFS, ext4, NTFS, etc., with NFS as the protocol between the box hosting the drives and the VMware box.

There is also a VMFS driver available for Linux that could be used for such a recovery if it were ever needed.
If his NAS were external, he could mount it over NFS in his xpenology VM. From his post you could read it as either an external NAS or internal storage on the H310. There is indeed a read-only VMFS driver for Linux, but a 2TB vmdk setup doesn't make sense in the first place and prevents a migration to physical storage if you ever wanted one. AFAIK VMFS cannot be shrunk, so resizing is out of the question, and you would need a spare 12TB volume to migrate in that case.
 

denisl

Member
Dec 20, 2014
Mike, the storage is internal local disk on a Dell H310 HBA: six 4TB drives in RAID 10.
If I RDM the storage I have to dedicate all of it to the VM, and I'd rather use that capacity for other things until I need it. Currently it's presented to ESXi as a VMFS datastore. I created a 2TB vmdk on it and added it to my xpenology VM. I believe the vmdk can be increased in size but not reduced.

That said, what would you recommend I do?
Thanks
 

TuxDude

Well-Known Member
Sep 17, 2011
Pretty sure you can shrink a VMDK file - it is the VMFS filesystem itself that cannot be shrunk, if you decided you wanted some of that 12TB for a different use.
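Growing, at least, is the well-supported direction: vmkfstools can extend a vmdk in place, after which the filesystem inside the guest is grown to match. A dry-run sketch with a placeholder path (it only prints the command):

```shell
#!/bin/sh
# Dry-run sketch: print the command that extends an existing .vmdk to 4TB.
# The path is a hypothetical placeholder; the disk is extended with the VM
# powered off (or via the vSphere UI), then the guest filesystem is grown
# to match the new size.
GROW_CMD="vmkfstools -X 4T /vmfs/volumes/datastore1/xpenology/nas-data.vmdk"
echo "$GROW_CMD"
```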
 

denisl

Member
Dec 20, 2014
Ok, so I'm still somewhat confused, but I guess there's no clear-cut answer here. I think I'll test shrinking and expanding a vmdk and see how I manage that through my Ubuntu VM. If all goes well, I'll go with this method.
 

lundrog

Member
Jan 23, 2015
Minnesota
vroger.com
Thin provisioning is a lie.
So you're saying, if I have 500GB of space in a datastore and I make one 500GB thin-provisioned VM, that actually happens. But if I make five 500GB VMs that are thin provisioned, that never really happens? And if I make a second thin-provisioned VM after the first, I'm not really seeing the error that the datastore is full? And it's not really possible to make 100 thin-provisioned VMs that over-allocate a datastore?

Guess I've been living a lot of lies.

:0)
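The arithmetic behind the sarcasm is real: thin provisioning lets you promise more space than exists, and nothing complains until the blocks are actually written. A sketch of the numbers from the example above:

```shell
#!/bin/sh
# Sketch: five thin-provisioned 500GB disks on a 500GB datastore.
# Creation succeeds because thin disks allocate blocks only on write;
# the datastore fills only as the guests actually use the space.
DATASTORE_GB=500
VM_COUNT=5
VM_SIZE_GB=500
PROVISIONED_GB=$((VM_COUNT * VM_SIZE_GB))
echo "provisioned ${PROVISIONED_GB} GB against a ${DATASTORE_GB} GB datastore"
echo "overcommit ratio: $((PROVISIONED_GB / DATASTORE_GB)):1"
```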
 

Mike

Member
May 29, 2012
EU
lundrog said:
So you're saying, if I have 500GB of space in a datastore and I make one 500GB thin-provisioned VM, that actually happens. But if I make five 500GB VMs that are thin provisioned, that never really happens? And if I make a second thin-provisioned VM after the first, I'm not really seeing the error that the datastore is full? And it's not really possible to make 100 thin-provisioned VMs that over-allocate a datastore?

Guess I've been living a lot of lies.

:0)
I saw this coming and tried to avoid it.
Thin provisioning VM images is a bad idea, certainly on barely-keeping-up SATA storage. If you actually use the underlying filesystem, you will find that any scenario where multiple images are being written to - which is every case except the one where you just want to 'benchmark' the performance of thin-provisioned images - causes fragmentation that makes your head ache.
A less awful way is to create images at the size you need them and expand at the logical level inside the VM if the need arises, or migrate to a bigger image.

Since this thread is about NAS storage on SATA disks - not uncommonly used to store further disk images, where by your standards yet another layer of thin-provisioned volumes would not be unusual - you could see dead-awful performance with no easy-to-spot cause. A bad idea in my book, mostly down to lazy or poor resource planning.

:'(
 

TuxDude

Well-Known Member
Sep 17, 2011
Mike said:
I saw this coming and tried to avoid it.
Thin provisioning VM images is a bad idea, certainly on barely-keeping-up SATA storage. If you actually use the underlying filesystem, you will find that any scenario where multiple images are being written to - which is every case except the one where you just want to 'benchmark' the performance of thin-provisioned images - causes fragmentation that makes your head ache.
A less awful way is to create images at the size you need them and expand at the logical level inside the VM if the need arises, or migrate to a bigger image.

Since this thread is about NAS storage on SATA disks - not uncommonly used to store further disk images, where by your standards yet another layer of thin-provisioned volumes would not be unusual - you could see dead-awful performance with no easy-to-spot cause. A bad idea in my book, mostly down to lazy or poor resource planning.
:'(
Thin provisioning is great, and the fragmentation it causes and the associated performance hit are typically not a big enough issue to worry about. The moment you have more than one VM accessing the datastore, the resulting combined stream of IO is going to be random even if both VMs are doing sequential IO, and it gets worse and worse as the number of VMs increases. Basically, unless you are planning to use a datastore for a single VM / single workload, you may as well plan on 100% random IO from the start. And guess what the performance difference is between random IO on a few non-fragmented thick disks and random IO on a few very fragmented thin disks? Nothing - your block storage layer (SATA or otherwise) doesn't know what a "file" is. Random IO is random IO, end of discussion.

Thin provisioning saves a ton of time managing storage, resizing things, etc., and makes cloning VMs/templates much faster. The only downside is that you need to monitor your free space at the datastore level to ensure you don't accidentally fill the VMFS if you have over-provisioned it. If you mess up over-provisioning RAM, stuff starts to swap and performance sucks; if you mess up over-provisioning disk and VMFS fills up, any VM that needs additional space to write locks up completely (better than older ESXi versions, though, where all VMs locked up whether they needed extra space or not).
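The datastore-level monitoring recommended here can be as simple as a threshold check on used space. A sketch that parses a sample `df`-style line (on a real host the input would come from `df` on ESXi or `esxcli storage filesystem list`; the sample values and the 80% threshold are arbitrary examples):

```shell
#!/bin/sh
# Sketch: warn when an over-provisioned datastore passes a usage threshold.
# SAMPLE stands in for one line of real "df -h" output; values are made up.
THRESHOLD_PCT=80
SAMPLE="VMFS-5 11.0T 9.2T 1.8T 84% /vmfs/volumes/datastore1"
# Field 5 is the use percentage; strip the trailing "%".
USED_PCT=$(echo "$SAMPLE" | awk '{gsub("%","",$5); print $5}')
if [ "$USED_PCT" -gt "$THRESHOLD_PCT" ]; then
  echo "WARNING: datastore at ${USED_PCT}% used - thin disks may fill VMFS"
fi
```

Run from cron, a check like this catches an over-committed datastore before a thin disk's growth fills VMFS and stalls VMs.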
 

Mike

Member
May 29, 2012
EU
TuxDude said:
Thin provisioning is great, and the fragmentation it causes and the associated performance hit are typically not a big enough issue to worry about. The moment you have more than one VM accessing the datastore, the resulting combined stream of IO is going to be random even if both VMs are doing sequential IO, and it gets worse and worse as the number of VMs increases. Basically, unless you are planning to use a datastore for a single VM / single workload, you may as well plan on 100% random IO from the start. And guess what the performance difference is between random IO on a few non-fragmented thick disks and random IO on a few very fragmented thin disks? Nothing - your block storage layer (SATA or otherwise) doesn't know what a "file" is. Random IO is random IO, end of discussion.

Thin provisioning saves a ton of time managing storage, resizing things, etc., and makes cloning VMs/templates much faster. The only downside is that you need to monitor your free space at the datastore level to ensure you don't accidentally fill the VMFS if you have over-provisioned it. If you mess up over-provisioning RAM, stuff starts to swap and performance sucks; if you mess up over-provisioning disk and VMFS fills up, any VM that needs additional space to write locks up completely (better than older ESXi versions, though, where all VMs locked up whether they needed extra space or not).
You read my post too closely. You are referring to pipelining not being an issue, and that is not the point. If your thin images are live, live on the same store, and are written to in whatever order, you get fragmentation on the order of whatever block size you handle. You may call that a non-issue, but it very much is one for a variety of workloads, and thin provisioning makes it worse. Your block device may not know what a file is, but it IS taking longer to process all the reads for the same (partial) file, and in that random-VM-fragmented-IO scenario you will find that SATA disks do not shine.
Also, this is all way off what this topic is about. If you want to advise sub-standard methods to thin-provision NAS storage, go right ahead. End of discussion, I guess.

Have you found the 'dislike' button yet, lundrog?