The performance hit for using VMDK files on a VMFS filesystem is very low and not something to worry about. I'm also not sure where this belief that RDMs are not recommended or supported comes from - it seems to be a case of people repeating it until everyone believes it, while the truth is that RDMs are supported by VMware and are recommended for a variety of use-cases.
To clear things up a bit: what is being called 'passthrough' in this thread refers to passing an entire PCIe device through to the VM. It is the highest-performance way of giving hardware to a VM, and it comes with the most restrictions. Yes, the VM gets full direct access (e.g. SMART data), but the VM also needs drivers for the PCI card(s), and vMotion, HA, snapshots, fault tolerance, and probably other VMware features are not supported and will not work for a VM using PCI passthrough. Also keep in mind that if you are using a hardware RAID card, you must pass the entire card through to a single VM if you go this route - you can't pass a RAID-6 array through to a NAS VM and also use an SSD array on the same RAID card as a local VMFS datastore.
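As a rough sketch of what configuring passthrough looks like from the ESXi shell (the PCI address `0000:03:00.0` is just an example - yours will differ, and on older ESXi versions this is done through the vSphere client's "PCI Devices" page instead):

```shell
# List PCI devices to find the RAID card's address
esxcli hardware pci list

# ESXi 7.0+: mark the device (example address) for passthrough
esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e true

# Confirm which devices are now passthrough-enabled
esxcli hardware pci pcipassthru list
```

Note that this flags the whole card - every disk and array behind that controller goes with it to whichever single VM you attach it to.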
RDM can also be thought of as a passthrough technology. In physical mode, an RDM is an (almost) full SCSI passthrough, with only enough virtualization to make the SCSI device appear attached to the VM's virtual LSI (or pvscsi) controller. SMART still works with a physical-mode RDM, and if the mapped device can be accessed by multiple hosts (e.g. over iSCSI or FC) then you can still use vMotion, HA, etc. - I think the only thing you can't do with a physical-mode RDM is take VMware snapshots. Virtual-mode RDM adds quite a bit more abstraction between the hardware and the VM - SMART no longer works, though VMware snapshots do work against virtual-mode RDMs. In this situation, a physical-mode RDM would let you pass a RAID-6 array from a hardware RAID card into a NAS VM, while a second array from the same card is used for local VMFS or passed to a different VM.
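For reference, creating an RDM mapping file is a one-liner with `vmkfstools` - the `naa.` device identifier and datastore paths below are placeholders, not real values:

```shell
# Find the device identifier (naa.*) of the LUN or array to map
ls /vmfs/devices/disks/

# Physical-mode RDM (-z): near-raw SCSI passthrough; SMART works, VMware snapshots don't
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXX /vmfs/volumes/datastore1/nasvm/array-rdmp.vmdk

# Virtual-mode RDM (-r): more abstraction; snapshots work, SMART doesn't
vmkfstools -r /vmfs/devices/disks/naa.XXXXXXXX /vmfs/volumes/datastore1/nasvm/array-rdmv.vmdk
```

The resulting .vmdk is just a small mapping file that you attach to the VM like any other disk; the data stays on the raw device.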
VMDK files are your last option, and they are probably what 99% of all virtual machines use for their disk space. A VMDK file can live either on a VMFS-formatted disk (when the ESX host has block-level access to the storage, e.g. SATA, SAS, iSCSI, or FC) or on a network share (NFS v3). VMDK files involve the most abstraction of any of the storage options, completely hiding the details of the storage from the VM (possibly bad if your VM is a storage management platform, but perfectly fine for everything else), but in exchange they grant the most flexibility and work with every feature VMware has to offer. When you turn a server's hard drive into just a regular file, life as a sysadmin becomes so much easier - servers can grow, shrink, move around, be copied (for backup or cloning), etc. as easily as you can do those things to a file on your desktop. And for most workloads the performance difference is barely measurable - with fresh VMDK files there can be an initial penalty as blocks are zeroed on first write, though in some other situations VMDK files can actually perform better than the disk they're sitting on.
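The zeroing penalty mentioned above depends on how the VMDK is provisioned, which you can control at creation time with `vmkfstools` (paths and sizes below are examples):

```shell
# Thin-provisioned: space is allocated and zeroed on first write,
# which is where the initial write penalty comes from
vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/myvm/myvm.vmdk

# Eager-zeroed thick: all blocks are zeroed up front at creation,
# so there is no first-write penalty later (creation itself is slow)
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/myvm2.vmdk
```

So if you care about consistent write latency from day one, eager-zeroed thick trades a slow creation step for steady performance afterwards.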