Management nightmare at scale, the misconception that an RDM will provide 'better performance' than a VMDK, migration nuisances getting data in/out of RDMs if you ever want to convert, and the potential for a corrupted/stepped-on volume if the SAN engineer has a bad day or the presentation is misconfigured (data written over), etc.
That being said, they 'can' be handy for DR scenarios, MSCS clusters, and other clustering scenarios (although there are often more elegant ways to do clustering than relying on technologies that require this), plus direct SAN LUN access for tools/apps/SAN management control-planes that demand hooks to the raw LUN. I could go on... I lived these nightmares in an enterprise environment too long, and we have now almost fully converted out of that mess. Since VMDKs can grow to 62 TB now (on a supported array, of course), it's really a non-issue and kind of defeats the whole RDM purpose. Although I must confess, more often I have seen misconfigurations, or the false assumption that a higher-performance tier or 'more native' access to the disk was going to be a night/day difference... sadly, VMFS and other clustered/shared filesystems over a reasonable dedicated storage network fabric can keep up to snuff and drive insane I/O, many times greater than a single local-disk RDM, certainly. :-D Wink wink, no hard feelings... I get that you have a specific use case.
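For what it's worth, when we converted out, the offline path for a virtual-mode RDM is basically a clone of the mapping file with vmkfstools; the paths and datastore names below are hypothetical, and this is a sketch, not a full runbook (physical-mode RDMs and live conversions are a different story, see the note after the block):

```shell
# Sketch only: run from an ESXi shell with the VM powered off.
# All paths/datastore/VM names here are made-up examples; adjust to your env.

# Clone the RDM mapping file into a thin-provisioned flat VMDK.
# vmkfstools follows the RDM pointer and copies the blocks off the raw LUN.
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk \
           -d thin \
           /vmfs/volumes/datastore1/myvm/myvm_flat_copy.vmdk

# Afterwards: detach the old RDM from the VM, attach the new VMDK in its
# place, verify the guest sees its data, then have the SAN team unpresent
# the raw LUN so nothing can step on it later.
```

If downtime is a problem, a Storage vMotion that changes the disk format can also migrate a virtual-mode RDM into a flat VMDK while the VM runs; physical-mode RDMs generally have to be detached and the data copied instead.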
(Disclaimer: we were typically using 8 Gbps FC LUNs carved off the SAN fabric and presented to vSphere, so it's not really apples to apples... I digress.) Even my iSCSI RDM setup is quite different from your approach, though it's more similar to the FC RDM approach.