Do you know, or have you estimated, the IOPS required for your workload? I would start there. I have a lot of web servers on blades and the requirements are pretty low for mine, but each app is different. What are your growth requirements over the next year, three years, etc.? What is your backup plan, i.e., are there limited backup windows or other use cases beyond the normal workload that would require higher sustained I/O? How much extra capacity do you expect to need over the first year, and how much of a safety buffer do you want to build in? Are there any budget limitations? If you are looking at Dell/EMC I am guessing there are not; if your requirements and expansion needs are low enough, you could probably go with a cheaper solution. It's hard to make suggestions without knowing the targets you have to hit and what the platform will have to service over its lifetime.
For example, I have one 3-blade cluster running 72 CentOS VMs (mostly web servers, plus 3 name servers, 3 time servers, 3 RADIUS servers, and 3 MySQL servers); average sustained IOPS is 900 read / 650 write, with short spikes to ~6,500 IOPS. Any of the solutions listed would most likely be overkill for this. I have another cluster running 59 VMs, mostly web-based monitoring services and 3 syslog servers; any one of the blades in that cluster would destroy the SAN the web cluster uses (same exact blades, 2670 v2/128 GB). In generic terms they are both running on the same hardware, OSes, web servers, etc., but the workloads and I/O requirements are totally different.
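If you haven't measured yet, iostat or sar will give you these numbers directly. As a rough illustration of where they come from, here is a minimal Python sketch that samples read/write IOPS from Linux's /proc/diskstats over an interval (the device name "sda" is just a placeholder; substitute your own):

```python
import time

def disk_ops(device, path="/proc/diskstats"):
    """Return (reads_completed, writes_completed) counters for a device.

    Fields 4 and 8 of each /proc/diskstats line are cumulative completed
    reads and writes (0-indexed: fields[3] and fields[7]).
    """
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[3]), int(fields[7])
    raise ValueError(f"device {device!r} not found in {path}")

def sample_iops(device, interval=5.0):
    """Average read/write IOPS for `device` over `interval` seconds."""
    r1, w1 = disk_ops(device)
    time.sleep(interval)
    r2, w2 = disk_ops(device)
    return (r2 - r1) / interval, (w2 - w1) / interval
```

A single sample won't show you the spikes, so collect over a representative busy period (or just let sar do it for you) before sizing anything.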
I don't know if any of the above helps, but your question/poll is too vague to provide any useful suggestions; you have to determine your requirements as accurately as possible first. These EMC solutions _seem_ like overkill to me, and without knowing what you need, any response to this poll may just empty your bank account unnecessarily. My experience with Dell/EMC ends at the PS series, so I can't provide any useful insight there, but it seems like you may be able to save on spend with additional detail and actual or estimated performance requirements.
Are you using raw device mapping for a specific performance requirement, software compatibility, a vendor requirement, or some other reason? If the content on the RDMs is the same across VMs (I am assuming pics, files, etc. for web apps), have you considered using a shared LUN mounted on each of the content-store consumers? Again I am assuming mostly static reads to the web content stores. I used to do this at a previous job where they sold music, video, etc.; it saved a lot of storage space and cost on the solution.
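For the mostly-static-read case, the sketch below shows what I mean: a hypothetical fstab entry, identical on every web VM, mounting the same shared LUN read-only (the device path and mount point are placeholders). This is only safe because every consumer mounts read-only; if the VMs needed concurrent write access you would want NFS or a cluster filesystem like GFS2/OCFS2 instead.

```
# /etc/fstab on each web VM -- hypothetical device and mount point
/dev/sdb1  /var/www/content  ext4  ro,noatime  0  2
```

Content updates then happen from a single writer (or by briefly remounting read-write on one node) rather than duplicating the store into every VM.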