I don't work at Oracle, and never have. I am an algorithmic trading researcher: statistical arbitrage, high-frequency trading, etc. I am just a nerd, a fan of the best tech. And right now, that is Solaris, with ZFS, DTrace, SMF, containers, Crossbow, etc. For instance, take DTrace. Everybody wants it:
- Mac OS X has ported it.
- FreeBSD has ported it.
- Linux has copied it under the name SystemTap.
- IBM's AIX copy is called ProbeVue.
- QNX has ported it.
- NetApp has ported it.
- VMware's copy is called vProbes.
- etc.
All the big players have DTrace now, as a port or a clone. It is a game changer for developers and a must-have.
ZFS is nice Solaris tech, but not a must-have. Only Mac OS X, FreeBSD and Linux have it.
Btw, Linux has also copied Solaris Containers (an idea that later evolved into Docker), and Linux copied Solaris SMF as systemd. systemd is not a good copy of SMF, because SMF is mainly for huge servers, not desktops. Linux is mainly a desktop OS, or runs on small servers, so there is no need for systemd. Linux copied ZFS as Btrfs. Linux copied Crossbow as Open vSwitch. Heck, the whole of Linux is a copy of Unix (i.e. Solaris etc.).
The largest Linux servers, such as the SGI Altix or UV 2000, have 10,000s of cores and 100 TB of RAM, but they are clusters. They are exclusively used for number-crunching HPC cluster workloads, which SGI confirms. They are similar to a small supercomputer cluster: many cheap nodes on a fast switch. They serve one scientist at a time, who chooses which number-crunching workload will run for the next 24 hours. Clusters are very cheap, just a bunch of PCs.
In contrast, there are SMP servers, i.e. one huge fat server. The largest have 32 or even 64 sockets. IBM mainframes are SMP servers too, but their CPUs are slow. They typically run business ERP systems, big databases, etc., serving thousands of users at once. The latency in a cluster is very bad to far-away nodes, so clusters only run embarrassingly parallel code, where each node runs its own for loops. There is not much communication going on; typically they run computations. Otoh, SMP servers run business systems that branch all over the code, so you need tightly coupled CPUs, and you cannot use too many CPUs or latency will get bad. The maximum is 32 or 64 sockets.
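The distinction above can be sketched in code. Roughly: an embarrassingly parallel job splits one big for loop into independent chunks that never talk to each other until a final reduction (fine on a cluster), while a tightly coupled workload has every step depend on shared state written by the previous step, so splitting it across far-away nodes would pay communication latency on every iteration. This is a toy illustration of the two patterns, not a benchmark; the function names are mine:

```python
from multiprocessing import Pool

# Embarrassingly parallel: each worker crunches its own chunk independently,
# with no communication until the final sum. This is the kind of workload a
# cluster of cheap nodes handles well.
def crunch(chunk):
    return sum(x * x for x in chunk)

def cluster_style_sum_of_squares(n, workers=4):
    # chunks cover 0..n-1 exactly once, one independent slice per worker
    chunks = [range(i, n, workers) for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(crunch, chunks))

# Tightly coupled: every iteration reads the result of the previous one, so
# the loop cannot be split across nodes without communicating on every step.
# This is the SMP-style pattern (shown sequentially for illustration).
def smp_style_chained_update(n):
    balance = 0
    for i in range(n):
        balance = (balance + i) % 1000003  # depends on the previous iteration
    return balance

if __name__ == "__main__":
    print(cluster_style_sum_of_squares(1000))  # sum of squares 0..999
    print(smp_style_chained_update(1000))
```

The first function parallelizes trivially because the chunks share nothing; the second has a loop-carried dependency, which is the simplest form of the "branch everywhere over shared data" behavior that keeps business workloads on big SMP boxes.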
A cluster can never replace an SMP server. For instance, you will never see an SGI server benchmarking SAP, because they cannot run SMP workloads. The largest Linux SMP server is an ordinary 8-socket x86 server. In fact, Linux servers larger than 8 sockets have never existed. I invite anyone to post links to a larger Linux server, such as 16 sockets. Because large Linux servers do not exist, Linux scales badly on 8 sockets, and extremely badly on 16 sockets. HP experimented with Linux on their huge 64-socket Unix Integrity servers with bad results (40% CPU utilization under full load); google "HP Big Tux". IBM experimented with Linux on their old 32-socket AIX Unix P795 server, with equally bad results.
SMP servers are very difficult to build and they cost a lot. For instance, the 32-socket IBM p595 Unix server used for the old TPC-C record cost $35 million. Yes, that's right: one single server. You could buy a very large cluster for that.
Solaris has scaled to 144-socket servers for decades, and it scales extremely well on huge SMP servers. So yes, I am a Solaris fanboy. Btw, FreeBSD and OpenBSD are also very good OSes; if Solaris were closed I would use BSD. Linux hacker Con Kolivas compared Linux code to OpenSolaris, and he said that the OpenSolaris code was far superior. Google "Con Kolivas blog OpenSolaris scheduler" to read his impressions.
Regarding ESXi or whatnot, I think for desktop usage any of them should do, but I don't know. I exclusively use Solaris on bare metal; it is rock stable. But gea_ is the man to ask on this, and many use ESXi with great success. I also use VirtualBox for virtualization on top of Solaris, but VirtualBox is not as stable as ESXi and should be avoided in production. I am a desktop user, so it is fine for me.