If money is not an issue and the main constraint is space in the datacenter, you may consider three options:
1) Buy Supermicro top loaders.
They have 1U servers that come with 12x 3.5" SAS HDDs, dual power supplies, etc. Some are hot-swappable, some are not, and some have to be fully configured to order. You can read more at
1u, 2u, 4u Top Loading Storage Servers for Max Density | Supermicro USA
I think this would be the closest to your specification/requirements you can find. You can have a single Xeon D-2xxx, which is the closest to the E-2xxx you're looking for, or dual 2nd-gen Xeon Scalable CPUs.
2) You can separate the compute node from the storage; there are many solutions from all major vendors that let you do so. You can get a JBOD enclosure in either 2x 2U or 4U, which will give you the 64+ disks you're looking for, and connect it through a switch to a multi-node/blade server like the Dell FX2 or similar.
So: 4U storage + 2U multi-node + 1U switch. Watch out for power redundancy, though; in some configurations it may be lost (even though there would still be 2 power supplies).
3) Finally, another option is to convert one rack to OCP. You would have to convert the other servers in that rack to OCP as well, but it will give you the highest density (rough numbers below).
With solutions from Wiwynn, Inspur, or QCT you can get up to 34 drives per 2U JBOD, or 70+ per 4U, plus 3 compute nodes per 1U.
You may even get away with storage-only nodes: according to the Open Compute Project, you get Intel® Avoton server nodes (2 nodes per 2U), which may be sufficient if you don't have much traffic, and 15 drives per 1U.
To give you an idea of how it would look: around ~16 minutes into the video they also go through an overview of the compute sleds.
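To put rough numbers on the density of the three options, here is a quick sketch (the drive and rack-unit counts are the ballpark figures quoted above, not vendor-verified specs, and the OCP figures are in the post's own "U" terms):

```python
# Drives per rack unit for the three options discussed above.
# Counts are the ballpark figures from this post; treat them as illustrative.
options = {
    "1) Supermicro 1U top loader": (12, 1),         # 12 drives in 1U
    "2) JBOD + multi-node + switch": (64, 7),       # 64+ drives in 4U + 2U + 1U
    "3) OCP 4U JBOD (Wiwynn/Inspur/QCT)": (70, 4),  # 70+ drives in 4U
}

for name, (drives, units) in options.items():
    print(f"{name}: {drives} drives / {units}U = {drives / units:.1f} drives per U")
```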
Current situation: We have several Ceph clusters built on 2U (12+2) and 4U (36+2) servers. However, we also have a small one holding a small subset of the data. The small cluster is the first production Ceph we test new features and versions on, after we test them in our lab and before we run them on the larger clusters.
The 1U Ceph cluster consists of 8 OSD nodes, each one with 4x 4TB drives, a Xeon E3 CPU, and 32GB RAM; traffic is minimal and the pool is EC 6+2. Monitors and other services run on other nodes.
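As a quick sanity check on what EC 6+2 means for usable space, a minimal sketch (drive counts are the ones above; the k/(k+m) efficiency ignores Ceph overhead, full/nearfull ratios, and rebalancing headroom):

```python
# Rough usable-capacity estimate for the small cluster described above.
nodes = 8
drives_per_node = 4
drive_tb = 4.0
k, m = 6, 2                      # EC 6+2: 6 data chunks + 2 coding chunks

raw_tb = nodes * drives_per_node * drive_tb
efficiency = k / (k + m)         # 0.75 for 6+2
usable_tb = raw_tb * efficiency

print(f"raw: {raw_tb:.0f} TB, EC efficiency: {efficiency:.0%}, "
      f"usable (before overhead): {usable_tb:.0f} TB")
# raw: 128 TB, EC efficiency: 75%, usable (before overhead): 96 TB
```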
The goal is to double the size of the small cluster without increasing rack space usage. We have plenty of 2U servers in stock, but space is our main limitation, so we can't use them.
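For what it's worth, if you went with option 1, a back-of-the-envelope check suggests the doubling fits in the existing 8U (hypothetical numbers: 12 bays per 1U from the Supermicro line above; also note that with a host-level failure domain, EC 6+2 wants at least k+m = 8 hosts, so fewer, denser nodes would mean an OSD-level failure domain or a different profile):

```python
import math

# Can we double the small cluster inside its current 8U footprint
# using hypothetical 12-bay 1U top loaders? Illustrative only.
current_drives = 8 * 4                # 8 nodes x 4 drives each
target_drives = current_drives * 2    # doubling -> 64 drives

bays_per_1u = 12
nodes_needed = math.ceil(target_drives / bays_per_1u)

print(f"target drives: {target_drives}, 1U nodes needed: {nodes_needed} "
      f"-> {nodes_needed}U of the 8U budget")
# target drives: 64, 1U nodes needed: 6 -> 6U of the 8U budget
```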