Looking for: 1U, 8x 3.5" HDD, 2x PSU

Yarik Dot

Active Member
Apr 13, 2015
I am not sure if such a chassis is in production, but we would like to build a small test Ceph cluster based on 1U servers (8 of them). So we are looking for:

- 1U
- redundant PSU
- 8x 3.5" drive slots (+2x 2.5" for the OS would be nice to have)
- space for a Supermicro E-2100 motherboard

If you know of any chassis like this, feel free to let me know.

Thanks
 

raiderj

Member
Dec 27, 2014
I think you can get 8x 2.5" in 1U, but I don't know of any options for 8x 3.5" - 4x 3.5" chassis do exist, though. If you're testing Ceph, I'd think you'd want to target SSDs and a 10+ Gb network, so 2.5" drives might be better anyway? A 2x 2.5" (OS) + 6x 2.5" (Ceph) setup would work well, I'd think.
 

Yarik Dot

Active Member
Apr 13, 2015
We are using erasure coding with high-capacity drives and a low workload. SSDs would be useless in this case.
 

BeTeP

Well-Known Member
Mar 23, 2019
1U will not get you very far if you want to use 3.5" drives. Go with 2U instead.
 

Crond

Member
Mar 25, 2019
What is the goal? Highest density based on 3.5" drives? Do you actually need local storage in each 1U node?
How much total rack space are you trying to fill? If you need just one server, then the QCT mentioned above or a Supermicro is your best option, but neither fully meets your requirements.

If you have at least 4U of rack space, you can go with different solutions like the Storinator. It will give you 60+ 3.5" HDDs per 4U, i.e. 15+ HDDs per 1U of rack space. It sounds like you don't need much compute power, so two sockets should be fine. They are based on Supermicro motherboards. If you want to build one yourself, they are based on the Backblaze design, and you can buy just the case or manufacture it yourself. All specs and the BOM are available from Backblaze.

If that fits your requirements, you can get more info here:
Storage Pod 6.0: Building a 60 Drive 480TB Storage Server

 

Yarik Dot

Active Member
Apr 13, 2015
Current situation: we have several Ceph clusters built on 2U (12+2) and 4U (36+2) servers. However, we also have a small one holding a small subset of the data. The small cluster is the first production Ceph we test new features and versions on, after we have tested them in our lab and before we run them on the larger clusters.

The 1U Ceph consists of 8 OSD nodes, each one with 4x 4 TB drives, a Xeon E3 CPU, and 32 GB RAM, carrying minimal traffic, using EC 6+2. Monitors and other services run on other nodes.

The goal is to double the size of the small cluster without increasing rack space usage. We have plenty of 2U servers in stock, but space is our main limitation, so we can't use them.
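As a rough back-of-the-envelope check of those numbers (the `usable_tb` helper below is my own illustrative sketch, not a Ceph tool; it ignores metadata, nearfull ratios, and failure-domain overhead):

```python
# Rough usable-capacity estimate: raw capacity scaled by the
# erasure-coding rate k / (k + m). Illustration only - ignores Ceph
# overhead and placement constraints.
def usable_tb(nodes: int, drives_per_node: int, drive_tb: float,
              k: int, m: int) -> float:
    raw = nodes * drives_per_node * drive_tb
    return raw * k / (k + m)

# Current small cluster: 8 OSD nodes, 4x 4 TB drives each, EC 6+2.
current = usable_tb(nodes=8, drives_per_node=4, drive_tb=4, k=6, m=2)
# Goal: the same 8 nodes, but with 8 drive bays per 1U chassis.
doubled = usable_tb(nodes=8, drives_per_node=8, drive_tb=4, k=6, m=2)
print(current, doubled)  # 96.0 192.0
```

So moving from 4-bay to 8-bay 1U chassis doubles usable capacity in the same rack footprint.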
 

BeTeP

Well-Known Member
Mar 23, 2019
The 1U Ceph consists of 8 OSD nodes
You mean 8 OSD nodes of 1U each, not 8 nodes crammed into a single 1U chassis, right? Because if it is the latter, that is a completely new level of high density to me. And if it's the former, it should be easy to get a previous-gen blade chassis that fits most of the requirements (but no standard motherboards).
 

Yarik Dot

Active Member
Apr 13, 2015
You mean 8 OSD nodes of 1U each, not 8 nodes crammed into a single 1U chassis, right? Because if it is the latter, that is a completely new level of high density to me. And if it's the former, it should be easy to get a previous-gen blade chassis that fits most of the requirements (but no standard motherboards).
That is the goal - having a 1U server with 8x 3.5" drives in it. We are currently limited to the 4 hot-swap bays of the CSE-813.
 

Crond

Member
Mar 25, 2019
If money is not an issue and the main constraint is datacenter space, you may consider 3 options:

1) Buy Supermicro top loaders.
They have 1U servers that come with 12x 3.5" SAS HDDs, dual power supplies, etc. Some are hot-swappable, some are not, and some have to be fully configured to order. You can read more at 1u, 2u, 4u Top Loading Storage Servers for Max Density | Supermicro USA
I think this would be the closest you can find to your specification/requirements. You can have a single D-2xxx, which is the closest thing to the E-2xxx you're looking for, or dual second-gen Xeon Scalable.

2) Separate the compute nodes from the storage; there are many solutions from all major vendors that allow you to do so. You can get JBOD storage in either 2x 2U or 4U, which will give you the 64+ disks you're looking for, and connect it through a switch to a multi-node/blade server like the Dell FX2 or similar.
So: 4U storage + 2U multi-node + 1U switch. Watch out for power redundancy, though - in some configurations it may be lost (even though there would still be 2 power supplies).

3) Finally, another option is to convert one rack to OCP. You would have to convert the other servers in your rack to OCP as well, but it will give you the highest density.
With different solutions from Wiwynn, Inspur or QCT you can get up to 34 drives per 2U JBOD, or 70+ per 4U, plus 3 compute nodes per 1U.

You may get away with just storage. According to the Open Compute Project, you get Intel Avoton server nodes (2 nodes per 2U); if you don't have much traffic, that may be sufficient, and you get 15 drives per 1U.
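To put the three options side by side, here is a tiny density sketch using only the drive counts quoted above (the 7U grouping for option 2 is my own reading of the 4U + 2U + 1U layout, purely illustrative):

```python
# Drives per 1U of rack space, from the figures quoted in this post.
options = {
    "Supermicro 1U top loader (12 drives / 1U)": 12 / 1,
    "JBOD + multi-node + switch (64 drives / 7U)": 64 / 7,
    "OCP 4U JBOD (70 drives / 4U)": 70 / 4,
}
for name, density in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {density:.1f} drives per U")
```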

To give you an idea of how it would look:

At around ~16 mins into the video, they also go through an overview of the compute sleds.


 