[WTB] Supermicro Microcloud - used

life2b

Member
Jul 26, 2020
I'm looking for a full Supermicro MicroCloud with front hot-swap bays (H8TRF kind) at a reasonable price for my homelab. I want to use it as Ceph OSD nodes.

X10 would be better, but if that's too pricey (above $1500), I'm OK with the X9 series.

Can anyone sell one to me? I can pay with PayPal.
 

Yarik Dot

Active Member
Apr 13, 2015
What is the purpose of having nodes with only 2 hot-swap drives for Ceph?

We use the following setups for Ceph OSDs:
- 1U 10x 2.5" - single CPU
- 2U (24+2)x 2.5" - single/dual CPU
- 2U 12x 3.5" + 2x 2.5" - single CPU
- 4U 36x 3.5" + 2x 2.5" - dual CPU
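To make the contrast concrete, here is a minimal density sketch comparing those setups with one MicroCloud chassis. The drive counts come from the list above; the MicroCloud figure (8 nodes x 2 bays in 3U) is an assumption based on the H8TRF models.

```python
# Rough OSD-density comparison: data-drive slots per rack unit.
# Drive counts come from the list above; the MicroCloud entry assumes
# an 8-node 3U H8TRF-style chassis with 2 hot-swap bays per node.
setups = {
    "1U 10x 2.5in, single CPU":   {"osd_slots": 10, "rack_units": 1},
    "2U (24+2)x 2.5in":           {"osd_slots": 24, "rack_units": 2},
    "2U 12x 3.5in + 2x 2.5in":    {"osd_slots": 12, "rack_units": 2},
    "4U 36x 3.5in + 2x 2.5in":    {"osd_slots": 36, "rack_units": 4},
    "MicroCloud 3U, 8 nodes x 2": {"osd_slots": 16, "rack_units": 3},
}

for name, s in setups.items():
    print(f"{name:30} {s['osd_slots'] / s['rack_units']:.1f} OSD slots per U")
```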
 

life2b

Member
Jul 26, 2020
What is the purpose of having nodes with only 2 hot-swap drives for Ceph?

We use the following setups for Ceph OSDs:
- 1U 10x 2.5" - single CPU
- 2U (24+2)x 2.5" - single/dual CPU
- 2U 12x 3.5" + 2x 2.5" - single CPU
- 4U 36x 3.5" + 2x 2.5" - dual CPU
2 SATA SSDs per node for OSDs, plus 1 PCIe NVMe per node for BlueStore. Dual 10G via a micro-LP card.
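As a rough sanity check on that layout, here is a minimal sketch of the cluster totals. The 2 SATA OSDs and 1 NVMe per node come from the post; the node count (8, assuming the H8TRF chassis), drive sizes, and 3x replication are assumptions for illustration.

```python
# Back-of-envelope totals for the planned MicroCloud Ceph cluster.
# From the post: 2 SATA SSD OSDs + 1 NVMe (BlueStore) per node, dual 10G.
# Assumed for illustration: 8 nodes, 1.92 TB SSDs, 3x replication.
NODES = 8
SATA_OSDS_PER_NODE = 2
SATA_SSD_TB = 1.92
REPLICATION = 3

osds = NODES * SATA_OSDS_PER_NODE
raw_tb = osds * SATA_SSD_TB
usable_tb = raw_tb / REPLICATION

print(f"SATA SSD OSDs : {osds}")
print(f"Raw capacity  : {raw_tb:.1f} TB")
print(f"Usable at 3x  : {usable_tb:.1f} TB (before rebalancing headroom)")
```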
 

life2b

Member
Jul 26, 2020
It still sounds like too few drives to me, to be honest. Where is your system drive (or drives)?
A SATA DOM on the onboard DOM connector. And each node will use a Xeon E3, a 1220 v3 or 1230 v3: 4 cores, and very cheap in 2021.

And I saw Supermicro's Ceph IOPS-optimized build that used the MicroCloud, with an E5-2630 v4 and 1 NVMe OSD per node. I figured I would need more storage than that, plus enough IOPS to run VMs. Does that sound odd to you? I have heard the rule of thumb of 1 core per OSD, plus a few more cores for NVMe. I'm not familiar with Ceph, so I would welcome advice.

(Attached screenshot: Supermicro's Ceph IOPS-optimized MicroCloud configuration)
This is what I saw.
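To put rough numbers on that "1 core per OSD, a few more for NVMe" rule of thumb against a 4-core E3, here is a minimal sketch. The per-OSD core figures are assumptions based on common guidance, not official Ceph or Supermicro sizing.

```python
# Rough per-node CPU budget using the rule of thumb from the post:
# ~1 core per SATA SSD OSD and a few more for an NVMe OSD.
# The exact per-OSD figures are assumptions for illustration only.
CORES_PER_NODE = 4        # Xeon E3-1220 v3 / 1230 v3

sata_osds = 2             # front hot-swap bays
nvme_osds = 1             # PCIe NVMe, if run as its own OSD

cores_needed = sata_osds * 1 + nvme_osds * 3
print(f"Estimated OSD cores: {cores_needed} of {CORES_PER_NODE} available")
print("That leaves little headroom for the OS, networking and other daemons.")
```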
 

Yarik Dot

Active Member
Apr 13, 2015
And I saw Supermicro's Ceph IOPS-optimized build that used the MicroCloud, with an E5-2630 v4 and 1 NVMe OSD per node. I figured I would need more storage than that, plus enough IOPS to run VMs. Does that sound odd to you? I have heard the rule of thumb of 1 core per OSD, plus a few more cores for NVMe. I'm not familiar with Ceph, so I would welcome advice.
This piques my curiosity, to be honest. We haven't done any NVMe-only deployment yet, or any other that would require low latency or ultra-high IO throughput.

For virtualization (RBD) storage we use the 1018R-WC0R with 2x boot drives + 8x SATA SSD OSDs and an E5-2630 v3 CPU, and it works without any issues.

I am not saying you are going to do it wrong; you definitely can have a use case for this. So I am just curious.
 

life2b

Member
Jul 26, 2020
This piques my curiosity, to be honest. We haven't done any NVMe-only deployment yet, or any other that would require low latency or ultra-high IO throughput.

For virtualization (RBD) storage we use the 1018R-WC0R with 2x boot drives + 8x SATA SSD OSDs and an E5-2630 v3 CPU, and it works without any issues.

I am not saying you are going to do it wrong; you definitely can have a use case for this. So I am just curious.
Nope, it's just fine. I was originally on ZFS with a mirrored Optane VM pool and a 40G uplink, but now that I've decided to move towards Ceph, I want to make sure I still get enough IOPS.

I got a 5038ML-H8TRF at a reasonable price, so now there's no chance of changing the architecture. Well, it's just a homelab, right? What could go wrong? I'm even thinking about building 2 separate pools: one with 2 HDDs w/ 1 NVMe per node as object/file storage, and one with 1 NVMe per node as block storage.
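A minimal sketch of what that two-pool split would total: the per-node drive mix (2 HDDs plus NVMe) comes from the post, while the drive sizes and 3x replication are assumptions for illustration.

```python
# Sketch of the proposed two-pool split across the 8 MicroCloud nodes.
# Per-node drive mix from the post; sizes and replication are assumed.
NODES = 8
HDD_TB = 4.0              # assumed HDD size
NVME_TB = 1.0             # assumed NVMe size
REPLICATION = 3

object_raw = NODES * 2 * HDD_TB    # 2 HDDs per node -> object/file pool
block_raw = NODES * 1 * NVME_TB    # 1 NVMe per node -> block (RBD) pool

print(f"Object/file pool: {object_raw:.0f} TB raw, "
      f"{object_raw / REPLICATION:.1f} TB usable at 3x")
print(f"Block (RBD) pool: {block_raw:.0f} TB raw, "
      f"{block_raw / REPLICATION:.1f} TB usable at 3x")
```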
 