Any interest in Ceph articles?


PigLover

Moderator
Jan 26, 2011
3,186
1,546
113
MON should be fine on 1GbE. It has to handle several transactions per write, but the actual data never touches them. I guess it could max out on small-block writes, but small-block IO is unlikely to be stellar anyway.

I say it should be ok (knowing full well that SHOULD is the most dangerous word in tech).
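For reference, the split that makes this work is just two lines of ceph.conf; clients and MONs live on the public network while replication traffic stays on the cluster network (the subnets below are made up):

```ini
[global]
# Clients and MONs talk on the (1GbE) public network;
# OSD replication/recovery traffic uses the (10GbE) cluster network.
public network  = 192.168.1.0/24
cluster network = 10.0.0.0/24
```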
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,825
113
MON should be fine on 1GbE. It has to handle several transactions per write, but the actual data never touches them. I guess it could max out on small-block writes, but small-block IO is unlikely to be stellar anyway.

I say it should be ok (knowing full well that SHOULD is the most dangerous word in tech).
Both the "should be ok" and SHOULD being dangerous are exactly what I was thinking too.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,546
113
Hey - no word on your Ceph cluster. I know we're all busier than heck but you shouldn't tease like that!

Also - was sleeping on a plane last night and while dreaming had a "vision" of a pile of low-scale NUCs, each with a single 2TB Samsung SSD, serving as a Ceph cluster. Not as cool as your NVMe drives and 10GbE...
 

PigLover

Moderator
Jan 26, 2011
3,186
1,546
113
Methinks one of the killer features that Ceph is missing is an ephemeral distro (PXE boot) so all disks in the chassis can be dedicated to OSD use. That would make your NUC fantasy even more compelling.
Confused...Ceph is not an OS. It runs on top of Linux. There is no reason why you can't do PXE.
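For example, a minimal pxelinux entry along these lines would net-boot a diskless Ceph node (server name and image paths below are hypothetical):

```
DEFAULT ceph-node
LABEL ceph-node
  KERNEL vmlinuz
  APPEND initrd=initrd.img boot=live fetch=http://deploy.example.lan/ceph-node.squashfs
```

That leaves every disk in the box free for OSDs, since the OS runs from RAM.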
 

PigLover

Moderator
Jan 26, 2011
3,186
1,546
113
Also - no reason not to boot off a small partition on the SSD and leave the rest for OSD. With spinny disks this would be a really bad idea - but with the single drive being SSD the "seek thrashing" problem no longer exists.

In fact, this might be the best approach. Build a standard image to boot, configured so that the first thing it does is contact a config server to pull its personality, reboot, and insert itself into the Ceph cluster. Then repair is simple - pull the broken unit and toss in a new one with the standard self-configuring image. All done. If the SSD is still alive then re-image it to the original and put it back into the spares inventory.
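A rough sketch of what that first-boot script could look like, assuming a hypothetical config server at deploy.example.lan handing out per-node settings keyed by MAC address:

```shell
#!/bin/sh
# First-boot personality pull (sketch only; the config server,
# URL layout, and tarball contents are all hypothetical).
set -e
MAC=$(cat /sys/class/net/eth0/address)

# Fetch this node's personality (hostname, network config, Ceph keys) by MAC.
curl -fsS "http://deploy.example.lan/personality/${MAC}.tar.gz" -o /tmp/personality.tar.gz
tar -xzf /tmp/personality.tar.gz -C /

# Apply the assigned hostname and start Ceph services on the next boot.
hostname "$(cat /etc/hostname)"
systemctl enable ceph.target
reboot
```

Swapping a dead node is then just racking a fresh unit with the stock image; it configures itself on first boot.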
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,825
113
I know the feeling! I am trying to get a day or two to work on this tomorrow. Actually, also had been awaiting another Xeon D and a second P3600 1.6TB AIC.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,825
113
Hey - no word on your Ceph cluster. I know we're all busier than heck but you shouldn't tease like that!
I went to start the build today and realized I was missing the cheapest/most important part: the chassis for all the new nodes.

Then I remembered, before one of the recent trips I got a kit from @renderpockets. Very impressed. I got 3 units, but instead of trying one per shelf, I tried out the two-in-one-shelf configuration.

Renderpockets Dual Supermicro X10SDV-TLN4F 3300GB SSD.jpg
Key Parts:
  • 1x Renderpockets Ikea Helmer shelf
  • 2x Supermicro X10SDV-TLN4F w/ 128GB RAM each (will drop to 64GB later)
  • 2x Intel DC P3605 1.6TB AICs
  • 4x Samsung SV843 960GB SSDs
  • 2x Supermicro 64GB SATADOM SSDs (found these while picking up fans at Central Computer so bought them)
  • 3x Fractal Design 60mm fans

Now to source 10x more network ports while I am waiting to get chassis later this week.

The goal will be Ubuntu on the SATADOMs, 2x Samsung drives per node for storage, and 1x 1.6TB NVMe drive per node for the cache tier.
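For anyone following along, the hammer-era commands for that layout look roughly like this (pool names and PG counts below are made-up guesses sized for a small cluster):

```shell
# Base pool on the Samsung SATA SSDs, cache pool on the NVMe AICs
ceph osd pool create sata-pool 256
ceph osd pool create nvme-cache 128

# Stack the NVMe pool in front as a writeback cache tier
ceph osd tier add sata-pool nvme-cache
ceph osd tier cache-mode nvme-cache writeback
ceph osd tier set-overlay sata-pool nvme-cache

# The cache pool needs a hit-set and a size cap so it knows when to flush/evict
ceph osd pool set nvme-cache hit_set_type bloom
ceph osd pool set nvme-cache target_max_bytes 1200000000000   # ~1.2TB of the 1.6TB NVMe
```

Clients then write to sata-pool and Ceph transparently lands hot objects on the NVMe tier.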

I will say, these are not exactly low end setups but I did not want to buy much more and it is what I had on the shelf in the lab.

One thing I just saw is that the new Proxmox VE 4.0 beta has much better Ceph integration, adds LXC, and no longer requires funky fencing. It may be the way to go in the future, or for this project.
 
  • Like
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,645
2,062
113
Very cool. Build.

I'm looking forward to the #s you can obtain, and the latency with the NVMe as cache drives!!

These low-watt, good-performance setups are awesome. I wish they were mATX so we had more PCIe room!!
 

Scott Laird

Active Member
Aug 30, 2014
317
148
43
I'd skip the cache tier entirely; it has a lot of overhead in Ceph and isn't always a performance win with spinning disks behind it. You'd probably get better performance adding the cache SSDs to the main pool and storing your data on them.
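In case anyone wants to go that route, pinning a pool to just the SSD OSDs is a small CRUSH exercise on the pre-Luminous releases of that era (the bucket, host, and rule names below are made up):

```shell
# Group the SSD-backed hosts under their own CRUSH root
ceph osd crush add-bucket ssd-root root
ceph osd crush move node1 root=ssd-root   # repeat per SSD host

# Rule that picks replicas only from hosts under ssd-root
ceph osd crush rule create-simple ssd-rule ssd-root host

# Point the data pool at that rule (get the ruleset id from `ceph osd crush rule dump`)
ceph osd pool set rbd crush_ruleset 1
```

All data then lives on the SSDs directly, with none of the cache-tier promotion/flush overhead.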
 

renderpockets

Member
Dec 6, 2014
38
21
8
41
www.facebook.com
Nice start!
You have to rotate the right motherboard so that the port panel is at the front; that way you can fit the Pico-PSU.
Like this:

I do NOT recommend this type of config: with so little room and such bad ventilation I had to discontinue this setup; it was hitting 105°C+ at full load (65W i7 processors). I might have a solution for the ventilation later on, but no guarantees...

This was my 2-in-1 prototype:


 
  • Like
Reactions: Marsh and Patrick

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,825
113
Nice start!
You have to rotate the right motherboard so that the port panel is at the front; that way you can fit the Pico-PSU.

I do NOT recommend this type of config: with so little room and such bad ventilation I had to discontinue this setup; it was hitting 105°C+ at full load (65W i7 processors). I might have a solution for the ventilation later on, but no guarantees...
Ah... not sure how I missed the mounting holes for that. On the server motherboards, I think there is a better opportunity for this type of setup: everything is aligned properly for airflow through the case, and the components are lower power. The DDR4 RDIMMs are 1.2V and the SoC is only 45W TDP even though it is 8 cores/16 threads.

Here is what this looks like:
Renderpockets Dual Supermicro X10SDV-TLN4F 3700GB SSD Helmer Dual.jpg

Certainly need to work a bit on cable management, but you can see it is much better for airflow overall than the consumer motherboards.
 

Chuntzu

Active Member
Jun 30, 2013
383
98
28
Hey - no word on your Ceph cluster. I know we're all busier than heck but you shouldn't tease like that!

Also - was sleeping on a plane last night and while dreaming had a "vision" of a pile of low-scale NUCs, each with a single 2TB Samsung SSD, serving as a Ceph cluster. Not as cool as your NVMe drives and 10GbE...
I may have a few NUCs laying around... cough... 16ish... oops. I got a good deal on them. I was thinking the same thing: a low-power Docker cluster, or Ceph, or a Storage Spaces Direct cluster. Two SSDs in the double-height one, with USB 3.0 mass-storage JBODs and 3-5 3-6TB HDDs. I just haven't determined the viability of the USB 3.0 chipsets and their max sequential speeds; I would hope for roughly 300-500MB/s via USB 3.0, but from what I have read this isn't the case with USB 3.0 JBOD chipsets. Anyway, keep up the hard work. I can't wait to see this in action!
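A quick back-of-the-envelope check on that worry (all numbers are ballpark assumptions, not measurements):

```shell
# Can one USB 3.0 link feed a small JBOD at full sequential speed?
usb3_practical_mbs=400   # USB 3.0 is 5Gb/s on paper; ~400MB/s is a common real-world ceiling
hdd_seq_mbs=150          # rough sequential rate for one 3-6TB HDD
drives=4
aggregate=$((hdd_seq_mbs * drives))
echo "disks: ${aggregate} MB/s vs USB 3.0: ~${usb3_practical_mbs} MB/s"
```

So even four drives would saturate the link before the disks do, regardless of how good the JBOD chipset is.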
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,825
113
The "real" 1U chassis for these arrive on Friday of next week. Hoping I can get this setup in the datacenter by late Sunday.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
974
113
NYC
That's just so cool lookin' tho. I can't even imagine getting 2 of those helmers and having a mini 24 node cluster to fool around with. Prob have to be next-gen C2000 tho.
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Great concept, but I can get small cases for 50 each with a PSU and just stack them. This would have to be under 100 for me to consider it.
 

renderpockets

Member
Dec 6, 2014
38
21
8
41
www.facebook.com
Those 45W chips are indeed great; I am also considering buying one for my rendering purposes. On the charts it looks equivalent in speed to the i7-4790K (88W), which I use right now for rendering, and it overheats like crazy! More than 16GB of RAM on mini-ITX looks very appealing as well!

But then again, with the space you save with the 2-in-1 drawer, you still need to find room for the Pico-PSU bricks. I still prefer the one-drawer solution with the PSU built in.