Ceph on C6100

JTB

New Member
Apr 30, 2017
Hi,

Looking for a little advice from people who have used Ceph a bit more than I have!

I have just purchased the following equipment:

2 x Dell C6100, 4 blades in each, with each blade consisting of:
2 x L5640
96GB RAM
2 x 10GbE NIC mezzanine cards (waiting to be delivered)

Each chassis takes 12 x 3.5" drives.

I am looking to build a hyperconverged setup using Ceph as the underlying storage platform (possibly running BlueStore), deployed using ceph-ansible.

For the portal / automation / orchestration layers I will be using Ocata OpenStack deployed via openstack-ansible, with KVM as the hypervisor.

I will divide the servers: 2 for management, 2 for an NFV environment, and 4 for tenant (guest) workloads.

Getting to the question, I suppose... I was looking at buying 24 x 1TB drives to provide about 8TB of usable storage for the Ceph cluster, but I am unsure how the performance will be across those 24 drives. I was going for either the 1TB Seagate Barracudas or the 1TB WD Golds.
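For what it's worth, the ~8TB figure works out if you assume Ceph's default 3-way replication (an assumption on my part; erasure coding or a different replica count changes the result):

```python
# Back-of-envelope usable capacity for the proposed cluster.
# Assumes Ceph's default replicated pools (size=3); this ignores
# the headroom you'd leave for rebalancing and near-full ratios.
def usable_tb(drives, drive_tb, replicas=3):
    raw = drives * drive_tb
    return raw / replicas

print(usable_tb(24, 1.0))  # 24 x 1TB over 8 nodes -> 8.0 TB usable
```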

Any assistance would be appreciated before I splash out over £1K on HDDs.
 

cliffr

Member
Apr 2, 2017
2 x C6100 isn't bad, but with no journals I don't think it's going to be crazy quick. It's a fairly low disk count too.

I'd love to see what kinda performance you're actually getting.
 

JTB

New Member
Apr 30, 2017
According to UserBenchmark, the Barracuda supplies around 170MB/s sequential.

Each node will have 3 of these in it. Since I am using a mezzanine card for the NICs, I will still have a PCIe 2.0 x16 slot free to add a journal in the future. I was hoping someone else had a similar setup that could provide performance stats :)

Or maybe a setup with a C6100 running Ceph on spinning rust with no SSDs, so I can gauge the performance across the 8 nodes?
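As a rough sanity check, taking the 170MB/s UserBenchmark figure at face value (real Ceph OSD throughput on spinners, especially for random I/O, will be far lower):

```python
# Rough sequential-throughput ceilings for the proposed layout.
# 170 MB/s per drive is the quoted UserBenchmark number; 3-way
# replication (the Ceph default, assumed here) means each client
# write lands on 3 OSDs, so the write ceiling is ~1/3 of raw.
DRIVE_MBPS = 170
DRIVES_PER_NODE = 3
NODES = 8

node_ceiling = DRIVE_MBPS * DRIVES_PER_NODE   # per-node raw: 510 MB/s
cluster_raw = node_ceiling * NODES            # cluster raw: 4080 MB/s
cluster_write = cluster_raw / 3               # replicated write ceiling
print(node_ceiling, cluster_raw, cluster_write)
```

Note a single 10GbE link (~1250MB/s) would not bottleneck any one node's 3 spinners.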
 

markpower28

Active Member
Apr 9, 2013
No HBA needed? What kind of drives will you use?

 

JTB

New Member
Apr 30, 2017
The C6100 has a 3 x SATA backplane per node, allowing SATA drives at up to 6Gbps.

The drives I listed above are standard commodity 6Gbps SATA drives with a 7200RPM spindle speed.
 

frogtech

Well-Known Member
Jan 4, 2016
The C6100 has a passive backplane on both the LFF and SFF models; you will be limited to 3Gbps via the onboard ICH10R.
 

JTB

New Member
Apr 30, 2017
Yeah, just checked dmesg and it says 3.0Gbps :/

Not good... perhaps these were not a wise investment after all :(
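For anyone else checking: the negotiated link speed shows up in the kernel log as libata lines like `ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)`. A small filter (helper name is mine) to pull them all out of `dmesg` output:

```python
import re

# Extract negotiated SATA link speeds per port from dmesg text.
# The line format matched here is what the Linux libata driver prints.
LINK_RE = re.compile(r"(ata\d+(?:\.\d+)?): SATA link up ([\d.]+) Gbps")

def sata_link_speeds(dmesg_text):
    return {port: float(speed) for port, speed in LINK_RE.findall(dmesg_text)}

sample = "[    1.23] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)"
print(sata_link_speeds(sample))  # {'ata1': 3.0}
```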
 

frogtech

Well-Known Member
Jan 4, 2016
The LFF C6100 chassis doesn't really make sense unless you are network booting the nodes. The SFF is better in every way, since you get double the drive capacity per node.

 

JTB

New Member
Apr 30, 2017
The SFF servers were at least 90% more expensive than the LFF here in the UK.
I can network boot the OS no problem; I was going to run them off an NFS root mount from another server anyway.
 

JTB

New Member
Apr 30, 2017
Could I perhaps get some LSI SAS RAID cards and convert, say, 3 nodes to 2 x 2.5" drives per 3.5" bay, giving 6 drives in each of those 3 nodes?
I think I have a couple of LSI 6Gbps SAS cards somewhere, so I would only need 1 more.

Are the connectors on the backplane SAS or SATA?
 

JTB

New Member
Apr 30, 2017
Just found one of the cards: it is an LSI SAS 9211-8i. I can grab an extra one for around £50.00; I would just need to know how to replace the onboard controller with this card.
 

frogtech

Well-Known Member
Jan 4, 2016
You need specific SAS cables. Well, you don't really, but Dell made SAS cables with custom SGPIO (which you don't need either) that are the perfect length for the node. One of the cables is SAS to 2x SATA and the other is SAS to 4x SATA.

 

JTB

New Member
Apr 30, 2017
Frogtech, I was about to hit the buy button when I had a thought: seeing as the storage is HDD, wouldn't 3Gbps be fine, since HDDs don't push much over 200MB/s? SATA2 also includes some of the nicer features like NCQ, etc.
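The arithmetic backs that up: SATA uses 8b/10b encoding (10 bits on the wire per data byte), so a 3Gbps link carries about 300MB/s of payload, comfortably above what a 7200RPM spinner sustains:

```python
# SATA payload bandwidth after 8b/10b encoding overhead:
# 10 bits on the wire per byte of data.
def sata_payload_mbps(line_rate_gbps):
    return line_rate_gbps * 1000 / 10  # MB/s

print(sata_payload_mbps(3.0))  # 300.0 MB/s -> above a ~200 MB/s HDD
print(sata_payload_mbps(6.0))  # 600.0 MB/s
```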
 

skunky

New Member
Apr 25, 2017
Hi,
I like the C6100s too; I'm using 3 of them for now (12 compute nodes), and I was also thinking about how suitable they are for a Ceph deployment.
For Ceph, per node, I would get 6 CPU cores, 48GB RAM, 6 x 2.5" HDD trays, and 1 x LSI SAS 9260-8i 6Gbps RAID card. I was thinking about adding a dual-port 10GbE card and using one tray per node for an SSD journal, with the rest of the trays filled with spinning/SSD drives for OSDs.
Do you think that would be a good configuration? Is the SAS 9260-8i suitable for Ceph?
 

frogtech

Well-Known Member
Jan 4, 2016
JTB said: "Frogtech, I was about to hit the buy button when I had a thought: seeing as the storage is HDD, wouldn't 3Gbps be fine, since HDDs don't push much over 200MB/s?"

Probably, but at that point I'd rather have multiple 1U or 2U chassis. The C6100 is great; I just can't see the LFF chassis being optimal.
 

frogtech

Well-Known Member
Jan 4, 2016
skunky said: "For Ceph, per node, I would get 6 CPU cores, 48GB RAM, 6 x 2.5" HDD trays, and 1 x LSI SAS 9260-8i 6Gbps RAID card. ... Do you think that would be a good configuration? Is the SAS 9260-8i suitable for Ceph?"
Yes, I think the C6100 is good for hyperconverged setups, but it depends on your needs/equipment. Let me explain.

If you have all-flash storage devices, then I would use the LSI 2008 mezzanine card and get a network adapter of your choice for the PCIe slot. Or you could get a vertical-port card for the PCIe slot and use the 10GbE mezzanine card, which is based on the Intel 82599.

If you have a mix of flash cache devices with spinners for the rest of your storage, then the 9260-8i is definitely a good choice. However, there is a caveat: if Ceph can natively use flash devices as a read/write cache, and not just as a read cache, then I would just use an HBA as explained above. If Ceph can only use flash devices as a read cache, I would get CacheCade 2.0 keys and use that feature of the LSI RAID cards; the hardware caching will be better, since you get a read/write cache and not just read.
 

skunky

New Member
Apr 25, 2017
Thank you.
Yes, it seems that an HBA in IT mode is recommended for Ceph, so an LSI 2008 flashed to IT mode would be a good choice then?