Dell C6100 used for OpenStack cluster.

Poll: Drive configuration for a C6100 OpenStack cluster using Ceph.

  • 8 x 2TB HDD
  • 8 x SSD
  • 8 x HDD + 4 x SSD


RandyC

Member
Mar 1, 2014
Portland, OR
I have a C6100 that I want to set up with OpenStack.

The only decision holding me back right now is the HDD/SSD choice. Should I get some of the inexpensive 2TB HGST drives that have been dumped on eBay recently, or should I get 8 Intel DC S3500 drives? (Or should I get a mix of both?)

Does anyone have any resources or suggestions for a specific storage setup for Ceph with a limit of 3 drives per node? I was thinking of getting 2x 2TB HGST drives per node to start, then adding a single SSD later. (Although an all-SSD OpenStack Ceph cluster has a certain appeal to it.)

I don't need a ton of storage, as this is just for teaching myself how to set up OpenStack.

This seems like a good deal for an all-SSD cluster (with 8 or 12 drives):

Intel DC S3500 Series 160GB SSD SATA 2.5" Hard Drive SSDSC2BB160G4 | eBay
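For a rough sense of what each poll option actually yields, here's a quick back-of-envelope sketch (assuming Ceph's default 3x replication and decimal drive sizes; the drive sizes and the idea of using the 4 SSDs in the mixed option purely as journals are my assumptions based on this thread, and a real cluster needs free-space headroom on top):

```python
# Rough usable-capacity sketch for the poll options, assuming 3x replication.
# Drive counts and sizes are just the ones discussed in this thread.

REPLICATION = 3  # Ceph's default pool size (3 copies of every object)

options = {
    "8 x 2TB HDD":                     8 * 2000,  # GB, decimal
    "8 x 160GB SSD (DC S3500)":        8 * 160,
    "8 x 2TB HDD + 4 x SSD journals":  8 * 2000,  # journal SSDs add no capacity
}

for name, raw_gb in options.items():
    usable_gb = raw_gb / REPLICATION
    print(f"{name:32s} raw {raw_gb/1000:5.1f} TB -> ~{usable_gb/1000:4.1f} TB usable")
```

Even the HDD option only nets roughly 5 TB usable, which is plenty for a learning cluster, so capacity alone probably shouldn't drive the decision.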
 

PigLover

Moderator
Jan 26, 2011
I'd go with the 12x SSDs. I actually voted "other" because you want a somewhat different type of SSD for the OSDs than you do for boot/OS.

My recommendation would be to go with 4x of the 240GB S3500s you referenced for boot/OS, and 8x used Samsung 960GB drives for the Ceph OSDs (see here: Samsung PM853T MZ7GE960HMHP-000AZ 960GB SATA 6GB/s 2.5" Enterprise SSD | eBay).

With Ceph you really want an SSD-based journal for any spinning disk, so your real config would be 8x 2TB spinners + a small SSD per node, which leaves you no boot drive in your configuration. So you start making ugly compromises right out of the gate (things like partitioning the SSD for boot + journal, which is suboptimal and a PITA if you have faults or need to do re-installs). You want to avoid this road.

Besides, Ceph is tailored to show good performance for very large scale-out designs. You'll find that with small numbers of OSDs (disks) the performance is really disappointing. And for Ceph deployments, "small numbers of OSDs" means fewer than 80-100, so you'd need roughly 10x your scale before the performance curves flatten out nicely. The impact of having too few OSDs is magnified by the write latency of spinning disks.
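To put very rough numbers on that (an illustrative back-of-envelope, not a benchmark; the per-disk throughput figures are just typical assumptions): with filestore, every client write is replicated 3x and, if the journal lives on the same disk as the data, written twice per OSD, so a small handful of spinners bottlenecks almost immediately:

```python
# Back-of-envelope: why a small, HDD-only Ceph cluster feels slow on writes.
# Per-disk throughput numbers are illustrative assumptions, not measurements.

REPLICATION = 3  # copies written for every client write

def cluster_write_mbps(num_osds, per_disk_mbps, journal_on_same_disk=True):
    """Rough aggregate client-visible write bandwidth for the whole cluster."""
    # Filestore writes each object to the journal and then to the data
    # partition, so a co-located journal roughly halves usable bandwidth.
    penalty = 2 if journal_on_same_disk else 1
    return num_osds * per_disk_mbps / (REPLICATION * penalty)

print(f" 8 HDDs, journal on the HDD : ~{cluster_write_mbps(8, 150):.0f} MB/s for clients")
print(f" 8 HDDs, SSD journal        : ~{cluster_write_mbps(8, 150, False):.0f} MB/s for clients")
print(f"96 HDDs, SSD journal        : ~{cluster_write_mbps(96, 150, False):.0f} MB/s for clients")
```

And that's only bandwidth; the bigger pain at this scale is latency, since every write waits on HDD seeks at the journal and on all the replicas.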

With all-flash Ceph you can mostly avoid these issues. Having the journal and data on the same drive doesn't create much of a performance hit (even less once you get to the Kraken release and can use BlueStore). And the low read/write latency of SSDs hides the fact that replication activity is inefficient over a small pool of OSDs.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
RandyC said:
So, I found a decent deal on Intel 300GB DC S3500s for $75 each.

I am looking at the 480GB Samsung SM843T as opposed to the PM853T (the 480GB PM853T is about $150).

$800 is about my budget for the main disks. Any reason I shouldn't get the 480GB SM843T?

480Gb Samsung SM843T Data Center Series SATA 6.0 7mm 2.5" Solid State Drive SSD | eBay
  • Read Latency (99.9% QoS): 170µs
  • Write Latency (99.9% QoS): <3ms
That's why I avoid the older Samsungs.

With the latest generation (the SM863, I believe) they seem to have fixed the latency issues, and luckily for us the price is not bad at all!
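For what it's worth, that 99.9% QoS figure bites harder in Ceph than it looks on paper (a quick illustrative sketch, not vendor data): a client write isn't acknowledged until all of its replicas have committed, so with 3x replication the chance of hitting at least one "slow" drive roughly triples:

```python
# Illustrative only: replication amplifies a drive's tail latency.
# A Ceph write is acknowledged only when the slowest of its replicas commits.

P_SLOW = 0.001   # 99.9% QoS means ~0.1% of writes on a single drive are "slow"
REPLICAS = 3     # default pool size

p_any_replica_slow = 1 - (1 - P_SLOW) ** REPLICAS
print(f"single drive: {P_SLOW:.2%} of writes hit the latency tail")
print(f"3x replicated write: {p_any_replica_slow:.2%} hit at least one slow replica")
```

So a drive with a <3ms write tail drags down far more cluster writes than 0.1% of them.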
 

DaSaint

Active Member
Oct 3, 2015
Colorado
One thing to note about the 3.5" bay system: you will need 3.5"-to-2.5" drive adapters. At least, I had to buy them when I had my 3.5" system...

I have a bunch of 845DCs (SM843T) and they work fine as capacity drives; I use them in an all-flash VSAN.

@RandyC - I swear I had the mezz slot in use in both my 3.5" and 2.5" bay setups with Mellanox controllers. It's kind of strange that you would have this issue, unless it's not truly a C6100 but one of those weird C6005s that call themselves C6100s. The blades are pretty much the same between the 2.5" and 3.5" versions; it's just the backplane for the HDDs that differs.
 

RandyC

Member
Mar 1, 2014
Portland, OR
I knew I would need 2.5" to 3.5" adapters. I will see if the ones that came with the Intel S3500s I got will work.

There is no removable grill in the back where the mezz cards go. I would have to Dremel a hole for them.

I have a ConnectX-2 VPI card in the PCIe slot, as it was significantly cheaper than getting the InfiniBand mezz cards. I wanted to put quad-port gigabit in the mezz slot; I already have the cards, I just need to break out the Dremel.