Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


Mikeynl

New Member
Nov 16, 2013
15
0
1
There is another option that I very highly recommend: Dump the 15K SAS drives and use one or more large SATA SSDs instead. Even a single SSD will out-perform a pile of SAS drives in a virtualization scenario. For my c6100 VM cluster, I use 512GB SSD drives to house my VMs, and I run up to 12 big fat Windows VMs per c6100 node without any noticeable slowness. I could very likely run many more. As you probably know, VM IO is usually limited by disk IOPS, and even really fast SAS drives will give you only ~200 IOPS, while you get tens of thousands of IOPS from even the worst SSD drives.

Here is another idea, since the c6100 has only three or six disk slots per node: I have stopped using any type of RAID for VM storage. Instead, I use simple SSD drives and rely on very frequent VM replication plus daily backups as my DR strategy. This decreases storage costs with only a small increase in possible downtime and a very small data loss window, which is perfectly acceptable for most (but not all) VMs. My c6100 "corporation in a box" setup has three c6100 nodes acting as VM hosts plus one as a VM replica destination.
Quite interesting. What about TRIM support? As far as I understand, VMware doesn't support TRIM, and the days of relying on GC alone are over. Since more and more OSes support TRIM, GC is less aggressive than it used to be. We tested SSDs in a VM environment, and under heavy load, with SSDs that were filling up, major gaps in performance kicked in. So badly that a SAS array would run circles around the SSD without breaking a sweat... :)

I must say, the last time I tested and spent time on this was a while ago. We tested with an Intel Modular Server and Promise VTrak storage.
 

root

New Member
Nov 19, 2013
23
0
1
There is another option that I very highly recommend: Dump the 15K SAS drives and use one or more large SATA SSDs instead. Even a single SSD will out-perform a pile of SAS drives in a virtualization scenario. For my c6100 VM cluster, I use 512GB SSD drives to house my VMs, and I run up to 12 big fat Windows VMs per c6100 node without any noticeable slowness. I could very likely run many more. As you probably know, VM IO is usually limited by disk IOPS, and even really fast SAS drives will give you only ~200 IOPS, while you get tens of thousands of IOPS from even the worst SSD drives.

Here is another idea, since the c6100 has only three or six disk slots per node: I have stopped using any type of RAID for VM storage. Instead, I use simple SSD drives and rely on very frequent VM replication plus daily backups as my DR strategy. This decreases storage costs with only a small increase in possible downtime and a very small data loss window, which is perfectly acceptable for most (but not all) VMs. My c6100 "corporation in a box" setup has three c6100 nodes acting as VM hosts plus one as a VM replica destination.
That was my plan "B" - to build this array using SSDs. That is the approach I have been using since last year for my own home lab, but this build is for a friend of mine who is going to use it for business. Actually, he is upgrading existing virtualized servers that are running on roughly 10-year-old hardware.

Unfortunately, a single SSD is not a solution for us, and he refuses to even think about a NAS, SAN, etc.

There are specific requirements for the virtual machines' disk space. They currently use 1.3TB and may grow to 1.6TB, and the old server is bottlenecking because the existing 7200 rpm disk array cannot handle the load, which is why 15K rpm disks came into the picture. 6x 600GB Seagate Cheetah ($260/ea) will give us 1.8TB of disk space for $1560 (RAID10). I was thinking of getting 3x 1TB Samsung 840 EVO ($560/ea), which would give me 2TB of RAID5 storage for $1680. The Seagate is an enterprise HDD with a 5-year warranty. The Samsung is a consumer MLC SSD that comes with a 3-year warranty and, according to some information available online, may not last 5 years if one writes more than 15-20GB of data daily. This makes the SSD setup a bit risky, since Samsung may decline a warranty claim if an excessive amount of data has been written. There is no such problem with mechanical drives. I am not saying that SSD is not a good solution (and it is definitely much faster than any HDD), but we need to precisely analyze the current array's usage pattern and see how much data is actually written daily.

The reason I wanted an SSD RAID is that I cannot get the required space on a single SSD, and because RAID5 will survive one disk failure.

Also, a single SSD with daily backups is not an option, since all the information added during the day would be lost. It needs to be some kind of replication/mirroring so that nothing is lost if the SSD suddenly dies. This would not be a problem if there were HA, but that's not an option at the moment.
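
For what it's worth, here is a quick back-of-the-envelope sketch of the two options (Python; the prices and drive counts are the ones quoted above, and the usable-capacity formulas are just the standard RAID10/RAID5 ones):

```python
# Back-of-the-envelope comparison of the two storage options discussed above.
# Prices and capacities are the ones quoted in this post; adjust as needed.

def raid10_usable_tb(drives, size_tb):
    # RAID10 mirrors pairs of drives, so usable space is half the raw total.
    return drives * size_tb / 2

def raid5_usable_tb(drives, size_tb):
    # RAID5 loses one drive's worth of capacity to parity.
    return (drives - 1) * size_tb

options = {
    "6x 600GB 15K SAS (RAID10)": (raid10_usable_tb(6, 0.6), 6 * 260),
    "3x 1TB SSD (RAID5)":        (raid5_usable_tb(3, 1.0), 3 * 560),
}

for name, (usable_tb, cost) in options.items():
    print(f"{name}: {usable_tb:.1f}TB usable for ${cost} (${cost / usable_tb:.0f}/TB)")

# 6x 600GB 15K SAS (RAID10): 1.8TB usable for $1560 ($867/TB)
# 3x 1TB SSD (RAID5): 2.0TB usable for $1680 ($840/TB)
```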
 
Last edited:

Mikeynl

New Member
Nov 16, 2013
15
0
1
Don't worry about the endurance: 20GB a day is 7.3TB a year, and on XtremeSystems they benchmark SSDs 24x7 until they stop responding, with many models going past 150TB and more. So the 5 years would not be the problem. And when the MWI (media wearout indicator) hits 0, that doesn't mean the drive is dead; most vendors just call it end of warranty. Consumer SSDs don't have a guaranteed endurance written in TB; they just state an average of so many GB per day over X years. Enterprise drives have a minimum guaranteed endurance written in TB. Just check the warranty terms: not every vendor covers the warranty if the SSD is used in an array.
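
To put rough numbers on that (a minimal sketch; 20GB/day is the estimate from the earlier post and 150TB is a conservative figure from those endurance tests):

```python
# Rough SSD endurance math for the write rate discussed in this thread.
# Assumes a steady write rate and ignores write amplification, so it is
# only a ballpark figure.

gb_per_day = 20       # estimated daily writes from the earlier post
endurance_tb = 150    # conservative figure from the 24x7 endurance tests

tb_per_year = gb_per_day * 365 / 1000
years_to_wear_out = endurance_tb / tb_per_year

print(f"{gb_per_day}GB/day is about {tb_per_year:.1f}TB/year")
print(f"{endurance_tb}TB of endurance lasts roughly {years_to_wear_out:.0f} years at that rate")

# 20GB/day is about 7.3TB/year
# 150TB of endurance lasts roughly 21 years at that rate
```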
 

root

New Member
Nov 19, 2013
23
0
1
How to connect the nodes?

I've spent 3 days reading all 984 posts in this thread :), many thanks to all participants for helping fellow less-experienced users :)! It helped me shape my vision of what I can do with these C6100s.

Now it is time to start working on the new project. I'm actually going to ask my questions in a separate thread; this one already spans 66 pages.
 
Last edited:

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Quite interesting. What about TRIM support? As far as I understand, VMware doesn't support TRIM, and the days of relying on GC alone are over. Since more and more OSes support TRIM, GC is less aggressive than it used to be. We tested SSDs in a VM environment, and under heavy load, with SSDs that were filling up, major gaps in performance kicked in. So badly that a SAS array would run circles around the SSD without breaking a sweat... :)

I must say, the last time I tested and spent time on this was a while ago. We tested with an Intel Modular Server and Promise VTrak storage.
If you are worried about SSD lifespan and performance under relatively heavy write loads (most people will never need to worry about that in a VM environment, but it sounds like you may have had trouble previously), then I still recommend SSD drives, but with two tweaks to the plan:

1) Leave more overprovisioning (OP) space. I format 128GB drives to 100GB and 512GB drives to 400GB for example. Google a bit and you'll see that leaving extra OP space dramatically improves write speed under load and lifespan.
2) Use a slightly more enterprise-focused drive like the Intel S3500, or even the S3700 if you have some extremely write-happy VMs. A basic S3500 800GB (which is a 1TB drive formatted to 800GB) is rated for 450TB of writes, for example. The S3700 is rated for probably 10x that much.
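
As a rough illustration of what 450TB of rated writes buys you (the 100GB/day figure below is a hypothetical, fairly heavy VM-host write load, not a measured number):

```python
# How long the S3500's rated endurance would last at an assumed write rate.
# 450TB is the rating quoted above; 100GB/day is a hypothetical, fairly
# heavy VM-host write load, not a measurement.

rated_tbw = 450           # rated lifetime writes, in TB
gb_written_per_day = 100  # assumed daily write load

days = rated_tbw * 1000 / gb_written_per_day
print(f"{rated_tbw}TB at {gb_written_per_day}GB/day lasts {days:.0f} days (~{days / 365:.1f} years)")

# 450TB at 100GB/day lasts 4500 days (~12.3 years)
```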
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I've spent 3 days reading all 984 posts in this thread :), many thanks to all participants for helping fellow less-experienced users :)! It helped me shape my vision of what I can do with these C6100s.

Now it is time to start working on the new project. I'm actually going to ask my questions in a separate thread; this one already spans 66 pages.
Good point. It did get a bit out of hand :) Luckily there is a search feature.
 

Mikeynl

New Member
Nov 16, 2013
15
0
1
If you are worried about SSD lifespan and performance under relatively heavy write loads (most people will never need to worry about that in a VM environment, but it sounds like you may have had trouble previously), then I still recommend SSD drives, but with two tweaks to the plan:

1) Leave more overprovisioning (OP) space. I format 128GB drives to 100GB and 512GB drives to 400GB for example. Google a bit and you'll see that leaving extra OP space dramatically improves write speed under load and lifespan.
2) Use a slightly more enterprise-focused drive like the Intel S3500, or even the S3700 if you have some extremely write-happy VMs. A basic S3500 800GB (which is a 1TB drive formatted to 800GB) is rated for 450TB of writes, for example. The S3700 is rated for probably 10x that much.
Lifespan is not something I worry about; I even wrote a nice article on webhostingtalk.nl to convince people not to worry about it. I monitored a SAS array with plenty of busy VMs and big websites for a year. If you look at the total writes across the array, and at the individual writes per drive, there is absolutely nothing to worry about. By the time the endurance is used up, your hardware is already a classic old-timer.

I know about sizing partitions smaller, but it's not really a suitable solution. TRIM support is not only about lifespan; it also marks unused cells as free so the SSD can map them directly instead of moving data around. I agree that smaller sizing will work, but it's wasting space to make up for not having a proper solution. It's a nice workaround, nothing more.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Lifespan is not something I worry about; I even wrote a nice article on webhostingtalk.nl to convince people not to worry about it. I monitored a SAS array with plenty of busy VMs and big websites for a year. If you look at the total writes across the array, and at the individual writes per drive, there is absolutely nothing to worry about. By the time the endurance is used up, your hardware is already a classic old-timer.

I know about sizing partitions smaller, but it's not really a suitable solution. TRIM support is not only about lifespan; it also marks unused cells as free so the SSD can map them directly instead of moving data around. I agree that smaller sizing will work, but it's wasting space to make up for not having a proper solution. It's a nice workaround, nothing more.
You should share the URL to your webhostingtalk.nl article; it would be valuable to read.

I do have to disagree with you a bit about the value of over provisioning (OP), with or without TRIM. I consider it more than a workaround. In fact, the net effect of TRIM is to increase the amount of effective overprovisioning, with the space gained sometimes called "dynamic" OP. And if you don't have the benefit of TRIM, the best way to make up for that fact is to leave extra OP manually. Consider this:

"Enterprise" drives come in sizes like 100GB, 200GB, and 800GB while "Consumer" drives are usually sold as 128GB, 256GB and 1TB. A 200GB Enterprise drive contains the same amount of NAND memory as a 256GB Consumer drive, but the Enterprise version devotes more of that NAND to Overprovisioning space. In return, the Enterprise versions have better performance under load and more consistent latency. The improvement comes from both OP and from firmware, controller, and NAND differences (with more aggressive garbage collection being quite important), but it turns out that OP is the major contributor under most usage scenarios. I remember an article that illustrated this quite well, comparing various Consumer and Enterprise drives with different levels of OP, and found that good Consumer drives started behaving very much like Enterprise drives as OP increased. Not surprisingly, formatting Consumer drives to the Enterprise standard 100GB/200GB/400GB/etc. provided the most benefit per dollar.

Here is the article. Click the buttons on the interactive charts to see that the humble $200 Samsung 840 Pro provisioned to 192GB has very similar latency under long-term load to the $500 200GB Intel S3700. The S3700 is still better, but the differences have become quite small.
http://www.anandtech.com/show/6489/

More detail about why this is so:
http://www.lsi.com/downloads/Public/Flash Storage Processors/LSI_WP_Over_provisioning.pdf
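
If you want to put approximate numbers on those OP levels, here is a small sketch. It assumes a "256GB-class" drive carries 256GiB of raw NAND and reports spare area as a fraction of raw NAND; vendors count this slightly differently, so treat the percentages as ballpark:

```python
# Approximate spare-area (overprovisioning) fractions for the drive sizes
# discussed above. Assumes a "256GB-class" drive carries 256GiB of raw NAND;
# vendor accounting differs slightly, so these are ballpark numbers.

GIB = 2**30
GB = 10**9

def spare_fraction(raw_gib, user_gb):
    # Spare area expressed as a fraction of the raw NAND on the drive.
    raw_bytes = raw_gib * GIB
    user_bytes = user_gb * GB
    return (raw_bytes - user_bytes) / raw_bytes

configs = [
    ("Consumer 256GB at stock capacity", 256, 256),
    ("Consumer 256GB formatted to 192GB", 256, 192),
    ("Enterprise 200GB (same 256GiB of raw NAND)", 256, 200),
]

for name, raw_gib, user_gb in configs:
    print(f"{name}: ~{spare_fraction(raw_gib, user_gb) * 100:.0f}% spare area")

# Consumer 256GB at stock capacity: ~7% spare area
# Consumer 256GB formatted to 192GB: ~30% spare area
# Enterprise 200GB (same 256GiB of raw NAND): ~27% spare area
```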
 
Last edited:

rnavarro

Active Member
Feb 14, 2013
197
40
28
Late to the game....but I'm wanting to get my hands on a C6100.

I've noticed that prices have gone up a bit since this all started (likely due to the demand) so I'm trying to score a reasonably priced system with the "new" market rates.

I've got my eye on this:

Dell PowerEdge C6100 XS23-TY3 4x Nodes 8x 2.26GHz QC L5520 96GB 1x 250GB SATA | eBay

With some of these:
Dell C6100 2.5" Hard Drive Trays Lot of 10 7JC8P No Screws | eBay

(Those caddies are actually from a local here at STH... JBushOptio)

All in all it looks like it'll be $755 shipped to my door....but I'm looking to see if I can do better. :p
 

PersonalJ

Member
May 17, 2013
127
11
18
So, I ordered my XS23-TY3 today with 4 nodes configured with 2x L5520 CPUs, 24GB memory and 1x 250GB HDD, directly from eSISO for $650.00!

I can't wait for it to ship and arrive, as this will be used to maintain my IT certifications (VCP, MCSE, etc.).
So far I would have to say that I would refer anyone interested in this equipment to send a direct email to them instead of going through eBay, which adds the cost of listing and the percentage eBay takes on the deal (win/win).

I'll post a picture once I receive the server(s) and let you all know how the deal turns out.
I called eSISO today and spoke with a guy who knew about 6 different words of English. He didn't seem to want to let me purchase outside of eBay, or maybe he just didn't understand me.
 

rnavarro

Active Member
Feb 14, 2013
197
40
28
I called eSISO today and spoke with a guy who knew about 6 different words of English. He didn't seem to want to let me purchase outside of eBay, or maybe he just didn't understand me.
I just sent them an email with a link to WScott66's post... let's see if they give me a deal :)

I submitted an offer on eBay for $650, and they countered with $700...so we'll see!
 

jschloer

New Member
Aug 12, 2013
2
0
1
I just sent them an email with a link to WScott66's post... let's see if they give me a deal :)

I submitted an offer on eBay for $650, and they countered with $700...so we'll see!
I did the same thing. I used the contact form on their site and pointed them at Wscott's post. They agreed to match his deal, and I have it on the way now. Should be here by Monday. Super excited.
 

rnavarro

Active Member
Feb 14, 2013
197
40
28
I did the same thing. I used the contact form on their site and pointed them at Wscott's post. They agreed to match his deal, and I have it on the way now. Should be here by Monday. Super excited.
Aww man, that's the same thing I did but I haven't heard anything back! Can you PM me a contact email address that I can reach directly?
 

PersonalJ

Member
May 17, 2013
127
11
18
Aww man, that's the same thing I did but I haven't heard anything back! Can you PM me a contact email address that I can reach directly?
I spoke to the sales manager on the phone today; the guy who does the server sales left early for Thanksgiving. I was told they could do $650.
 

rnavarro

Active Member
Feb 14, 2013
197
40
28
I spoke to the sales manager on the phone today; the guy who does the server sales left early for Thanksgiving. I was told they could do $650.
I'm talking to them right now; I quite literally just received the invoice for $650... YAY! I'm waiting to see about a price match from a local vendor so I can pick up locally; if not, I'm going with eSISO.
 

rnavarro

Active Member
Feb 14, 2013
197
40
28
Are there any RAID5 options for these servers (outside of going the software route)?

I'm looking at the different mezz cards, and they seem to be RAID 0/1/10.

I'm going to put a quad GigE NIC in the PCIe slot... so that limits my options a lot.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
If you want hardware RAID5 in the c6100, you need to use a PCIe card. That leaves the mezzanine slot free for a dual-port 10GbE card if you want networking.

Are there any RAID5 options for these servers (outside of going the software route)?

I'm looking at the different mezz cards, and they seem to be RAID 0/1/10.

I'm going to put a quad GigE NIC in the PCIe slot... so that limits my options a lot.
 

rnavarro

Active Member
Feb 14, 2013
197
40
28
If you want hardware RAID5 in the c6100, you need to use a PCIe card. That leaves the mezzanine slot free for a dual-port 10GbE card if you want networking.
Oh well, I guess I'll just use software RAID5.

I wanted to go with ESXi on all the nodes but I can't afford to drop a 10GbE card in all the servers, let alone a switch to connect it all :)

I've always wanted to kick the tires on KVM anyway... so we'll go with that!

Thanks DBA!