Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Patrick:

Mine Dell 3.5" to 2.5" Caddy - 9W8C4 just come but it does not work well with C6100 since I can only put the screw on one side. We may want to remove that part number from the list. It does work with R710/R720 etc.
Have a picture of this? I think we have had someone make it work successfully.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Last edited:

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Hard drive mounting points are always in the same spot. Very strange.
 

xnoodle

Active Member
Jan 4, 2011
258
48
28
I don't have any drives handy to look at -- do 9.5mm and 12mm drives have different (height) mounting points?
 

Jaknell1011

New Member
May 14, 2013
18
0
0
Looking at purchasing one of these...

Hey all, new to the forums. I just finished reading all 26 pages in this thread and I feel confident that this setup might be just what I am looking for. I am working for a private school with a tiny budget, but am looking into getting one of these to switch us over to an ESXi solution. I am looking at a few c6100s on eBay and think I have an idea of the configuration we are looking for. I was just hoping to run a few things by you before I pull the trigger, regarding how these nodes will be set up. The way I see it, I have a few options. The only thing I know for sure is that I will need 1TB 3.5" drives for storage, and 8 quad-core CPUs.

Option 1: Use 1 node for storage, with 9 drives attached to that single node. (I see it is possible but not necessarily clean or easy.) Then use 1 drive per node for the other 3 nodes, installing ESXi 5.1u1 on those 3 nodes. This will yield 4TB in RAID 10 plus 1 hot spare (correct me if I'm wrong), and 3 ESX hosts.
-Questions- Can I install FreeNAS on the storage node, and point ESX to it for storage? Or is there a better option?

Option 2: Use 2 nodes for storage, with 5 drives per node. (this can be done without any tricky wiring from what I've read) Then use 2 nodes as ESX hosts. This will yield 2x2TB=4TB of storage plus a hot spare per node in RAID 10.
-Questions- The same regarding FreeNAS, or is there a better way to use ALL of the storage for ESX?

One way I can save some money is by only getting 1TB drives for the ones that will be used for storage, so for option 1 it would be nine 1TB drives and 3 small drives for the ESXi installation.
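
For reference, here is a quick Python sketch of the capacity arithmetic behind the two options, assuming 1TB drives and that RAID 10 yields half the raw capacity of the mirrored set, with hot spares sitting outside the array (illustrative only):

def raid10_usable_tb(total_drives, hot_spares=0, drive_tb=1):
    """Usable TB for a RAID 10 set; hot spares sit outside the array."""
    in_array = total_drives - hot_spares
    if in_array < 2 or in_array % 2:
        raise ValueError("RAID 10 needs an even number of drives (>= 2)")
    return (in_array // 2) * drive_tb

print(raid10_usable_tb(9, hot_spares=1))      # Option 1: 8 in RAID 10 + 1 spare -> 4 TB usable
print(raid10_usable_tb(5, hot_spares=1) * 2)  # Option 2: 4 + spare per node, x2 nodes -> 4 TB usable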

We are currently only running 3 servers, but we have had the need arise for a few more, and the ones we have are starting to show their age, and I would like to balance the workload by creating multiple server VMs so that we don't rely on each single server as heavily. Right now if one of our major servers goes down, EVERYTHING goes down (file shares, printing, DHCP, DNS, etc.)

Any advice would be great, our budget is only about $1,000, but I might be able to push it to $1,500. Thanks in advance, and sorry for the novel.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
VM shared storage is a fantastic solution and a lot of fun to build, and there is nothing at all wrong with using a c6100 node as a storage node. For a very low-budget, very low-complexity solution, there is another option: replication. It's simpler to build, cheaper for the same level of performance, and eliminates one possible point of failure.

The problem with shared storage for a low-budget VM build is that it requires some serious performance from the storage node and your network.
On the network side, a Gigabit connection between a VM and storage is already a bottleneck, and when you run several VMs over that Gigabit pipe the bottleneck gets bad quickly, making things feel very slow. By the time you have more than a handful of VMs, you either have virtual servers that feel like virtual Pentium 4 computers, or you have upgraded to 10GbE or Infiniband networking.
On the storage side, trying to deliver enough throughput and IOPS for three c6100 nodes' worth of VMs will drive you to more than one disk spindle per VM and a cached RAID card. You'll end up with plenty of disk space but not enough disk slots to achieve the IOPS you need.
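
As a rough illustration of the network side of that argument (the figures are assumptions: ~118 MB/s of usable payload on a single 1GbE link, and ~500 MB/s from one local SATA SSD for comparison):

GIGE_USABLE_MBPS = 118   # rough real-world ceiling for a single 1GbE link
SATA_SSD_MBPS = 500      # ballpark for one local SATA SSD, for comparison

for vm_count in (2, 5, 10, 20):
    per_vm = GIGE_USABLE_MBPS / vm_count
    print(f"{vm_count:2d} VMs sharing 1GbE -> ~{per_vm:5.1f} MB/s each "
          f"(vs ~{SATA_SSD_MBPS} MB/s from a local SATA SSD)")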

Here is an alternative for the c6100: simple direct-attached SSD non-RAID storage, with backup and replication as the disaster recovery strategy. You jettison the complexity of a cluster setup, save money on RAID, and sidestep the storage-server bottlenecks that are impossible to avoid on a tight budget. You'll probably end up with better performance than if you built a low-end shared storage solution.

Start with a single c6100 node. Configure it with dual L5520 CPUs and enough RAM to run your VMs. Add a single large SSD drive for VM storage and configure your backup software to take a daily snapshot-based backup of the entire server and its VMs. With the SSD, you'll have plenty of throughput and IOPS to run four to eight average VMs with good performance. If you need more space or have more VMs, add a second SSD drive and store half of the VMs on that drive.
Now spin up a second c6100 node as your replication destination. Use Hyper-V 2012 replication, VMware replication, or Veeam to replicate your VMs to the second node for disaster recovery. The second node just needs large, slow SATA storage drives, not SSDs.
If your VM count grows, you can start to configure the remaining two c6100 nodes and have them replicate to the replication node as well. Fully configured, your c6100 could easily handle 30 VMs or more - mine does.

With two SSD drives in a single c6100 node, you'll have around 1GB/s of throughput and more than 80,000 IOPS, enough for at least ten VMs. With three c6100 nodes running, that's an aggregate 3GB/s and 240,000 IOPS across, say, 30 VMs. It's possible to get that level of performance from a c6100 storage node, but it would not be cheap, it would add complexity, and that storage node would still be a single point of failure.
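
For anyone who wants to plug in their own numbers, here is a minimal sketch of that aggregate math, assuming roughly 500 MB/s and 40,000 random IOPS per SATA SSD of that era and two SSDs per node:

PER_SSD_MBPS, PER_SSD_IOPS = 500, 40_000   # assumed per-SSD figures
ssds_per_node, nodes, vms = 2, 3, 30

throughput = PER_SSD_MBPS * ssds_per_node * nodes   # ~3,000 MB/s aggregate
iops = PER_SSD_IOPS * ssds_per_node * nodes         # ~240,000 IOPS aggregate
print(f"~{throughput:,} MB/s and ~{iops:,} IOPS across {nodes} nodes")
print(f"-> ~{throughput / vms:.0f} MB/s and ~{iops / vms:,.0f} IOPS per VM for {vms} VMs")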

I use Hyper-V 2012 replication for my c6100 "Corporation in a Box" machine. The VMs running on three of the c6100 nodes replicate to the fourth node every five minutes, with an application-aware snapshot every four hours. I retain about five hours of five-minute replicas and 24 hours' worth of the four-hour snapshots, plus a separate daily backup. This lets me recover from a failure or human error by selecting a daily backup as far back as 60 days ago, or selecting any of the four-hour snapshots from today or any of the five-minute replicas from the prior four hours. At one time I replicated over Infiniband, but it was overkill, so I removed the cards and now just use LACP to combine the two Gigabit connections. Right now it's a plain old c6100 with two 512GB SSDs and a 2TB backup drive in each of the three VM nodes, and 2TB drives in the replica node. There is no RAID, and I do not back up the replica node at all since I retain backups on each of the three VM nodes. It's a very simple architecture and it works very well.
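
To put that retention schedule in concrete terms, here is a tiny sketch of how many restore points it works out to (illustrative only - the exact counts depend on how the replication and backup software age things out):

five_min_replicas = 5 * 60 // 5   # roughly five hours of five-minute replicas
four_hour_snaps = 24 // 4         # application-aware snapshots kept for a day
daily_backups = 60                # separate daily backups retained

print(f"{five_min_replicas} five-minute replicas (worst-case data loss ~5 minutes)")
print(f"{four_hour_snaps} four-hour snapshots covering the last 24 hours")
print(f"{daily_backups} daily backups going back about two months")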

 
Last edited:

Jaknell1011

New Member
May 14, 2013
18
0
0
Our 2 largest servers are using a total of 1.5 TB of storage. This is actual usage, no free space included. If I were to purchase a large SSD, how would I go about keeping all of this data used by these 2 VMs? This was my reasoning for one large RAID array. Let it hold all of the data and have the hosts pull from the same datastore.

Sorry for the newbie questions. I have some experience with ESX in a test rack at home, but I have always just used whatever drives were in the Host. Thanks for the help.
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Our 2 largest servers are using a total of 1.5 TB of storage. This is actual usage, no free space included. If I were to purchase a large SSD, how would I go about keeping all of this data used by these 2 VMs? This was my reasoning for one large RAID array. Let it hold all of the data and have the hosts pull from the same datastore.

Sorry for the newbie questions. I have some experience with ESX in a test rack at home, but I have always just used whatever drives were in the Host. Thanks for the help.
I think DBA (and most others who use VMs) assume there is a separation between the VM's running disk (boot & applications) and its datastore. Most of what was described above applies primarily to the VM "machine disk" itself. The issue that requires high IOPS is the random nature of paging and access to the VM disk itself. Seek contention between VMs makes this "difficult" on spinning disks and much better on SSD.

Holding the datastore on a NAS, SAN, or other dedicated storage node is perfectly appropriate. In your case, do what DBA suggests while keeping the larger datastore for your VMs' application data on the head node as you originally suggested (assuming remote access to the data via iSCSI, SMB or NFS meets your needs).

Hopefully the systems you are virtualizing don't just hold all of their data in one big pile on the system disk. That would make what you propose a little tougher.
 
Last edited:

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Storage-heavy VMs can almost always be broken down to an OS/App partition and a data partition. If you have lots of data, put it on a separate big/cheap disk, either directly attached to the node to continue the theme or, depending on your needs, on a separate storage node or (even better) a pair of storage nodes with HA. At this point, the possibilities are endless and I don't know enough about your requirements. I'll just say this: If you are planning a shared-storage VM architecture, calculate your throughput and IOPS, both disks and network, to make sure that you aren't creating a bottleneck that will waste resources.
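
Here is a minimal sketch of that kind of throughput/IOPS check - the per-VM and per-resource figures below are assumptions, so plug in your own:

def bottleneck(vms, disk_iops, disk_mbps, net_mbps,
               iops_per_vm=150, mbps_per_vm=10):
    """Report which resource runs out first for the planned VM count."""
    limits = {
        "disk IOPS": disk_iops / iops_per_vm,
        "disk throughput": disk_mbps / mbps_per_vm,
        "network throughput": net_mbps / mbps_per_vm,
    }
    resource, max_vms = min(limits.items(), key=lambda kv: kv[1])
    verdict = "OK" if vms <= max_vms else "bottleneck"
    return f"{vms} VMs: limited by {resource} (~{max_vms:.0f} VMs max) -> {verdict}"

# Example: 8 spindles behind a cached RAID card, one 1GbE uplink to the hosts.
print(bottleneck(vms=12, disk_iops=8 * 100, disk_mbps=800, net_mbps=118))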


 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Great points. I have been doing this for some time with physical machines: application data locally, data stored on various NAS devices/cloud storage. Case in point - I broke my Microsoft Surface Pro on a San Francisco to Chicago flight a few weeks back. Certainly not a fun experience. The great news was that I had "hot" docs synced using Microsoft SkyDrive across my main machines and phone. Other, larger items are stored on a NAS that provides both CIFS and iSCSI to VMs, workstations, laptops, etc.

I had a "node" go down (the Surface Pro literally fell and cracked on the seat in front of me). Net data loss - ~15 minutes worth of work that I started on in the flight. Was faster to just fire up the other laptop and start from my last sync point than to try reload data. The automatic sync works wonders.

Bottom line - a mix of shared and local storage seems to work really well. Tier your storage and ensure data is on a redundant tier that can get data to machines fast enough.
 

Fzdog2

Member
Sep 21, 2012
92
14
8
Not sure if it's the 'best' way to go, but in my virtual infrastructure at home I have a FreeNAS VM that serves up my large disk storage as CIFS shares to my Windows machines/VMs. My ESXi hosts use various SSDs for local datastores.
 

doup93

New Member
Feb 26, 2013
23
3
3
Hi all,

I'm sure the question's been asked before, but I couldn't find it. I'd like to use one of these (maybe 2) in a home lab environment and would like to know how long they are expected to last. I saw the other thread where someone had one node die on them soon after the warranty expired and was wondering how reliable these will be. Thanks all!
 
Last edited:

Jaknell1011

New Member
May 14, 2013
18
0
0
The 2 main servers I have running right now are HP DL380 G5's, and they are just running a RAID array of 7200 rpm drives. Since this is the case and they are handling the load just fine, do I really need to move up to SSDs for these as VMs?

My initial thought was to use 2 nodes for storage, 5 drives each, one for VM storage and the other for Data storage. Both nodes would be running FreeNAS. I would then spin up my VMs, point their D: data drives to the storage node, and the C: system drive to the VM storage node.

These machines don't get hit often. One is mostly for data storage, the other handles things like DHCP, DNS, GPOs, and is the print server.

***On a separate note, a big hats off to this community for being so active and involved. I think I may have found my go-to forum for this kind of thing!***
 
Last edited:

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
The 2 main servers I have running right now are HP DL380 G5's, and they are just running a RAID array of 7200 rpm drives. Since this is the case and they are handling the load just fine, do I really need to move up to SSDs for these as VMs?

My initial thought was to use 2 nodes for storage, 5 drives each, one for VM storage and the other for Data storage. Both nodes would be running FreeNAS. I would then spin up my VMs, point their (D:) data drives to the storage node, and the (C:) system drive to the VM storage node.

These machines don't get hit often. One is mostly for data storage, the other handles things like DHCP, DNS, GPOs, and is the print server.

***On a separate note, a big hats off to this community for being so active and involved. I think I may have found my go-to forum for this kind of thing!***
My general rule of thumb is that 150 IOPS/VM is the minimum for a low-end virtual desktop and 1000 IOPS/VM is the minimum for a small generalized server VM. Of course, particular workloads may have needs different from these rules of thumb. I'm not saying that having less I/O won't be "good enough" by some measure. I use these rules of thumb to make sure that I'm not wasting other resources. For example, a quad Intel Xeon E5 file server with 200 IOPS worth of disk will "feel" almost exactly as fast as a single-CPU file server with the same disks. Both might be "good enough" for a given deployment, but in the quad-Xeon case most of that CPU is wasted.
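
Applied to a hypothetical mix of VMs, those rules of thumb look like this (the disk figures below are assumptions for illustration):

IOPS_PER_DESKTOP_VM = 150   # minimum for a low-end virtual desktop
IOPS_PER_SERVER_VM = 1000   # minimum for a small generalized server VM

desktops, servers = 10, 4   # hypothetical mix
required = desktops * IOPS_PER_DESKTOP_VM + servers * IOPS_PER_SERVER_VM

available = {
    "4x 7200rpm drives in RAID 10 (~300 IOPS)": 300,
    "single SATA SSD (~40,000 IOPS)": 40_000,
}
print(f"Required: ~{required:,} IOPS for {desktops} desktops + {servers} servers")
for label, iops in available.items():
    print(f"  {label}: {'enough' if iops >= required else 'comes up short'}")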
 

shindo

New Member
May 7, 2013
26
0
1
WI, USA
I can verify that the forward breakout cable from Monoprice combined with the original cable works to connect six disks to one node (4 on Monoprice, 2 on original). While the Monoprice cable is a bit longer than necessary, I had no problem finding a place to tuck the excess without impacting airflow as far as I can tell. I much prefer the $10 price point to what I've seen for the V91FW.
 

OrangesOfCourse

New Member
May 15, 2013
21
0
1
Hi guys,

I know that this must be a stupid question but I would love some help.

I just got my C6100 today and I'm trying to test it out by installing Windows Server 2012 on a SATA hard drive that I pulled from another machine. My problem is that Windows Server setup is unable to locate the hard drive. I can see the hard drive in the BIOS. I've tried a few Intel RST drivers to no avail. Any ideas?

I got this server to play around with my home lab and learn new things. Any help would be much appreciated.
 

Biren78

Active Member
Jan 16, 2013
550
94
28
Hi - put the SATA controller into AHCI mode. Hit F2 to get into BIOS setup; you probably have it in RAID mode. Another option: get the Intel ICH10R or ICH9R drivers and load them during Windows setup. Another thought: use an IPMI virtual media mount or a USB CD drive to do the Windows Server install so that HD0 = your drive.
 

markpower28

Active Member
Apr 9, 2013
413
104
43
Did the drive happen to come from an array? Try using a low-level format tool first (e.g., Ultimate Boot CD).