Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


Clownius

Member
Aug 5, 2013
I'm using a SAS RAID card rather than an HBA. But as long as it's half-height (otherwise it hits the RAM) and half-length (otherwise it hits the CPU heatsink), it should fit fine.

I have an LSI 9280-8e (RAID card) that I tested and it works fine, and an LSI 9200-8e (this one is an actual HBA) also works fine. It doesn't need to be a Dell-specific part; just check the size against those two cards. Nothing bigger will fit. That's basically it.

I had a couple of HP P400s, but they never worked in any machine due to clearance issues with the internal cable and a heatsink. They mostly fit otherwise, minus the backup battery.
 

PimpSmurf

New Member
Aug 4, 2013
If you are looking for a piece of furniture, your only bet is to build.
It is going to be pretty nice looking, but the primary reason for building it is sound reduction. I had a moment of inspiration last night and came up with the design. It should be hella quiet. I'm actually going to use the fans in the C6100 as the cooling system fans. It only has to hold 5U, on rollers, with a low footprint. It will be ~29" tall. :D

When I digitize the design I'll post it up.
 

PimpSmurf

New Member
Aug 4, 2013
Looking forward to seeing it!
Well, the design came down to the rails. They are very standard, but supporting the weight became an issue.

It occurred to me that these things are pretty sturdy. It seems there would be no problem with having them on their sides instead of on their bottoms. Can anyone think of a reason not to?

This would allow me to build a simple wooden rolling cart to hold everything and put it in the closet I already wired up specifically for it. Then sound and attractiveness won't be much of an issue, and some simple foam work will suffice. Plus, any money I don't spend on the mount is money I can spend on RAM! :)
 

33_viper_33

Member
Aug 3, 2013
Most cases are designed to have their weight distributed top to bottom, not side to side. If you have rails front and back to mount to and horizontal rails to distribute the weight, I think you will be fine. I had a rackmount shipping crate that I would store on its side with equipment in it. The equipment didn't seem to be bothered by it.

If you proceed with this, it would be awesome to start another thread and document your build. Please let me know if you do. I would like to follow your progress for my media center build.

How are you using your C6100? Is it an always on server?
 

PimpSmurf

New Member
Aug 4, 2013
It will be an always-on server running Eucalyptus. I'm using it to learn Eucalyptus, but it will host many virtual machines for various tasks. I have built some software for my dad which he uses daily, and I thought this would be a great test bed to learn with. I have no experience with rackmount systems. I'm very much looking forward to this project and happy to document it. I will likely make a thread for discussion and simply update my wiki for the project.

My 24-port 1000BASE-T switch arrived! Trying to figure out VLANs now! :)
 

PersonalJ

Member
May 17, 2013
It seems like the pricing on these is going up. I'm glad we ended up buying a few prior to the price hikes, but I wouldn't mind one of these for my home lab. L5520s have increased in price a bit as well, while the L5639 CPUs continue to drop in price. Are the C6100s not coming off lease in large quantities anymore?
 

Patrick

Administrator
Staff member
Dec 21, 2010
Was going to post on this later.

Actually, I was in touch with some of the resellers recently. Basically, the L5520 machines have dried up to a large extent, and now we have L5639-based C6100s being shipped out in their place.

Now, I also hear that there are a few smaller lots of L5520s going out, so there is still some supply coming down the pipe.

More to come.
 

PimpSmurf

New Member
Aug 4, 2013
Fantastic news! With the new gear hitting the market, perhaps we'll find some cheaper prices on L5639 CPU upgrades! :)
 

PimpSmurf

New Member
Aug 4, 2013
I have had to call my credit card company 5 times today because I kept triggering fraud alerts... :)

Ordered:
500GB external USB drive for a Raspberry Pi remote syslog server (no remote access, terminal only via a modded Atrix Lapdock). This is also my serial port interface device! :)
2000 VA / 1200W UPS (OPTI-UPS TS2250B UPS - Newegg.com)
20x 2.5" HDD trays to fill the slots
20x 10K SAS 73GB drives
2nd power supply
1x LSI MegaRAID 9240-8i NIB w/ cables/brackets (I plan to upgrade my storage controller to large/fast SSDs and want at least RAID 5 for now)
3x Y8Y69 RAID cards with cables/brackets/etc. I guess I can do RAID 10 or RAID 1E with these.
24GB of RAM to max out the 4GB configuration on my storage controller, etc.

I feel like $1,000,000 right now, but it was only about $1400. :D

I'm going to start a build thread for the cabinet I'm building and document the setup for my server closet. I'll link it when I take a picture in a few minutes.
EDIT: http://www.subproto.com/wiki/index.php/CloudySmurf
 

techwerkz

New Member
Aug 20, 2013
Fort Myers, FL
www.techwerkz.com
So, making my first post here after lurking for a while. Another local guy showed me these servers and this website. He posted in this thread (swflmarco).

However, I have my own set of questions. I just ordered two C6100s with L5639 procs and 192GB of memory. The plan is to run a single storage node running Nexenta or FreeNAS. At first I thought about wiring all 12 bays to a single node in a 12x2TB setup, but since this will be a production server, the need to pull the storage node might come up some day. So I axed that idea and decided to just go for a 6x4TB setup on a single node. This also gives me the potential to add a second storage node and cluster/HA Nexenta.

That's all well, but this is where I am stuck. At first I was just going to buy a MegaRAID SAS 9266-8i (if it would fit), set up the six disks in RAID 10, and call it a day. Then I got to thinking and reading. With how much hardware there will be in this node ... why not go for a software RAID JBOD ZFS setup? I understand I would limit myself to the onboard 3.0 Gb/s ports instead of the full potential of the 6.0 Gb/s disks; however, something like an LSI SAS 9211-8i HBA would fix that.

All the other nodes except one (Oracle DB) will be running ESXi 5.1 (installed to USB) and will use iSCSI to connect to the storage node. The Oracle node will also get six dedicated drives. For that node I was just going to go with the onboard Intel RAID, but I wouldn't be against running the same JBOD ZFS setup with the SAS 9211-8i. I mean, with 48GB on that node, the ARC would have a lot of cache. Does anyone know if an SSD will fit nicely somewhere inside the node? Then I could add an L2ARC to the setup for even more caching. The other six drives on chassis 2 will be dedicated to another single node for Exchange database storage, which once again leaves me with the same dilemma.
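For rough numbers, here is a back-of-envelope comparison of the 6x4TB layout as striped mirrors (the ZFS analogue of the RAID 10 setup being considered) versus a single RAIDZ2 vdev. The RAIDZ2 figures are only there for comparison, and everything ignores ZFS overhead and TB/TiB conversion, so treat it as a sketch rather than a sizing plan.

Code:
# Back-of-envelope capacity math for the 6x4TB storage node discussed above.
# Plain arithmetic only, not ZFS commands; real usable space will be lower
# after metadata, slop space, and TB-vs-TiB conversion.

DISKS = 6
SIZE_TB = 4.0

def striped_mirrors(disks=DISKS, size_tb=SIZE_TB):
    """Pool of striped 2-way mirrors: the ZFS analogue of RAID 10."""
    vdevs = disks // 2
    return vdevs * size_tb, "one disk per mirror vdev"

def raidz2(disks=DISKS, size_tb=SIZE_TB):
    """Single RAIDZ2 vdev: two parity disks shared across the group."""
    return (disks - 2) * size_tb, "any two disks in the vdev"

for name, layout in (("striped mirrors", striped_mirrors), ("raidz2", raidz2)):
    usable, survives = layout()
    print(f"{name:16s} ~{usable:.0f} TB usable, survives losing {survives}")

# striped mirrors  ~12 TB usable, survives losing one disk per mirror vdev
# raidz2           ~16 TB usable, survives losing any two disks in the vdev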

Anyone have any opinions on this?
 

33_viper_33

Member
Aug 3, 2013
I've been kicking around solutions to this same problem. My thought is to use an mSATA-to-SATA adapter like this one: Newegg.com - SYBA SI-ADA40066 50mm (1.8") mSATA SSD to 2.5" SATA Converter Adapter. The other option is a simple micro SSD or a 2.5" SSD that consists of just the PCB and no protective case. These appear to be limited in capacity and relatively expensive.

Mounting points are one of the challenges. My thought was to make a small bracket out of a strip of aluminum. It would only require cutting to size and drilling four holes in the correct locations. I planned to attach it just behind the midplane SATA board using the midplane's mounting points. My only concern would be the structural integrity of the mod, since it's only using two mounting points for each connection. If it's going to sit in a rack and never move, it's probably OK. If you move and reconfigure as much as I do, I would worry a bit more. Creating some sort of standoff and mounting to four points is better, just more engineering and tooling required.

Providing power is the other major problem. You will need to splice into the power supply somewhere; this thread talked a little about that problem if you go back through the pages. With the 2.5" option, the power requirement is lower and only needs 5V. I believe it is possible to run it off USB power. There are a couple of places to tap into. This thread details one option: http://forums.servethehome.com/processors-motherboards/1638-c6100-extender-board-usb.html. There is another thread here that outlines how to make an adapter to utilize the two onboard USB headers.

Have you explored using the SAS daughter card for your RAID setup? If you don't want to use the RAID functionality of the daughter card, you can pass the controller through to a ZFS installation. That would allow you to use your PCIe slot for an SSD. There are several different options and configurations out there, from fast to affordable. If the nodes had one more PCIe slot, that is what I would do. However, I have 10Gb NICs in my PCIe slots and SAS cards in the daughterboard slots.
 

Clownius

Member
Aug 5, 2013
I'm going to recommend you just bite the bullet and buy a RAID card. Unless you're really clever, six drives to a node is going to be about your limit. I did it with one of my nodes. Short of taking a drill to the chassis, I can't see how you're going to wire one node to 12.

As for ZFS in general: how much have you used it? The few people I know who tried it dropped it after weeks of screwing around and went back to Linux software RAID, which was considerably faster and more reliable in their experience.

After seeing what they went through, I bought the RAID card instead, as setting up a good RAID is a few minutes' work, not a few months'. The 9260-8i's I used are a dream to work with, and the price difference was minor compared to an HBA. Picking up battery backup for the cache seemed well worth it to me compared to the experiences I have had with software RAID in the past.
 

33_viper_33

Member
Aug 3, 2013
ZFS definitely has a learning curve to it. My biggest complaint is the inability to grow the array by adding single disks; you can only grow the pool by striping another vdev alongside the original one. However, there are significant advantages to ZFS over RAID cards. Take it from someone who had a RAID card fail (luckily it was under warranty): you never want to rely on a single device. Software RAID doesn't care about the hardware. I use a mixture of both. My main array uses hardware RAID with new disks so I can expand as needed. I use all my old drives in ZFS for a backup that is very rarely turned on. When a new disk size comes out (usually every other generation or two), I upgrade to keep a minimum number of disks spinning in the main array.
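To put that growth limitation in concrete terms, here is a toy calculation (made-up vdev sizes, plain arithmetic rather than actual ZFS commands):

Code:
# Toy model of ZFS pool growth: a pool stripes across whole vdevs, so
# capacity grows only by appending another complete vdev (a mirror pair,
# a RAIDZ group, etc.), not by adding a single disk to an existing vdev.
# The disk counts and sizes below are made-up examples.

def raidz_usable_tb(disks, size_tb, parity):
    """Usable capacity of one RAIDZ vdev with the given parity level."""
    return (disks - parity) * size_tb

pool_vdevs = [raidz_usable_tb(6, 4.0, parity=2)]   # one 6x4TB RAIDZ2 vdev
print(f"initial pool: ~{sum(pool_vdevs):.0f} TB usable")

# Supported growth: stripe in another whole vdev.
pool_vdevs.append(raidz_usable_tb(6, 4.0, parity=2))
print(f"after adding a second 6-disk RAIDZ2 vdev: ~{sum(pool_vdevs):.0f} TB usable")

# Not supported: widening the original vdev from 6 to 7 disks by adding
# one drive, which is exactly the single-disk growth complained about above.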

One other thing to consider is the timeout problem with newer drives under hardware RAID cards. My Areca doesn't like the 4TB Seagate drives I have. Consumer drives wait a long time attempting to read bad sectors before timing out; many RAID cards aren't patient enough and decide the disk is bad. The result is the array rebuilding itself every month or so. ZFS doesn't have this problem. If you are going hardware RAID, think enterprise drives. The price analysis didn't work out for me, though.

The C6100 platform is a bit limited if you want a lot of drives in a single enclosure; six drives is about all you are going to get on a single node easily. However, a good HBA with external SAS and an external enclosure is another option. The good news is that secondhand external RAID adapters are fairly cheap, since they are less popular among home users.
 

mrkrad

Well-Known Member
Oct 13, 2012
You can use two RAID cards with one array; that is NSPOF (no single point of failure). LSI can do PI (protection information), which does the same thing as ZFS as far as checking every read for bit rot, except it uses hardware to do so.

The timeout problem is real: if the storage does not respond to ESXi for, say, 8 seconds because a drive is lagging out, you will cause a datastore heartbeat failure, which will stun the VMs and cause all heck.

Conversely, if you drop a drive because it has zero TLER (AV drives), you don't face the problem that causes all heck, but you have drives dropping like flies.

The best idea so far was the LeftHand VSA, which lets you treat each unit as a building block of cheap storage, and the network lets you tolerate cheaper units of storage failing, i.e. with two LeftHand VSAs, one fails and life keeps on going.

My problem is that so far no ZFS is production-ready in the sense of being as bulletproof as storage should be: 1000 days of uptime at 100% load. I wish someone would take ZFS seriously enough to make it production-ready.

The lack of leveling is odd. You add drives and the RAID should rebalance; you lose a drive, the RAID rebalances; you lose two drives and, as long as free space exists, it rebalances again. It's like using free space for redundancy: if you have 50% free space, you use it to create more resilience!

What I find amazing about the C6100 is the fact that it is the only SR-IOV LSI RAID setup. Nowhere else can you create VFs with SR-IOV and split up the RAID card for VMs, but the C6100 (aka VRTX) can do this right now. Pisses me off.
 

33_viper_33

Member
Aug 3, 2013
mrkrad,

How are you rebalancing? From everything I have read, the only way to rebalance is to back up, add disks, destroy the pool, and restore your data. If there is a way, is it exposed in napp-it?

-V
 

techwerkz

New Member
Aug 20, 2013
Fort Myers, FL
www.techwerkz.com
I am fine with six drives to a node. I purchased the 4TB WD enterprise drives for this setup. Since this is going into a colocation, it won't be moved much. I don't mind spending money on the RAID controller or an HBA; I just want to figure out whether an HBA + ZFS RAID + ARC setup will be faster or more beneficial than a hardware-controller RAID 10 setup. Obviously the latter is much easier to implement, as ZFS does have a bit of a learning curve. Either way, I am going to be using FreeNAS or Nexenta on this node. Power completely slipped my mind. Honestly, though, with as much memory as is available in the node, I might not need an L2ARC yet. I'll read through that thread for power options.

I'm a little confused about why you feel ZFS isn't production-ready. It's been used on high-dollar Oracle systems as software RAID for a while now. I would consider it about as production-ready as it gets.

As far as networking goes, I want to keep the mezzanine open for that purpose: 10GbE cards, or I also found a dual-port Intel gigabit NIC that would expand the current two 1GbE ports on board. We won't be using this at full capacity for some time, so honestly I don't even know if there is a benefit to going to a 6.0 Gb/s HBA/RAID, since I won't have the throughput to really push it that far, at least for now. I have a pretty broad budget for this project, as the C6100s themselves save enough up front to allow for upgrading switches or other hardware as needed. For now I guess I will just need to lab it out.
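For what it's worth, the throughput side can be sanity-checked with some rough math. The per-disk sequential figure below is an assumed ballpark, not a measurement, so treat this as a sketch:

Code:
# Rough throughput sanity check for a six-drive storage node.
# The per-disk sequential speed is an assumed ballpark figure; real numbers
# depend heavily on the drives and the workload.

MB_S_PER_GBIT = 1e9 / 8 / 1e6     # 1 Gb/s expressed in MB/s (= 125)
ENCODING = 0.8                    # ~8b/10b overhead on SATA/SAS links

disks = 6
per_disk_mb_s = 150               # assumed sequential MB/s per spinning disk
print(f"6 disks, sequential: ~{disks * per_disk_mb_s:.0f} MB/s aggregate")

links = {
    "2x 1GbE onboard":            2 * 1 * MB_S_PER_GBIT,
    "10GbE mezzanine":            10 * MB_S_PER_GBIT,
    "6x SATA 3.0 Gb/s (onboard)": 6 * 3 * MB_S_PER_GBIT * ENCODING,
    "6x SAS 6.0 Gb/s (HBA)":      6 * 6 * MB_S_PER_GBIT * ENCODING,
}
for name, mb_s in links.items():
    print(f"{name:28s} ~{mb_s:.0f} MB/s")

With six spinning disks topping out around 900 MB/s sequential in this sketch, the network front end (roughly 250 MB/s on the onboard gigabit pair, 1250 MB/s on 10GbE) is the likely bottleneck long before the 3.0 vs 6.0 Gb/s per-lane difference matters.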
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
mrkrad said:
"My problem is that so far no ZFS is production-ready ... What I find amazing about the C6100 is the fact that it is the only SR-IOV LSI RAID setup."
You must mean ZFS+Linux isn't production ready. ZFS+Solaris is rock solid.

I know the new Dell VRTX has SAS SR-IOV when you install the new Dell PERC8 card, but I have never heard of that feature on a C6100. What have you heard, and what have you tested yourself?