Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


Rain

Active Member
May 13, 2013
In general, "Enhanced Pointer Precision" doesn't play well with IPKVMs.
Depending on the OS (and, as it turns out, the BMC vendor) you need to play with the settings for enhancing mouse accuracy. Turns out Windows likes one setting, Linux another.
Thanks, guys! Next time I have to boot Windows on a node I guess I'll just have to play around with it.
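For what it's worth, the Windows "Enhanced Pointer Precision" checkbox maps to three per-user registry values, so it can be toggled from a command prompt before opening an iKVM session. A sketch (the change takes effect at the next logon):

```shell
:: Disable Enhanced Pointer Precision for the current user.
:: (The Windows defaults with the feature ON are MouseSpeed=1,
:: MouseThreshold1=6, MouseThreshold2=10.)
reg add "HKCU\Control Panel\Mouse" /v MouseSpeed /t REG_SZ /d 0 /f
reg add "HKCU\Control Panel\Mouse" /v MouseThreshold1 /t REG_SZ /d 0 /f
reg add "HKCU\Control Panel\Mouse" /v MouseThreshold2 /t REG_SZ /d 0 /f
```

On the Linux side the rough equivalent is the pointer acceleration profile (e.g. `xset m 1 1` to flatten acceleration under X).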
 

thecubed

New Member
Sep 9, 2013
Wow, looks like you've got your hands full there...
DCS units tend to be a little on the unpredictable side when it comes to dealing with the firmware and so on. If I understand correctly, it's easy to "brick" nodes when dealing with DCS. Dell support themselves won't even touch a DCS model; however, they would be happy to send you over to the DCS department. I just happen to have a rep name and number if you want to give them a shot. His name is Robert Michael and his number should be (512) 724-6773. I have yet to talk to him, though, so I am not sure how far he might be able to help with this. Make sure you have your DCS express # on hand; it's all numbers, not letters.

As for firmware updates, including historical ones... this is linked at the front of the thread. It might help, but keep in mind there is a risk in this situation. More so for me, as you seem to know what you're dealing with a lot more. :D

Drivers for PowerEdge C6100

Hope this helps, and I am sure others on this thread might have a better answer.
--
Jared
Thanks, Jared, for the reply!
I've looked around on the poweredgec.com website, and I've actually flashed 1.30 to the node; sadly, it's still not letting me use iKVM.

I'm hesitant to just start flashing firmwares willy-nilly; however, next on my list is 1.25, as I hear it is a little different from the other versions.
As for brickage, it'd be inconvenient, but I have JTAG equipment and the ability to write directly to the serial flash chip on the motherboard (if I could find the proper pins, that is).

If I could just break out of the SMASH SOL redirection and into the actual SMASH CLI, I could get a real busybox console; then I could play around with the kernel modules and get the keyboard to show up.

I've confirmed that virtual media DOES indeed work, so the USB controller is functional; it's just being told by "adviserd" not to create the endpoints via its sysfs interface. All in all, once I get a real SSH console, I'll have full ability to get iKVM working.

I've also toyed with the possibility of connecting a serial console to the AST1100's UART pins (apparently they're onboard somewhere) and tweaking U-Boot to make it report that it's an AST2050.
The full PDF of the specs and register map for the AST1100 and AST2050 seems to indicate that the only difference between the two is a model number, so I'm confident this can be made to work, even if I have to flash custom firmware (fairly easy, actually; it's my next step if I can't get out of the SMASH SOL redirection).

Anyway, if anyone has that 1.11 firmware and can spare a link, I'd greatly appreciate it!

Thanks!
 

MACscr

Member
May 4, 2011
I apologize if this was discussed earlier, but what's the best way to reroute the SFF-8087 cables so you can have 6 drives to a single node? When I did it with my first C6100, it was a pain in the arse, and I had to pull the fans out a bit and unhook the front disk chassis to give myself a few more inches to work in.
 

Clownius

Member
Aug 5, 2013
You could unscrew the entire front drive cage and move it forward slightly for access too. No matter which method, though, you're going to find it a PITA. It's a tiny space to work in unless you leave plenty of spare cable around, and that's bad for airflow.
 

MACscr

Member
May 4, 2011
Too? That's what I had already mentioned doing. I'm just using the stock cables, so I was hoping there was a different solution, since it's obviously made to work with up to six disks. Eh, guess I don't have a choice.
 

Clownius

Member
Aug 5, 2013
Maybe I'm misunderstanding the problem. I have the 12-bay version with 6 drives to one node and 2 to each of the others. It was a bitch of a job, but I did it.

I cheated in that this time I bought the 2.5" version, so I don't need to go through that again. It should have 6 connected to each node by default.
 

jared

New Member
Aug 22, 2013
Maybe I'm misunderstanding the problem. I have the 12-bay version with 6 drives to one node and 2 to each of the others. It was a bitch of a job, but I did it.

I cheated in that this time I bought the 2.5" version, so I don't need to go through that again. It should have 6 connected to each node by default.

Funny you mention this, as when I was shopping around, I also noticed 3 drive bays would "drive" me crazy in the long run. ;)
I was lucky in that when I called my seller, all they did was take the 4 nodes that had the 6-core processors I was looking for and put them into a 2.5" chassis. Once it was confirmed working, they sent it to me.

I looked all over and absolutely could not find anyone selling the 2.5" chassis with the L5639s in them to begin with, so I was pretty happy when I found Deep Discount Servers, who happily swapped out the old nodes running the 4-cores for me.
 

33_viper_33

Member
Aug 3, 2013
I did it by lifting the fans out, loosening the two fan supports, and cutting the S**T out of my hands rerouting the cables. It didn't take more than 40 minutes, but it was definitely a PITA (or at least on the fingers...).
 

Clownius

Member
Aug 5, 2013
Sounds like a few of us have done the same thing, lol.

I only have L5520s in my old one and L5530s in the one I should see in a day or two.

I wonder how much power consumption the L5639s would add to my power budget. These things already chew a lot of power, mainly because it's 4 servers when I need one, but still. It's adding considerably to my colo costs, so I'm not sure I'm game to add much processing power.
 

33_viper_33

Member
Aug 3, 2013
What are you running that needs so many nodes and so much horsepower? Is virtualization an option for you? VMware is doing wonders for my electric bill.

Node 1 (VMware production):
pfSense
OpenIndiana (ZFS)
Windows 2012 Essentials
Windows 7
Ubuntu

Node 2 (Xen cloud test bed):
Windows 2012
Windows 7
OpenIndiana
FreeNAS
Ubuntu
pfSense

Node 3 (Xen cloud test bed):
Windows 2012
Windows 7
OpenIndiana
FreeNAS
Ubuntu
pfSense

Node 4 (random test bed):
Ubuntu for now...
 

Clownius

Member
Aug 5, 2013
Two different uses.

One C6100 is going to one location to run a website and its databases etc. Two nodes are dedicated to MySQL alone, one to a program the site needs, and the last one runs as a webserver. It's overkill, but the site had outgrown one server and this was cheaper than buying the 2, possibly 3, we needed.

The second one's going to a different county, running a different site. That covers one of the 4 nodes. It's on a rented dedi currently and having IO issues, so I'm throwing a RAID card, 6 SSDs in RAID 5 or 6, and 96GB of RAM at the problem. A second node will replace another pair of rented dedis running some small-scale forums and stuff; it will not stress anything. The last 2 nodes will be spares at the moment and will likely stay off for now.

Power-wise, the first C6100 is a power-hungry beast. The 4 RAID cards and 12 x 3.5" 15k SAS drives appear to use the bulk of the power. MySQL has eaten a few disks already on that setup, so we went with SAS rather than SSDs. We once worked out that we wrote almost 1TB a day, and enterprise-level SSDs are just so damned expensive we had to try a SAS solution.

The second C6100 will be considerably better power-wise, as it's going to use a SATA setup. That means I only need a RAID card if I want RAID. I'm also going for consumer SSDs, which are a shedload better power-wise. That, and not running all the nodes.
 

ecosse

Active Member
Jul 2, 2013
Ideally I want to buy a C6100 and have the following role / disk configuration:

Node 1: Storage Server, 4 x 3TB Disks in RAID 10, 1 x SSD Caching
Node 2: Storage Server, 4 x 3TB Disks in RAID 10, 1 x SSD Caching
Node 3: ESXi Server - 1 x SSD Local Host Caching
Node 4: ESXi Server - 1 x SSD Local Host Caching

Is this possible from a disk-configuration point of view? Does the 6Gbps mezzanine card support SSD caching?

Thanks
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Ideally I want to buy a C6100 and have the following role / disk configuration:

Node 1: Storage Server, 4 x 3TB Disks in RAID 10, 1 x SSD Caching
Node 2: Storage Server, 4 x 3TB Disks in RAID 10, 1 x SSD Caching
Node 3: ESXi Server - 1 x SSD Local Host Caching
Node 4: ESXi Server - 1 x SSD Local Host Caching

Is this possible from a disk-configuration point of view? Does the 6Gbps mezzanine card support SSD caching?

Thanks
The 6G mezzanine card is JBOD or RAID 0/1/10 only and does not have SSD caching. Of course, you don't need the card to cache if you are using ZFS or Storage Spaces V2, each of which is able to use SSD drives as a read or write cache.
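As a concrete ZFS example (a sketch; the pool name `tank` and the device paths are placeholders you would replace with your own), attaching an SSD as a read cache or a dedicated log device is a one-liner each:

```shell
# Add an SSD as an L2ARC read cache to an existing pool
zpool add tank cache /dev/disk/by-id/ata-SSD-EXAMPLE-part1

# Or attach it as a separate log device (SLOG) to absorb synchronous writes
zpool add tank log /dev/disk/by-id/ata-SSD-EXAMPLE-part2

# Confirm the cache/log vdevs show up under the pool
zpool status tank
```

Cache and log vdevs can also be removed later with `zpool remove`, so trying an SSD this way is low-commitment.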

To get four 3.5" disks to your first two nodes, you will need to do some non-standard cabling. There are details and part numbers elsewhere in the forum, but basically you need to find two Dell c6100 SFF-8087 to 4xSATA midplane cables from a 24-bay c6100 to replace the SFF-8087 to 3xSATA midplane cables. You will also need to replace the 3xSATA cables in your sleds with the 6xSATA cables from a 24-bay c6100 sled.

If you can live with three 3.5" disks instead of four, you can instead add the neat little mSATA to PCIe adapter card that another STH member discovered (Amazon.com: MP3S (mSATA to SATA Adapter for PCIe Slot): Computers & Accessories). This will let you run an mSATA SSD like the Samsung in the PCIe slot and leaves the three 3.5" disk bays free.
 

MACscr

Member
May 4, 2011
119
3
18
Ideally I want to buy a C6100 and have the following role / disk configuration:

Node 1: Storage Server, 4 x 3TB Disks in RAID 10, 1 x SSD Caching
Node 2: Storage Server, 4 x 3TB Disks in RAID 10, 1 x SSD Caching
Node 3: ESXi Server - 1 x SSD Local Host Caching
Node 4: ESXi Server - 1 x SSD Local Host Caching

Is this possible from a disk-configuration point of view? Does the 6Gbps mezzanine card support SSD caching?

Thanks
Not without custom cabling, and doing that is a mess and only recommended for home use.

Your options are 3 disks per node, or 6 disks per node (but you have to reroute the existing cables).
 

jared

New Member
Aug 22, 2013
Time to be daring...

I successfully updated my BMC to 1.33, the latest you can find from Dell. It's not listed on the Dell download page at the front of this forum, which lists 1.30 as the latest; not sure why Dell didn't include it in the all-inclusive firmware page that Patrick put up for us in the first post of this thread.

However, I was able to pull it up with the service tag of one of my "service-tagged" nodes (JBF5WL1), and it uploaded successfully via IPMI by going into maintenance mode in the web GUI.
As far as I know, it should be compatible with anyone else's node that is NOT a DCS model and is running 6-core processors. Go to Dell's support site and get it from there using that service tag.

As for daring... I just discovered that I read my BIOS version wrong a few days ago, and my AMI BIOS is at 1.47. Haha! That is so old it's not even listed in the historical firmware downloads anywhere. So on that note, I have no idea if this truly is a DCS model; it might not actually be one, just a really old BIOS on a service-tagged model. Considering that 6-core processors supposedly came out in force for these things around 2011, while the BIOS itself is from 2010 and all the historical versions don't seem to go below 2011 anywhere, it's kind of odd...

I attempted going into maintenance mode on one of my "anonymous, un-service-tagged" nodes and tried to update the BIOS from there, but it doesn't work because of unverified-version errors, even though I probably have the correct ROM according to Dell for the service-tagged nodes. Not to mention I'm not sure you CAN update the BIOS from the IPMI web interface, especially when it has no clue what you are trying to update. =D

So on that note, I will be using the bootable DOS USB method as seen here: How to Create a Bootable DOS USB Drive
I downloaded the self-extracting flash utility from Dell for v1.70, extracted it, and dumped flash.bat and the ROM file onto the bootable DOS USB. From that point, after setting the USB to boot first, I will run flash.bat and put the new ROM into the BIOS, effectively bricking my node... er, I mean flashing my node.
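For anyone staging such a stick from a Linux box instead, the rough shape is below. This is a sketch: the image name `FreeDOS-boot.img` and the device node `/dev/sdX` are placeholders, and dd will destroy whatever device you point it at, so triple-check the target first.

```shell
# Write a bootable FreeDOS image to the USB stick (DESTROYS the target device)
sudo dd if=FreeDOS-boot.img of=/dev/sdX bs=1M status=progress
sync

# Mount the stick's first partition and copy the extracted flash utility + ROM
sudo mount /dev/sdX1 /mnt
sudo cp flash.bat 6100v170.rom /mnt/
sudo umount /mnt
```

Then set the USB device first in the boot order and run flash.bat from the DOS prompt.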

I will let you guys know how it works out. =)

--
Jared
 

doofoo

Member
Aug 28, 2013
Time to be daring...

I successfully updated my BMC to 1.33, the latest you can find from Dell. It's not listed on the Dell download page at the front of this forum, which lists 1.30 as the latest; not sure why Dell didn't include it in the all-inclusive firmware page that Patrick put up for us in the first post of this thread.

However, I was able to pull it up with the service tag of one of my "service-tagged" nodes (JBF5WL1), and it uploaded successfully via IPMI by going into maintenance mode in the web GUI.
As far as I know, it should be compatible with anyone else's node that is NOT a DCS model and is running 6-core processors. Go to Dell's support site and get it from there using that service tag.

As for daring... I just discovered that I read my BIOS version wrong a few days ago, and my AMI BIOS is at 1.47. Haha! That is so old it's not even listed in the historical firmware downloads anywhere. So on that note, I have no idea if this truly is a DCS model; it might not actually be one, just a really old BIOS on a service-tagged model. Considering that 6-core processors supposedly came out in force for these things around 2011, while the BIOS itself is from 2010 and all the historical versions don't seem to go below 2011 anywhere, it's kind of odd...

I attempted going into maintenance mode on one of my "anonymous, un-service-tagged" nodes and tried to update the BIOS from there, but it doesn't work because of unverified-version errors, even though I probably have the correct ROM according to Dell for the service-tagged nodes. Not to mention I'm not sure you CAN update the BIOS from the IPMI web interface, especially when it has no clue what you are trying to update. =D

So on that note, I will be using the bootable DOS USB method as seen here: How to Create a Bootable DOS USB Drive
I downloaded the self-extracting flash utility from Dell for v1.70, extracted it, and dumped flash.bat and the ROM file onto the bootable DOS USB. From that point, after setting the USB to boot first, I will run flash.bat and put the new ROM into the BIOS, effectively bricking my node... er, I mean flashing my node.

I will let you guys know how it works out. =)

--
Jared
It's so strange, I really haven't gotten any feedback on my issue. I purchased a C6100 w/ L5639's (6 Core) from eBay.

I asked for the service tags for the chassis as well as the nodes in it and got the following:
Chassis: FR21GN1 - Shows up in Dell Service Tag Lookup as: (WCYH11Base, Server, DCS 6100, V11, 3.5, Data Center Solutions, 81TFG1ASSEMBLY..., BASE (ASSEMBLY OR GROUP)..., SERVER, SERVER CHASSIS..., 6100, V11, L10, 3.5 C6100)
Nodes: G231GN1.1
G231GN1.2
G231GN1.3
G231GN1.4

The nodes do not show up in the lookup tool, but if I remove the .1, .2, .3, .4, the chassis ID for the nodes shows up and has drivers, BIOS upgrades, etc. listed under it.

Are these upgradable?
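The suffix-stripping described above is easy to script when checking a batch of tags; a minimal sketch using shell parameter expansion (the tag value is just the example from this post):

```shell
# Strip the trailing per-node suffix (.1/.2/.3/.4) so the tag
# resolves on Dell's service tag lookup; tags without a dot pass through.
node_tag="G231GN1.2"
chassis_tag="${node_tag%.*}"
echo "$chassis_tag"
```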
 

ecosse

Active Member
Jul 2, 2013
dba, MACscr - thanks a lot for the advice. Hmm, not sure whether to go with datastores on the ESXi server or stick with my original plan of using storage external to the ESXi servers so I can use CacheCade. At the moment I have a bunch of Supermicro X8DTTs, but I'm thinking of moving to a Dell C6100 because the Supermicro LSI 2008 "mezzanine" SAS cards are too expensive (that I can find so far, anyway). Choices, choices!!
 

jared

New Member
Aug 22, 2013
It's so strange, I really haven't gotten any feedback on my issue. I purchased a C6100 w/ L5639's (6 Core) from eBay.

I asked for the service tags for the chassis as well as the nodes in it and got the following:
Chassis: FR21GN1 - Shows up in Dell Service Tag Lookup as: (WCYH11Base, Server, DCS 6100, V11, 3.5, Data Center Solutions, 81TFG1ASSEMBLY..., BASE (ASSEMBLY OR GROUP)..., SERVER, SERVER CHASSIS..., 6100, V11, L10, 3.5 C6100)
Nodes: G231GN1.1
G231GN1.2
G231GN1.3
G231GN1.4

The nodes do not show up in the lookup tool, but if I remove the .1, .2, .3, .4, the chassis ID for the nodes shows up and has drivers, BIOS upgrades, etc. listed under it.

Are these upgradable?
Ok here is my update and it should also answer your question.


I chickened out at the last second and called up a rep at the Dell DCS team, who told me that my current BIOS 1.47 is a specific DCS BIOS, and that since the ESM is soldered on and not socketed, doing ANYTHING to it will brick it.
So... those 2 DCS nodes are being shipped back to the seller, and the seller is shipping me back some service-tagged nodes.

To answer your question DooFoo,
Yes, you can upgrade your nodes using the bootable USB DOS method; make sure to download 6100v170.rom and use AFUDOS to flash it from DOS.
The reason I say this is that it doesn't so much matter what the chassis is, as long as you are able to bring up the service tags on the nodes; and in this case it appears you can actually SEE your service tags in your BIOS, which means you're a step ahead of me.

So, with that in mind, as long as you can bring up the drivers for your service tag, which it sounds like you did, you are good to go.

You can flash your BMC by going into MAINTENANCE MODE from the remote management service via your network. You will possibly want to consider the latest 1.33, which you will only find in your Dell driver list after entering your tag.

Be sure to use the bootable DOS USB method with AFUDOS.
The command will be the following:

afudos 6100v170.ROM /p /b /n /c /x
at which point it will bypass the SLP HOL protection and skip the ROM check, while forcing the update on your system.

This worked out perfectly for me, after following the guys on this thread here where I learned about it: http://forums.servethehome.com/processors-motherboards/1707-dell-c6100-xs23-ty3-firmware-updated-2.html#post16371
 

doofoo

Member
Aug 28, 2013
Ok here is my update and it should also answer your question.


I chickened out at the last second and called up a rep at the Dell DCS team, who told me that my current BIOS 1.47 is a specific DCS BIOS, and that since the ESM is soldered on and not socketed, doing ANYTHING to it will brick it.
So... those 2 DCS nodes are being shipped back to the seller, and the seller is shipping me back some service-tagged nodes.

To answer your question DooFoo,
Yes, you can upgrade your nodes using the bootable USB DOS method; make sure to download 6100v170.rom and use AFUDOS to flash it from DOS.
The reason I say this is that it doesn't so much matter what the chassis is, as long as you are able to bring up the service tags on the nodes; and in this case it appears you can actually SEE your service tags in your BIOS, which means you're a step ahead of me.

So, with that in mind, as long as you can bring up the drivers for your service tag, which it sounds like you did, you are good to go.

You can flash your BMC by going into MAINTENANCE MODE from the remote management service via your network. You will possibly want to consider the latest 1.33, which you will only find in your Dell driver list after entering your tag.

Be sure to use the bootable DOS USB method with AFUDOS.
The command will be the following:

afudos 6100v170.ROM /p /b /n /c /x

at which point it will bypass the SLP HOL protection and skip the ROM check, while forcing the update on your system.

This worked out perfectly for me, after following the guys on this thread here where I learned about it: http://forums.servethehome.com/processors-motherboards/1707-dell-c6100-xs23-ty3-firmware-updated-2.html#post16371

I'll make note of this, but to be clear, I cannot see the service tags in the BIOS yet (I don't know if they are viewable). The service tags were given to me for the chassis and the nodes by the eBay seller. I cannot pull the .1, .2, .3, etc. up on the Dell site, but the main service code before the .1, .2, .3, .4 pulls up fine. Do the .1, .2, etc. work for other people when looking them up?
 

Clownius

Member
Aug 5, 2013
Interesting, I just got an unexpected bonus with my new C6100: 4 of the mezzanine cards.

I'm considering the merits of swapping out a couple of my RAID cards for mezzanine cards. I can think of plenty of things to use the RAID cards for in other systems. I'm sure the 2-drive RAID 1 nodes will work fine using a mezzanine card instead of the full-blown LSI 9260.