Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


darkconz

Member
Jun 6, 2013
You can. Try using IE and/or Firefox to see if that works. It's usually a pretty simple step.
I haven't tried Firefox yet, but IE does not work. Like I said, I have no problem opening jViewer from Chrome, and I can view the remote console. The node just reboots itself every now and then, and that's when I lose the iKVM connection. How would you update it remotely? Is it via the HTTPS website (through the Firmware Update page)?
 

PigLover

Moderator
Jan 26, 2011
Part of the BMC upgrade requires that the BMC reset itself, which will kill your iKVM connection. You won't be able to see any messages that occur between the start of the reset and the BMC being ready to accept new connections again.

Because of this I would strongly recommend you get a monitor and keyboard out, at least for the BMC upgrade.
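For anyone doing the upgrade without a crash cart, one rough way to tell when the BMC is back is to poll it from another machine with ipmitool. This is only a sketch; the address and credentials are placeholders (many C6100 BMCs ship with root/root, but verify yours):

ipmitool -I lanplus -H 192.168.1.120 -U root -P root mc info

When "mc info" starts returning a firmware revision again instead of timing out, the web interface and iKVM should be reachable again shortly after.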
 

darkconz

Member
Jun 6, 2013
Well, I found out why I was getting disconnects. When I hooked up the server, I ran two CAT5 cables to each node (one for the LOM, one for management). It turns out the IPMI setting is "shared" with LOM1, so every time the server rebooted I would lose the connection. I had my wife unplug the LOM cable as a test and, voila, no connection to IPMI. I guess I have to connect a monitor and keyboard to configure the BIOS before running it headless.
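If you want to double-check how the BMC is configured without rebooting into the BIOS, ipmitool can dump the LAN channel settings from within the OS. A sketch, assuming the local IPMI driver is loaded and that channel 1 is the LAN channel (typical, but worth confirming):

ipmitool lan print 1

That shows the IP and MAC the BMC is answering on; the shared-versus-dedicated NIC choice itself still has to be changed in BIOS setup on these boards, as far as I know.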

I read somewhere on here (about two months ago) that there is a precaution to take when updating the firmware. I can't find it now. Does anybody have a direct link?
 

darkconz

Member
Jun 6, 2013
I plugged the monitor in and updated:

BMC -> 1.33
BIOS -> 1.71
FCB -> 1.20

Everything went smoothly. I was right about the shared/dedicated IPMI thing; I don't get dropped connections now when I connect to the remote console via jViewer.
 

WScott66

Member
Nov 11, 2013
So, I ordered my XS23-TY3 today, with 4 nodes each configured with 2x L5520 CPUs, 24GB of memory and one 250GB HDD, directly from eSISO for $650.00!

I can't wait for it to ship and arrive, as this will be used to maintain my IT certifications (VCP, MCSE, etc.).
So far I would say I'd refer anyone interested in this equipment to email them directly instead of going through eBay, which adds the cost of the listing and the percentage they take on the deal (win/win).

I'll post a picture once I receive the server(s) and let you all know how the deal turns out.
 

ShockwaveCS

New Member
Nov 12, 2013
I saw the blog post at http://www.servethehome.com/25-drives-35-hotswap-drive-bays-good-bad-drive-adapter-options/ on 3.5" to 2.5" options for this server. Which path did you guys take for SSDs? I currently have this drive bay http://www.laptoppartsexpert.com/images/F76443213.png for the 3.5" drives.
I am using the 2.5" to 3.5" IcyDock EZConvert adapter (Newegg item N82E16817994064).

I have about six of them and they are great. A tool-less design is always good.

As noted in the article, get the MB882SP-1S-1B and not the "1S-2B" for 7mm drive compatibility.
 

Mikeynl

New Member
Nov 16, 2013
Hi guys,

I read the full thread with great interest to get answers to most of my questions. But...

Is there any way to recognise a DCS version other than by the tags? Somewhere in all the postings I saw mention of a version with no expansion slot for the mezzanine and a version with only 3 SATA ports. Also, if I'm right, the BIOS isn't upgradable. Is there any need to update the BIOS? If so, for what reason?

I bid on a version that has only two modules, and from what I saw in the picture, the chassis isn't ready for the third and fourth node.

For what I need, I had the following in mind: 6x 1TB in RAID on the LSI SAS 1068E daughter card. I just need some extra advice for the SSDs: I am looking for a PCIe card with 6Gb/s ports that also works with ESXi 5.1. I hope you guys can help me with that.

Thanks!
 

Clownius

Member
Aug 5, 2013
The DCS versions are semi-custom jobs. I have 5 DCS nodes that are basically the same as the general nodes. I haven't felt any need to update the BIOS on any of my systems.

Generally the 2-node version just has blank plates at the back, similar to the ones with only one PSU. Pull those out and the nodes slide right in. The seller probably stripped those two nodes and parted them out.

I don't know ESXi 5.1, so maybe someone else can advise, but the LSI 9260-8i is what these systems generally use. It's a 6Gb/s card, but it's also SAS, and I believe you lose some performance running SATA over SAS. Whether it's enough to worry about, I just don't know.
 

Mikeynl

New Member
Nov 16, 2013
Hi Clownius,

Thank you for your answer. I've worked with computers long enough that I can count how many times I've flashed a BIOS: in the P3 days it was a couple of times for newer CPUs, and after that barely ever. So I was just wondering.

From what I saw of multiple RAID cards, they offer 6Gb/s SAS, but SATA drops back to 3Gb/s. That's why I'd use the RAID mezzanine for the normal hard drives, and locally just a card with 6Gb/s SATA ports that will also work in ESXi. I'm not looking for any RAID features on that part.

Also, about getting normal power inside the chassis: has anyone tried to take power from the backplane and add extra wires and Molex connectors? It would be nice to put the SSDs inside the chassis behind the blank fillers.

-Merci

// edit:

Is it possible to boot ESXi from a USB key?
 

darkconz

Member
Jun 6, 2013
It is possible to boot ESXi from USB; in the BIOS you just need to force the USB device to boot first. I think the entire backplane gets power even though you only have 2 nodes: as soon as one node powers on, all the HDDs seem to get power, so you shouldn't need external power.
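If a node is already reachable over IPMI, you can also force its next boot into the BIOS setup (to change the boot order) without plugging in a keyboard. A rough sketch with placeholder address and credentials (many of these BMCs ship with root/root, but check yours):

ipmitool -I lanplus -H 192.168.1.121 -U root -P root chassis bootdev bios
ipmitool -I lanplus -H 192.168.1.121 -U root -P root chassis power reset

Some BMCs also accept "chassis bootdev floppy" for "primary removable media", which may or may not cover a USB stick on this platform.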

 

darkconz

Member
Jun 6, 2013
I did a little snooping around today with my C6100. I installed Windows Server 2012 and, out of curiosity, downloaded CPU-Z. The system reads 24GB of memory per node in the BIOS, in System settings in Windows 2012, and even in CPU-Z. However, CPU-Z could only read 3 of the 12 slots, which were populated with 4GB DIMMs. What happened to the other 3 DIMMs per node that CPU-Z couldn't read in the SPD tab?

Also, I noticed the L5639s I have were identified as E5606, but the specification string says L5639. Kinda strange...
 

Mikeynl

New Member
Nov 16, 2013
From what I understand from the whole thread, the whole backplane does get powered up. It's more that I want some extra Molex/SATA power connectors. Great to hear about USB; I didn't see anyone mention it earlier in the thread.
 

PersonalJ

Member
May 17, 2013
Who at eSISO did you email?
 

nickveldrin

New Member
Sep 4, 2013
More than likely the reason you could only see half of the RAM is NUMA separation. If you don't know what NUMA is, I'd recommend spending a couple of minutes searching to get a better handle on it: for the highest compute performance you'd isolate the NUMA nodes and make sure each CPU's memory requests stay within its own node rather than crossing over to its NUMA partner. In some benchmarks, keeping work NUMA-local gives 10-30% improvements in latency and raw computing power.
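If you want to see how Windows has laid things out across the two sockets, Sysinternals Coreinfo will dump the NUMA topology. A quick sketch (assumes you've downloaded Coreinfo and run it from an elevated prompt):

coreinfo -n

The -n switch lists each NUMA node and the logical processors that belong to it, which at least confirms the system really is presenting two NUMA nodes.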
 

darkconz

Member
Jun 6, 2013
Is there any way to check all the DIMM model/serial numbers? I would like to know exactly what is in the node without taking it apart. CPU-Z only reports half of the RAM; it doesn't switch to the other 3 DIMMs even when I select the second processor on the main screen... (may sound dumb)
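One way to pull the part and serial numbers of every populated DIMM from inside Windows, without opening the node, is WMI. A sketch from an admin command prompt on Server 2012:

wmic memorychip get DeviceLocator, Capacity, PartNumber, SerialNumber, Speed

This queries Win32_PhysicalMemory, which is filled in from SMBIOS rather than by reading SPD directly, so it should list all six modules per node even where CPU-Z's SPD tab comes up short.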

 

Clownius

Member
Aug 5, 2013
Well, I swapped in my L5639s and found that the old BIOS (1.04) on the nodes I was using didn't support them. Updated 4 of my 9 nodes (I bought a spare) with no issues to BMC 1.33 and BIOS 1.71.
The CPUs work now :)

Of those nodes, 3 were DCS nodes according to Dell's website. I was really expecting to brick one...

I used this guide
Upgrading Dell C6100 BIOS | Copy Error (aka Altered Realms)
and FreeDOS on a USB stick.

Hope that helps anyone who's considering doing it.
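If you want to sanity-check that a flash actually took without rebooting into setup, the new version string can be read from the SMBIOS tables inside the OS. For example (Linux and Windows respectively; just a quick check, not a substitute for the vendor's own verification steps):

dmidecode -s bios-version
wmic bios get smbiosbiosversion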
 

root

New Member
Nov 19, 2013
Is there a limit to the number of HDDs per node?

Hi,

I am almost ready to order a C6100; I just need to know if it is possible to run 8 or more HDDs off a single node. Let's say I get 12 3.5" HDDs and I'd like to have 4 of them connected to one node and the remaining 8 to a second one. Nodes #3 and #4 will boot from USB sticks.

Thanks.


EDIT: Found this in another post earlier; it would still be great if somebody could confirm:

"As delivered you can fairly easily wire it for a "node" in the chassis to support 6 drive slots. You could also physically wire it to support more than 6 drives, but you would be left with a configuration where you can't slide out the node w/out taking off the lid and removing the SAS/SATA cables first.

The drive connections are wired through an "interposer card" that and a connector between the MB tray and the node. The interposer card only has 6 connectors for drives on it (amusingly, mine was also shipped with a 1068e-base SAS mezzanine care and Dell designed it to expose exactly 6 of the 8 ports the chip natively supports - 4 on an 8087 connector and then two more on individual SAS/SATA connectors). They appear to have done this for to support a version with only two nodes, each node connected to 4 drives.

I would imagine that there exists a 12-port (or at least an 8 port) version of the interposer card used on the 2.5" drive chassis. I was actually hoping to locate one because I think - long term - that I may want a 4-node system with 8 drives on one node and each of the others having exactly one drive. Not top of my priority list but something for later."
 

legen

Active Member
Mar 6, 2013
Sweden
I can confirm this. I previously had 7 HDDs connected to one node. I used an M1015 (supports 8 drives) with 1m cables routed through the chassis. It's tricky and the space for cabling is tight, but it's doable.

So what you will need is two RAID cards and long cables, i.e. 2x M1015 and 3x cables, with 8 drives on one node and 4 on the other.
 

chune

Member
Oct 28, 2013
This has probably been asked already, but this thread is getting pretty large: is it possible to hook all 12 drive bays to one PCIe HBA on one node? What are the connectors on the backplane, 4x SFF-8087? It would be nice to have one loaded node for ZFS using all the bays, then rely on iSCSI boot for the remaining nodes and go diskless.
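For the diskless nodes, one route people take is chainloading iPXE from the onboard NICs' PXE and pointing it at an iSCSI target on the storage node. A minimal sketch of an iPXE script, with a made-up target IP and IQN:

#!ipxe
dhcp
# boot from an iSCSI LUN exported by the ZFS node
sanboot iscsi:192.168.1.10::::iqn.2013-11.lab.local:node3-boot

Whether the installed OS is happy booting this way varies; ESXi in particular generally expects iBFT-capable NICs or Auto Deploy rather than a plain iPXE sanboot.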