Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server

alan

New Member
Oct 24, 2013
20
0
0
^^ It may not be an error; it doesn't seem to test for Java, it just tells you that you need it. Open the file it downloads when you click.

My question: is there a way to get rid of all of the security warnings about the certificate?
 

lmk

Member
Dec 11, 2013
128
20
18
Hello all.
At my current workplace, we ordered 4 full nodes with 96GB RAM in each sled. A nice piece of kit (sadly without any RAID cards) but perfect for our current task for the moment.

However, it turned out a few sleds would randomly lock up just sitting idle with ESXi 5.5 running. Only 2 of them locked up, and one node had a fan error issue.
Hi,

Would you mind saying who it was you bought them from? Might help others out :)

Thanks!
 

PigLover

Moderator
Jan 26, 2011
2,975
1,283
113
Very nice.

Do be careful and watch the heat on the SSDs. The big chip with the heatsink sitting right under them is the Intel 5520 IOH, which has a reputation for temps @ 70C or higher being considered "normal". Not quite sure what 70C (or anything close to it) will do to your SSDs. Probably OK - just keep an eye on it.
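If you want numbers rather than guesswork, something like this will show the drive temperature on a Linux node (a sketch assuming smartmontools is installed; /dev/sda is just a placeholder for your SSD):
Code:
# Print the drive's SMART attributes; temperature is usually attribute 194 (Temperature_Celsius)
smartctl -A /dev/sda | grep -i temp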
 

IM0001

New Member
Dec 18, 2013
3
0
0
^^ It may not be an error; it doesn't seem to test for Java, it just tells you that you need it. Open the file it downloads when you click.

My question: is there a way to get rid of all of the security warnings about the certificate?
If it doesn't test for Java, then why does it grey out the Remote Console button, which is what you need to click to get the console file to download?

I believe the company we got them from is Lextec Components.


It is almost exactly the same incident as the guy earlier in this thread had. However, no change in BMC FW has worked so far.
 

dba

Moderator
Feb 20, 2012
1,478
181
63
San Francisco Bay Area, California, USA
Very nice.

Do be careful and watch the heat on the SSDs. The big chip with the heatsink sitting right under them is the Intel 5520 IOH, which has a reputation for temps @ 70C or higher being considered "normal". Not quite sure what 70C (or anything close to it) will do to your SSDs. Probably OK - just keep an eye on it.
Quite true! Like kev009, I added a bracket to the c6100 that allows an SSD to sit where the mezzanine card usually lives. The chip underneath is indeed rather warm, so I tacked a small sheet of aerogel insulation to the bottom of the SSD - got it as a free sample.
 

Rain

Active Member
May 13, 2013
246
88
28
If it doesn't test for Java, then why does it grey out the Remote Console button, which is what you need to click to get the console file to download?

I believe the company we got them from is Lextec Components.


It is almost exactly the same incident as the guy earlier in this thread had. However, no change in BMC FW has worked so far.
Log into the BMC, and then try to browse directly to the jviewer.jnlp file:
Code:
https://10.1.1.230/Java/jviewer.jnlp
(Obviously replacing the IP w/ the IP of your node's BMC)
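If the browser route still fails, you could also try pulling it from the command line (a rough sketch, assuming curl and Java Web Start are on your machine; -k skips the BMC's self-signed cert check, and the BMC may still want an active login session before it hands the file over):
Code:
# Download the JNLP directly, ignoring the self-signed certificate
curl -k -o jviewer.jnlp https://10.1.1.230/Java/jviewer.jnlp
# Then launch it with Java Web Start
javaws jviewer.jnlp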
 

RimBlock

Member
Sep 18, 2011
788
8
18
Singapore
If it doesn't test for Java, then why does it grey out the Remote Console button, which is what you need to click to get the console file to download?

I believe the company we got them from is Lextec Components.


It is almost exactly the same incident as the guy earlier in this thread had. However, no change in BMC FW has worked so far.
Look at the boards and check the ASpeed chipset version. One of them does not support KVMoIP. The page checks for the chipset version and will grey out the button if the non KVM version is detected.

Had the same issue with a unit that came with no mezzanine connectors and was clearly a more custom DCS version than the more common units.

The ASpeed chipset without KVM is the AST1100 (taken from my thread about this issue here).
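If you can't easily get eyes on the chip itself, ipmitool will at least tell you which BMC firmware you are talking to, which can help narrow it down (a sketch, assuming IPMI over LAN is enabled; the IP and credentials are placeholders for your own):
Code:
# Query basic BMC info (manufacturer, firmware revision) over the LAN
ipmitool -I lanplus -H 10.1.1.230 -U root -P yourpassword mc info
It won't name the ASpeed part directly though, so physically checking the chip is still the sure way.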
 

IM0001

New Member
Dec 18, 2013
3
0
0
Look at the boards and check the ASpeed chipset version. One of them does not support KVMoIP. The page checks for the chipset version and will grey out the button if the non KVM version is detected.

Had the same issue with a unit that came with no mezzanine connectors and was clearly a more custom DCS version than the more common units.

The ASpeed chipset without KVM is the AST1100 (taken from my thread about this issue here).
I will double check, but the funny part is that 2 of the 4 sleds in this chassis are ones we had in the office before, working, with KVM... So how did they work before, and not now?

Also, I have tried the manual download of the jviewer, which works but has no mouse/keyboard input.
 

shanghailoz

New Member
May 10, 2013
9
0
0
Have the cases + risers in the office now. I was away in Zhuhai for a week, so they were sitting there waiting, like little Xmas presents hehe.

Mezzanine to PCI looks like this:



Cases are surprisingly nice, although I haven't mounted anything yet. Quality is quite good. These 3rd-generation cases are waaaaaay better than the previous ones I bought back in Feb.

Photos here -

C6100 Hackintosh - a set on Flickr

Will be adding more as I mount stuff in the next hour or two.
 

Clownius

Member
Aug 5, 2013
85
0
6
Ok, I have to know where to get all that gear.

The case really interests me. The one where you slide the node in looks awesome, especially if it allows for 2 full-size PCIe cards like it looks to.

Mine won't be a Hackintosh; it will be a home server, probably running Ubuntu Server.

I have to ask what the noise levels are like, because that's the main reason I can't just use a full C6100. The buggers are way too noisy, even living in the garage.
 

Dino

New Member
Dec 23, 2013
4
0
0
England
First post... I've been avidly reading this rather long thread, so please excuse me if I've missed the answer somewhere in the 71-odd pages!
Have just bought a C6100 on eBay in England, but on opening the box, the C6100 doesn't look like anyone else's, nor like the one in the Dell C6100 hardware manual. The model number on the lid says "XS23-TY" and not "XS23-TY3". There is no Dell Service Tag. Came with 4 nodes, 8GB RAM per node and twin L5520 Xeons.
On the inside, there are only 9 DIMM slots, not 12, and the nodes are not hot-plug, as the SAS/SATA cables and power cables have to be disconnected before withdrawing the nodes.
Anyone any idea what flavour of C6100 this is? I haven't done any firmware updates as yet. The BIOS is v02.61.
Have attached a few pics.
If anyone has seen one of these before, all info gratefully received!



 

s0lid

Active Member
Feb 25, 2013
259
34
28
Tampere, Finland
Well, it's a totally different system.
The PSU backplane is different.
The fan controller is different.
The SATA backplane is different.
The fans are a different size.
 

Clownius

Member
Aug 5, 2013
85
0
6
Well, it's a totally different system.
The PSU backplane is different.
The fan controller is different.
The SATA backplane is different.
The fans are a different size.
Yeah, that has to be some sort of custom DCS job. A particularly woeful one at that.

Especially the lack of hot-plug isn't something I would expect people to shortcut on, even to save $$$$. Would love to know the history on this one. They had to be ordered up by an accountant, not a tech. One failed RAM stick on the wrong node and you're pulling at least 2 nodes apart and have downtime on 4 nodes...

If that was advertised as a TY3, I would be demanding a return and refund.

If you have to keep it, plan around a max of 6 sticks of RAM per node (why in God's name halve the RAM slots on the second CPU? It makes no sense at all) and stay away from the hex-core CPUs, as they may (probably will) require a BIOS update, and I doubt it uses a standard BIOS, it's so different.

Those fans just have to be absolute screamers. So small.

It's so non-standard I don't even know where to start. Most DCS nodes are fairly close to standard. Heck, some like mine are identical. This is almost totally different. Only the back view looks the same.

Edit: Actually I'm wrong, the rear hot-swap handles are missing too.

Which seller was it (so people can avoid them), and can you link the listing please (I want to check what the advertisement said)? I wouldn't touch them with a 10ft barge pole.
 
Last edited:

shanghailoz

New Member
May 10, 2013
9
0
0
Will do, also going to take some more photos of the cases.
I should probably take photos of the older ones too for comparison.

The larger case does indeed allow for 2 full-size PCI cards; the smaller allows for 2 half-height.
Quality on both cases is way better than I expected.

I started to fit out the little case, and will do the larger one tomorrow.
Initial pics for the smaller one:



Mezzanine PCI adaptor in situ


Again, I'm quite impressed with the build quality; they're not cheap, but they're done well.
 

Clownius

Member
Aug 5, 2013
85
0
6
Dino, thanks for posting the photo of your oddball c6100 (or c6100-like) system. Was that bought from eBay? If so, please post the auction so others can avoid that model.
My quick search suggests this:

DELL PowerEdge C6100 4 Node 8x Xeon Quad Core 2.26GHz 32GB DDR3 Cloud Server VTx | eBay

Listed as:

Form Factor: Rackmount
Processor Speed: 2.26GHz
Brand: Dell
Number of Installed Processors: 8
Model: C6100 XS23-TY
Memory Type: DDR3 SDRAM
MPN: Blade Server Rack mount Server
Memory (RAM) Capacity: 36GB
Processor Manufacturer: Intel

Very strange units.

He hasn't listed them as TY3s or suggested the nodes are hot-swappable. Other specs suggest they are not standard machines, but I noticed the photos are very limited. It's like someone didn't want anyone to realise this wasn't a standard C6100 XS23-TY3. One inside pic and most people wouldn't go near them.
Specs suggest it only takes up to 4GB RAM sticks, not the usual 16GB sticks, too. Wow, it's a seriously hobbled version of the normal C6100. Almost no upgrade path, sadly.

If it wasn't for Dino's pics, I would have thought they were the normal type. Wow.

Edit: Ugh, ebay.co.uk is evil. Someone else has C6105s (the AMD version) listed as C6100s too. They even link to the C6105 spec sheet in their description. Are there no consumer protections in the UK or something, that they can get away with false advertising?
Never mind, I found those are a US seller. Still very, very rude.
 
Last edited:

Dino

New Member
Dec 23, 2013
4
0
0
England
Cheers for the speedy replies. That's the link.
The company seems quite reputable on eBay - 33,000+ feedback and 99.9% positive.
Not sure whether they know it's a DCS bespoke build or just stuff that's come in for refurb. They have done a good job with the refurb though, as it's spotless inside - even the fans are clean.

There is nothing in the advert that is wrong... but nothing that says it's a DCS custom server either :mad:

The more I look at it, the further it is removed from standard. Have attached a couple more photos below. The fan arrangement is daft, as it's offset, so you get practically three fans for the servers on one side and two for the other. It is very noisy... too noisy for my garage, which is part of the house. The RAM layout seems daft as well. The photo below shows the lid label with the RAM info.

I will be stuck with current firmware levels and don't know what RAM is supported. To balance the CPUs, would I have to put bigger sticks in? e.g. 6 x 4GB and 3 x 8GB for 48GB total?
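My rough maths, assuming the 9 slots really are split 6 to one CPU and 3 to the other:
Code:
CPU0: 6 slots x 4GB = 24GB
CPU1: 3 slots x 8GB = 24GB
Total: 48GB, balanced across both CPUs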
The seller has 3 other specs of the same C6100 on offer; all say -TY and not -TY3. One has 144GB RAM - I wonder what sticks it has!?
Changing the fans for 4 better ones is going to be awkward, as is getting to the lower two nodes :(

They have responded to my queries and asked me to call them after the hols when they are back in the office on the 27th Dec.

I really want a C6100 for a home lab and was hoping to play over Christmas... but now I'm not so sure what to do :confused:





 

idea

Member
May 19, 2011
86
5
8
I am designing one node to be a file server. Can someone do a sanity check on this setup for me?

The goals:
  • Utilize the 6x 2.5" disks up front for 10K SAS storage (for write-intensive items like databases, VM guests, etc.)
  • Utilize the 20-bay JBOD expansion unit for slow archived storage
  • Boot the node (rpool) from the HP SAS Expander inside the 20-bay unit (because all 6 disks up front will be used)
The hardware:
  • JBOD SAS Expander with 20 bays (Custom built Norco 4220 with HP SAS Expander)
  • Dell C6100 24-disk chassis
  • PCI card: LSI SAS2008 9200-8e SAS HBA with 8 external ports
  • Mezz card: LSI 1068e (I wish I could justify spending the money on an LSI SAS2008 mezz card)
  • 20x 3.5" SATA drives
  • 6x 2.5" SAS drives
  • 2x 2.5" SATA drives that I am going to hide inside the 20-bay SAS expander
The plan, in order of steps (rough command sketch after the list):
  1. I will stuff both the PCI and Mezz card into one node
  2. The PCI card will connect to the 20-bay JBOD expander via its external ports
  3. The Mezz card will connect the 6 SAS drives from up front
  4. Create ZFS mirror rpool on 2x SATA disks from the JBOD expander
  5. Create ZFS mirrors striped together from the 6 SAS disks (zpool create tank1 mirror disk1 disk2 mirror disk3 disk4 mirror disk5 disk6)
  6. Create two zpools of RAIDZ2 on the 20-bay (zpool create tank2 raidz2 disk1 disk2 disk3.....)
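A rough sketch of the zpool commands I have in mind (disk names are placeholders - on a real system you'd use the actual device names, and a bootable rpool is normally created by the OS installer rather than by hand):
Code:
# Step 4: mirrored root pool on the 2 hidden SATA disks
zpool create rpool mirror sata0 sata1
# Step 5: striped mirrors across the 6 front 10K SAS disks
zpool create tank1 mirror sas0 sas1 mirror sas2 sas3 mirror sas4 sas5
# Step 6: two RAIDZ2 pools across the 20-bay JBOD, 10 disks each
zpool create tank2 raidz2 jbod0 jbod1 jbod2 jbod3 jbod4 jbod5 jbod6 jbod7 jbod8 jbod9
zpool create tank3 raidz2 jbod10 jbod11 jbod12 jbod13 jbod14 jbod15 jbod16 jbod17 jbod18 jbod19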