Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


OrangesOfCourse

New Member
May 15, 2013
amlife said:
So did you manage to solve the problem with the cables? Were you able to buy them?

I'm planning to buy an FCB PIC18 from eBay, but I have to get 3 different cable types to connect it. Please let me know when you can.

Thank you.

Adam

Adam,

I ended up talking to my eBay seller and he sent me a new enclosure with the PIC18 board and the corresponding power distribution boards/cables. Let me look at the differences between the two enclosures and document the changes for you. It'll take me a little time, so please bear with me.
 

amlife

New Member
May 30, 2013
OrangesOfCourse said:
Adam,

I ended up talking to my eBay seller and he sent me a new enclosure with the PIC18 board and the corresponding power distribution boards/cables. Let me look at the differences between the two enclosures and document the changes for you. It'll take me a little time, so please bear with me.

Thanks man, I really appreciate your help.

I live in Canada, and for every box I buy from the U.S. I have to pay an extra $100-$200 for customs! So it would be much easier just to replace the card and cables at this point.

The box with the FCB PIC16 works fine, and it's not loud (I don't really care much since it's running in the datacenter).

The FCB PIC16 runs the fans at 7,700 RPM (which draws about an extra 1 amp).

The FCB PIC18 runs them at 4,800 RPM (good power savings).

So if I simply fix this issue I can operate a 1U server or an additional node at no additional cost.
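(To put a rough number on that saving - my assumptions, not measurements: at ~120 V, an extra 1 amp is about 120 W. Run 24/7, that's roughly 120 W × 8,760 h ≈ 1,050 kWh per year, on the order of $100-$130/year at typical $0.10-$0.12/kWh rates.)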
 

OrangesOfCourse

New Member
May 15, 2013
Adam,

This is what I've been able to come up with. I was unable to find the cables on eBay, but maybe you can message the eBay seller to see if they can find them. The pages I refer to are from the Hardware Owner's Manual.

Differences between the PIC16 and PIC18 enclosures:

Fan Control Board - (PIC18 eBay link)
Power Distribution Boards - (PIC18 PDB eBay link)
Cables between the FCB and PDBs - (wasn't able to find on eBay):
  1. system fan board connectors - Page 138 - Figure 5-12 - #10 and #11 | Page 139 - Figure 5-13 - #2
  2. system fan board power connectors - Page 138 - Figure 5-12 - #5 and #8 | Page 139 - Figure 5-13 - #5
Fans - (PIC18 fans eBay link)

The midplane and backplane seem to be the same. There is a revision difference between the two backplanes (A02 vs A04), but all the other numbers seem to match. I can't tell any difference other than those listed. This was a quick poke into the systems; I really don't want to take them apart right now to document further, but if you have any specific questions I can dig deeper.

I hope this helps. Let me know if you need more information.
 

amlife

New Member
May 30, 2013
OrangesOfCourse said:
Adam,

This is what I've been able to come up with. I was unable to find the cables on eBay, but maybe you can message the eBay seller to see if they can find them. The pages I refer to are from the Hardware Owner's Manual.

Differences between the PIC16 and PIC18 enclosures:

Fan Control Board - (PIC18 eBay link)
Power Distribution Boards - (PIC18 PDB eBay link)
Cables between the FCB and PDBs - (wasn't able to find on eBay):
  1. system fan board connectors - Page 138 - Figure 5-12 - #10 and #11 | Page 139 - Figure 5-13 - #2
  2. system fan board power connectors - Page 138 - Figure 5-12 - #5 and #8 | Page 139 - Figure 5-13 - #5
Fans - (PIC18 fans eBay link)

The midplane and backplane seem to be the same. There is a revision difference between the two backplanes (A02 vs A04), but all the other numbers seem to match. I can't tell any difference other than those listed. This was a quick poke into the systems; I really don't want to take them apart right now to document further, but if you have any specific questions I can dig deeper.

I hope this helps. Let me know if you need more information.

I can't thank you enough for your detailed information. I actually just came back from the datacenter, and while I was there I monitored power usage for both nodes (one with the PIC18 and one with the PIC16) and found they are consuming about the same amount of power (each node draws about 1 amp).

So now I can say with confidence that while the PIC16 node runs its fans at 7,000 RPM, it does not consume enough extra power to require a change or upgrade; I think that is why Dell never released an FCB firmware upgrade for it. So unless you have other problems, there is no reason to upgrade. It's good the way it is.

I also think that if you do need to reduce fan speed, it would be better just to swap the fans for a different model that operates at a lower speed.

That was my conclusion on this issue, but in the future when buying a C6100 I will make sure I get the PIC18.

Thanks

Adam
 

OrangesOfCourse

New Member
May 15, 2013
I just ordered a pair of Samsung 840 Pros and was wondering what adapter you guys are using to make them work with the 3.5" hard drive trays?

Any help would be much appreciated.
 

gtallan

New Member
Apr 25, 2013
Minnesota
OrangesOfCourse said:
My problem was that it wouldn't ever slow the fans down. It would start at a high speed and stay there. Very loud. :| I tried all the different power management settings to no avail. Another problem was that it was not displaying anything for fan RPM under the server health section of IPMI. I talked to my eBay seller (pdneiman - amazing guy! Loads of help) and he sent me a new enclosure (no nodes or power supply) with the PIC18 board in it. That has solved all my problems. Now the fans go to full power at boot and then cycle down to about 3,600 RPM. I still have the old enclosure (need to ship it soon) and can document the changes between the two if you like. The only obvious differences are the PDBs and FCB.

I'm also looking at the San Ace fans, but for now the sound level is acceptable to me. I might do it in the near future anyway to cut down on the sound some more.

It does sound a bit like your PIC16 FCB was faulty rather than an inherent problem with that board. Totally agree about pdneiman too - super helpful. Glad you got the problem fixed. From your description, I would say my C6100 is working OK; the PIC16 FCB does report fan speeds etc. back to the BMC.
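For anyone who wants to check what their FCB is actually reporting, here's a minimal Python sketch that shells out to ipmitool and lists the fan sensor records (the BMC address and credentials below are placeholders, not real ones):

    import subprocess

    # Placeholders - substitute your node's BMC address and credentials.
    bmc_args = ["-I", "lanplus", "-H", "192.168.1.120", "-U", "root", "-P", "changeme"]

    # "sdr type Fan" asks the BMC for all of its fan sensor records.
    result = subprocess.run(
        ["ipmitool", *bmc_args, "sdr", "type", "Fan"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # one line per fan: name | id | status | entity | reading

If the FCB isn't reporting fan speeds (like my faulty one), the readings simply won't show up here.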

I seem to have fixed my sound issues for now by less technical means, involving 1.5 sheets of plywood and an old rack fan panel...

[photo of the plywood server cabinet]

I had been working on that enclosure for a while, but when I first heard the C6100 take off I thought it couldn't have any significant effect. Actually, it does... I can hear it when I'm in the same room, but not at all outside. It seems to be cooling OK so far too. I may revisit the San Ace fans to get the sound level lower still, but it's not so urgent now...
 

OrangesOfCourse

New Member
May 15, 2013
gtallan said:
It does sound a bit like your PIC16 FCB was faulty rather than an inherent problem with that board. Totally agree about pdneiman too - super helpful. Glad you got the problem fixed. From your description, I would say my C6100 is working OK; the PIC16 FCB does report fan speeds etc. back to the BMC.

I seem to have fixed my sound issues for now by less technical means, involving 1.5 sheets of plywood and an old rack fan panel...

I had been working on that enclosure for a while, but when I first heard the C6100 take off I thought it couldn't have any significant effect. Actually, it does... I can hear it when I'm in the same room, but not at all outside. It seems to be cooling OK so far too. I may revisit the San Ace fans to get the sound level lower still, but it's not so urgent now...

I was looking into something like that for myself. I'm thinking of putting my server in the attic, but with Texas heat that's impossible without an AC unit. I was looking to get a cheap portable AC unit and make an insulated rack to house the server in. I'll have to figure out what to do with it soon.

Is that an HP 1810-24G switch? How do you like it? I'm looking for a switch myself (I was already short on ports, and now with the C6100 I'm out), and the HP switch seemed like the best for the price.
 

amlife

New Member
May 30, 2013
gtallan said:
It does sound a bit like your PIC16 FCB was faulty rather than an inherent problem with that board. Totally agree about pdneiman too - super helpful. Glad you got the problem fixed. From your description, I would say my C6100 is working OK; the PIC16 FCB does report fan speeds etc. back to the BMC.

I seem to have fixed my sound issues for now by less technical means, involving 1.5 sheets of plywood and an old rack fan panel...

I had been working on that enclosure for a while, but when I first heard the C6100 take off I thought it couldn't have any significant effect. Actually, it does... I can hear it when I'm in the same room, but not at all outside. It seems to be cooling OK so far too. I may revisit the San Ace fans to get the sound level lower still, but it's not so urgent now...

Wow, your server will get baked before noon in that enclosure. I'm actually surprised to see people buy these servers to use at home.

I really think you should buy a portable quarter rack and hook your servers up to it. As for the noise, you can switch the fans or simply find a place to host it. Your server will not last with this kind of setup.
 

Dr_Drache

New Member
Jun 7, 2013
amlife said:
Wow, your server will get baked before noon in that enclosure. I'm actually surprised to see people buy these servers to use at home.

I really think you should buy a portable quarter rack and hook your servers up to it. As for the noise, you can switch the fans or simply find a place to host it. Your server will not last with this kind of setup.

I really wish I hadn't just read this. This is Serve The Home, and from my short time here, I've come to understand this isn't "colocate the home."
ANYONE who's used servers for any period of time understands they will take A LOT more abuse than most people give them credit for.
I have an IBM X3550 M1 that's been running since day 1 (well, honestly only 4-ish years) with no A/C, in a garage, where temps have ranged from -15°F to 90°F.

Airflow is very important, but to assume something is going to fry just because it might be hotter than your datacenter is asinine.
/rant
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I very much appreciate the need to quiet these c6100s, and good on you for building your own cabinetry - which is no easy task. That said, the cabinet has me worried. I see two small fans that are likely exhausting air, but I don't see a set of vents to let air in. Even if you do add another vent, the intake air for the server will be mixed with some of the output air, raising the intake temperature significantly. On a ninety degree day, the interior of the cabinet could easily reach 110-120 degrees.
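As a rule-of-thumb check (my assumed numbers, not measurements): temperature rise in °F ≈ 3.16 × watts / CFM, so a loaded C6100 drawing ~500 W with only ~50 CFM of cabinet exhaust would sit roughly 32°F above ambient - which on a 90°F day lands right in that 110-120°F range.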

Ignoring the other equipment for a minute, you could add a vertical baffle to separate the cabinet into left and right halves, with large vents in each half. The server fans would then be actively inhaling (relatively) cool air from the vents in the left half of the baffle, pushing it through the server, and then exhausting it through the vents in the right half. With a tight-fitting baffle separating the two halves, you might not even need any fans aside from those already in the server. You may also want to add an air filter on the intake side, something with plenty of surface area.

gtallan said:
It does sound a bit like your PIC16 FCB was faulty rather than an inherent problem with that board. Totally agree about pdneiman too - super helpful. Glad you got the problem fixed. From your description, I would say my C6100 is working OK; the PIC16 FCB does report fan speeds etc. back to the BMC.

I seem to have fixed my sound issues for now by less technical means, involving 1.5 sheets of plywood and an old rack fan panel...

I had been working on that enclosure for a while, but when I first heard the C6100 take off I thought it couldn't have any significant effect. Actually, it does... I can hear it when I'm in the same room, but not at all outside. It seems to be cooling OK so far too. I may revisit the San Ace fans to get the sound level lower still, but it's not so urgent now...
 

Patrick

Administrator
Staff member
Dec 21, 2010
First off, being from California and Silicon Valley, I am totally jealous of the fact that you have a big garage/basement space!

Second, I think dba is probably right. You may be able to get better airflow if you have both an intake and an exhaust. Servers are made to move a lot of air from cold aisles to hot aisles, so keeping that in mind may help.

Third, what HP switch is that, and what is the Cooler Master chassis below it?
 

swflmarco

Member
Mar 28, 2013
Fort Myers, FL USA
Have we had fun this week!
We received 2 chassis on Monday; both chassis had the release tabs on sleds #1 and #2 bent or broken off!
Replacements came in yesterday. I updated the BIOS and BMC, and one node after another "bricked" on the BMC updates; after several hours I gave up and restored a backup ROM.
 

Jaknell1011

New Member
May 14, 2013
gtallan said:
Drive activity LEDs without SGPIO

Like many here, I've been working on re-cabling a C6100 to have two nodes connected to 6 drives each. I have one node with the LSI 1068E mezzanine card and the other with the correct Dell cable to connect all 6 onboard SATA ports to the midplane.

Since the Dell-specific cables for this config from midplane to backplane (HJ6F0 and 334VV) are basically unavailable or obscenely overpriced, I got a couple of the Monoprice mini-SAS to 4x SATA cables referenced earlier in the thread (product ID 8186), but I was really unhappy with the idea of having no drive activity lights.

Well... on the backplane there is a jumper labelled "LED control". There's no description of what it does, and no apparent Google hits about it. On a whim, I enabled that jumper, and... I now have activity LEDs on all drives. It seems to work fine for drives connected to onboard SATA or to the LSI controller. Presumably it means the LED is controlled by the drive itself rather than by the controller...

Graham
I have the M1015 on order and it should be in this week. I found the jumper you referenced and have it enabled. Now I am trying to determine the best way to connect all the drives to one node, across the motherboard SATA ports and the M1015. I am thinking of connecting 6 drives from the backplane directly to the SATA ports on the motherboard, then using 8087 cables to connect the other 6 (3 per cable) to the M1015 ports (see the sketch after this list):

"Backplane Column 1" - 3 drives connected to M1015 port 1 using an 8087 breakout cable
"Backplane Column 2" - 3 drives connected to M1015 port 2 using an 8087 breakout cable
"Backplane Column 3" - 3 drives connected to SATA ports 1, 2, 3 on the node motherboard using SATA cables (recommendations...?)
"Backplane Column 4" - 3 drives connected to SATA ports 4, 5, 6 on the node motherboard using SATA cables (recommendations...?)

Does anyone know of a good cable, similar to the stock light blue Dell SATA cables, that I can use to connect the 6 SATA ports on the backplane directly to the ports on the node motherboard? I measured it out, and it seems they will need to be ~32 inches. I can find SATA cables, but none that are bundled nicely to fit easily. These cables look great, but I don't think they're long enough:

6J3R2 DELL CABLE SATA ASSEMBLY FOR DELL C6100

Any other ideas or thoughts? Does this setup check out OK as long as I get the 6-port SATA cable figured out? Thanks for the help. I'm trying to get into production in the next week or so.

*EDIT* Also, this should mean that the activity LEDs will be functional for all drives, correct? And do I lose anything by not using the SGPIO cables?
 

khvahik

New Member
May 23, 2013
Hi guys,
We have a small network in our office (21x Win 7 Pro PCs + 4x Win 2008 R2 servers, two of which are domain controllers). My boss wants to limit users (that's already done by group policy) and keep their files in a safe place (not on their desktop PCs). We tried Windows roaming profiles, but the performance was not good because our NAS drive is connected to the switch over 2x GBit LANs. All users use QuickBooks 2013 Enterprise edition and Microsoft Office 2010 (Word, Excel, and Outlook). We are thinking of setting up a remote desktop server and putting all the user profiles on it, but my concern is HDD read/write speed, because the average size of each user's .pst file is around 3-4 GB. I am thinking of buying a Dell C6100 with 4 nodes, changing the CPUs of the first node to 6-core Xeon L5639s and putting 48 GB of RAM in it to handle the remote desktop users, and setting up the other 3 nodes for Active Directory and a Linux firewall.
I have 2 questions:
1. Can the chassis handle different types of CPUs in different nodes?
2. Is there any suggestion for attaching the first 6 hard disks as RAID 5 to the first node, and splitting the remaining 6 hard disks across the other 3 nodes (2 each) as RAID 1?