Does anyone know what the PEF Action sensor/alert is in the BMC? Sometimes when I boot the system, it immediately has a status of "PEF Action" and all the lights (even on powered off nodes) flash amber (running nodes alternate).
Occasionally, when I fire a node up, this does not happen and the lights stay green for several minutes; then the PEF Action sensor switches from "Normal" to "PEF Action" and the lights go amber again.
Any ideas? Google has failed me (Silly Google thinks PEF means PDF, even with -pdf) and the manual doesn't offer that much information, unless I missed it.
PEF Action refers to a "platform event filter" being triggered. It is a core concept of IPMI-based management. See pages 65ff of this document for more information.

In the case of the C6100 it means one or more BMC alarms have been triggered, and the flashing yellow light on the node indicates which node raised the alarm. Usually this is a sensor alarm. Log into the BMC and look at the "Event Log" under the "Server Health" tab. For a more real-time look with specific data, open the "Sensor Readings" tab and find the sensor with an abnormal status.

Thanks for the quick reply! I checked the sensor tab and two strange things stand out: ACPI Pwr State is "Legacy ON State", and "PSU 1 AC Status" and "PSU 1 Present" are both "Not Available". (Link to screenshot)
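For anyone who would rather pull the Event Log and sensor readings without the web UI, here is a minimal sketch using ipmitool over the network from another machine. The BMC address and credentials are placeholders, and it assumes ipmitool is installed; treat it as a starting point, not anything C6100-specific.

    import subprocess

    BMC_HOST = "192.168.0.120"   # placeholder BMC address
    BMC_USER = "root"            # placeholder credentials
    BMC_PASS = "changeme"

    def ipmi(*args):
        """Run one ipmitool command against the BMC over lanplus and return its output."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
               "-U", BMC_USER, "-P", BMC_PASS, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Same data as the "Event Log" page: the System Event Log, one event per line.
    print(ipmi("sel", "elist"))

    # Same data as the "Sensor Readings" page. Columns are pipe-separated and the
    # fourth column is the status, so flag anything that isn't "ok" or "ns".
    for line in ipmi("sensor").splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) > 3 and fields[3] not in ("ok", "ns"):
            print("abnormal:", line)

ipmitool's "sel elist" and "sensor" subcommands are standard; whether the C6100 BMC exposes the PEF Action status itself through them is something I haven't verified.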
You have to populate both CPUs to use all 12 DIMM slots in a node. With only one CPU you can only use 6 DIMMs.
Max is 12x 8GB per node when using X55xx or L55xx CPUs.
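Just to make the arithmetic explicit, here is a tiny helper that encodes the rule above (6 usable DIMM slots per installed CPU, 8GB max per DIMM on X55xx/L55xx). It is only as accurate as that rule; the report further down suggests larger DIMMs can work too.

    def max_memory_gb(cpus_installed, dimm_size_gb=8):
        """Per-node memory ceiling given the population rule quoted above."""
        usable_slots = 6 * cpus_installed       # 6 slots with one CPU, 12 with two
        return usable_slots * dimm_size_gb

    print(max_memory_gb(1))   # 48 (GB) with a single CPU and 8GB DIMMs
    print(max_memory_gb(2))   # 96 (GB) with both CPUs populated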
I think you need to double-check that you actually see and can use all of the memory you have plugged in. You likely have a system that POSTs and boots but only actually "sees" 48GB of the 128GB you've got installed.

FWIW I was able to use 8x 16GB with a single L5520 CPU. I haven't yet tried two 5520s but I'll report the results here if/when I do. This was an XS23-TY3.
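If you want a quick sanity check of what the OS actually sees (as opposed to what POST claims), a minimal Linux-only sketch is below; it just reads MemTotal from /proc/meminfo, which will come in a little under the installed total because of memory reserved by the kernel and BIOS. The 128 figure is only an example.

    def visible_memory_gb():
        """Return MemTotal from /proc/meminfo in GB (Linux only)."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / (1024 * 1024)   # value is in kB
        raise RuntimeError("MemTotal not found")

    installed_gb = 128   # what you physically installed (example value)
    print(f"OS sees {visible_memory_gb():.1f} GB of {installed_gb} GB installed")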
So no onboard internal USB ports or headers, the only option is soldering our own? Kind of a bummer there. I'm really not too keen on experimenting with soldering on one of these right out of the gate. Oh well.

You found it - the C6100 extender board USB thread above has the information that you need.
The suspense is killing me, but your pic is dead...

No need to mess with the motherboard - there is a connector already in place. The only soldering you'll need to do is one cable. See post number twenty four by chuckleb.
I have 12x 16GB of dual-rank memory in a set of C6100s and XS23-TY3s, and they all run at 1333 MHz. The memory isn't hard to find at all, it just costs more.

If you use more than 3 DIMMs per CPU and the DIMMs are quad-rank (most RDIMMs are), then memory speed will be reduced to DDR3-800. Some people have reported that using dual-rank RDIMMs avoids this speed downgrade, but dual-rank modules are hard to find at higher capacities.
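To check what rank your DIMMs actually are and what speed they are being clocked at, dmidecode can report both. A rough sketch (needs root; the exact field name for the configured speed varies between dmidecode versions, hence the loose match):

    import subprocess

    out = subprocess.run(["dmidecode", "-t", "memory"],
                         capture_output=True, text=True, check=True).stdout

    # Print the rank and configured speed of each populated DIMM. Older dmidecode
    # calls the speed field "Configured Clock Speed", newer ones "Configured Memory Speed".
    for line in out.splitlines():
        line = line.strip()
        if line.startswith(("Rank:", "Configured")):
            print(line)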
Ah, thank you. Very helpful!

Drive activity LEDs without SGPIO
Like many here I've been working on re-cabling a C6100 to have two nodes connected to 6 drives each. I have one node with the LSI 1068E mezzanine card and the other with the correct Dell cable to connect all 6 on-board SATA ports to the midplane.
Since the Dell-specific cables for this config from midplane to backplane (HJ6F0 and 334VV) are basically unavailable or obscenely overpriced, I got a couple of the Monoprice mini-SAS to 4x SATA cables referenced earlier in the thread (product ID 8186), but I was really unhappy with the idea of having no drive activity lights.
Well... on the backplane there is a jumper labelled "LED control". There's no description of what it does, and no apparent Google hits about it. On a whim, I enabled that jumper, and... I now have activity LEDs on all drives. It seems to work fine for drives connected to either the onboard SATA ports or the LSI controller. Presumably it means the LED is driven by the drive itself rather than the controller...
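As a side note, if you want to confirm which bay a given device maps to now that the activity LEDs work, one crude but harmless way is to generate read traffic on just that drive and watch which light blinks. A minimal sketch (needs root; the device path is just an example):

    import time

    DEVICE = "/dev/sdb"   # example device - point this at the drive you want to identify

    # Read forward through the disk for ~10 seconds. Reads are non-destructive and
    # each pass touches new sectors, so that drive's activity LED should blink.
    with open(DEVICE, "rb", buffering=0) as disk:
        deadline = time.time() + 10
        while time.time() < deadline:
            if not disk.read(1024 * 1024):   # next 1 MiB; empty read means end of disk
                disk.seek(0)
            time.sleep(0.2)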
Maybe this is widely known already, but I hadn't seen anything about it.
I *had* hoped to re-use the standard Dell mini-SAS to 3x SATA cable for drives 5-6 on each node, but it's really too short to route sensibly. Probably better just to get a couple more of the 4-way Monoprice cables for this purpose.
Hope someone finds this useful.
Graham