Anyone familiar with recovering a bricked Dell PSU from a firmware update failure?


BLinux

cat lover server enthusiast
Jul 7, 2016
2,757
1,128
113
artofserver.com
I've had a stack of Dell servers sitting around for a few months that I really need to sell. This week I forced myself to move forward and start getting the servers ready for sale. I usually like to update all the BIOS/firmware, clean the servers, and test everything before I sell them. Well, when it comes to Dell PSU firmware, I guess "don't fix what isn't broken" really applies... apparently the firmware update is tricky and prone to bricking the PSUs. I've now got at least 2 bricked PSUs. I'm not going to do those updates anymore, but now I'm wondering if this can be fixed. I have a few other PSUs of the same model that work, and I'm wondering if I can read the firmware off the working units and flash it onto the non-working units. Has anyone attempted this before and know what's involved? I do have a USB->SPI/I2C adapter, but I'm not sure what hardware is in the PSU yet...
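(For anyone curious what the hardware route might look like: assuming the PSU's controller boots from an external 25-series SPI NOR flash - which I haven't verified, it could just as well be MCU-internal flash - a dump from a working unit with an FT2232H-based adapter might look like the sketch below. The chip size, the wiring, and the flash even being external are all assumptions.)

```python
# Hypothetical sketch: dump a 25-series SPI NOR flash from a *working* PSU
# board with an FT2232H-based USB adapter, using pyftdi. Assumes the PSU
# firmware lives on an external SPI flash (not inside the MCU), the chip is
# accessed safely out of circuit or via a test clip, and it speaks the
# standard 0x9F (JEDEC ID) / 0x03 (Read Data) commands.
from pyftdi.spi import SpiController

FLASH_SIZE = 1 * 1024 * 1024   # assumed 1 MiB; check the JEDEC ID first
CHUNK = 4096

spi = SpiController()
spi.configure('ftdi://ftdi:2232h/1')          # adjust to your adapter
flash = spi.get_port(cs=0, freq=1_000_000, mode=0)

# Read the JEDEC manufacturer/device ID to confirm we're talking to a chip
jedec = flash.exchange([0x9F], 3)
print('JEDEC ID:', jedec.hex())

with open('psu_flash_dump.bin', 'wb') as out:
    for addr in range(0, FLASH_SIZE, CHUNK):
        # 0x03 = Read Data, followed by a 24-bit address
        cmd = [0x03, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF]
        out.write(flash.exchange(cmd, CHUNK))
```

Even with a clean dump, writing it back only helps if the bricked unit's firmware is stored the same way, so treat this as exploratory.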

Just wanted to reach out to the collective at STH... if this is a futile effort, I won't waste my time on it. If no one has tried it, I may spend a little time to investigate.
 

NashBrydges

Member
Apr 30, 2015
86
24
8
59
I had the exact same outcome when trying to upgrade firmware on my Dell R620. Bricked both PSUs and nothing I did could revive them. Luckily it was still under warranty and Dell sent replacements. The moral of this story for me is that I don't do firmware upgrades on PSUs anymore.
 

John Piontkowski

New Member
Dec 1, 2019
8
4
3
Austin, TX
I've updated hundreds of those in R510s without an issue. I always run the update through the Lifecycle Controller. Not updating the firmware can cause the PSU to lose communication and fail to fail over to the 2nd PSU properly. You might try putting a "bricked" one in the 2nd slot with a working one in the 1st slot and running the update again. Follow the update instructions carefully: the box needs to reboot itself during the process, so don't reset it yourself.
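Seconding the "don't reset it yourself" point: the big failure mode seems to be interrupting the job. If you're tempted to intervene, a watchdog that just polls the update job until it reaches a terminal state helps you wait it out. A minimal sketch, assuming the iDRAC exposes the job as a standard Redfish Task under /redfish/v1/TaskService/Tasks/<JID>; the exact path and the JID format vary by iDRAC generation, so treat both as assumptions.

```python
# Hypothetical watchdog: poll an iDRAC firmware-update job until it finishes,
# so you never reset the box mid-update. Assumes the job shows up as a
# standard Redfish Task (path varies by iDRAC generation) and that you have
# the JID from the job queue. requests is the only dependency.
import time
import requests
import urllib3

urllib3.disable_warnings()                 # iDRAC certs are self-signed

IDRAC = 'https://192.168.0.120'            # your iDRAC IP (example)
JOB_ID = 'JID_540278145967'                # from the job queue (example JID)
AUTH = ('root', 'calvin')                  # default creds - change these

url = f'{IDRAC}/redfish/v1/TaskService/Tasks/{JOB_ID}'
while True:
    task = requests.get(url, auth=AUTH, verify=False).json()
    state = task.get('TaskState', 'Unknown')
    pct = task.get('PercentComplete', '?')
    print(f'{state}: {pct}%')
    if state in ('Completed', 'Exception', 'Killed', 'Cancelled'):
        break
    time.sleep(30)                         # be gentle with the BMC
```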
 

larryl79

New Member
Mar 26, 2024
7
0
1
I've got the same issue. PSU bricked and reports 0W output and an impossible fw version number, so there's still communication, but nothing wants to update the fw on it. Any ideas?
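That "still communicating but reporting garbage" state suggests the PMBus interface is alive even though the application firmware isn't. If you can get one on a bench with I2C access (say, through a USB-I2C adapter that shows up as a Linux i2c bus), you can poke it directly and see what the bootloader still answers. A minimal sketch with smbus2; the bus number and the 7-bit address are assumptions (scan with i2cdetect first), and a unit stuck in its bootloader may NAK everything.

```python
# Hypothetical bench probe: read a few standard PMBus registers from a PSU
# over I2C to see what it still answers. Bus number and device address are
# assumptions; find the real address with `i2cdetect` before trying this.
from smbus2 import SMBus

BUS = 1        # Linux i2c bus of your USB-I2C adapter (assumption)
ADDR = 0x58    # common PMBus PSU address, but scan to confirm (assumption)

def linear11(word):
    """Decode a PMBus LINEAR11 value: 11-bit signed mantissa, 5-bit signed exponent."""
    mant = word & 0x7FF
    if mant > 0x3FF:
        mant -= 0x800
    exp = (word >> 11) & 0x1F
    if exp > 0x0F:
        exp -= 0x20
    return mant * 2 ** exp

with SMBus(BUS) as bus:
    # MFR_MODEL (0x9A) is a block read; a live PMBus stack returns a string
    model = bytes(bus.read_block_data(ADDR, 0x9A))
    print('MFR_MODEL:', model)
    # READ_POUT (0x96) is LINEAR11; a bricked unit may report 0 W here
    pout = linear11(bus.read_word_data(ADDR, 0x96))
    print('READ_POUT: %.1f W' % pout)
```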
 

chuckachow

New Member
Apr 18, 2025
1
4
3
I just wanted to confirm that I was able to unbrick two PSUs that had failed during a firmware update:

Server: Dell VxRail E560F (which is a rebadged PowerEdge R640)
PSU: Liteon 1100W Model: L1100E-S1 PN: 0CMPGMA02 or 0CMPGM or CMPGM
Latest Firmware: 00.25.32 22 Nov 2022
Older Firmware: 00.23.32 22 Mar 2017

Always wait until the Job Queue shows 100% complete - failed is ok, just try again and again.

Method I used:
1) Use a good PSU in slot 2 and the faulty PSU in slot 1.
2) Download the firmware version that matches the good PSU - I used the 23.32 version because that's what my good PSU had.
3) Flash the firmware and wait until the Job Queue shows 100% complete. The faulty PSU should have a green light, but still not report its firmware correctly.
4) Run the firmware update again, and wait until the Job Queue shows 100% complete.
5) Update to the latest firmware if required - but only update one at a time. I have had a lot of success flashing these devices one at a time.

Keep trying/experimenting until you find a version that works - try the 32-bit and 64-bit versions too. In theory this shouldn't make a difference, but I feel like I had more success uploading the 32-bit exe, which should still carry the same firmware binary as the 64-bit exe.
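If you'd rather verify that "same firmware binary" hunch than trust it, you can extract both Dell Update Packages (e.g., with 7-Zip) and hash what's inside. A minimal stdlib sketch; the extraction step and the directory names are assumptions.

```python
# Hypothetical check: after extracting the 32-bit and 64-bit Dell Update
# Packages into two directories (e.g., with 7-Zip), hash every file in each
# and report which payloads match. Pure stdlib; directory names are examples.
import hashlib
from pathlib import Path

def hashes(root):
    """Map relative file path -> SHA-256 digest for everything under root."""
    out = {}
    for p in Path(root).rglob('*'):
        if p.is_file():
            out[p.relative_to(root)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return out

a = hashes('dup_32bit_extracted')
b = hashes('dup_64bit_extracted')

for name in sorted(set(a) | set(b)):
    if name not in a or name not in b:
        print(f'only in one package: {name}')
    elif a[name] == b[name]:
        print(f'identical payload:   {name}')
    else:
        print(f'DIFFERS:             {name}')
```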
 

fernandolcx

New Member
Sep 29, 2024
1
2
3
chuckachow said: (quoting the unbrick method above)
You just saved my ass.

Thank YOU so much!
 

xsmarty

New Member
Aug 2, 2025
1
2
3
+1 for ass saving. Took me many more trials and errors.

Slight variation for me:
After updating, both PSUs got bricked (the system powered down; iDRAC was showing 1% for 8h, then the machine powered down).
The iDRAC connection was lost after 8h, the server was off and couldn't be brought back up, and both plugged-in PSUs had blinking amber lights.
Swapped one PSU for a working one from another server and got iDRAC back. Amber blink on one, green on the other.


Logs for the failed firmware update
Maintenance > Lifecycle Log:
JCP042 Job JID_540278145967 failed because The Job was in running state and made no progress for the allocated time.

Logs for the blinking amber PSU
Error messages in Maintenance > Lifecycle Log:
PSU0915 The Power Supply Unit (PSU) 2 firmware is not responding.
PSU0003 The Power Supply Unit (PSU) 2 is not receiving input power because of issues in PSU or cable connections.

Error in System Power:
PS1 Status Presence Detected| Stuck in bootloader 1260 1100 1100 0.0.0 Wide Range AC
PS2 Status Presence Detected 1260 1100 1100 00.23.32 0CMPGMA03 Wide Range AC

--------------------------
Server: Dell 7920 Rack (~PowerEdge R740 with GPU support)
PSU: Liteon 1100W Model: ? PN: 0CMPGMA03
Latest Firmware: 00.25.32 22 Nov 2022
Older Firmware: 00.23.32 22 Mar 2017

PSU 1 & PSU 2 0CMPGMA03 got bricked when updating 00.23.32 -> 00.25.32 (server 1)
PSU 3 & PSU 4 0CMPGMA03 OK on 00.25.32 (server 2)

What worked for me:
0) PSUs in redundant mode
1) Good PSU3 in slot 1, faulty PSU2 in slot 2 (or the other way around... who knows)
2) Apply the downgrade firmware 00.23.32 via iDRAC9 (using the same version as the last working firmware on PS1)
3) Flash the firmware, and wait until the Job Queue shows 100% complete.
The faulty PSU should have a green light, but still not report its firmware correctly.
SUP0538 Unable to update [PSU-2] , 00.00.00, .
SUP0536 Successfully updated [PSU-1] PWR SPLY,1100W,RDNT,LTON, 00.23.32, A03.
4) Flash again
Dashboard -> Recent logs:
The input power for power supply 2 has been restored. Mon Aug 04 2025 16:40:51
The issue in the Power Supply Unit (PSU) 2 because of the stuck in the bootloader is now resolved. Mon Aug 04 2025 16:40:51
The power supplies are redundant. Mon Aug 04 2025 16:40:47
The Power Supply Unit (PSU) 2 is not receiving input power because of issues in PSU or cable connections. Mon Aug 04 2025 16:17:48
The Power Supply Unit (PSU) 2 firmware is not responding. Mon Aug 04 2025 16:17:48
Power supply redundancy is lost. Mon Aug 04 2025 16:17:48
The power supplies are redundant. Mon Aug 04 2025 16:08:28
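Side note: since debugging this is mostly staring at Lifecycle Log timelines like the one above, a tiny filter that pulls out only the PSU- and update-related entries makes comparing attempts easier. A minimal sketch, assuming you've exported the log to a plain-text file; the message-ID prefixes (PSU, SUP, JCP) are just the ones seen in this thread.

```python
# Hypothetical helper: filter an exported iDRAC Lifecycle Log (plain text)
# down to the PSU/update message IDs seen in this thread, preserving order.
# The export format is an assumption; adjust the prefixes to taste.
import re
import sys

# Message-ID families observed above: PSU health, update results, job status
PREFIXES = ('PSU', 'SUP', 'JCP')
pattern = re.compile(r'\b(' + '|'.join(PREFIXES) + r')\d{4}\b')

with open(sys.argv[1]) as log:
    for line in log:
        if pattern.search(line):
            print(line.rstrip())
```

Run it as: python psu_log_filter.py lifecycle_log.txt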
 

asg2ki

New Member
Jan 7, 2026
1
1
1
+1 more for ass saving. I also had to experiment a bit until I got the PSUs back to normal, but long story short: I was upgrading all my PSUs from 00.23.32 to 00.25.32 across six R740xd servers. Five of them worked out without any issues, and only the last one got stuck in bootloader mode on both of its PSUs. Downgrading the PSUs back to 00.23.32 revived them.

Longer explanation... Since the last server got completely stuck with no power, I had to swap in a PSU from another server to get iDRAC back to an operational state. Once iDRAC was available, I tried to reapply the 00.25.32 firmware to the box several times, but unfortunately the faulty PSU didn't want to come back online even though the update job ran to 100% without indicating any issues. At that point the faulty PSU was showing a steady green light at the back, but the iDRAC status page still showed it as stuck in bootloader mode. I tried reseating the faulty PSU (taking it out, waiting until the green light vanished, and putting it back in), but once it was back in its cage the blinking amber light returned, so it was still stuck and inoperable. I tried applying the 00.25.32 firmware a few more times, but the result was always the same.

So as the next step I took the downgrade approach to 00.23.32. At that stage I still had one faulty PSU and another one that was already upgraded to 00.25.32. Once I uploaded the older version of the firmware, the healthy PSU was successfully downgraded while the faulty one was successfully reset and finally came back online. So at that point I had both PSUs on 00.23.32 and fully operational, but of course I still had to fix the second PSU that had previously failed the upgrade and was still stuck in bootloader mode.

I took the now-revived PSU out of its slot and placed the faulty one in its place. I did the same "downgrade" procedure which, although indicated as 100% successful, left the PSU still stuck in bootloader mode; however, this time the status message in iDRAC was a bit different (sorry, I didn't take a note of it) besides showing the bootloader status. So I decided to reapply 00.23.32 once again (as per the previous suggestions in this thread), and voila, the faulty PSU finally got revived.

Now I had 3 PSUs out of 12 that were still on version 00.23.32, so as the very next step I retried upgrading them, but this time one by one. The upgrades to 00.25.32 went just fine.

So lessons learnt:
* Make sure to upgrade PSUs one by one instead of in bulk (see the sketch after this list). This bulk-upgrade behavior might be one of the main reasons why the PSU firmware package is never made available through Dell OME and its catalogs but is kept as a standalone package, so that you don't take an entire server farm down should something go wrong. The latter is just an assumption on my end.
* If you get stuck upgrading from 00.23.32 to 00.25.32, make sure you still have at least one PSU with the old version, so that you can at least start the server's iDRAC and reapply the old package for reviving purposes. A working PSU with a different firmware version should also work (like in my case with 00.25.32), but keep in mind that the revival will change the firmware version on that PSU too.
* Reapplying the 00.25.32 firmware multiple times over a bricked PSU doesn't seem to have any effect, but 00.23.32 seems to have something different in its logic, as it was able to flip the necessary bits to get out of the stuck-bootloader situation. For the record, I tried about 20 times to reapply 00.25.32 over a faulty PSU and it didn't help at all, while the 00.23.32 downgrade revived it on the very first try. This may of course vary, but at least we know 00.23.32 is capable of resolving the situation within a few retries (how many exactly is a very good question that may never get an answer).
* While I haven't tested this myself, nor can I provide any facts or confirmation, it might be worth starting an upgrade to 00.25.32 by applying/re-applying the 00.23.32 package to all PSUs, including those already on that version and pending the upgrade. I'm not sure whether that would "reset" anything in PSUs that might otherwise fail the 00.25.32 upgrade, but it might be worth considering as a preliminary step. I can only say that once I revived the faulty PSUs, the subsequent upgrades to the new version went through on the very first try without any issues. I'm not sure if the stuck-in-bootloader situation is specifically triggered by upgrading both PSUs at the same time, but I wouldn't be surprised if that's the case. As stated already, avoid bulk PSU upgrades and plan for individual upgrades, even though it means more time and effort.
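For the one-by-one approach across several boxes, something like the sketch below keeps you honest: push the package to one iDRAC, wait for its job queue to go idle, check the PSU, then move on. A minimal sketch that shells out to racadm; `racadm update -f` and `racadm jobqueue view` are real racadm commands, but the crude output check, the host list, and the package name are all assumptions to adapt.

```python
# Hypothetical fleet helper: apply a PSU firmware DUP to one iDRAC at a
# time and wait for its job queue to go idle before touching the next box.
# `racadm update -f` and `racadm jobqueue view` are real racadm commands;
# the 'Running' substring check below is a crude assumption - adjust it
# for your racadm version's output.
import subprocess
import time

HOSTS = ['192.168.0.120', '192.168.0.121']   # your iDRAC IPs (examples)
USER, PASS = 'root', 'calvin'                # change these
PACKAGE = 'PSU_FW_00.23.32.EXE'              # the DUP to apply (example name)

def racadm(host, *args):
    cmd = ['racadm', '-r', host, '-u', USER, '-p', PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

for host in HOSTS:
    print(f'=== {host}: applying {PACKAGE}')
    print(racadm(host, 'update', '-f', PACKAGE))
    # Wait until no job reports Running; never power-cycle during this
    while 'Running' in racadm(host, 'jobqueue', 'view'):
        time.sleep(60)
    print(f'=== {host}: job queue idle, check PSU status before moving on')
```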

Special thanks to all who commented here with their solutions before me. Those pointers were extremely helpful and useful.
 

mostlycorn

New Member
Jan 27, 2026
6
0
1
Thanks for this! Ultimately, I was able to restore two PSUs.

I had initially messed them up during a failed upgrade from 00.23.32 to 00.25.32.

What didn’t work -- Using a good PSU running version 00.23.32, I attempted to fix a busted one by installing both, booting to Lifecycle Controller, then applying 00.23.32. This appeared to succeed, and the busted PSU went from blinking amber to solid green. However, in Lifecycle Controller its version continued to read as 00.00.00. I repeated this process several times; no matter how many times I reapplied 00.23.32, I could not get the busted PSU back to a working state. Upon disconnecting power and reconnecting it, the busted PSU would revert to blinking amber and show as “stuck in bootloader”.

What did work -- Same as what didn’t, but instead of trying to apply 00.23.32 over and over, I applied 00.25.32 and it worked the first time.
 

Fritz

Well-Known Member
Apr 6, 2015
3,689
1,642
113
71
This thread reminds me of an ordeal I just went through with a server I bought on eBay. It had a SM X10SRM-CF MB with a proprietary BIOS, complete with the company's logo on the BIOS splash screen. I had to jump through hoops to get the very old BIOS updated to the latest SM version. It all has to be done in the proper order; do it wrong and it's no joy. The original BIOS was programmed to only use 4 cores no matter how many the CPU had. I'd never seen this before, but upgrading the BIOS and BMC to the latest made the problem go away.
I also recently bought a WattBox locked with an unknown user/password. Everything I found said only the customer it was registered to could change the password, and then only with Snap One's assistance. Then I stumbled upon a command that revealed the user/password and so was able to get in and change it. I would never have guessed the password because it wasn't a word, it was gibberish. Don't ya just love solving riddles like this?