[Linux] [perccli?] Dell H330: how to expose drives in a Hardware RAID to the operating system?


vovanx

New Member
Oct 3, 2021
TL;DR:

to switch the drive into the JBOD mode:

0) make sure the megaraid_sas kernel module is loaded and libncurses5 is installed (it is safe to symlink libncurses.so.6 or libncursessw.so.6 to libncurses.so.5)
1) download this ZIP archive and unpack the RPM file inside it into the /opt/ directory: https://docs.broadcom.com/docs-and-...-controllers-common-files/8-07-14_MegaCLI.zip
2)
Code:
cd /opt/MegaRAID/MegaCli/
./MegaCli64 -PDList -aALL      ####  note the IDs of enclosure, target drive, and controller
./MegaCli64 -PDOffline -PhysDrv[EEE:DDD] -aCCC #### where "EEE" is "enclosure ID", "DDD" is "drive ID", "CCC" is "controller ID"
./MegaCli64 -PDMarkMissing -PhysDrv [EEE:DDD] -aCCC
./MegaCli64 -AdpSetProp -EnableJBOD -1 -aCCC
./MegaCli64 -PDMakeJBOD -PhysDrv[EEE:DDD] -aCCC
./MegaCli64 -AdpAutoRbld -Dsbl -aCCC
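
to put the drive back into the array afterwards, a sketch based on the commands I ended up using further down in this thread (the array and row numbers are whatever -PDList / -LDInfo reported for your layout):

Code:
./MegaCli64 -PDMakeGood -PhysDrv[EEE:DDD] -Force -aCCC          #### back to "Unconfigured Good"
./MegaCli64 -PdReplaceMissing -PhysDrv [EEE:DDD] -ArrayA -rowR -aCCC   #### "A"/"R" = array/row the drive came from
./MegaCli64 -PDRbld -Start -PhysDrv [EEE:DDD] -aCCC             #### start the rebuild
./MegaCli64 -AdpAutoRbld -Enbl -aCCC                            #### re-enable automatic rebuild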

to update the drive firmware through the PERC card:

0) disconnect the drive from the PERC card and connect it directly to a SAS/SATA port on the motherboard.



------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


Hello,

As you might already know, the Samsung 870 EVO drives have a critical flaw that destroys data on the SSD after several months of usage.
( Samsung 870 EVO - Beware, certain batches prone to failure! | Careful: Samsung EVO 4TB SSD (high failure rates) | https://www.reddit.com/r/synology/comments/tueopq )

I have these drives in a hardware RAID built with the H330 RAID controller, so the drives are not presented to the operating system as separate devices like "/dev/sda", "/dev/sdb", sdc, sdd, etc., but as a single device "/dev/sda". For example, to see a S.M.A.R.T. report I have to pass the "-d megaraid,N" option to smartctl, where N is the drive number, e.g. "smartctl -a /dev/sda -d megaraid,5".
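For example, a quick loop over the megaraid device numbers to list each drive's serial number and firmware version (a sketch; the 0-7 range is just a guess, adjust it and /dev/sda to your setup):

Code:
for N in 0 1 2 3 4 5 6 7; do
    echo "#### megaraid device $N"
    smartctl -i /dev/sda -d megaraid,$N | grep -E 'Serial Number|Firmware Version'
done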
As a result, the Samsung firmware updater program does not detect these drives.

Is there any way to make the controller "pass" the devices to the operating system as separate entities rather than a single one?
Something like switching the controller into HBA mode, but without destroying the hardware RAID array?
Maybe there are some perccli commands to achieve this? For example, I've found these commands in the perccli manual:

"perccli /cX set expose" "Enables device drivers to expose enclosure devices"
"perccli /cX set exposeencldevice" "Enables/disables device drivers to expose enclosure devices; for example, expanders, SEPs.

- are these options what I am looking for? Will they expose the drives to the operating system as separate devices?


At first I thought I could simply reboot the server, enter the RAID controller configuration utility, set all drives to HBA mode instead of RAID mode, boot Samsung's LiveCD to update the firmware, then reboot again and switch back to RAID mode.
But from a bit of googling it seems that switching a drive from RAID mode into HBA mode destroys the data on it, even if I switch back to RAID mode later. Is that right?

Unfortunately taking the drives out of the server and updating the firmware by connecting them to another PC is not an option, because the server is hosted in a datacenter in a different city, so everything has to be done via iDRAC.




> Link to Samsung update?

full page: Samsung Magician & SSD Tools & Software Update | Samsung Semiconductor Global
direct link: https://semiconductor.samsung.com/resources/software-resources/Samsung_SSD_870_EVO_SVT02B6Q_Win.iso

It is a LiveCD ISO, so it requires rebooting the server to update the firmware. However, it is possible to extract the firmware image and the updater program and run the update from the running operating system:

Code:
mkdir -p /dev/shm/iso
mount -o loop /path/to/Samsung_SSD_870_EVO_SVT02B6Q_Win.iso /dev/shm/iso/
mkdir -p /dev/shm/extract;
cd /dev/shm/extract
cp /dev/shm/iso/initrd /dev/shm/extract/initrd.gz
gzip -d initrd.gz
cpio -i --make-directories --no-absolute-filenames < ./initrd
cp -r root/fumagician/ /dev/shm/
ls /dev/shm/fumagician/
# DSRD.enc  fumagician  fumagician.sh  SVT02B6Q.enc
Where "fumagician" is the Samsung firmware updater program.

Since the drives sit behind the hardware RAID, the "fumagician" program reports that it cannot detect them.
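For illustration, only the controller's virtual drive is visible to the OS, which is easy to see with something like (just a sketch of what I mean):

Code:
#### only the PERC virtual disk shows up here, not the individual SSDs
lsblk -d -o NAME,MODEL,SERIAL,SIZE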



> This may help you Dell EMC PowerEdge RAID Controller Command Line Interface Reference Guide


As far as I understand, I need to do the following:

1) "perccli ... set good force" to make one of the drives "unconfigured good"
2) "perccli ... set jbod" to convert that drive to "JBOD" / "HBA mode" which will expose the drive to the operating system
3) update the firmware with the "fumagician" program
4) "perccli ... set good" to convert the "JBOD mode" drive back to "RAID mode"
5) "perccli ... start rebuild" to rebuild the RAID array
6) wait until RAID rebuilds, then repeat the sequence for all remaining drives.
Is that right?
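Spelled out in perccli syntax, I guess that sequence would look roughly like this ("E"/"S" are placeholder enclosure/slot IDs and controller 0 is assumed; I have not run these yet):

Code:
perccli /c0/eE/sS set good force     #### 1) "unconfigured good"
perccli /c0/eE/sS set jbod           #### 2) expose the drive to the OS
#### 3) update the firmware with fumagician
perccli /c0/eE/sS set good           #### 4) back to RAID-capable state
perccli /c0/eE/sS start rebuild      #### 5) rebuild the array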

Most importantly - will setting the JBOD mode destroy the data on the drive?

Rebuilding the RAID after updating the firmware on each drive in turn would take an extremely long time, so I would prefer to set JBOD mode on all drives at once, update the firmware on all of them at the same time, switch them all back to RAID mode, and rebuild the RAID only once.

But if setting JBOD mode destroys the RAID metadata on the drive, this will not work - I will have to update the firmware on each drive separately and wait for the RAID to rebuild each time.



# copy of this thread on Dell forum: [Linux] [perccli?] How to expose drives in a Hardware RAID to the operating system?
 

nabsltd

Active Member
Jan 26, 2022
Most importantly - will setting the JBOD mode destroy the data on the drive?
The first command in your list will destroy the data on that physical drive, but that's what you want and need to do.

The first changes the mode of the drive from "in an array" to "not in an array, but could be used in an array". This changes the signature on the drive in such a way that you can't recover the data on that drive. The second command changes it to "can't be used in an array".

But after you update the firmware on that physical drive, you change it back so it can be used in an array again and rebuild the array, so you will not lose any data from the array.
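If it helps, you can watch the drive state change at each of those steps with something like this (controller 0 assumed):

Code:
#### shows every physical drive with its state (Onln / UGood / Offln / Msng / JBOD)
perccli /c0/eall/sall show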
 

vovanx

New Member
Oct 3, 2021
I have found out how to set JBOD mode on individual drives using the standard LSI kernel driver and CLI utility (megaraid_sas + MegaCli64):

1) note down the ID numbers of the "adapter", "enclosure device", drives, and the serial numbers of the drives:

Code:
# ./MegaCli64 -PDList -aALL
In my case they were like this:

Code:
Adapter #0
Enclosure Device ID: 32

Slot Number: 2
Firmware state: Online, Spun Up
Inquiry Data: blablaSERIAL111     Samsung MODEL                SVT01B6Q

Slot Number: 3
Firmware state: Online, Spun Up
Inquiry Data: blablaSERIAL222     Samsung MODEL                SVT01B6Q
and so on.

SVT01B6Q is the problematic 870 EVO firmware that you must update to version SVT02B6Q.

Then note down the information about the Logical Volumes in adapter ID 0:

Code:
# ./MegaCli64 -LDInfo -LALL -a0
In my case the target RAID array looked like this:

Code:
Virtual Drive: 1 (Target Id: 1)
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
State               : Optimal

2) enable the JBOD functionality on the controller. This should not destroy the existing Logical Volumes; at least it did not for me or for two other people I found via Google.

Code:
# ./MegaCli64 -AdpSetProp -EnableJBOD -1 -a0

Adapter 0: Set JBOD to Enable success.

Exit Code: 0x00
3) try to set JBOD mode on an individual device (in my case, the device at enclosure ID 32, slot ID 2):

Code:
# ./MegaCli64 -PDMakeJBOD -PhysDrv[32:2] -a0

Adapter: 0: Failed to change PD state at EnclId-32 SlotId-2.

Exit Code: 0x01
4) google the error and find out that the controller will not allow setting JBOD mode on a "good" drive that belongs to a logical volume. So you need to make it a "bad condition" drive: first set the drive to the "Offline" state and then to the "Missing" state.

Code:
# ./MegaCli64 -PDOffline -PhysDrv[32:2] -a0

Adapter: 0: EnclId-32 SlotId-2 state changed to OffLine.

Exit Code: 0x00

# ./MegaCli64 -PDMarkMissing -PhysDrv [32:2] -a0

EnclId-32 SlotId-2 is marked Missing.

Exit Code: 0x00

# ./MegaCli64 -PDMakeJBOD -PhysDrv[32:2] -a0

Adapter: 0: EnclId-32 SlotId-2 state changed to JBOD.

Exit Code: 0x00
If you check the Logical Volume you will see that its status is now "Degraded":

Code:
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 1 (Target Id: 1)
Name ...
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
...
State               : Degraded

5) check "dmesg | tail" to see if a new drive has appeared. If it has not, run the "partprobe" command.
In my case the system reported a new SATA device "/dev/sdc", and I verified it was the correct drive by running "smartctl -a /dev/sdc" and checking the drive's serial number against the one noted in step 1.
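Roughly like this (assuming the new device came up as /dev/sdc):

Code:
dmesg | tail      #### look for the newly attached SATA disk
partprobe         #### if nothing appeared, force a rescan
smartctl -i /dev/sdc | grep -E 'Serial Number|Firmware Version'   #### compare with the serial from step 1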

6) run the extracted "fumagician" program to update the firmware:

Code:
cd /path/to/fumagician/
./fumagician
...and in my case it did not succeed:

Code:
  ERROR (21)

  Firmware Files Required For the Firmware Update Process
  are Not Located in the Correct Path!!!
The "fumagician" program correctly detects the drive, its serial number and firmware version, but "could not" update it due to some error, with no debugging information given. Google shows only 1 (ONE) result for this error message, and that person on the linuxquestions forum "resolved" the problem by disconnecting the drive and connecting it to a different computer, which is not possible in my case.

I do have all the prerequisites installed, such as "gzip", "unzip", etc.
I have tried moving the "fumagician" folder to "/root" where it resides on the LiveCD, still no success.
I have even tried chrooting inside the LiveCD:

Code:
for i in /dev /sys /proc /run; do mount --bind $i /dev/shm/extract/$i; done
chroot /dev/shm/extract/ /bin/sh
cd root/fumagician
./fumagician
but it still gave the same error.

I have run "fumagician" under "strace" and scrolled through the strace log, but did not spot any problems. It seems to just throw that error for no apparent reason.
The only strange thing I've found is the odd permissions of the "decrypted" firmware file "SVT02B6Q.bin" created by "fumagician":

Code:
#### fumagician asks if I want to update:
# ls -l /tmp/3A152894-1453-466D-AB4E-CB8D6DAD290A/
-rw-r--r--  1 root root     176 May 26 23:57 DSRD.enc
-rw-r--r--  1 root root 2621472 May 26 23:57 SVT02B6Q.enc
#### I've pressed "Y", Enter. fumagician extracted the firmware prior to throwing the error:
# ls -l /tmp/3A152894-1453-466D-AB4E-CB8D6DAD290A/
-rw-r--r--  1 root root     176 May 27 00:00 DSRD.enc
--w----r-T  1 root root 2621440 May 27 00:00 SVT02B6Q.bin
-rw-r--r--  1 root root 2621472 May 27 00:00 SVT02B6Q.enc
#### note that if you "press any key to exit" the "fumagician" will remove that directory, so copy the extracted firmware if you need it, before exiting from fumagician.
Even though the root user should have read access to any file regardless of whether it has the "r" permission, I still tried to race against fumagician with "while true; do chmod +r /tmp/3A152894-1453-466D-AB4E-CB8D6DAD290A/SVT02B6Q.bin; done", but still did not succeed - I got the same error message.


Well, now to the last step:

7) "replace" the "missing" drive by setting its state to "unconfigured good" and telling the controller to rebuild the RAID array "Virtual Drive 1":

Code:
# ./MegaCli64 -PDMakeGood -PhysDrv[32:2] -Force -a0

Adapter: 0: EnclId-32 SlotId-2 state changed to Unconfigured-Good.

Exit Code: 0x00

# ./MegaCli64 -PdReplaceMissing -PhysDrv [32:2] -Array1 -row0 -a0

Adapter: 0: Missing PD at Array 1, Row 0 is replaced.

Exit Code: 0x00

# ./MegaCli64 -PDRbld -Start -PhysDrv [32:2] -a0

Started rebuild progress on device(Encl-32 Slot-2)

Exit Code: 0x00
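
To check how the rebuild is going later, something like this should work (same enclosure/slot/adapter as above):

Code:
# ./MegaCli64 -PDRbld -ShowProg -PhysDrv [32:2] -a0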


-------------------------------------------------------------------------------

At first I hoped to update the firmware on all drives from the host OS at once, rebuild the RAID array only once, and reboot the server only once.
Then I realized I would have to rebuild the RAID array four times - once for each of the four drives - but I still hoped to reboot the server only once.
But now it seems that I have to:

1) set the JBOD mode for one drive
2) reboot the server into Samsung's LiveCD and update the firmware from there instead of running the "fumagician" program from inside my host OS. This step includes disabling Secure Boot, changing the boot mode from UEFI to "Legacy/CSM", etc.
3) boot back into the host OS and rebuild the RAID array (re-enabling UEFI and Secure Boot)
4) set the JBOD mode for the second drive, ..., repeat for all drives.

...which will take forever. I need to mentally prepare for this.
I have to do it anyway because data safety is more important than server uptime. I will report later how it worked out.
or not :)
 

oneplane

Well-Known Member
Jul 23, 2021
You can do this in a completely different way if you have a second system:

1. Switch off your server
2. Note which drive is in which slot (most RAID controllers live in the 90's)
3. Take each drive out (in turn if you want to) and plug it into a second system where you run Samsung's updater
4. Once you have updated all drives and they are back in their positions, apply power to your server again and boot as normal

If you don't have a second system: you are screwed. This might be a good moment to reconsider hardware RAID and switch to ZFS, since you won't be able to do this in a reasonable way without simply transferring the data off the array and copying it back. Also, rebuilds are hard on consumer SSDs.
 

vovanx

New Member
Oct 3, 2021
Alas,
Unfortunately taking the drives out of the server and updating the firmware by connecting them to another PC is not an option, because the server is hosted in a datacenter in a different city, so everything has to be done via iDRAC.
 

oneplane

Well-Known Member
Jul 23, 2021
Bummer, I completely missed that part.

That means you have two options remaining:

- Somehow have more storage added or have the array reduced in size by 50%; put the 'freed up' disks in a JBOD & upgrade
- Copy the data to something else, JBOD everything, upgrade and ZFS it (or back to hardware raid :( )

Technically, it should be possible to just export the current RAID controller configuration, have it detach the disks and JBOD them, and after upgrading load the configuration back in. But with the 90's in mind, I doubt the RAID controller can do that. The stupid thing about it is that even without all of this a RAID controller could just be an ATA/SCSI filter that embeds a DCO and still exposes the rest of the drives to the OS for management, but no.
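With MegaCli that would be roughly something like the following, if the H330 even supports it (untested, and I would not count on the restore re-attaching JBOD-ed disks cleanly):

Code:
./MegaCli64 -CfgSave -f /root/perc_cfg.bin -a0      #### dump the controller configuration to a file
./MegaCli64 -CfgRestore -f /root/perc_cfg.bin -a0   #### load it back after the upgrade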
 

i386

Well-Known Member
Mar 18, 2016
Germany
Can you extract the firmware from the ISO?
If yes, the RAID controller should have functionality to update the firmware of connected devices
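Something like this, according to the perccli/storcli documentation (untested here; it also expects a plain firmware binary, whereas Samsung ships an encrypted blob, and the controller may refuse non-Dell drives):

Code:
perccli /cX/eY/sZ download src=/path/to/firmware.bin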
 

oneplane

Well-Known Member
Jul 23, 2021
Can you extract the firmware from the ISO?
If yes, the RAID controller should have functionality to update the firmware of connected devices
Doesn't the H330 specifically prevent that on 'non certified' drives? Or am I thinking of a different PERC here...
 

vovanx

New Member
Oct 3, 2021
Well, it is officially a fail. After (changing the boot mode from UEFI to BIOS and) rebooting into the Samsung LiveCD I got the same error message as in that single Google result - "terminate called after throwing an instance of 'FeatException'". And that person "fixed" it by physically disconnecting the drive and connecting it to a different PC, which is impossible in my case (I am way too far away from the server).


A side note: after getting that error for the first time I rebooted the server and checked BIOS settings - Device Settings - PERC H330 Configuration - Physical Disk Management - and found that drive [32:2] was in the "Rebuilding 0%" state. I pressed "stop operation", rebooted the server, verified that the drive was now in the "Offline" state, booted the Samsung firmware updater ISO again, but got the same "FeatException" error.

I think I will have to hire the datacenter staff to manually update the firmware by physically removing the drives from the server.
 


vovanx

New Member
Oct 3, 2021
Another update.

I've found out what the "FeatException" error is - it appears when "fumagician" tries to read the Virtual Media device file in the /dev/ directory. After I renamed the Virtual Media device file in /dev/, "fumagician" stopped throwing the FeatException error and found the SSD successfully; however, it still showed error "21" about not finding the firmware files.

I've also noticed that setting JBOD mode with MegaCli only holds while the OS is running: after a reboot the RAID controller changes the drive mode back to "Rebuilding". And I could not change the drive to JBOD in the RAID controller BIOS - the "Make Non-Raid" option is grayed out regardless of the current drive state, either "Rebuilding" or "Offline".

Then I googled for another guide on setting JBOD mode with MegaCli, and that guide advised setting the drive to "Unconfigured Good" instead of "Missing" as I had done first. So I tried following that guide:

Code:
# ./MegaCli64 -PDMakeGood -PhysDrv[32:2] -Force -a0
# ./MegaCli64 -PDMakeJBOD -PhysDrv[32:2] -a0
# ./MegaCli64 -PDList -a0 2>&1 | grep JBOD
> Firmware state: JBOD

Then I rebooted the server, but after the reboot the drive state changed back to "Rebuilding", and MegaCli showed "Firmware state: Offline" instead of JBOD.

And then I realized that the RAID controller is set to automatically rebuild the array - that is why it keeps overriding my drive settings.

So I have disabled the auto rebuild:

Code:
# ./MegaCli64 -AdpAutoRbld -Dsply -a0
# ./MegaCli64 -AdpAutoRbld -Dsbl -a0
#### (to enable it back run: "./MegaCli64 -AdpAutoRbld -Enbl -a0")
# ./MegaCli64 -PDMarkMissing -PhysDrv [32:2] -a0
# ./MegaCli64 -PDMakeJBOD -PhysDrv[32:2] -a0

Then I rebooted the server and, at last, saw the "1 Non-Raid disk handled by BIOS" line during the RAID controller initialization.

I booted the Samsung LiveCD ISO again, pressed D after it failed to find the drive and asked to press any key to reboot the server, renamed the Virtual Media device file (mv /dev/sdd /dev/Xdd), ran the "fumagician" program again, agreed to update the firmware, and got "ERROR (21) Firmware Files Required For the Firmware Update Process are Not Located in the Correct Path!!!"

So now it is really officially a fail.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
...
I have run "fumagician" under "strace" and scrolled through the strace log, but did not spot any problems. It seems to just throw that error for no apparent reason.
The only strange thing I've found is the odd permissions of the "decrypted" firmware file "SVT02B6Q.bin" created by "fumagician":

Code:
#### fumagician asks if I want to update:
# ls -l /tmp/3A152894-1453-466D-AB4E-CB8D6DAD290A/
-rw-r--r--  1 root root     176 May 26 23:57 DSRD.enc
-rw-r--r--  1 root root 2621472 May 26 23:57 SVT02B6Q.enc
#### I've pressed "Y", Enter. fumagician extracted the firmware prior to throwing the error:
# ls -l /tmp/3A152894-1453-466D-AB4E-CB8D6DAD290A/
-rw-r--r--  1 root root     176 May 27 00:00 DSRD.enc
>>> --w----r-T  1 root root 2621440 May 27 00:00 SVT02B6Q.bin <<<
-rw-r--r--  1 root root 2621472 May 27 00:00 SVT02B6Q.enc
#### note that if you "press any key to exit" the "fumagician" will remove that directory, so copy the extracted firmware if you need it, before exiting from fumagician.
Even though the root user should have read access to any file regardless of whether it has the "r" permission, I still tried to race against fumagician with "while true; do chmod +r /tmp/3A152894-1453-466D-AB4E-CB8D6DAD290A/SVT02B6Q.bin; done", but still did not succeed - I got the same error message.
...
[ Maybe it's not germane to your problem, but ] the above does not pass the "smell test".
What the heck is that (long ago deprecated) "sticky-bit" [the "T"] doing here?
Also, why the changed timestamps on the 2 .enc files? and why is the ("new") .bin truncated?

I learned to be very cautious/suspicious any time that "Samsung", "firmware", and "update" are in play. Anyone else remember the Spinpoint F4 firmware update (for a serious data-losing bug), and Samsung used the SAME version # in the update? (You could not distinguish a fixed vs not-fixed drive!)
 

vovanx

New Member
Oct 3, 2021
What the heck is that (long ago deprecated) "sticky-bit" [the "T"] doing here?
that's exactly what I meant! Read above:

>> The only strange thing I've found is the odd permissions of the "decrypted" firmware file "SVT02B6Q.bin" created by "fumagician":


Also, why the changed timestamps on the 2 .enc files? and why is the ("new") .bin truncated?
because I had closed the "fumagician" program and it deleted the files on exit, hence the note

>> #### note that if you "press any key to exit" the "fumagician" will remove that directory, so copy the extracted firmware if you need it, before exiting from fumagician.

so I ran it again and did not close it, in order to take that "ls -l" from another terminal.
The file is not truncated - fumagician decrypted it (producing yet another obfuscated binary file, lol), and it seems it removed 32 bytes (a 256-bit encryption key?) from it.


I learned to be very cautious/suspicious any time that "Samsung", "firmware", and "update" are in play.
if you have an 870 EVO drive then you have to update, otherwise you are risking your data.

Also, judging by the same new firmware version number (2B6Q) for the 870 QVO, 860 EVO, and 860 QVO drives, I suspect they have the same catastrophic bug in their stock 1B6Q firmware.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
that's exactly what I meant! Read above:

>> The only strange thing I've found is the odd permissions of the "decrypted" firmware file "SVT02B6Q.bin" created by "fumagician":
It wasn't clear to me that you had taken notice of the "T", since your "chmod +r" does nothing to rectify it.
(Does chmod 644 SVT02B6Q.BIN clear that T-bit?)
Still, there is NO place for ANY "strangeness" in critical files involved in a firmware update procedure. It suggests to me (a seasoned veteran) that (some of) the people involved [at Samsung's end] are not up to the task.

Am I correct in assuming that fumagician.sh handles all the user interaction, file (extraction/) decryption/deletion, and invoking of fumagician itself, whose sole function is to update the (selected) drive's firmware?

Is there anything in fumagician.sh to explain the file-mode of the .BIN file?

When you run fumagician under strace, what is the pathname of the (.BIN) file it open()'d, and what is the return value of that call to open()? [Isn't this the crux of that Error-21 snafu?]
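i.e., something along these lines should show it (a sketch):

Code:
strace -f -e trace=open,openat -o /tmp/fumagician.strace ./fumagician
grep -i 'SVT02B6Q' /tmp/fumagician.strace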

because I had closed the "fumagician" program and it deleted the files on exit, hence the note

>> #### note that if you "press any key to exit" the "fumagician" will remove that directory, so copy the extracted firmware if you need it, before exiting from fumagician.

so I ran it again and did not close it, in order to take that "ls -l" from another terminal.
The file is not truncated - fumagician decrypted it (producing yet another obfuscated binary file, lol), and it seems it removed 32 bytes (a 256-bit encryption key?) from it.
Thanks for clearing those up. (they were minor/while-I'm-at-it ??s; my main concern was the "strange" stuff.)
 

oneplane

Well-Known Member
Jul 23, 2021
Samsung is definitely not up to the task. They could have easily integrated with fwup or just allowed signed binary firmware images to be uploaded with standard POSIX tools that already exist. Instead they brand their tool 'magician', as if magic is supposed to instil confidence. (granted, it's a consumer product, but still)
 

vovanx

New Member
Oct 3, 2021
(Does chmod 644 SVT02B6Q.BIN clear that T-bit?)
yes

Am I correct in assuming that fumagician.sh handles all the user interaction, file (extraction/) decryption/deletion, and invoking of fumagician itself, whose sole function is to update the (selected) drive's firmware?

Is there anything in fumagician.sh to explain the file-mode of the .BIN file?
no, this script simply runs the "fumagician" binary and reboots the computer if you press any key except "D":
pressed D after it failed to find the drive and asked to press any key to reboot the server
All the "magic" - decryption, updating the firmware - is done by the "fumagician" binary.

When you run fumagician under strace, what is the pathname of the (.BIN) file it open()'d, and what is the return value of that call to open()? [Isn't this the crux of that Error-21 snafu?]
AFAIR there is nothing with code "21". The binary opens the ".enc" files, writes their copies to /tmp/3A152894-1453-466D-AB4E-CB8D6DAD290A/, then writes the ".bin" file to the same dir, then just fails with the error.

(granted, it's a consumer product, but still)
their "enterprise grade" SSDs require the proprietary firmware updater too.



I have been using almost exclusively Samsung SSDs for a decade, starting with the 840 series, but after all this I'll consider different brands for future computers/servers.
BTW to be honest their old stuff was (and still is) really good, my 840 EVO and PRO in another laptop and server are still cool and dandy with Power_On_Hours > 50000.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
... no, this script simply runs the "fumagician" binary and reboots the computer if you press any key except "D":

All the "magic" - decryption, updating the firmware - is done by the "fumagician" binary.
Ah-hah ... That changes my view 180 degrees. Instead of this updater merely trying to idiot-proof the procedure (assure the correct model #, etc.), I'm now thinking that it is a full-blown security-through-obscurity wrapper around the actual firmware code. If that is the case, and they really want to make sure that THIS (entire process) is the ONLY way to update the firmware, (and they go about it competently,) then you are SOL.

It is possible that the fumagician executable will only run correctly in conjunction with the unique linux image on the .iso. Else, if it could run (correctly) under a "normal" linux image, one could modify the appropriate IOCTL(s) being used to send the DOWNLOAD_MICROCODE[_DMA] ATA commands so as to make a copy of the "raw" firmware payload.

[The "right" way to do this is within the DOWNLOAD_MICROCODE code in the firmware itself--if the SSD mfr actually wants that degree of proprietary protection. Else, by relying on mechanisms outside of the device, the payload is still available to hardware techniques (sniffers, bus-analyzers, etc.)]
AFAIR there is nothing with code "21". The binary opens the ".enc" files, writes their copies to /tmp/3A152894-1453-466D-AB4E-CB8D6DAD290A/, then writes the ".bin" file to the same dir, then just fails with the error.
In the interest of obscurity, the initial "cause" of the error should not be close, in location or time, to the announcement of the error. (And the announcement itself can be deceptive.) [Don't give your adversary ANY useful info.]
their "enterprise grade" SSDs require the proprietary firmware updater too.
This sounds like a serious impediment to data-center (always-up) deployment.
Spread The Word !!