Guide: Flashing H310/H710/H810 Mini & Full-Size to IT Mode


Zilch

New Member
Feb 17, 2022
2
0
1
Disregard my earlier post; it turns out virtualization wasn't turned off. Make sure you fully read the intro guide too, folks!
 

mzaferyahsi

New Member
Feb 22, 2022
2
0
1
Hi,

I have managed to get my hands on an R730xd with an onboard H730P set to HBA mode, which I'd like to keep as is, but I've also installed an H810 that I want to crossflash and pass through to TrueNAS. The guide advises removing cards that will not be flashed, but it's quite a hassle to do that for the internal one. Has anyone tried flashing a PCIe card while leaving the onboard one connected? Any tips are appreciated :)
 

Dave Corder

Active Member
Dec 21, 2015
295
192
43
41
Hi,

I have managed to get my hands on an R730xd with an onboard H730P set to HBA mode, which I'd like to keep as is, but I've also installed an H810 that I want to crossflash and pass through to TrueNAS. The guide advises removing cards that will not be flashed, but it's quite a hassle to do that for the internal one. Has anyone tried flashing a PCIe card while leaving the onboard one connected? Any tips are appreciated :)
You can do it; you just have to identify the card correctly in the lspci output and run all the commands manually, one by one, instead of using the handy scripts.
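
For example, a minimal sketch of singling out the right card before running the manual steps (the addresses and device strings below are illustrative, not taken from this system):

Code:
# list all Broadcom/LSI storage controllers with their PCI addresses and [vendor:device] IDs
lspci -nn | grep -i LSI
# hypothetical output:
#   02:00.0 RAID bus controller [0104]: ...               <- onboard H730P, leave untouched
#   05:00.0 Serial Attached SCSI controller [0107]: ...   <- add-in H810, the card to flash

# then target only the add-in card's address in the manual commands, e.g.
lspci -s 0000:05:00.0 -v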
 
  • Like
Reactions: fohdeesha

egy87

New Member
Mar 5, 2022
1
0
1
Hello, everybody!

I'm using an H310 with the standard firmware in pass-through mode to expose all disks to Proxmox and ZFS running on the server (an R620). I'd like to flash it to IT mode to raise the queue depth from 25 to 600, because it's becoming a bottleneck.
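
For reference, here is a quick way to check the queue depth each disk is currently getting on a Linux host like Proxmox (a sketch, assuming SCSI sd* device names; the 25 figure above is the stock H310's controller-wide limit):

Code:
# print the per-device queue depth the SCSI layer is currently allowing
for d in /sys/block/sd*/device/queue_depth; do
    echo "$d: $(cat "$d")"
done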

Since this is a production server, I'm worried that after flashing the new firmware, Proxmox (or ZFS) will no longer recognize the disks and the pool as before. On top of that, all the disks are in a single ZFS pool, which also holds the boot partition.

Has anybody had a similar experience with a successful flash that didn't break the OS installation?

Thank you in advance for any help.
 

W1ldCard**77

New Member
Apr 7, 2022
1
0
1
Hi all.
I'd just like to get some information on the IT-mode firmware.
I have a Dell R620 with a PERC H710P Mini, currently running Server 2016 Hyper-V.
I am planning an upgrade next weekend and will be installing read-centric enterprise SSDs in RAID 10.
Will I be able to flash the IT firmware before doing the new installation?
Will I still be able to RAID 10 my SSD drives with the IT firmware, and will it still boot into the Windows OS on the RAID 1?
Sorry for all the questions; I'm not sure if this firmware is meant for Linux or for Windows as well.
Any help or advice is appreciated, especially from anyone running this on Windows Server with RAID 1 and RAID 10.

Thank you, gents
 

fourlynx

New Member
Aug 15, 2018
5
6
3
Will I still be able to RAID 10 my SSD drives with the IT firmware, and will it still boot into the Windows OS on the RAID 1?
No. The whole point of IT mode is to not be encumbered by any hardware RAID functionality and present the disks as-is to the system.

Sorry for all the questions; I'm not sure if this firmware is meant for Linux or for Windows as well.
The OS is irrelevant in this case; the firmware executes on the controller itself. You do need drivers to interface with it, but for both IT and standard firmware they should come stock in most OSes.
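
For example, on Linux you can confirm which stock driver has bound to the card (a sketch; the PCI address shown is hypothetical):

Code:
# stock PERC firmware is typically driven by megaraid_sas, IT-mode firmware by mpt2sas/mpt3sas
lspci -k -s 03:00.0
# check the "Kernel driver in use:" line in the output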
 

fourlynx

New Member
Aug 15, 2018
5
6
3
I'm worried that after flashing the new firmware, Proxmox (or ZFS) will no longer recognize the disks and the pool as before.
It should have no impact, unless you brick the controller.
If your "production server" is vital and you cannot afford accidents, consider investing in hardware adequate for the task.
 

craigh

New Member
Apr 6, 2022
3
0
1
Has anyone been able to get this to work on an R730? I keep getting stuck in Linux, with it saying "No LSI SAS adapters found".

Also, is it possible to download the 1.8 version of the ISOs and give those a try? It seems the links are gone.

Code:
root@debian:~# lspci -s 0000:05:00.0
05:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)

Code:
root@debian:~# D1-H710
rmmod: ERROR: Module megaraid_sas is not currently loaded
rmmod: ERROR: Module mptctl is not currently loaded
rmmod: ERROR: Module mptbase is not currently loaded
Errors above are normal!
Trying unlock in MPT mode...
Device in MPT mode
Device in MPT mode
Resetting adapter in HCB mode...
Trying unlock in MPT mode...
Device in MPT mode
IOC is RESET
Device in MPT mode
Resetting adapter in HCB mode...
Trying unlock in MPT mode...
Device in MPT mode
IOC is RESET
Setting up HCB...
HCDW virtual: 0x7fa018800000
HCDW physical: 0x141800000
Loading firmware...
Loaded 809436 bytes
Booting IOC...
IOC is READY
IOC Host Boot successful.
Device in MPT mode
Removing PCI device...
Rescanning PCI bus...
PCI bus rescan complete.
Pausing for 30 seconds to allow the card to boot

LSI Logic MPT Configuration Utility, Version 1.72, Sep 09, 2014

0 MPT Ports found

LSI Logic MPT Configuration Utility, Version 1.72, Sep 09, 2014

0 MPT Ports found
thanks,
craig
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,737
3,099
113
33
fohdeesha.com
Has anyone been able to get this to work on an R730? I keep getting stuck in Linux, with it saying "No LSI SAS adapters found".

Also, is it possible to download the 1.8 version of the ISOs and give those a try? It seems the links are gone.

Ensure you've set all the BIOS settings listed in the guide intro, and double-check that you don't have any other PCIe devices/cards in the system.
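
As a quick sanity check from the flashing ISO's shell, something like the following should show only the controller you intend to flash among the storage devices (a sketch; the exact device strings will vary):

Code:
# list every storage controller the kernel can see; anything besides the target
# PERC (extra HBAs, NVMe add-in cards, etc.) should be removed before flashing
lspci -nn | grep -Ei 'raid|sas|scsi|nvme'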
 

craigh

New Member
Apr 6, 2022
3
0
1
Ensure you've set all the BIOS settings listed in the guide intro, and double-check that you don't have any other PCIe devices/cards in the system.
First, thanks for the help. I do appreciate it.

No other PCIe cards in the server. I disabled those three things in the BIOS.

The server does have an integrated RAID card, but that was disabled in the BIOS as well.

When I load up the Linux ISO, here is some info.

Code:
lspci -nnv | grep LSI | cut -b -7
03:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)


In dmesg, I can see this:

Code:
[   52.415178] mpt2sas_cm0: _base_spin_on_doorbell_int: failed due to timeout count(10000), int_status(c0000000)!
[   52.415316] mpt2sas_cm0: doorbell handshake int failed (line=5934)
[   52.415421] mpt2sas_cm0: _base_get_ioc_facts: handshake failed (r=-14)
[   52.415596] mpt2sas_cm0: failure at drivers/scsi/mpt3sas/mpt3sas_scsih.c:11069/_scsih_probe()!

When I tail /var/log/messages while running D1-H710 I see the following:

Code:
Apr 14 15:39:48 localhost kernel: [ 1177.928296] mpt3sas version 35.100.00.00 unloading

Apr 14 15:40:05 localhost kernel: [ 1194.843780] pci 0000:03:00.0: [1000:0087] type 00 class 0x010700
Apr 14 15:40:05 localhost kernel: [ 1194.843845] pci 0000:03:00.0: reg 0x10: [io  0x3000-0x30ff]
Apr 14 15:40:05 localhost kernel: [ 1194.843899] pci 0000:03:00.0: reg 0x14: [mem 0x91f40000-0x91f4ffff 64bit]
Apr 14 15:40:05 localhost kernel: [ 1194.843960] pci 0000:03:00.0: reg 0x1c: [mem 0x91f00000-0x91f3ffff 64bit]
Apr 14 15:40:05 localhost kernel: [ 1194.844012] pci 0000:03:00.0: reg 0x30: [mem 0xfff00000-0xffffffff pref]
Apr 14 15:40:05 localhost kernel: [ 1194.845621] pci 0000:03:00.0: supports D1 D2
Apr 14 15:40:05 localhost kernel: [ 1194.909132] pci 0000:03:00.0: BAR 6: assigned [mem 0x91f00000-0x91ffffff pref]
Apr 14 15:40:05 localhost kernel: [ 1194.909190] pci 0000:03:00.0: BAR 3: no space for [mem size 0x00040000 64bit]
Apr 14 15:40:05 localhost kernel: [ 1194.909239] pci 0000:03:00.0: BAR 3: failed to assign [mem size 0x00040000 64bit]
Apr 14 15:40:05 localhost kernel: [ 1194.909296] pci 0000:03:00.0: BAR 1: no space for [mem size 0x00010000 64bit]
Apr 14 15:40:05 localhost kernel: [ 1194.909342] pci 0000:03:00.0: BAR 1: failed to assign [mem size 0x00010000 64bit]
Apr 14 15:40:05 localhost kernel: [ 1194.909391] pci 0000:03:00.0: BAR 0: assigned [io  0x3000-0x30ff]
Apr 14 15:40:05 localhost kernel: [ 1194.967358] mpt3sas version 35.100.00.00 loaded
Apr 14 15:40:05 localhost kernel: [ 1194.968121] mpt3sas 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
Apr 14 15:40:05 localhost kernel: [ 1194.969061] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (131913996 kB)
Apr 14 15:40:05 localhost kernel: [ 1195.026248] mpt2sas_cm0: sending diag reset !!
Apr 14 15:40:05 localhost kernel: [ 1195.199476] mpt2sas_cm0: Invalid host diagnostic register value
Apr 14 15:40:05 localhost kernel: [ 1195.199502] mpt2sas_cm0: System Register set:
Apr 14 15:40:05 localhost kernel: [ 1195.203484] 00000000: ffffffff
...
Apr 14 15:40:06 localhost kernel: [ 1195.456077] 000000fc: ffffffff
Apr 14 15:40:06 localhost kernel: [ 1195.456189] mpt2sas_cm0: diag reset: FAILED
Apr 14 15:40:06 localhost kernel: [ 1195.460487] mpt2sas_cm0: failure at drivers/scsi/mpt3sas/mpt3sas_scsih.c:11069/_scsih_probe()!
Apr 14 15:40:36 localhost kernel: [ 1226.180459] Fusion MPT base driver 3.04.20
Apr 14 15:40:36 localhost kernel: [ 1226.180690] Copyright (c) 1999-2008 LSI Corporation
Apr 14 15:40:36 localhost kernel: [ 1226.189161] Fusion MPT misc device (ioctl) driver 3.04.20
Apr 14 15:40:36 localhost kernel: [ 1226.189625] mptctl: Registered with Fusion MPT base driver
Apr 14 15:40:36 localhost kernel: [ 1226.189908] mptctl: /dev/mptctl @ (major,minor=10,220)

And here is the output from running the script:

Code:
root@debian:~# D1-H710
rmmod: ERROR: Module megaraid_sas is not currently loaded
rmmod: ERROR: Module mptctl is not currently loaded
rmmod: ERROR: Module mptbase is not currently loaded
Errors above are normal!
Trying unlock in MPT mode...
Device in MPT mode
Device in MPT mode
Resetting adapter in HCB mode...
Trying unlock in MPT mode...
Device in MPT mode
IOC is RESET
Device in MPT mode
Resetting adapter in HCB mode...
Trying unlock in MPT mode...
Device in MPT mode
IOC is RESET
Setting up HCB...
HCDW virtual: 0x7ff11d400000
HCDW physical: 0x11e000000
Loading firmware...
Loaded 809436 bytes
Booting IOC...
IOC is READY
IOC Host Boot successful.
Device in MPT mode
Removing PCI device...
Rescanning PCI bus...
PCI bus rescan complete.
Pausing for 30 seconds to allow the card to boot

LSI Logic MPT Configuration Utility, Version 1.72, Sep 09, 2014

0 MPT Ports found

LSI Logic MPT Configuration Utility, Version 1.72, Sep 09, 2014

0 MPT Ports found
All Done! Continue following the guide to set SAS addr
 

Blue)(Fusion

Active Member
Mar 1, 2017
150
56
28
Chicago
Reading the instructions.....

Extra: Disable ThirdPartyPCIFanResponse


Does this disable the fans ramping up and down as necessary, or does this set it to a static roughly-30% fan speed? My R710s are in an environment that averages around 75 degrees and sit mostly idle, but they occasionally ramp up to full utilization on dual X5680 CPUs, which need ramped-up airflow.
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,737
3,099
113
33
fohdeesha.com
Reading the instructions.....

Extra: Disable ThirdPartyPCIFanResponse


Does this disable the fans ramping up and down as necessary, or does this set it to a static roughly-30% fan speed? My R710s are in an environment that averages around 75 degrees and sit mostly idle, but they occasionally ramp up to full utilization on dual X5680 CPUs, which need ramped-up airflow.
Neither; it disables the default iDRAC behavior of loading a different (and much more aggressive) fan profile when it detects a PCI card it doesn't have in its database, so it will continue using the standard fan/temperature profiles.
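
For anyone who wants to check or flip that setting by hand, one common route is racadm over SSH to the iDRAC (a sketch; the attribute exists on iDRAC7/8/9 with reasonably recent firmware, and may not be available on older iDRAC6 machines like the R710):

Code:
# 1 applies the default third-party-card cooling response, 0 disables it
racadm get system.thermalsettings.ThirdPartyPCIFanResponse
racadm set system.thermalsettings.ThirdPartyPCIFanResponse 0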
 

fohdeesha

Kaini Industries
Nov 20, 2016
2,737
3,099
113
33
fohdeesha.com
First, thanks for the help. I do appreciate it.

No other PCIe cards in the server. I disabled those three things in the BIOS.

The server does have an integrated RAID card, but that was disabled in the BIOS as well.

Weird, it's successfully hostbooting the card, which is the hardest part. I think someone else on non-Dell hardware with the same issue found that their system was just taking forever for the newly hostbooted card to show up, so they got around it by editing the delay. You might try this on your R730:

Start the guide from scratch again, but this time, when you get into Linux as root, don't run the D1-H710 command yet. Edit the script first (nano /usr/local/bin/D1-H710), and on line 19 change "sleep 30" to "sleep 180" and save it. Then run it.
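
If you'd rather make that edit non-interactively, something like this should do the same thing (a sketch, using the script path mentioned above):

Code:
# raise the post-hostboot delay from 30 to 180 seconds, then confirm the change
sed -i 's/sleep 30/sleep 180/' /usr/local/bin/D1-H710
grep -n 'sleep' /usr/local/bin/D1-H710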
 

nas-builder11759

New Member
May 8, 2022
7
5
3
Just to share my experience with flashing a Dell R520 with a PERC H710 Mini.

I followed the guide and wasn't able to setsas (the error was "No LSI SAS adapters found!"). I re-read the entire guide and found out that I had forgotten to turn off the CPU virtualization settings. I reverted the card firmware with B0REVERT and started all over again, and it still didn't seem to work. I tried again a few times (reboot, cold boot, unseat/reseat the card), and suddenly setsas worked. I then finished up with flashboot /root/Bootloaders/mptsas2.rom, and now FreeBSD boots happily from the H710. I'll power cycle it a few times over the following days to make sure everything works before reinstalling FreeBSD fresh for production to replace my old NAS.

General advice: retry a few times if you get errors, as these commands appear to be safe to run repeatedly (@fohdeesha, keep me honest), and, like others have said, take a break or leave it overnight and retry. You may eventually get it to work.

Again, like all others before me, I want to give sincere thanks to @fohdeesha.
 
  • Like
Reactions: fohdeesha

fohdeesha

Kaini Industries
Nov 20, 2016
2,737
3,099
113
33
fohdeesha.com
Just to share my experience with flashing a Dell R520 with a PERC H710 Mini.

I followed the guide and wasn't able to setsas (the error was "No LSI SAS adapters found!"). I re-read the entire guide and found out that I had forgotten to turn off the CPU virtualization settings. I reverted the card firmware with B0REVERT and started all over again, and it still didn't seem to work. I tried again a few times (reboot, cold boot, unseat/reseat the card), and suddenly setsas worked. I then finished up with flashboot /root/Bootloaders/mptsas2.rom, and now FreeBSD boots happily from the H710. I'll power cycle it a few times over the following days to make sure everything works before reinstalling FreeBSD fresh for production to replace my old NAS.

General advice: retry a few times if you get errors, as these commands appear to be safe to run repeatedly (@fohdeesha, keep me honest), and, like others have said, take a break or leave it overnight and retry. You may eventually get it to work.

Again, like all others before me, I want to give sincere thanks to @fohdeesha.
Yeah, there's some stupid race condition somewhere in these Dells that creates situations like yours, where it doesn't work for some reason... then suddenly does a few reboots later. It generates something like five emails a day in my inbox, and I wish I knew what it was, but I just don't have the time to fiddle with this project anymore. Some modifications, like making people disable virtualization and building the new Debian ISO with IOMMU disabled in the kernel, have significantly cut down on the occurrences of lsiutil failures, but it obviously still happens. I highly suspect it's iDRAC related: the iDRAC is an entirely separate PC running Linux that clearly has a channel into the PERC (which is how, on stock PERCs, it shows all of the card's stats, config, etc. in the iDRAC web UI), and I believe its occasional traffic/queries towards the PERC might have something to do with why lsiutil (and its commands like setsas, flashing, etc.) sometimes works and sometimes doesn't.

I thought about redoing the scripts so that, before running anything, they call ipmitool to issue an iDRAC reset, putting the *nix install running on the iDRAC into a reboot so it shuts up, then running lsiutil before the iDRAC comes back up. But alas, lazy. And then, of course, any users running the flashing guide over virtual KVM/iDRAC (which is like 90% of you) would suddenly lose access for the ~3 minutes it takes the iDRAC to come back up, you'd lose the virtual ISO mount, etc. Not pretty.
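
For what it's worth, the reset itself would be a one-liner from the host; a rough sketch of the idea (not something the current scripts actually do):

Code:
# cold-reset the BMC/iDRAC in-band so it stays quiet for a few minutes,
# then run the lsiutil/flash steps before it finishes rebooting
ipmitool mc reset cold
sleep 60   # rough guess at how long it takes the iDRAC to go fully down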
 

nas-builder11759

New Member
May 8, 2022
7
5
3
Yeah, there's some stupid race condition somewhere in these Dells that creates situations like yours, where it doesn't work for some reason... then suddenly does a few reboots later. It generates something like five emails a day in my inbox, and I wish I knew what it was, but I just don't have the time to fiddle with this project anymore. Some modifications, like making people disable virtualization and building the new Debian ISO with IOMMU disabled in the kernel, have significantly cut down on the occurrences of lsiutil failures, but it obviously still happens. I highly suspect it's iDRAC related: the iDRAC is an entirely separate PC running Linux that clearly has a channel into the PERC (which is how, on stock PERCs, it shows all of the card's stats, config, etc. in the iDRAC web UI), and I believe its occasional traffic/queries towards the PERC might have something to do with why lsiutil (and its commands like setsas, flashing, etc.) sometimes works and sometimes doesn't.

I thought about redoing the scripts so that, before running anything, they call ipmitool to issue an iDRAC reset, putting the *nix install running on the iDRAC into a reboot so it shuts up, then running lsiutil before the iDRAC comes back up. But alas, lazy. And then, of course, any users running the flashing guide over virtual KVM/iDRAC (which is like 90% of you) would suddenly lose access for the ~3 minutes it takes the iDRAC to come back up, you'd lose the virtual ISO mount, etc. Not pretty.
There is no iDRAC installed. I did everything directly from the VGA console, just FYI.