How to clean up an orphaned SATA disk on OmniOSce?


dswartz

Active Member
Jul 14, 2011
So I just finished migrating from ZFS on Linux back to OmniOSce. Everything is working fine, except I had a brain cramp and forgot to pull the SATA SSD that had CentOS 8 installed before booting. Solarish OSes don't hot-plug SATA disks by default, so the format command now shows a ghost entry (number 12 in the listing below):

AVAILABLE DISK SELECTIONS:
       0. c0t5000C5002E3AA680d0 <ATA-ST32000644NS-GG15-1.82TB>
          /scsi_vhci/disk@g5000c5002e3aa680
       1. c0t5000C5002E38E0EBd0 <ATA-ST32000644NS-GG15-1.82TB>
          /scsi_vhci/disk@g5000c5002e38e0eb
       2. c0t5000C50041BD3E87d0 <SEAGATE-ST1000NM0001-0002-931.51GB>
          /scsi_vhci/disk@g5000c50041bd3e87
       3. c0t5000C50055E9A7A3d0 <SEAGATE-ST1000NM0001-A003-931.51GB>
          /scsi_vhci/disk@g5000c50055e9a7a3
       4. c0t5000C50055E99CDFd0 <SEAGATE-ST1000NM0001-A003-931.51GB>
          /scsi_vhci/disk@g5000c50055e99cdf
       5. c0t5000C50056ED546Fd0 <SEAGATE-ST1000NM0001-0002-931.51GB>
          /scsi_vhci/disk@g5000c50056ed546f
       6. c0t5000C500302A54CFd0 <SEAGATE-ST960FM0003-0007-894.25GB>
          /scsi_vhci/disk@g5000c500302a54cf
       7. c0t5000C500302A5213d0 <SEAGATE-ST960FM0003-0007-894.25GB>
          /scsi_vhci/disk@g5000c500302a5213
       8. c0t5000C500412EE41Fd0 <SEAGATE-ST1000NM0001-PN04-931.51GB>
          /scsi_vhci/disk@g5000c500412ee41f
       9. c0t5000C500426C6F73d0 <SEAGATE-ST1000NM0001-0002-931.51GB>
          /scsi_vhci/disk@g5000c500426c6f73
      10. c0t5000C50057575FE3d0 <SEAGATE-ST1000NM0001-0002-931.51GB>
          /scsi_vhci/disk@g5000c50057575fe3
      11. c0t5000C5005621857Bd0 <SEAGATE-ST1000NM0001-0002-931.51GB>
          /scsi_vhci/disk@g5000c5005621857b
      12. c0t50025385500E3F63d0 <drive type unknown>   <==== WHOOPS!!!
          /scsi_vhci/disk@g50025385500e3f63
      13. c2t4d0 <ATA-SuperMicro SSD-SOB20R-29.50GB>
          /pci@0,0/pci15d9,834@1f,2/disk@4,0
      14. c2t5d0 <ATA-SuperMicro SSD-SOB20R-29.50GB>
          /pci@0,0/pci15d9,834@1f,2/disk@5,0
      15. c5t5000CCA04DB0D739d0 <LENOVO-X-HUSMM1620ASS20-K4C7-186.31GB>
          /pci@0,0/pci8086,6f08@3/pci1000,30a0@0/iport@f0/disk@w5000cca04db0d739
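I'm assuming the ghost node still shows up as an attachment point that cfgadm can see. Something along these lines should locate it; the sata1/4 Ap_Id and the sample output are illustrative, not copied from my box:

# List all attachment points; a pulled SATA disk usually lingers as
# an empty/unconfigured port instead of vanishing outright.
cfgadm -al

# Illustrative output for the port the CentOS SSD was on:
# Ap_Id        Type        Receptacle   Occupant      Condition
# sata1/4      sata-port   empty        unconfigured  ok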

devfsadm -Cv doesn't help here. My 12 vSphere guests now live on a datastore served by this box, so a reboot isn't convenient. Nothing bad is happening as far as I can tell, but my OCD doesn't like the stale entry. Any hints/tips welcome.
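For anyone searching later: the sequence I'm guessing at, but haven't run yet because I'm not sure of the right Ap_Id, is to unconfigure the port first and then re-run the cleanup. sata1/4 is a placeholder for whatever cfgadm actually reports:

# Unconfigure the attachment point the pulled SSD was on
# (placeholder Ap_Id; substitute the one cfgadm reports):
cfgadm -c unconfigure sata1/4

# With the device detached, the cleanup pass should finally be
# able to drop the dangling /dev links and the format entry:
devfsadm -Cv

And for next time, I believe the stock no-hot-plug behavior can be changed with "set sata:sata_auto_online=1" in /etc/system, though I haven't verified that on current OmniOSce.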