How to blink or identify NVMe drives connected to an LSI 9500 tri-mode controller.


patkoscsaba

New Member
Aug 16, 2022
Hi, we have the following setup in a Supermicro server:
1. LSI 9400 -> expander -> 10 x HDD
2. LSI 9500 -> expander -> 2 x NVMe



Code:
|------------|                             |-----------|
| LSI 9400   |      |--------------| ----->|  HDD x 10 |
|------------| ---->|  Expander    |       |-----------|
                    |              |
|------------| ---->|              |       |-----------|
| LSI 9500   |      |--------------| ----->| NVMe Intel|
|------------|                         |   |-----------|
                                       |
                                       |   |-----------|
                                       |-->| NVMe Intel|
                                           |-----------|


We have no problem blinking any of the bays hosting HDDs, but blinking the NVMe bays does nothing.

I would like to achieve either of these two solutions:
1. Optimal solution - blink the bays containing the NVMe drives connected to the 9500 tri-mode controller
2. Alternate solution - find a link/value/piece of information that will allow me to associate an NVMe with a physical port on the LSI 9500 controller. I am thinking about something like "Look in the file /<some_path>/<some_file> and there you will find the ID of the port." More complex associations are also welcome. No problem if there are several values we have to correlate.

Operating system: Rocky Linux, fully under our control, we can do anything on it, no restrictions.
Server configuration: the server runs ESXi with both controllers in passthrough to the Rocky Linux VM.

So far I have done the following investigations and experiments.

1. Try blinking with `ledctl` (see the sketch below) -> no error, no blinking
2. Try blinking with `sg_ses` -> no error, no blinking. Here are some commands, trimmed to eliminate the rest of the disks.

Basically, what I want to know is: if a drive fails, which one do I remove? The answer can be a blinking LED or a command that would say "top drive" or something like that.
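
For reference, the `ledctl` attempt in 1. above was along these lines (a sketch rather than the exact invocation; `ledctl` comes from the ledmon package, and the device names are the ones from the `lsscsi` output below):

Code:
# Request the locate/identify pattern on the translated NVMe block device
ledctl locate=/dev/sdb
# ... and turn it off again
ledctl locate_off=/dev/sdb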


Code:
[root@echo-development ~]# lsscsi -g
[1:0:0:0]    enclosu BROADCOM VirtualSES       03    -          /dev/sg2
[1:2:0:0]    disk    NVMe     INTEL SSDPE2KX01 01B1  /dev/sdb   /dev/sg3
[1:2:1:0]    disk    NVMe     INTEL SSDPE2KX02 0131  /dev/sdc   /dev/sg4

[root@echo-development ~]# sg_ses -vvv --dsn=0 --set=ident /dev/sg2
open /dev/sg2 with flags=0x802
    request sense cmd: 03 00 00 00 fc 00
      duration=0 ms
    request sense: pass-through requested 252 bytes (data-in) but got 18 bytes
Request Sense near startup detected something:
  Sense key: No Sense, additional: Additional sense: No additional sense information
  ... continue
    Receive diagnostic results command for Configuration (SES) dpage
    Receive diagnostic results cdb: 1c 01 01 ff fc 00
      duration=0 ms
    Receive diagnostic results: pass-through requested 65532 bytes (data-in) but got 60 bytes
    Receive diagnostic results: response:
01 00 00 38 00 00 00 00  11 00 02 24 30 01 62 b2
07 eb 55 80 42 52 4f 41  44 43 4f 4d 56 69 72 74
75 61 6c 53 45 53 00 00  00 00 00 00 30 33 00 00
17 28 00 00 19 08 00 00  00 00 00 00
    Receive diagnostic results command for Enclosure Status (SES) dpage
    Receive diagnostic results cdb: 1c 01 02 ff fc 00
      duration=0 ms
    Receive diagnostic results: pass-through requested 65532 bytes (data-in) but got 208 bytes
    Receive diagnostic results: response:
02 00 00 cc 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00  01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00  01 00 00 00 01 00 00 00
    Receive diagnostic results command for Element Descriptor (SES) dpage
    Receive diagnostic results cdb: 1c 01 07 ff fc 00
      duration=0 ms
    Receive diagnostic results: pass-through requested 65532 bytes (data-in) but got 432 bytes
    Receive diagnostic results: response, first 256 bytes:
07 00 01 ac 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 1c 43 30 2e 30  00 00 00 00 00 00 00 00
00 00 00 00 4e 4f 42 50  4d 47 4d 54 00 00 00 00
00 00 00 1c 43 30 2e 30  00 00 00 00 00 00 00 00
00 00 00 00 4e 4f 42 50  4d 47 4d 54 00 00 00 00
00 00 00 1c 43 30 2e 30  00 00 00 00 00 00 00 00
    Receive diagnostic results command for Additional Element Status (SES-2) dpage
    Receive diagnostic results cdb: 1c 01 0a ff fc 00
      duration=0 ms
    Receive diagnostic results: pass-through requested 65532 bytes (data-in) but got 1448 bytes
    Receive diagnostic results: response, first 256 bytes:
0a 00 05 a4 00 00 00 00  16 22 00 00 01 00 00 04
10 00 00 08 50 00 62 b2  07 eb 55 80 3c d2 e4 a6
23 29 01 00 00 00 00 00  00 00 00 00 96 22 00 01
01 00 00 ff 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
96 22 00 02 01 00 00 ff  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 96 22 00 03  01 00 00 ff 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  16 22 00 04 01 00 00 06
10 00 00 08 50 00 62 b2  07 eb 55 84 3c d2 e4 99
70 1d 01 00 00 00 00 00  00 00 00 00 96 22 00 05
01 00 00 ff 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
96 22 00 06 01 00 00 ff  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
  s_byte=2, s_bit=1, n_bits=1
Applying mask to element status [etc=23] prior to modify then write
    Send diagnostic command page name: Enclosure Control (SES)
    Send diagnostic cdb: 1d 10 00 00 d0 00
    Send diagnostic parameter list:
02 00 00 cc 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 80 00 02 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00  01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00  01 00 00 00 01 00 00 00
    Send diagnostic timeout: 60 seconds
      duration=0 ms
[root@echo-development ~]# sg_ses -vvv --dsn=6 --set=ident /dev/sg2
open /dev/sg2 with flags=0x802
    request sense cmd: 03 00 00 00 fc 00
      duration=0 ms
    request sense: pass-through requested 252 bytes (data-in) but got 18 bytes
Request Sense near startup detected something:
  Sense key: No Sense, additional: Additional sense: No additional sense information
  ... continue
    Receive diagnostic results command for Configuration (SES) dpage
    Receive diagnostic results cdb: 1c 01 01 ff fc 00
      duration=0 ms
    Receive diagnostic results: pass-through requested 65532 bytes (data-in) but got 60 bytes
    Receive diagnostic results: response:
01 00 00 38 00 00 00 00  11 00 02 24 30 01 62 b2
07 eb 55 80 42 52 4f 41  44 43 4f 4d 56 69 72 74
75 61 6c 53 45 53 00 00  00 00 00 00 30 33 00 00
17 28 00 00 19 08 00 00  00 00 00 00
    Receive diagnostic results command for Enclosure Status (SES) dpage
    Receive diagnostic results cdb: 1c 01 02 ff fc 00
      duration=0 ms
    Receive diagnostic results: pass-through requested 65532 bytes (data-in) but got 208 bytes
    Receive diagnostic results: response:
02 00 00 cc 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00  01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00  01 00 00 00 01 00 00 00
    Receive diagnostic results command for Element Descriptor (SES) dpage
    Receive diagnostic results cdb: 1c 01 07 ff fc 00
      duration=0 ms
    Receive diagnostic results: pass-through requested 65532 bytes (data-in) but got 432 bytes
    Receive diagnostic results: response, first 256 bytes:
07 00 01 ac 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 1c 43 30 2e 30  00 00 00 00 00 00 00 00
00 00 00 00 4e 4f 42 50  4d 47 4d 54 00 00 00 00
00 00 00 1c 43 30 2e 30  00 00 00 00 00 00 00 00
00 00 00 00 4e 4f 42 50  4d 47 4d 54 00 00 00 00
00 00 00 1c 43 30 2e 30  00 00 00 00 00 00 00 00
    Receive diagnostic results command for Additional Element Status (SES-2) dpage
    Receive diagnostic results cdb: 1c 01 0a ff fc 00
      duration=0 ms
    Receive diagnostic results: pass-through requested 65532 bytes (data-in) but got 1448 bytes
    Receive diagnostic results: response, first 256 bytes:
0a 00 05 a4 00 00 00 00  16 22 00 00 01 00 00 04
10 00 00 08 50 00 62 b2  07 eb 55 80 3c d2 e4 a6
23 29 01 00 00 00 00 00  00 00 00 00 96 22 00 01
01 00 00 ff 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
96 22 00 02 01 00 00 ff  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 96 22 00 03  01 00 00 ff 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  16 22 00 04 01 00 00 06
10 00 00 08 50 00 62 b2  07 eb 55 84 3c d2 e4 99
70 1d 01 00 00 00 00 00  00 00 00 00 96 22 00 05
01 00 00 ff 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
96 22 00 06 01 00 00 ff  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
  s_byte=2, s_bit=1, n_bits=1
Applying mask to element status [etc=23] prior to modify then write
    Send diagnostic command page name: Enclosure Control (SES)
    Send diagnostic cdb: 1d 10 00 00 d0 00
    Send diagnostic parameter list:
02 00 00 cc 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 80 00 02 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00  00 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00  01 00 00 00 01 00 00 00
01 00 00 00 01 00 00 00  01 00 00 00 01 00 00 00
    Send diagnostic timeout: 60 seconds
      duration=0 ms
We used `--dsn=0` and `--dsn=6` because it seems like end devices are connected to these two ports (output trimmed to the relevant results):


Code:
[root@echo-development ~]# sg_ses -j /dev/sg2
  BROADCOM  VirtualSES  03
  Primary enclosure logical identifier (hex): 300162b207eb5580
[0,-1]  Element type: Array device slot
  Enclosure Status:
    Predicted failure=0, Disabled=0, Swap=0, status: Unsupported
    OK=0, Reserved device=0, Hot spare=0, Cons check=0
    In crit array=0, In failed array=0, Rebuild/remap=0, R/R abort=0
    App client bypass A=0, Do not remove=0, Enc bypass A=0, Enc bypass B=0
    Ready to insert=0, RMV=0, Ident=0, Report=0
    App client bypass B=0, Fault sensed=0, Fault reqstd=0, Device off=0
    Bypassed A=0, Bypassed B=0, Dev bypassed A=0, Dev bypassed B=0


[0,0]  Element type: Array device slot
  Enclosure Status:
    Predicted failure=0, Disabled=0, Swap=1, status: Unsupported
    OK=0, Reserved device=0, Hot spare=0, Cons check=0
    In crit array=0, In failed array=0, Rebuild/remap=0, R/R abort=0
    App client bypass A=0, Do not remove=0, Enc bypass A=0, Enc bypass B=0
    Ready to insert=0, RMV=0, Ident=0, Report=0
    App client bypass B=0, Fault sensed=0, Fault reqstd=0, Device off=0
    Bypassed A=0, Bypassed B=0, Dev bypassed A=0, Dev bypassed B=0
  Additional Element Status:
    Transport protocol: SAS
    number of phys: 1, not all phys: 0, device slot number: 4
    phy index: 0
      SAS device type: end device
      initiator port for:
      target port for: SSP
      attached SAS address: 0x500062b207eb5580
      SAS address: 0x3cd2e4dd23290100
      phy identifier: 0x0



[0,4]  Element type: Array device slot
  Enclosure Status:
    Predicted failure=0, Disabled=0, Swap=1, status: Unsupported
    OK=0, Reserved device=0, Hot spare=0, Cons check=0
    In crit array=0, In failed array=0, Rebuild/remap=0, R/R abort=0
    App client bypass A=0, Do not remove=0, Enc bypass A=0, Enc bypass B=0
    Ready to insert=0, RMV=0, Ident=0, Report=0
    App client bypass B=0, Fault sensed=0, Fault reqstd=0, Device off=0
    Bypassed A=0, Bypassed B=0, Dev bypassed A=0, Dev bypassed B=0
  Additional Element Status:
    Transport protocol: SAS
    number of phys: 1, not all phys: 0, device slot number: 6
    phy index: 0
      SAS device type: end device
      initiator port for:
      target port for: SSP
      attached SAS address: 0x500062b207eb5584
      SAS address: 0x3cd2e4a623290100
      phy identifier: 0x0
3. Find the `SAS address` from the output above in the list of drives.
Our `SAS address: 0x3cd2e4a623290100` should be found on a drive (NVMe, SSD, HDD, whatever), at least as I understood from the `sg_ses` documentation and blog posts / forums on the Internet. But the SAS addresses on the NVMes are different, and the SAS address indicated by the controller cannot be found on any device.

Code:
[root@echo-development ~]# cat "/sys/bus/pci/devices/0000:04:00.0/host1/target1:2:1/1:2:1:0/sas_address"
0x00012923a6e4d25c
4. Rely on HCTL -> does not work because the HCTL changes when I remove/reinsert a drive in the bay. It also resets on reboot to 1:2:0:0 and 1:2:1:0.
5. Associate `/sys/bus/pci/devices/0000:04:00.0/host1/target1:2:1/1:2:1:0/sas_device_handle` with a port on the controller -> does not work, it increments every time a device is removed and reinserted (see the enumeration sketch after this list).
6. Try to find any other associations between an NVMe drive and the controller port -> I couldn't find any.
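
For completeness, this is roughly the loop behind 4. and 5., dumping the relevant sysfs attributes for every target under the 9500 (a sketch; the PCI address is the one from our system, and only attributes that actually exist are printed):

Code:
# Walk every SCSI device directory under the 9500 and print its SAS-related attributes
for t in /sys/bus/pci/devices/0000:04:00.0/host*/target*/*:*:*:*; do
    echo "== $t"
    for attr in sas_address sas_device_handle; do
        [ -f "$t/$attr" ] && printf '    %s = %s\n' "$attr" "$(cat "$t/$attr")"
    done
done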

Please let me know if there is anything else I could try or if you need any further information.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
It appears that the LSI 9500 has the required support (from the 9500 product brief):
[provided that LSI actually "walks the walk" :) ]
Control and management of multi-protocol (SAS/SATA/NVMe) backplanes
has been loosely defined in previous generations of products. Recognizing
this, Broadcom worked with key industry members to introduce Universal
Backplane Management (UBM) or SFF-TA-1005. UBM builds upon current
management frameworks to provide a comprehensive approach to
managing SAS, SATA, and NVMe. The 9500 series adapters are UBM ready,
and customers can immediately integrate these adapters into their U.3
backplanes utilizing UBM.
But does your Supermicro (NVMe) backplane have UBM support?
And, has support for SFF-TA-1005 made it to Linux/AnyOS tool-chain?

Probably "No" to at least one.
...
6. Try to find any other associations between an NVMe drive and the controller port -> I couldn't find any.

Please let me know if there is anything else I could try or if you need any further information.
So, you should investigate, by using lspci, the PCIe Bus_Id assignments for your NVMe devices/endpoints (and, hence, specific bay slots) and determine whether they remain consistent across system reboots and device insertions/removals. [It will be necessary/helpful if your NVMe devices have different Vendor:Product ID#s (at least during the investigation phase).] If they do (good chance), you should also find a consistent correlation between the Bus_Id and the /dev/nvmeX assignment.
[I found this to be the case for a PCIe(switch-based) NVMe HBA (no hot-swap & no enclosure involved)]
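A sketch of that kind of check, assuming the drives show up as native NVMe PCIe endpoints (plain lspci and sysfs, nothing vendor-specific):

Code:
# PCIe endpoints whose class is NVMe, with [vendor:device] IDs
lspci -nn | grep -i 'non-volatile memory controller'
# Map each /dev/nvmeX controller back to its PCIe Bus_Id
for c in /sys/class/nvme/nvme*; do
    printf '%s -> %s\n' "$(basename "$c")" "$(basename "$(readlink -f "$c/device")")"
done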
Please follow-up (or at least PM me)!
========
Jagger/Richards wrote the Hacker's Anthem :
You can't always get what you want
But if you try sometimes you just might find
You just might find that you
You get what you need
 

i386

Well-Known Member
Mar 18, 2016
Germany
But does your Supermicro (NVMe) backplane have UBM support?
I think all their current stuff predates that standard (even the PCIe 4.0 backplanes); at least there is no documentation about it (google with site:supermicro.com).
6. Try to find any other associations between an NVMe drive and the controller port -> I couldn't find any.
You didn't specify or link the exact supermicro server/chassis you have.
I have two 745B chassis where I replaced the SAS backplanes with SAS backplanes that support up to 4 U.2/U.3 devices. The U.2/U.3 devices are directly mapped to "NVMe" ports on the other side of the backplane, and it's my responsibility to connect these to the correct ports on a retimer/redriver add-on card...
 

patkoscsaba

New Member
Aug 16, 2022
Sorry for the late follow-up. I had to attend to emergencies at work and there was no time for this.
This is the server: SC826BAC4-R920LPB | 2U | Chassis | Products | Supermicro

lspci does not show my NVMe drives, only the LSI 9500 adapter to which they are connected.
Code:
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.7 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.7 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.7 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
02:01.0 SATA controller: VMware SATA AHCI controller
03:00.0 Serial Attached SCSI controller: VMware PVSCSI SCSI Controller (rev 02)
04:00.0 Serial Attached SCSI controller: Broadcom / LSI Fusion-MPT 12GSAS/PCIe Secure SAS38xx
0b:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
13:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
1b:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3224 PCI-Express Fusion-MPT SAS-3 (rev 01)
The NVMe drives appear as SAS devices according to
Code:
lsblk -d -o name,tran,type,vendor | grep -i nvme
Code:
sdb  sas    disk NVMe   
sdc  sas    disk NVMe
And they seem to have the `scsi_sg` driver loaded on them ... or on the controller. So, basically, they are exposed as SCSI devices by the tri-mode LSI 9500-8i.
The controller appears in the system as Broadcom VirtualSES and it is at PCI ID 0000:04:00.0. Inside `/sys/bus/pci/devices/0000:04:00.0` there are two directories with the HCTL in their names, e.g. `host1/target1:2:0/1:2:0:0`. But as I mentioned, the HCTL and these directory names change after each remove/insert operation.

I searched the whole /sys/bus/pci/devices structure, but I couldn't find anything NVMe-related except in the folders mentioned above.
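
One more data point that can be pulled from inside the VM: the Device Identification VPD page of the translated devices shows which identifiers the 9500's SCSI translation actually exposes (plain sg3_utils; output omitted here):

Code:
# Designation descriptors (NAA / WWN, etc.) of the SCSI-translated NVMe drives
sg_vpd --page=di /dev/sdb
sg_vpd --page=di /dev/sdc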
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
...
lspci does not show my NVMe drives, only the LSI 9500 adapter to which they are connected.
Code:
...
04:00.0 Serial Attached SCSI controller: Broadcom / LSI Fusion-MPT 12GSAS/PCIe Secure SAS38xx
...
The NVMe drives appear as SAS devices according to
Code:
lsblk -d -o name,tran,type,vendor | grep -i nvme
Code:
sdb  sas    disk NVMe 
sdc  sas    disk NVMe
And they seem to have the `scsi_sg` driver loaded on them ... or on the controller. So, basically, they are exposed as SCSI devices by the tri-mode LSI 9500-8i.
...
[attached image: PtBrk.jpg ( at [Link] )]

That, likely, also means that there is no access to the entire nvme tool facility. Yuck! (Let's just call it a Try-mode controller)
Basically, what I want to know is: if a drive fails, which one do I remove? The answer can be a blinking LED or a command that would say "top drive" or something like that.
By any chance, when there is I/O activity on the faux-NVMe device(s), does the drive_bay's activity light blink? If so,
[Warning: Ugly Kluge Ahead]
You could do a few random reads, once per second, on the other (faux-NVMe) drive(s), and instruct the user to remove the one that isn't blinking.
"Desperate situations call for desperate measures."
 

patkoscsaba

New Member
Aug 16, 2022
It seems like `storcli` now works with the controller in IT mode, not just MegaRAID / IR mode! And it also has useful information. It may need some coding to extract from it exactly what I want (a rough extraction sketch follows after the output), but the output below (trimmed to remove noise) clearly specifies which device is on which physical port!

Code:
[root@echo-development storcli]# ./storcli64 /c0 show all
CLI Version = 007.2203.0000.0000 May 11, 2022
Operating system = Linux 4.18.0-372.19.1.el8_6.x86_64
Controller = 0
Status = Success
Description = None

[ ... ]

Physical Device Information :
===========================

Drive /c0/e0/s4 :
===============

Drive /c0/e0/s4 Device attributes :
=================================
Manufacturer Id = NVMe
Model Number = INTEL SSDPE2KX020T8
NAND Vendor = NA
SN = PHLJ0083011W2P0BGN
WWN = 3CD2E4A623290100
Firmware Revision = VDV10131
Raw size = 1.819 TB [0xe8e088af Sectors]
Coerced size = 1.819 TB [0xe8e088af Sectors]
Non Coerced size = 1.819 TB [0xe8e088af Sectors]
Device Speed = 8.0GT/s
Link Speed = 8.0GT/s
Sector Size = 512B
Config ID = NA
Number of Blocks = 3907029167
Connector Name = C0.0 x4


Drive /c0/e0/s4 Policies/Settings :
=================================
Enclosure position = 0
Connected Port Number = 0(path0) <<<<<<<<<<<<<<------------------ PORT 0
[ ... ]


Drive /c0/e0/s6 :
===============

Drive /c0/e0/s6 Device attributes :
=================================
Manufacturer Id = NVMe
Model Number = INTEL SSDPE2KX020T8
NAND Vendor = NA
SN = PHLJ008301YX2P0BGN
WWN = 3CD2E4DD23290100
Firmware Revision = VDV10131
Raw size = 1.819 TB [0xe8e088af Sectors]
Coerced size = 1.819 TB [0xe8e088af Sectors]
Non Coerced size = 1.819 TB [0xe8e088af Sectors]
Device Speed = 8.0GT/s
Link Speed = 8.0GT/s
Sector Size = 512B
Config ID = NA
Number of Blocks = 3907029167
Connector Name = C0.1 x4


Drive /c0/e0/s6 Policies/Settings :
=================================
Enclosure position = 0
Connected Port Number = 1(path0)     <<<<<<<<<<<<<<------------------ PORT 1
[ ... ]
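
In case it helps anyone else, a quick-and-dirty way to scrape that slot / WWN / connector / port mapping out of the text output (field positions as in the dump above; they may shift between storcli versions, and `storcli64 /c0 show all J` gives JSON if you would rather parse that):

Code:
./storcli64 /c0 show all | awk '
    /Drive \/c0\/e[0-9]+\/s[0-9]+ Device attributes/ { drive = $2 }
    /^WWN = /                   { wwn = $3 }
    /^Connector Name = /        { conn = $4 " " $5 }
    /^Connected Port Number = / { print drive, wwn, conn, $5 }'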
 

rgysi

New Member
Aug 30, 2022
Maybe 'storcli /c0/e0/s6 start locate' will work, but it looks like it can't find the enclosure position as both are '0'
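
That is, something like the following; the matching `stop locate` clears it again:

Code:
./storcli64 /c0/e0/s6 start locate
./storcli64 /c0/e0/s6 stop locate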
 

patkoscsaba

New Member
Aug 16, 2022
Maybe 'storcli /c0/e0/s6 start locate' will work, but it looks like it can't find the enclosure position as both are '0'
It doesn't blink. But that is OK. I can see which NVMe is on which physical port, and that's enough.
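
For anyone who needs the last step: the WWN that storcli reports can be matched against what Linux shows for the block devices, which closes the loop from physical connector to /dev/sdX (a sketch; worth double-checking that the translated devices really report the same WWN):

Code:
# WWN / model / serial as the OS sees them
lsblk -d -o NAME,WWN,MODEL,SERIAL /dev/sdb /dev/sdc
# The same WWNs also show up as persistent symlinks
ls -l /dev/disk/by-id/wwn-*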