INTEL P4510 4TB - Upgrade Firmware | Visible in lspci only


gb00s

Well-Known Member
Jul 25, 2018
1,165
570
113
Poland
Hi Everyone,

I bought some new Intel P4510 4TB drives, and all came with firmware VDV10131. I always do a firmware upgrade if one is available, and I couldn't find any reported issues with it on the web. All drives showed 0 bits and bytes read/written. As no firmware is available on the Intel website anymore, I downloaded the firmware upgrade tool from the Solidigm website (Link: Latest Firmware For Solidigm™ (Formerly Intel®) Solid State Drives) and upgraded the firmware to version VDV10184. No issues were reported during the upgrade process.

Rebooted, and the drive is no longer visible via nvme-cli, lsblk and the like. The only way to find the drive is via lspci -vv, which gives me the following output:
root@gen2oo --> # lspci -vv | grep NVMe
19:00.0 Non-Volatile memory controller: Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller] (prog-if 02 [NVM Express])
Subsystem: Intel Corporation NVMe Datacenter SSD [3DNAND] SE 2.5" U.2 (P4510)
If I boot into the Solidigm firmware upgrade tool again, it sees the NVMe and shows me that no new firmware is available. So the drive can't be dead.

Has anyone had similar issues with an NVMe drive after upgrading the firmware, with no indication of a problem during the process? If so, were you able to solve it?

Please let me know. Much appreciated.
 
  • Sad
Reactions: Whaaat

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,620
2,037
113
Ugh, such a bummer. I wondered how this transition was affecting drive updates... hopefully resolved ASAP for you.
 

gb00s

Well-Known Member
Jul 25, 2018
1,165
570
113
Poland
Contacted Solidigm support. Let's see their reaction time. I also tested the drive with HD Sentinel Pro from the PartedMagic boot CD; it sees the drive with all the parameters you would expect from an 'nvme smart-log /dev/nvmeXnY' command, but I still can't do anything with it. I just hope these are not fake drives that only run with one specific firmware, where any upgrade just breaks them. Obviously I stopped upgrading the other drives. I need to find a way of checking the serials against the drives, but ...
 

cageek

Active Member
Jun 22, 2018
93
104
33
Has anyone had similar issues with an NVMe drive after upgrading the firmware, with no indication of a problem during the process? If so, were you able to solve it?

Please let me know. Much appreciated.
I have a 1TB P4511 NVMe M.2 running VCV10384 (Solidigm says this is the latest):

16:00.0 Non-Volatile memory controller: Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller] (prog-if 02 [NVM Express])
Subsystem: Intel Corporation NVMe Datacenter SSD [3DNAND] SE M.2 (P4511)

I have an older version of their command-line software - SST_CLI_Linux_1.3.zip (dated ~09/2022). The release notes suggest the firmware has not changed recently from the current version.

I also have a pre-Solidigm version of the command-line software - intel-mas-cli-tool-linux-1-9.zip (dated ~07/2021). Its release notes list firmware versions P4511 = VCV10370 (M.2) and P4510 = XCV10132 (2.5-inch), XC311132 (M.2).

There is a download available here for the Intel MAS CLI tool - version 2.2 - AUR (en) - intel-mas-cli-tool (dated ~12/2022) - which suggests your P4510 was updated to VDV10182 in July 2021, with additional updates in Aug. & Sept. 2021, which would make it ~VDV10184 as of about Sept. 2021.

Hopefully that helps. If you want any of my old files, let me know, but I might try the Intel 2.2 tool and see if there is a force option (or you can use another firmware slot).
 

gb00s

Well-Known Member
Jul 25, 2018
1,165
570
113
Poland
So, first of all, Solidigm was super quick to react to the support request and gave me a list of questions asking for more information about what was happening, the environment the NVMe drives run in, etc. I always received a reply within 24 hours at most. That's a positive.

However, it seems the following happened. I need to note there's still no conclusion as to why it happened. It seems the firmware upgrade to the newest and correct version VDV10184 caused a disconnection of the controller from the namespaces set up on the NVMe and a purge of all namespaces. Data-wise, we all know what that means. I then tried the same on a Debian 12 test system instead of my Gentoo workstation, and the same thing happened.

So I could see the devices via lspci | grep Non-Volatile and later nvme list -v.
root@debian:~# nvme list -v
Subsystem Subsystem-NQN Controllers
---------------- ------------------------------------------------------------------------------------------------ ----------------
nvme-subsys1 nqn.2014.08.org.nvmexpress:80868086PHLJ012507HZ4P0DGN INTEL SSDPE2KX040T8 nvme1
nvme-subsys0 nqn.2014.08.org.nvmexpress:80868086PHLJ930005JW4P0DGN INTEL SSDPE2KX040T8 nvme0

Device SN MN FR TxPort Address Subsystem Namespaces
-------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
nvme1 PHLJ012507HZ4P0DGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:42:00.0 nvme-subsys1
nvme0 PHLJ930005JW4P0DGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:41:00.0 nvme-subsys0

Device Generic NSID Usage Format Controllers
------------ ------------ -------- -------------------------- ---------------- ----------------
root@debian:~#
I wasn't even able to set up either NVMe with any 'nvme' command from the nvme CLI. So I used the Solidigm™ Storage Tool (For Data Center & Legacy Client SSDs), and from there things started looking up. I was at least able to gather some information about the status of drive 0 with
sst show -ssd 0. It showed me a namespace ID of 4294967295, which I've never seen before.
root@debian:~# sst show -ssd 0

- PHLJ930005JW4P0DGN 4294967295 -

Bootloader : 0181
Capacity : 4.00 TB (4,000,787,030,016 bytes)
DevicePath : /dev/nvme0
DeviceStatus : Healthy
Firmware : VDV10184
FirmwareUpdateAvailable : The selected drive contains current firmware as of this tool release.
Index : 0
ModelNumber : INTEL SSDPE2KX040T8
NamespaceId : 4294967295
ProductFamily : Intel SSD DC P4510 Series
SMARTEnabled : True
SectorDataSize : 512
SerialNumber : PHLJ930005JW4P0DGN
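A side note on that odd NamespaceId: 4294967295 is just 0xFFFFFFFF, the all-ones value the NVMe specification reserves as the "broadcast" NSID. A controller reporting it as the namespace ID most likely means no real namespace is attached, which matches the symptoms here. A quick shell check:

```shell
# 4294967295 rendered in hex -- the NVMe broadcast/all-namespaces NSID
printf '0x%X\n' 4294967295            # prints 0xFFFFFFFF
echo $(( 4294967295 == 0xFFFFFFFF ))  # prints 1
```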
It also became clear that the firmware upgrade to VDV10184 results in a change of the bootloader, as the drive information from a non-upgraded drive shows below.
root@debian:~# sst show -ssd 1

- PHLJ2301023B4P0VGN 1 -

Bootloader : 0203
Capacity : 4.00 TB (4,000,787,030,016 bytes)
DevicePath : /dev/nvme1n1
DeviceStatus : Healthy
Firmware : VDV10131
FirmwareUpdateAvailable : Firmware=VDV10184 Bootloader=VB1B0181
Index : 1
MaximumLBA : 976754645
ModelNumber : INTEL SSDPE2KX040T8
NamespaceId : 1
PercentOverProvisioned : 100.00
ProductFamily : Intel SSD DC P4510 Series
SMARTEnabled : True
SectorDataSize : 4096
SerialNumber : PHLJ2301023B4P0VGN

root@debian:~#
So the direction to go became clearer, and I created a test namespace with sst create -namespace -ssd 0 size=100000. Then I just had to attach the namespace to the controller via sst attach -namespace 1 -ssd 0 and reboot, and the drive was back. It looks relatively easy once you realize that only the Solidigm tool gives you a platform to configure the upgraded drive.

From a Solidigm standpoint, it's still not clear
  1. why the firmware upgrade detached the controller and deleted all namespaces
  2. why the nvme CLI then did not let me re-configure the drive.
Also, it seems that no matter how the namespace is configured in terms of blocks, the maximum usable size of the drive is now 3.2TB, while the old firmware effectively gave me 3.6TB. All strange ... Maybe some automatic over-provisioning or something like that. I will work on that too.
 
Last edited:

gb00s

Well-Known Member
Jul 25, 2018
1,165
570
113
Poland
I have a 1TB P4511 NVMe M.2 running VCV10384 (Solidigm says this is the latest):

16:00.0 Non-Volatile memory controller: Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller] (prog-if 02 [NVM Express])
Subsystem: Intel Corporation NVMe Datacenter SSD [3DNAND] SE M.2 (P4511)

I have an older version of their command-line software - SST_CLI_Linux_1.3.zip (dated ~09/2022). The release notes suggest the firmware has not changed recently from the current version.

I also have a pre-Solidigm version of the command-line software - intel-mas-cli-tool-linux-1-9.zip (dated ~07/2021). Its release notes list firmware versions P4511 = VCV10370 (M.2) and P4510 = XCV10132 (2.5-inch), XC311132 (M.2).

There is a download available here for the Intel MAS CLI tool - version 2.2 - AUR (en) - intel-mas-cli-tool (dated ~12/2022) - which suggests your P4510 was updated to VDV10182 in July 2021, with additional updates in Aug. & Sept. 2021, which would make it ~VDV10184 as of about Sept. 2021.

Hopefully that helps. If you want any of my old files, let me know, but I might try the Intel 2.2 tool and see if there is a force option (or you can use another firmware slot).
If you could share all versions of intel-mas-cli-tool-linux up to and including the one with VDV10184, that would be awesome. There's a bit of confusion here between Intel and Solidigm, as the Solidigm™ Storage Tool (SST) v1.9 includes VDV10184. Not sure why they were not able to keep Intel's versioning ... But hey ...
 

dbTH

Member
Apr 9, 2017
148
59
28
So, first of all, Solidigm was super quick to react to the support request and gave me a list of questions asking for more information about what was happening, the environment the NVMe drives run in, etc. I always received a reply within 24 hours at most. That's a positive.

However, it seems the following happened. I need to note there's still no conclusion as to why it happened. It seems the firmware upgrade to the newest and correct version VDV10184 caused a disconnection of the controller from the namespaces set up on the NVMe and a purge of all namespaces. Data-wise, we all know what that means. I then tried the same on a Debian 12 test system instead of my Gentoo workstation, and the same thing happened.

So I could see the devices via lspci | grep Non-Volatile and later nvme list -v.

I wasn't even able to set up either NVMe with any 'nvme' command from the nvme CLI. So I used the Solidigm™ Storage Tool (For Data Center & Legacy Client SSDs), and from there things started looking up. I was at least able to gather some information about the status of drive 0 with
sst show -ssd 0. It showed me a namespace ID of 4294967295, which I've never seen before.

It also became clear that the firmware upgrade to VDV10184 results in a change of the bootloader, as the drive information from a non-upgraded drive shows below.

So the direction to go became clearer, and I created a test namespace with sst create -namespace -ssd 0 size=100000. Then I just had to attach the namespace to the controller via sst attach -namespace 1 -ssd 0 and reboot, and the drive was back. It looks relatively easy once you realize that only the Solidigm tool gives you a platform to configure the upgraded drive.

From a Solidigm standpoint, it's still not clear
  1. why the firmware upgrade detached the controller and deleted all namespaces
  2. why the nvme CLI then did not let me re-configure the drive.
Also, it seems that no matter how the namespace is configured in terms of blocks, the maximum usable size of the drive is now 3.2TB, while the old firmware effectively gave me 3.6TB. All strange ... Maybe some automatic over-provisioning or something like that. I will work on that too.
Your 4TB P4510 behaves weird. From the "sst" tool output you provided above, not only is the "NamespaceId" incorrectly represented after the upgrade, the "PercentOverProvisioned" parameter also seems to be missing.

I don't believe your issue was caused by this specific VDV10184 firmware version or by the Solidigm "sst" CLI, but probably by some sort of firmware metadata corruption within the drive. The Linux nvme CLI tool should also allow you to create and attach a namespace.

I have an 8TB version of the P4510 and also upgraded it to firmware version VDV10184 using the Solidigm "sst" CLI tool. The upgrade was a breeze. The NamespaceId and PercentOverProvisioned values are all correctly represented after the upgrade, as shown below:


Bootloader : 0181
Capacity : 8.00 TB (8,001,563,222,016 bytes)
DevicePath : /dev/nvme0n1
DeviceStatus : Healthy
Firmware : VDV10184
FirmwareUpdateAvailable : The selected drive contains current firmware as of this tool release.
Index : 0
MaximumLBA : 15628053167
ModelNumber : INTEL SSDPE2KX080T8
NamespaceId : 1
PercentOverProvisioned : 100.00
ProductFamily : Intel SSD DC P4510 Series
SMARTEnabled : True
SectorDataSize : 512


Btw, which vendor did you buy the P4510s from (so we know whom to stay away from)? Did they sell you a bad batch of drives?
Further, the drive with SerialNumber PHLJ930005JW4P0DGN appears to be an old-stock drive.
 
Last edited:

Whaaat

Active Member
Jan 31, 2020
301
157
43
From the "sst" tool output you provided above, not only is the "NamespaceId" incorrectly represented after the upgrade, the "PercentOverProvisioned" parameter also seems to be missing.
Did you notice that not only did 'MaximumLBA' disappear, but 'SectorDataSize' changed from 4096 to 512, which is a 100% goodbye to data integrity? The downgrade of the bootloader version also lacks any meaning.
 

gb00s

Well-Known Member
Jul 25, 2018
1,165
570
113
Poland
I believe more and more that there's more to it.

It seems this may be related to my Supermicro backplanes BPN-SAS3-826EL1-N4 and BPN-SAS3-826A-N4. First of all, I always have issues getting the firmware upgrade tool loaded via USB. If the drives are connected to the backplanes above, the boot process gets stuck at some point, or the kernel panics immediately once it's loaded. When updating the firmware with the drives connected through a backplane, I can reproduce the issues above. Once the drives are connected to a PCIe adapter card, either directly or via an SFF-8643 to 8639 cable, the update tool boots immediately and the drives are available immediately after the firmware update.

However, something still does not play nicely with this new firmware when using the NVMe-CLI commands, or I'm just misusing the NVMe-CLI. Once I detach the namespace from the controller and delete the namespace ... I'm lost with the NVMe-CLI, as per below.
root@debian:~# nvme list -v
Subsystem Subsystem-NQN Controllers
---------------- ------------------------------------------------------------------------------------------------ ----------------
nvme-subsys3 nqn.2014.08.org.nvmexpress:80868086PHLJ242100814P0VGN INTEL SSDPE2KX040T8 nvme3
nvme-subsys2 nqn.2014.08.org.nvmexpress:80868086PHLJ242100FZ4P0VGN INTEL SSDPE2KX040T8 nvme2
nvme-subsys1 nqn.2014.08.org.nvmexpress:80868086PHLJ230100SD4P0VGN INTEL SSDPE2KX040T8 nvme1
nvme-subsys0 nqn.2014.08.org.nvmexpress:80868086PHLJ230100SG4P0VGN INTEL SSDPE2KX040T8 nvme0

Device SN MN FR TxPort Address Subsystem Namespaces
-------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
nvme3 PHLJ242100814P0VGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:01:00.0 nvme-subsys3 nvme3n1
nvme2 PHLJ242100FZ4P0VGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:42:00.0 nvme-subsys2 nvme2n1
nvme1 PHLJ230100SD4P0VGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:41:00.0 nvme-subsys1 nvme1n1
nvme0 PHLJ230100SG4P0VGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:02:00.0 nvme-subsys0 nvme0n1

Device Generic NSID Usage Format Controllers
------------ ------------ -------- -------------------------- ---------------- ----------------
/dev/nvme3n1 /dev/ng3n1 1 4.00 TB / 4.00 TB 4 KiB + 0 B nvme3
/dev/nvme2n1 /dev/ng2n1 1 4.00 TB / 4.00 TB 4 KiB + 0 B nvme2
/dev/nvme1n1 /dev/ng1n1 1 4.00 TB / 4.00 TB 4 KiB + 0 B nvme1
/dev/nvme0n1 /dev/ng0n1 1 4.00 TB / 4.00 TB 4 KiB + 0 B nvme0
root@debian:~# nvme detach /dev/nvme0 -n 1 -c 1
NVMe status: Controller List Invalid: The controller list provided contains invalid controller ids(0x411c)
root@debian:~# nvme id-ctrl /dev/nvme0 | grep ^cntlid
cntlid : 0
root@debian:~# nvme detach /dev/nvme0 -n 1 -c 0
detach-ns: Success, nsid:1
root@debian:~# nvme delete-ns /dev/nvme0 -n 1
delete-ns: Success, deleted nsid:1
root@debian:~# nvme create-ns /dev/nvme0 -s 6875000000 -c 6875000000 -b 4096
NVMe status: Namespace Insufficient Capacity: Creating the namespace requires more free space than is currently available(0x4115)
root@debian:~# sst create -namespace -ssd 0 size=6875000000

- Intel SSD DC P4510 Series PHLJ230100SG4P0VGN -

Status : create namespace successful.

root@debian:~#
As you can see, I'm not able to create a namespace of ~3.2TB with the usual command I also use when setting up Kioxia CM6 NVMe drives in this environment >> nvme create-ns /dev/nvme0 -s 6875000000 -c 6875000000 -b 4096. But doing the same with the SST command, providing the same number of blocks to create >> sst create -namespace -ssd 0 size=6875000000, works just fine. If I give the nvme create-ns ... command a lower number, I get something < 1TB. I have no explanation for it, and neither does Solidigm at the moment. It's as if, with this firmware, the NVMe-CLI can no longer determine the size of the NVMe. Of course, all of this only works for updated drives that are not connected to backplanes. Otherwise, as mentioned already, I literally can't do anything with the NVMe-CLI, only with the SST and/or Intel MAS commands.
Btw, which vendor did you buy the P4510s from (so we know whom to stay away from)? Did they sell you a bad batch of drives?
Further, the drive with SerialNumber PHLJ930005JW4P0DGN appears to be an old-stock drive.
Until this is 100% solved and can be attributed to the seller with certainty, I won't point any fingers or name names.
 

dbTH

Member
Apr 9, 2017
148
59
28
I believe more and more that there's more to it.

It seems this may be related to my Supermicro backplanes BPN-SAS3-826EL1-N4 and BPN-SAS3-826A-N4. First of all, I always have issues getting the firmware upgrade tool loaded via USB. If the drives are connected to the backplanes above, the boot process gets stuck at some point, or the kernel panics immediately once it's loaded. When updating the firmware with the drives connected through a backplane, I can reproduce the issues above. Once the drives are connected to a PCIe adapter card, either directly or via an SFF-8643 to 8639 cable, the update tool boots immediately and the drives are available immediately after the firmware update.

However, something still does not play nicely with this new firmware when using the NVMe-CLI commands, or I'm just misusing the NVMe-CLI. Once I detach the namespace from the controller and delete the namespace ... I'm lost with the NVMe-CLI, as per below.

As you can see, I'm not able to create a namespace of ~3.2TB with the usual command I also use when setting up Kioxia CM6 NVMe drives in this environment >> nvme create-ns /dev/nvme0 -s 6875000000 -c 6875000000 -b 4096. But doing the same with the SST command, providing the same number of blocks to create >> sst create -namespace -ssd 0 size=6875000000, works just fine. If I give the nvme create-ns ... command a lower number, I get something < 1TB. I have no explanation for it, and neither does Solidigm at the moment. It's as if, with this firmware, the NVMe-CLI can no longer determine the size of the NVMe. Of course, all of this only works for updated drives that are not connected to backplanes. Otherwise, as mentioned already, I literally can't do anything with the NVMe-CLI, only with the SST and/or Intel MAS commands.

Until this is 100% solved and can be attributed to the seller with certainty, I won't point any fingers or name names.
Note this part where you were using the nvme CLI:

root@debian:~# nvme create-ns /dev/nvme0 -s 6875000000 -c 6875000000 -b 4096
NVMe status: Namespace Insufficient Capacity: Creating the namespace requires more free space than is currently available(0x4115)


6875000000 * 4096 = 28,160,000,000,000 ---> this is almost a 28TB namespace, which is much greater than the drive's free space.

And when you use the sst tool, sst create -namespace -ssd 0 size=6875000000, it basically uses a 512-byte sector size:
6875000000 * 512 = 3,520,000,000,000 --> which is about 3.2 TiB (3.52 TB)

So, when using the nvme-cli tool, make sure the -s and -c values are correctly issued so that the namespace size is not out of bounds.

Could you try this to see if you are able to create a ~2TB namespace with 512-byte sectors using the nvme CLI?

nvme create-ns /dev/nvme0 -s 4000000000 -c 4000000000 -f 0 -d 0 -m 0

where -f is the namespace's formatted logical block size setting.
In the Linux nvme CLI version I used, "-f 0" is 512 bytes.

Also, try running nvme id-ctrl /dev/nvme0 -H to get these two values before creating a namespace:

tnvmcap --> total NVM capacity
unvmcap --> unused NVM capacity

When creating a namespace, these two values shouldn't be exceeded.
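dbTH's arithmetic can be sanity-checked directly in the shell. The 4,000,787,030,016-byte figure below is the capacity the sst tool reported for gb00s's 4TB P4510 earlier in the thread; everything else follows from it:

```shell
tnvmcap=4000787030016          # total capacity of the 4TB P4510, in bytes

echo $(( tnvmcap / 512 ))      # max blocks at 512 B sectors: 7814037168
echo $(( tnvmcap / 4096 ))     # max blocks at 4 KiB sectors: 976754646
                               # (matches the reported MaximumLBA 976754645, i.e. indices 0..976754645)

# The failing request: 6875000000 blocks at 4096 B sectors would need
echo $(( 6875000000 * 4096 ))  # 28160000000000 bytes, ~28 TB -- far over capacity

# The same block count at 512 B sectors (what sst apparently assumed)
echo $(( 6875000000 * 512 ))   # 3520000000000 bytes, ~3.2 TiB -- fits
```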
 
Last edited:
  • Like
Reactions: nexox

mr44er

Active Member
Feb 22, 2020
133
42
28
smartctl -x /dev/nvme0n1
Code:
...
Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
0 -     512       0         2
1 +    4096       0         0
...
It shows the available block sizes and their relative performance. A lower number is better; 0 is best. So 4096 means the best performance here.

To have a clean start, delete all namespaces (you have a backup?):
nvme delete-ns /dev/nvme0 -n 1 etc...

nvme id-ctrl /dev/nvme0 | grep tnvmcap -- the number is the total capacity in bytes this device has (smartctl should show the same).
My example:
tnvmcap : 960,197,124,096 = 960GB

Divide this by the bytes per block/sector to get the available blocks.
960197124096 / 4096 gives 234423126 blocks.
If you wanted 512b, 960197124096 / 512 gives 1875385008 blocks.
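Putting the recipe above together, a sketch of the whole sequence might look like this. The tnvmcap value is the 960GB example from this post; the device path, namespace ID, and controller id 0 are taken from earlier in this thread and will differ per system, so treat the commented nvme commands as a template rather than something to paste blindly:

```shell
#!/bin/sh
# Compute the block count for a full-capacity namespace at a given sector size.
tnvmcap=960197124096   # bytes, from 'nvme id-ctrl /dev/nvmeX | grep tnvmcap'
sector=4096            # pick the 4 KiB LBA format for best performance

blocks=$(( tnvmcap / sector ))
echo "$blocks"         # 234423126 for this example drive

# On real hardware the namespace would then be recreated roughly like
# (destroys data -- make sure you have a backup):
#   nvme delete-ns /dev/nvme0 -n 1
#   nvme create-ns /dev/nvme0 -s "$blocks" -c "$blocks" -b 4096
#   nvme attach-ns /dev/nvme0 -n 1 -c 0   # controller id from 'nvme id-ctrl'
#   nvme reset /dev/nvme0
```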

 

gb00s

Well-Known Member
Jul 25, 2018
1,165
570
113
Poland
FINAL UPDATE:

So, first of all, the sizing issue with the commands I intended to apply has nothing to do with the calculation of available blocks according to the sector size. This is now confirmed by Solidigm, and I'm waiting for a fixed firmware. There still seems to be an issue with multiple namespaces when updating the firmware followed by a power cycle: the drive parameters are not updated correctly, at least in my case, so all data is lost.

However, as a workaround, the solution here is to first create a single namespace again with the maximum drive size available, re-apply the firmware update, and follow up with a power cycle. After that, I can use the nvme-cli as usual and set up as many namespaces, at whatever sizes, as I initially intended. If, as originally intended, I set up a namespace smaller than the maximum available blocks or the native drive capacity first, it fails again.

Now documented by Solidigm under case number NSGSE-112016 (2023/10/04).
 
Last edited:
  • Like
Reactions: NablaSquaredG