Unable to change partition type to Linux RAID


Werner Barnard

New Member
Jun 10, 2022
2
0
1
I am struggling to change a partition to Linux RAID:

Code:
user@h1016:~$ sudo mdadm -E /dev/nvme0n1
/dev/nvme0n1:
MBR Magic : aa55
Partition[0] : 1875385007 sectors at 1 (type ee)
user@h1016:~$ sudo fdisk /dev/nvme0n1

Welcome to fdisk (util-linux 2.37.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): t
No partition is defined yet!

Command (m for help): n
Partition number (1-128, default 1):
First sector (34-1875384974, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-1875384974, default 1875384974):

Created a new partition 1 of type 'Linux filesystem' and of size 894.3 GiB.

Command (m for help): t
Selected partition 1
Partition type or alias (type L to list all): fd
Type of partition 1 is unchanged: Linux filesystem.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

user@h1016:~$ sudo mdadm -E /dev/nvme0n1
/dev/nvme0n1:
MBR Magic : aa55
Partition[0] : 1875385007 sectors at 1 (type ee)
I have tried code 29 as well, but it just doesn't want to change...
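For what it's worth, the `type ee` entry above is the protective MBR of a GPT disk, so `mdadm -E` on the whole disk will always show only that. A hedged sketch of one way out, using the device names from the output above (destructive, so double-check the target disk; `sgdisk` comes from the gdisk package):

```shell
# On a GPT disk the MBR hex code 'fd' is meaningless; the GPT type GUID
# for Linux RAID is A19D880F-05FC-4D3B-A006-743F0F84911E (sgdisk: FD00).

# Set partition 1 to Linux RAID with sgdisk:
sudo sgdisk -t 1:FD00 /dev/nvme0n1

# Or, since fdisk >= 2.36 supports type aliases, use 'raid' at the
# prompt -- it resolves correctly on both MBR and GPT labels:
#   Command (m for help): t
#   Partition type or alias (type L to list all): raid

# Then examine the partition, not the whole disk:
sudo mdadm -E /dev/nvme0n1p1
```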
 

oneplane

Well-Known Member
Jul 23, 2021
844
484
63
Technically, GPT would be a better choice anyway. GPT doesn't use MBR type codes; it designates partition types with GUIDs.
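To see which type GUID a partition currently carries, a quick sketch (device name taken from the thread's example):

```shell
# lsblk can print the GPT type GUID and its human-readable name:
lsblk -o NAME,PARTTYPE,PARTTYPENAME /dev/nvme0n1

# For a Linux RAID member, PARTTYPE should read
#   a19d880f-05fc-4d3b-a006-743f0f84911e
# whereas the 0xfd code only exists on DOS/MBR-labelled disks.
```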
 

cageek

Active Member
Jun 22, 2018
94
105
33
/dev/nvme0n1 is the disk
/dev/nvme0n1p1 is its first partition

Do a:

sudo fdisk -l /dev/nvme0n1

It's not really clear what you wanted (or what you got). You probably want a GPT label (in 2022) and a RAID partition, as @oneplane suggested. There shouldn't be a problem with GPT.

After that you probably want...

sudo mdadm -E /dev/nvme0n1p1

which should give full raid info if the first partition is a raid partition
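A hedged sketch of the partition-based route once the type is set; the second member (/dev/nvme1n1p1) is hypothetical, substitute your real device:

```shell
# Build the array on the partitions, not the whole disks:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1p1 /dev/nvme1n1p1

# Now -E on a member shows the full RAID superblock:
sudo mdadm -E /dev/nvme0n1p1
```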
 

casperghst42

Member
Sep 14, 2015
112
20
18
55
Why are you even adding partitions? Just feed mdadm the raw device.
They probably have fixed it, but one of the MB vendors had a FW bug a few years back which would strip the raid during reboot. Not a "biggy" but you had to reassemble the raid after each reboot. It only happened with raw devices.
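A sketch of the raw-device approach described here, with the second disk (/dev/nvme1n1) hypothetical and the config paths being the Debian/Ubuntu ones:

```shell
# Feed mdadm the whole disks, no partition table needed:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1

# Record the array so it reassembles cleanly on boot:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u   # Debian/Ubuntu; other distros differ
```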
 

acquacow

Well-Known Member
Feb 15, 2017
784
439
63
42
They probably have fixed it, but one of the MB vendors had a FW bug a few years back which would strip the raid during reboot. Not a "biggy" but you had to reassemble the raid after each reboot. It only happened with raw devices.
Been using mdadm for raid on raw devices in production for the last 15 years.... never heard of an issue like that. The metadata is on the disks themselves, if your motherboard was overwriting that on reboot, I'd be VERY concerned, lol!
 

casperghst42

Member
Sep 14, 2015
112
20
18
55
Been using mdadm for raid on raw devices in production for the last 15 years.... never heard of an issue like that. The metadata is on the disks themselves, if your motherboard was overwriting that on reboot, I'd be VERY concerned, lol!
I've been using it even longer and also haven't seen it myself. It was mentioned in the MB section; I think it was Gigabyte that had the problem. The reason I started using partitions was drive size: with partitions you could add a drive that was larger than the rest. But that has all changed with the newer versions of mdadm (I think).
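The sizing trick mentioned above can be sketched like this (the 890 GiB figure is an arbitrary example, not from the thread):

```shell
# Make the member partition a fixed size rather than filling the disk,
# so any future replacement disk of equal-or-larger capacity can be
# partitioned to match; FD00 marks it Linux RAID.
sudo sgdisk -n 1:0:+890G -t 1:FD00 /dev/nvme0n1
```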
 

oneplane

Well-Known Member
Jul 23, 2021
844
484
63
Most of those problems happened during two phases:

1. The introduction of various FakeRAID/SoftRAID schemes as BIOS Option ROMs
2. Drive sizes growing beyond what the firmware supported (127GB first, then 4TB I think, and then there was the 4K-sector thing)

While Intel RST RAID didn't really cause much of an issue, the AMD version and the non-Intel, IBV-added option ROMs for fake RAID sometimes caused problems. The issue was that the firmware would scan the drives for active arrays and 'find' them, but the 0.9x MD metadata format wasn't parsable by it, so it was considered 'broken', at which point the firmware would helpfully 'recover' that metadata for you. Result: the FakeRAID BIOS extension couldn't construct a logical volume, and neither could MD, because the metadata was now corrupted.
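Superblock placement is what made the 0.9x format fragile here, which is worth keeping in mind when creating arrays. A hedged sketch (second member hypothetical):

```shell
# mdadm metadata versions place the superblock differently:
#   0.90 / 1.0 : at the END of the device (what old firmware tripped over)
#   1.1        : at the very start
#   1.2        : 4 KiB from the start (the default for years now)
# Pinning the modern default explicitly:
sudo mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 \
    /dev/nvme0n1p1 /dev/nvme1n1p1
```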
 