Migration to HighPoint SSD7202 (Linux)


PriamX

New Member
Mar 3, 2022
This question may be a bit specific to a rather uncommon controller (at least I hadn't heard of it until recently).

I want to migrate my OS drives (two NVMe SSDs) to a HighPoint NVMe RAID card.

I have a file server running Fedora 37. It has 8x HDDs and 2x SSDs. I am competent enough with Linux to compile kernel modules if need be.

The SSD7202 is not yet installed. I'm trying to figure out how to install it to minimize downtime.

The 8x HDDs are 10TB WD Red drives (SATA, 7200rpm); all 8 are connected as a RAID6 to a Broadcom MegaRAID 9361-16i, which is a PCIe 3.0 x8 card. It is not bootable; it's just data storage (all manner of file types). This is working very well, and I only mention it to point out that there's an existing PCIe storage device in the server.

The 2x SSDs are 1TB WD Black SN750 drives, one in each of the two M.2 slots on the ASRock Rack C246 WS motherboard. We'll call these Drive A (M.2 slot 1) and Drive B (M.2 slot 2).

Drive A and Drive B are the OS drives. They are configured as a software RAID1 (mirror) using mdadm; these are what I want to relocate to the SSD7202, one at a time.
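
For reference, here's roughly how I'd confirm the current layout before touching anything (device names are just examples; Fedora often names the mirror /dev/md127):

Code:
    cat /proc/mdstat                        # the RAID1 across the two NVMe drives and its sync state
    mdadm --detail /dev/md127               # members, state, and array UUID (worth noting before the move)
    lsblk -o NAME,MODEL,SIZE,MOUNTPOINT     # which physical drive is which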

There is an available x8 lane PCIe slot that I plan to plug the SSD7202 into.

My backup service is iDrive. It's great for one-off recoveries, but restoring the entire system would likely take 2-3 days (I've never done it, so that's a guess based on the limited recoveries I have done). This is why I would rather not set up the SSD7202 via a full backup and restore.

Here's how I *think* I would like to do it, though I'm not sure if it's possible. I will outline it in steps:

1. In Linux, comment out the large RAID6 device in /etc/fstab and unmount it from the OS (temporarily, just for setting up the SSD7202).

2. Shut down the system and physically remove the RAID6 Broadcom card (again just temporarily).

3. Physically remove both Drive A and Drive B.

4. Install only Drive A into Slot 1 on the SSD7202 (putting Drive B aside for now)

5. Install the SSD7202 into the motherboard (again with only Drive A in it).

6. Create a Fedora boot USB and boot the system from it. Create a RAID1 (mirror) on the SSD7202 (with only the one drive), install a minimal Fedora, and get it running with UEFI boot. Test that it boots and make sure it's working. (If the mirror ends up on mdadm rather than the card's own RAID, see the rough sketch after this list.)

7. Shut down the system and install Drive B back into the M.2 motherboard slot (not the SSD7202, yet).

8. Boot up using either Drive B *or* the SSD7202, whichever makes more sense.

9. Copy files/data/configs etc. from Drive B onto the SSD7202.

10. Shut down again, remove Drive B from the mobo.

11. Boot up using the SSD7202; make sure all the files from Drive B made it over and that it's booting and configured correctly.

12. Shut down again, install Drive B into the SSD7202 and add it as the 2nd member of the existing RAID1 mirror.

13. Make sure it all boots up; Drive A and Drive B are now a RAID1 on the SSD7202.

14. Shut down and reconnect the SATA MegaRAID PCIe controller. Boot up, make sure the SSD7202 is still working alongside the MegaRAID, and remount the RAID6.
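
If the new mirror ends up being managed by mdadm rather than the card's own RAID, steps 1, 6, and 12 would look roughly like the following. Device names, partition numbers, and the mount point are just examples, and the EFI system partition would sit outside the mirror:

Code:
    # Step 1: take the RAID6 data volume out of play (after commenting out its /etc/fstab line)
    umount /mnt/raid6

    # Step 6: create a degraded mirror with only Drive A ("missing" reserves the second slot)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p2 missing

    # Step 12: once Drive B is in the card, add it as the second member and let it resync
    mdadm --manage /dev/md0 --add /dev/nvme1n1p2
    cat /proc/mdstat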

Those are the steps I was thinking of, but I'm new to HighPoint, and there are things I'm not sure about that may cause problems. Anyone here have experience with HighPoint devices? (It's based on the Marvell 88NR2241, I think.) Can you create a single-drive RAID1 and later add a 2nd drive? Can there be an NVMe drive installed in both the motherboard and the SSD7202 and still boot off either one?

Thanks!
 

nexox

Active Member
May 3, 2023
I haven't used HighPoint hardware since the ATA133 days, but your plan sounds more or less reasonable, so long as the card will let you grow from a single device to a RAID1; you'll have to read the manual or contact their support to confirm that.

The question really is: what are you trying to accomplish here? That Marvell chip is a PCIe switch, not a RAID controller, so you're going to end up with software RAID anyway (just like the ATA133 cards I used). It seems like a lateral move, maybe even a step backwards, since you lose all the documentation and help available for mdadm. If you just want additional m.2 slots and your board supports bifurcation, then you can get by with a much cheaper passive dual m.2 adapter card.
 
  • Like
Reactions: PriamX

PriamX

New Member
Mar 3, 2022
I couldn't dig that information up in the manual; I'm pretty sure I'll need to contact their support.

You're very close regarding what I'm trying to accomplish... I'm trying to pick up an x4 PCIe slot. The goal is to be able to install and fiddle with an ASRock PAUL IPMI card... no real objective with the IPMI card, just monkeying with it, perhaps some kernel module coding, depending on what I can discover about it.

On that mobo I have two x4 links, one for each of my SSDs. One is an M.2 slot; the other is an M.2 slot shared with a physical x8 slot. I'm pretty sure the board doesn't support bifurcation, or else the BIOS doesn't. Either way, I don't think it matters; I can't use it. HighPoint says I can put two NVMe drives on their board and run them both over a single x4 link without bifurcation. That then leaves me a place to put the x4 IPMI card. So, like you said, a step backwards. But I need the slot/lane.
 

nexox

Active Member
May 3, 2023
Fair enough, but there are also dual (or even quad) m.2 cards with PCIe switches that don't have the additional RAID layer. Not only do they usually cost a little less, but you could just move the drives over and boot off the same install.
 
  • Like
Reactions: PriamX

PriamX

New Member
Mar 3, 2022
Indeed. I had started in that direction. I had found an Ableconn PEXM2-130 card on Amazon (ASMedia ASM2824 PCIe switch). It is cheaper, but still more than half the price of the HighPoint card. I chose the HighPoint card in the end because they answered my initial emails and offer support; I paid about $40 more for it. But yes, you're right: it would be straightforward to migrate those drives over. In fact, it's not even really a migration.
 
  • Like
Reactions: nexox

UhClem

Active Member
Jun 26, 2012
NH, USA
...I'm trying to pick up an x4 PCIe slot. The goal is to be able to install and fiddle with an ASRock PAUL IPMI card...
... That then leaves me a place to put the x4 IPMI card.
The ASRock PAUL card is PCIe x1. Can't you use your SLOT7 (x1 slot; nearest CPU)?
I'm pretty sure the board doesn't support bifurcation, or else the BIOS doesn't. Either way, I don't think it matters; I can't use it.
Also, you should carefully check, in your BIOS Set-up, to see whether your SLOT6 (x16 elec) can be set to x8x4x4 (bifurcation). Also check to see if your SLOT4 (x8 elec) can be set to x4x4.

And, I agree with @nexox; you do NOT want to be dependent on Highpoint for drivers. Trust us.

By the way...
Does the PAUL card come with both high- and low-profile brackets?
What case/chassis is in use?
 
  • Like
Reactions: PriamX

nexox

Active Member
May 3, 2023
Now that I think about it, unless the Fedora installer can use the UEFI driver (I haven't messed with such things myself), it might be quite annoying to get the HighPoint driver into the installer. Do you have a spare SATA drive over 1TB that you could use to image back and forth? You'd still need a bootable USB or network environment that also has the HighPoint driver, but the process would go something like:

1) Install the HighPoint card with no drives, boot your existing install, load the driver, and test the config software to make sure it's working. I guess this may require a random m.2 drive to test with. Make an initrd with the HighPoint driver and install it.
2) Boot to USB and dd your current RAID1 to the SATA drive.
3) Move the m.2 drives to the HighPoint and configure its RAID1.
4) Boot the USB distro again and dd the image on the SATA drive back onto the new array.
5) Unplug the SATA drive, set the HighPoint as the boot device in the BIOS, reboot, and cross your fingers. (Rough command sketch of steps 2 and 4 below.)
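
Roughly, steps 2 and 4 might look like this. /dev/md127 for the current mirror, /dev/sdX for the spare SATA drive, and /dev/hpt_volume for whatever block device the HighPoint volume shows up as are all placeholders:

Code:
    # Booted from a live USB with nothing on the array mounted
    dd if=/dev/md127 of=/dev/sdX bs=4M status=progress conv=fsync      # image the mirror onto the spare SATA drive

    # ...move the m.2 drives to the card, create its RAID1, boot the live USB again...
    dd if=/dev/sdX of=/dev/hpt_volume bs=4M status=progress conv=fsync # restore the image onto the new volume

    # For step 1, rebuilding the Fedora initramfs once the driver module is installed (module name is a placeholder)
    dracut --force --add-drivers "hpt_example" /boot/initramfs-$(uname -r).img $(uname -r)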
 
  • Like
Reactions: PriamX

PriamX

New Member
Mar 3, 2022
Thanks for the reply UhClem!

The ASRock PAUL card is PCIe x1. Can't you use your SLOT7 (x1 slot; nearest CPU)?
Normally, yes; however, I have a Hauppauge WinTV-quadHD card plugged into that slot, which is an x1 card.

Also, you should carefully check, in your BIOS Set-up, to see whether your SLOT6 (x16 elec) can be set to x8x4x4 (bifurcation). Also check to see if your SLOT4 (x8 elec) can be set to x4x4.
I have not carefully checked; it is something I should find out. However, SLOT6 has an Nvidia GTX 1660 card in it (x16, but running as x8 because it's shared with SLOT4), and SLOT4 has the Broadcom MegaRAID 9361-16i card, which is my file storage array (x8).

And, I agree with @nexox; you do NOT want to be dependent on Highpoint for drivers. Trust us.
I'm curious, why is that? Although slightly annoying, I don't mind setting the driver up to recompile for each kernel upgrade; that's something I know how to do. But if there's some other reason... I'd sure like to know! I've never used a HighPoint product before.
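
If I do end up keeping the out-of-tree driver, I'd probably wrap it in DKMS so it rebuilds automatically on kernel updates. A rough sketch, with the package name, version, and Makefile invocation all placeholders rather than HighPoint's actual package:

Code:
    # /usr/src/hpt_example-1.0/dkms.conf  (names and paths are placeholders)
    PACKAGE_NAME="hpt_example"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="hpt_example"
    DEST_MODULE_LOCATION[0]="/kernel/drivers/block"
    MAKE[0]="make KVER=${kernelver}"      # whatever the driver's own Makefile expects
    CLEAN="make clean"
    AUTOINSTALL="yes"

    # Register and build once; dkms then rebuilds the module for each new kernel
    dkms add -m hpt_example -v 1.0
    dkms install -m hpt_example -v 1.0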

By the way...
Does the PAUL card come with both high- and low-profile brackets?
What case/chassis is in use?
Yes, it came with both high- and low-profile brackets.
 

nexox

Active Member
May 3, 2023
It's the kind of thing where you only tend to hear from people who are unsatisfied with the product, but over the years I have heard almost nothing but disappointment about HighPoint RAID components: mostly poor reliability and performance, because the drivers aren't anywhere near as well developed as mdraid. Out-of-tree drivers also tend to age out and stop compiling against newer kernels, sometimes after only a year or two, and then you just have to hope HighPoint will put development effort into a product they (by then) don't even sell any more. Not a situation I would like to get into.
 
  • Like
Reactions: UhClem and PriamX

PriamX

New Member
Mar 3, 2022
Now that I think about it, unless the Fedora installer can use the UEFI driver (I haven't messed with such things myself), it might be quite annoying to get the HighPoint driver into the installer. Do you have a spare SATA drive over 1TB that you could use to image back and forth? You'd still need a bootable USB or network environment that also has the HighPoint driver, but the process would go something like:

1) Install the HighPoint card with no drives, boot your existing install, load the driver, and test the config software to make sure it's working. I guess this may require a random m.2 drive to test with. Make an initrd with the HighPoint driver and install it.
2) Boot to USB and dd your current RAID1 to the SATA drive.
3) Move the m.2 drives to the HighPoint and configure its RAID1.
4) Boot the USB distro again and dd the image on the SATA drive back onto the new array.
5) Unplug the SATA drive, set the HighPoint as the boot device in the BIOS, reboot, and cross your fingers.
To set up the HighPoint card as a Linux boot device, there are certainly more steps, and HighPoint has documentation on it. Very briefly: first it requires updating the card's firmware to a version that supports UEFI. Then, with the binary driver on a USB stick, you drop into the shell during the Fedora install process and run an installation script. Finally, after Fedora installs and before rebooting, you go back into the shell and run a 2nd script to put the driver in its proper location.

I did hear back from HighPoint support this morning. They were very responsive, even going so far as to find and look at the manual for my mobo. I cannot create any RAID without having both drives present initially, but your idea of imaging from a bootable USB environment would indeed save me the hassle and downtime of a restore from iDrive.

They also offered a couple other ideas:

1. Use the card as a PCIe bridge. This would require bifurcating an x8 slot to x4/x4 to use each NVMe drive separately, and I would not have to reformat. This is what you'd covered before, but I can't do it because I don't have a free x8 slot.

2. Use the card as an NVMe switch (no RAID, separate drives); this would run on x4, and I'd continue to use mdadm to manage the RAID1. However, it's likely to be data-destructive; they're just not confident the software RAID would be automatically reassembled by the OS (mdadm) once the drives are relocated.

Thanks again nexox!
 

PriamX

New Member
Mar 3, 2022
It's the kind of thing where you only tend to hear from people who are unsatisfied with the product, but over the years I have heard almost nothing but disappointment about HighPoint RAID components: mostly poor reliability and performance, because the drivers aren't anywhere near as well developed as mdraid. Out-of-tree drivers also tend to age out and stop compiling against newer kernels, sometimes after only a year or two, and then you just have to hope HighPoint will put development effort into a product they (by then) don't even sell any more. Not a situation I would like to get into.
You've probably already seen my other reply to you, but it looks like this card can also function simply as an NVMe switch, which does not require any kernel drivers. Given your points, that may be the way to go, continuing to use mdadm.
 
  • Like
Reactions: nexox

nexox

Active Member
May 3, 2023
I would bet that in case 2 (plain switch mode) md wouldn't even know the drives had moved; the BIOS might need to have the boot device re-selected, but that would probably also be automatic. You could try it out easily enough, booting from a USB stick if you want to make sure nothing tries to write to the array while you're experimenting.
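
Something like this from the live USB would confirm it without writing anything (partition names are examples):

Code:
    mdadm --examine /dev/nvme0n1p2 /dev/nvme1n1p2   # superblocks should still show the same array UUID
    mdadm --assemble --scan --readonly              # assemble read-only, no writes to the array
    cat /proc/mdstat                                # the mirror should appear just as it did before the move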
 
  • Like
Reactions: PriamX

nexox

Active Member
May 3, 2023
One other thing to note if you do try to dd an image to a SATA drive and back with the HighPoint RAID mode: the resulting volume may be slightly smaller than the md volume, so you may need to shrink your filesystem slightly first, perhaps after you copy it to the spare drive while booted from a USB distro, so you can verify all files are still the same between both volumes after the resize.
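
Assuming ext4 sitting directly on the md device, the shrink would look something like this from the live USB, with the filesystem unmounted (device names and the target size are placeholders; LVM or another filesystem would need different tools):

Code:
    e2fsck -f /dev/md127            # resize2fs insists on a clean check first
    resize2fs /dev/md127 900G       # shrink to comfortably below the new volume's size
    # ...dd the image over, then grow the filesystem back out to fill the HighPoint volume:
    resize2fs /dev/hpt_volume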
 
  • Like
Reactions: PriamX

PriamX

New Member
Mar 3, 2022
I would bet that in case 2 (plain switch mode) md wouldn't even know the drives had moved; the BIOS might need to have the boot device re-selected, but that would probably also be automatic. You could try it out easily enough, booting from a USB stick if you want to make sure nothing tries to write to the array while you're experimenting.
That is true; and if it didn't work, I wouldn't be out anything as long as I had the USB-based image ready to go first.
 
  • Like
Reactions: nexox

UhClem

Active Member
Jun 26, 2012
NH, USA
however, I have a Hauppauge WinTV-quadHD card plugged into that slot, which is an x1 card.
...
However, SLOT6 has an Nvidia GTX 1660 card in it (x16, but running as x8 because it's shared with SLOT4), and SLOT4 has the Broadcom MegaRAID 9361-16i card, which is my file storage array (x8).
Thanks for the reply(/explanation).
[I always look for the low-hanging fruit, but until now, you hadn't shown me the entire tree.:)]

Here's a game-changer: Move the 9361 to SLOT2, freeing SLOT4 for NVMe usage. SLOT2 is x4 elec, x8 phys; that will still give you ~3200 MB/s bandwidth for your storage, and going thru the PCH has negligible impact.

Now, with SLOT4 available (x8 elec, x16 phys), you're in fat city. I'd be shocked if ASRock didn't offer an x4x4 option, if not in the original BIOS version, then in a recent update. That gets you directly to your stated final destination for about $30 [a 2-slot M.2 x8 card using a bifurcated slot] (and merely transplanting the 2 M.2s). And there is the added performance from using CPU (PCIe) lanes. [Also, see this thread [Link] for more ideas/inspiration.]
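
After the shuffle, something like this will confirm where each card landed and what link it negotiated (the bus address is a placeholder; get the real one from plain lspci):

Code:
    lspci | grep -i -e megaraid -e non-volatile      # bus addresses of the RAID card and the NVMe devices
    sudo lspci -vv -s 01:00.0 | grep -i lnksta       # LnkSta shows negotiated speed/width, e.g. "Speed 8GT/s, Width x4"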

Are you still within the return window for the (grossly overpriced) Highpoint card?

PS There is STILL the issue of 5 PCIe cards and only 4 PCIe slots, but it is doable; (mobo) M2_1 comes into play ...
 

PriamX

New Member
Mar 3, 2022
Thanks for the reply(/explanation).
[I always look for the low-hanging fruit, but until now, you hadn't shown me the entire tree.:)]

Here's a game-changer: Move the 9361 to SLOT2, freeing SLOT4 for NVMe usage. SLOT2 is x4 elec, x8 phys; that will still give you ~3200 MB/s bandwidth for your storage, and going thru the PCH has negligible impact.
Hmm. A *very* interesting idea. I hadn't considered SLOT4 for anything other than the MegaRAID card, but there's a bit more on the tree. The MegaRAID (SLOT4) is configured with CacheCade, an SSD RAID0 in front of the HDD RAID6, set up with two WD Ultrastar DC SS300 drives that "claim" 2.2 GB/s each. But I have not verified what real speed I'm getting out of it; I'm not sure I know how to measure its true speed. Certainly there's a marked difference with it on vs. off, and a canned data set shows SSD speeds, but only because that "test" is 100% cache hits.
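
I suppose something like fio, run read-only against the raw virtual disk, would give me a real number without risking the data (the device name is a placeholder; random reads spread across the whole array will mostly miss the CacheCade cache, so this is more of a worst-case figure):

Code:
    # Read-only random-read test against the MegaRAID virtual disk; safe even while it's mounted
    fio --name=raid6-randread --filename=/dev/sdX --readonly \
        --rw=randread --bs=1M --direct=1 --ioengine=libaio \
        --iodepth=32 --runtime=120 --time_based --group_reporting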

Now, with SLOT4 available (x8 elec, x16 phys), you're in fat city. I'd be shocked if ASRock didn't offer an x4x4 option, if not in the original BIOS version, then in a recent update. That gets you directly to your stated final destination for about $30 [a 2-slot M.2 x8 card using a bifurcated slot] (and merely transplanting the 2 M.2s). And there is the added performance from using CPU (PCIe) lanes. [Also, see this thread [Link] for more ideas/inspiration.]

Are you still within the return window for the (grossly overpriced) Highpoint card?
No, not an option. I bought it used but unopened for half price; a co-worker of a friend had plans for it but never used it. I'm ~okay~ with the deal, especially now that I've found out it can be used as an NVMe RAID, an NVMe switch, or a PCIe bridge.

PS There is STILL the issue of 5 PCIe cards and only 4 PCIe slots, but it is doable; (mobo) M2_1 comes into play ...
Yeah. I had planned to do something like an ADT-LINK M.2-to-PCIe adapter cable for the IPMI card and, like you said, plug it into M2_1, which is kinda awkward because that M.2 slot is right under the Nvidia card. I might need to find one with a longish cable.

PS: You'd asked what chassis; it's an Antec P101 Silent.
 

UhClem

Active Member
Jun 26, 2012
NH, USA
... but there's a bit more on the tree. The MegaRAID (SLOT4) is configured with CacheCade, which is an SSD RAID0 in front of the HDD RAID6. This is set up with two WD Ultrastar DC SS300 drives which "claim" 2.2 GB/s each. ...
Yes, in light of this new tree view, I'm in agreement with your original assessment: LSI in SLOT4, HighPoint in SLOT2. [My vote is to use the HPt as a (driver-less) switch.]

Now for the IPMI card ... given its modest PCIe signal needs (x1 gen2), there is an alternative to the ADT-Link style M.2 cables. This [Link] gives length and flexibility:
[image: M.2-to-PCIe riser cable]
and still provides more than enough signal bandwidth (x1 gen3) for your IPMI card.
 
  • Like
Reactions: nexox