RAID Cache as NVMe?


fr3

New Member
Nov 20, 2023
I happened to get some RAID cards for free, and I'm using them as JBOD because I want to directly access the drives.

But it seems like such a waste of that nice cache memory they have.

Is there a way to export the cache memory itself as a drive on a MegaRAID card? Each card has 4 or 8 GB of RAM-based storage with a BBU. It seems silly to let it go to waste. It could make a nice swap partition, even without the BBUs, or I could use it for dm-cache to accelerate some LVs.
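Just to illustrate what I'd do with it if it could be exported (the device name below is purely hypothetical; as far as I know nothing exposes the cache like this today):

```
# Purely hypothetical device name; nothing currently exposes the
# controller's cache RAM as a block device.
mkswap /dev/megaraid_cache0
swapon --priority 100 /dev/megaraid_cache0
```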

Are there any open source firmware replacement projects for LSI cards?
 

sko

Active Member
Jun 11, 2021
Given the amount of power those things draw, just get more 'real' RAM and dump those controllers...
 

fr3

New Member
Nov 20, 2023
And how do I connect my drives without controllers? I don't pay for power, and I got the controllers for free too. I already have the maximum installed RAM for my motherboard.
 

sko

Active Member
Jun 11, 2021
I had the impression you wanted to use that/those controllers only for the installed memory.

But nonetheless - if all you need is a simple HBA, I'd use a normal HBA (in IT mode) instead of a RAID controller that probably uses a proprietary on-disk format and hence makes it impossible to move those drives to another system or an HBA...

And that RAM is only accessible to the controller itself, not the host system.
If you still need an additional cache because you've already maxed out the RAM and still experience memory pressure from your FS, I'd go for NVMe drives as an additional caching layer.
No idea what additional caching capabilities LVM has nowadays (it was basically none back when I had to deal with it...), but in terms of ZFS you would ideally first go for a 'special' vdev, which (especially for spinning-rust pools) gives a much higher real-world performance boost without the additional cost of the (large) allocation tables in RAM that an L2ARC would need (and hence taking away even more of the already scarce RAM).
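As a rough sketch (the pool name and device paths are just examples; needs OpenZFS 0.8 or newer):

```
# Example pool "tank" with two spare NVMe drives (placeholder names).
# Add a mirrored special vdev for metadata:
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally also keep small records on the special vdev:
zfs set special_small_blocks=32K tank
```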
 

fr3

New Member
Nov 20, 2023
I am using IT mode, thanks.

I said my disks are JBOD.

I have no proprietary on-disk format.

I got double vaxxed and I'm wearing a mask like a good little Christian too, in case that's important.

All of this is super not related to anything I asked, which is how to take advantage of the cache memory and BBUs that are otherwise sitting unused.

It's okay to not know how to do something and to move on to another thread. Nobody is obligated to respond to everything here. It's not a ticket system, or a 911 call center. I'm not going to give you one star or ask to speak to your manager.

I'll put you in the "Not possible, and no, there are no open firmware projects" category.

Maybe someone else does know a trick to repurpose it. If the controller can use it, and the controller can create virtual disks when used in IR mode, then it's certainly physically possible, just not something Avago ever cared to write. I know there are open firmware projects for NICs, UEFI/BIOS replacements, and video cards, so it stood to reason there might be one for storage controllers, and I don't know a better place to ask than here.
 

nabsltd

Well-Known Member
Jan 26, 2022
But nonetheless - if all you need is a simple HBA, I'd use a normal HBA (in IT mode) instead of a RAID controller that probably uses a proprietary on-disk format and hence makes it impossible to move those drives to another system or an HBA...
LSI/Broadcom controllers use an on-disk format that mdadm can read trivially.
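For instance (the device name is a placeholder, and this assumes the card left standard SNIA DDF metadata, which mdadm's external-metadata support understands):

```
# Show whatever RAID metadata mdadm recognizes on a member disk
# (placeholder device name):
mdadm --examine /dev/sdb

# If standard DDF metadata is present, mdadm can assemble the container
# and the arrays defined inside it:
mdadm --assemble --scan
```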

The battery-backed cache on a RAID controller can greatly increase the effective write speed compared to using system RAM for cache, especially with the 4-8 GB that the OP listed. It's also much safer than using system RAM, because the data will still be written even if the system is powered off before the write completes.
 

fr3

New Member
Nov 20, 2023
Bingo.

Even my wonderful mirrored ECC system RAM is vulnerable during a power loss, a PSU failure, or any hardware or software fault that causes an immediate reboot. The RAM on the card isn't. I'd really love it if I could make use of it somehow, if not as an NVMe then as a block device of some sort. Maybe I'd use it for bcache or LVM writeback cache. 8 GB of writeback cache would help out a lot!
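Just to illustrate (the device, VG, and LV names are hypothetical, since nothing actually exposes the card's cache as a block device), attaching it as an LVM writeback cache would look roughly like this:

```
# Hypothetical names: device /dev/megaraid_cache0, VG "vg0", LV "data".
pvcreate /dev/megaraid_cache0
vgextend vg0 /dev/megaraid_cache0

# Create a cache pool on the (hypothetical) fast device and attach it
# to an existing LV in writeback mode (lvmcache):
lvcreate --type cache-pool -L 7G -n fastcache vg0 /dev/megaraid_cache0
lvconvert --type cache --cachepool vg0/fastcache --cachemode writeback vg0/data
```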

I'm reluctant to use hardware RAID because my RAID setup is rather dynamic: I have 6-disk RAID1s, 12-disk RAID0s, and 12-disk RAID6s on the same disks. Thanks, LVM, for letting me specify the RAID policy per LV! Pretty cool. But I don't think LSI's hardware RAID is that flexible, even if it does have "OS volumes". Kernel RAID is just super flexible, and I can have one array span multiple HBAs, even onboard ports.
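For anyone curious, per-LV RAID policy in LVM looks roughly like this (the VG and LV names are just examples, assuming a VG spanning 12 PVs):

```
# Example VG "vg0" spanning 12 PVs; LV names are placeholders.
# A 6-disk mirror (RAID1 with 5 extra copies):
lvcreate --type raid1 -m 5 -L 50G -n mirror6 vg0

# A 12-disk stripe:
lvcreate --type raid0 --stripes 12 -L 200G -n scratch vg0

# A 12-disk RAID6 (10 data stripes plus 2 parity):
lvcreate --type raid6 --stripes 10 -L 500G -n bulk vg0
```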
 

nabsltd

Well-Known Member
Jan 26, 2022
Thanks, LVM, for letting me specify the RAID policy per LV! Pretty cool. But I don't think LSI's hardware RAID is that flexible, even if it does have "OS volumes".
Once you create a "disk group" on an LSI card, you can then create multiple volumes across that disk group. The volumes do not have to have the same RAID type, but the number of disks must match the type of RAID you want to use. For example, a disk group with 5 disks could have a RAID-0, RAID-5, or RAID-6 volume, but not RAID-10, because the number of disks is not a multiple of two.
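With storcli it looks roughly like this (controller, enclosure, and slot numbers are placeholders, and the exact syntax varies a bit between storcli releases):

```
# Placeholders: controller /c0, enclosure 252, slots 0-4.
# Two virtual drives of different RAID types on the same 5 disks;
# the first command also creates the disk group.
storcli /c0 add vd type=raid5 size=1000GB drives=252:0-4
# Second VD takes the remaining free space in the same disk group:
storcli /c0 add vd type=raid0 drives=252:0-4
```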

Your 12-disk example would allow pretty much anything, but I don't know if LSI hardware supports more than a 2-way mirror (RAID-1); anything I have that needs to be so highly available that RAID-1 with a hot spare isn't enough gets a mirror of all the data on a completely different system.
 

fr3

New Member
Nov 20, 2023
I cannot imagine why one would want a hot spare with their two-disk RAID1 array when they could simply have a 3-disk RAID1. You want your array to be vulnerable to loss during a rebuild? You want to find out whether your spare is still good when you need it most?

I'd rather be able to survive two concurrent failures than just one.
I'd rather have the confidence that the third disk will work properly under load because it has been working for the last 3 years.
Rebuild times are much better when it never has to rebuild, because it has never been out of sync.

I've got some six-way mirrors that have been running for 12 years now. They're simply in very, very remote locations that I don't ever want to return to. One of them has had a controller fail, but it's bootable from the controller and the onboard ports, so it kept on chugging. Another is 4 disks down now. Yeah, they're super slow and obsolete, but they still do today what they were originally built to do. I expect them to die from the electrolytic caps in 5-10 more years if I'm lucky.

I think a more-than-ten-way RAID1 would probably outlive all the PSU modules. But who knows. When I have more than 6 or 7 disks in a RAID1, it's because I need GRUB to be able to read them at boot time, and I prefer to keep the symmetry of all drives being used the same way: /boot up front, encrypted LVM after that.
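Keeping /boot on every member means I can just install GRUB onto each disk (BIOS boot assumed; disk names are placeholders for the RAID1 members):

```
# BIOS boot assumed; disk names are placeholders for the RAID1 members.
# Install GRUB to every member so any surviving disk can still boot:
for d in /dev/sd{a..f}; do
    grub-install "$d"
done
```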

The purpose of this thread, though, was enabling productive use of the battery-backed RAM on the cards. I'm not holding my breath, but hopefully someday, if someone figures it out, they'll find this thread.
 

nabsltd

Well-Known Member
Jan 26, 2022
I cannot imagine why one would want a hot spare with their two-disk RAID1 array when they could simply have a 3-disk RAID1.
That was my point. If it needs more than just a 2-disk RAID-1, then the data gets copied to other machines.

I rarely use any RAID on a boot drive because I have access to the machines, and everything on the boot drive can be rebuilt with a reinstall of the OS and a quick copy of config files that are backed up every night.