I'm using Debian 10.4 with a software mdadm RAID 6 array formatted as XFS. Zero complaints, BUT: I think I read that if mdadm detects an error during a weekly scrub of a RAID 6 array, it doesn't hold an election among the drives and simply assumes the parity is correct. If that's true, would it be ONE good reason a dedicated RAID card is better than an mdadm RAID?
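For context, as I understand it the weekly scrub ultimately just boils down to poking sysfs. Here is a minimal sketch of that mechanism as I understand it, assuming the array is /dev/md0 and the script runs as root:

```python
#!/usr/bin/env python3
# Minimal sketch of what a scrub does under the hood, assuming the
# array is /dev/md0 and this runs as root. Writing "check" to
# sync_action starts a scrub pass; mismatch_cnt afterwards reports
# how many stripes disagreed.
import time
from pathlib import Path

MD = Path("/sys/block/md0/md")

def start_check() -> None:
    # Same effect as: echo check > /sys/block/md0/md/sync_action
    (MD / "sync_action").write_text("check\n")

def wait_until_idle(poll_seconds: int = 30) -> None:
    # sync_action reads back "idle" once the pass has finished
    while (MD / "sync_action").read_text().strip() != "idle":
        time.sleep(poll_seconds)

if __name__ == "__main__":
    start_check()
    wait_until_idle()
    print("mismatch_cnt:", (MD / "mismatch_cnt").read_text().strip())
```

My question is about what mdadm does with those mismatches once it finds them.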
I have a Supermicro 846 24-bay server with a SAS2 backplane. The backplane has a single SFF-8087 port that I feed into an LSI HBA. If I were to switch to a dedicated RAID card, could I plug the backplane straight into it and remove the HBA? What sort of "gotcha" issues do I need to be on the lookout for if I make that switch? I know the Dell H700 is a popular card, but I saw a Reddit post (that went unanswered) about pairing an H700 with a Supermicro, and it said the H700 only supports a total of 16 drives... and I have 20 in my server.
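For reference, this is the quick sanity check I'd run before and after any controller swap to confirm the card actually enumerates all 20 bays. It's a rough sketch that only counts whole sd* disks, so NVMe and other device types are ignored:

```python
#!/usr/bin/env python3
# Count the whole-disk sd* devices the kernel currently sees.
# /sys/block lists only whole devices, never partitions, so a
# 20-drive chassis should show 20 entries here (plus any boot drives).
from pathlib import Path

disks = sorted(p.name for p in Path("/sys/block").iterdir()
               if p.name.startswith("sd"))
print(f"{len(disks)} sd* drives visible: {', '.join(disks)}")
```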
PS: Before anyone suggests it, I have no desire to use Btrfs or ZFS.