Reading about software vs. hardware RAID performance and cache usage confuses me.
Today I'm running Linux md RAID10 with 4 x Hitachi 4TB NAS drives. As I only have 8TB of available space, I'm planning for the future, and RAID6 based on 5 x 6TB disks looks good to me space-wise.
That would give me 18TB of available space.
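For reference, the capacity math and roughly the command I'd use to build that array (device names sdb–sdf are just placeholders for my setup, not confirmed):

```shell
# RAID6 reserves two disks' worth of capacity for parity,
# so usable space is (N - 2) * disk_size:
# (5 - 2) * 6 TB = 18 TB
echo $(( (5 - 2) * 6 ))

# Roughly what I'd run to create the array (placeholder device names):
# mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]
```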
This is for a server inside a DC, so there are UPSes etc., and I won't have too many power failures to deal with. What I'm thinking is that with Linux md RAID6, writes will be cached in the OS page cache, so the data in flight is protected solely by the mains and the UPS, nothing else (i.e. no battery-backed cache).
So my question is: is it dangerous to go with Linux md RAID6? If HW RAID is the better solution, what kind of prices do these used BBU-backed controllers go for?
I was also told once on #centos that Linux md RAID isn't recommended with more than 8 drives, and ideally no more than 4.
Thanks.