Setup:
System: Xeon E5-2609 in a CSE846 chassis with EL1 expander backplane.
Raid Card: IBM M5014 with BBU flashed to LSI 9260 with Advanced Feature Key.
OS: Server 2012 R2 Essentials virtualized in Server 2012 Standard. Virtual Disk directly passed through.
Drives: 10x Seagate Constellation ES.2 3TB SATA in a Raid 6 config (8 data + 2 parity = 24TB usable)
This is a storage server that feeds the home. Holds all our pictures, videos, documents.
Migrated this from an 8x Raid 0 that ran for 3 years without issue in another chassis. The Raid 0 was manually robocopied to a 2x3TB software Raid 0 holding the files I'd die if we lost (we'd never actually lose them; offsite backup and all).
The performance of the Raid 0 was about 700MB/s on both read and write. Far more than I needed, as the largest link I had was a dual-gigabit teamed link to my main workstation.
When I migrated to the new chassis, I had the M5014 ready for the upgrade to get some peace of mind on reliability for both the critical and non-critical files. I now have the Raid 6 array (the live migration took a whole week).
I knew I would lose some performance on both reads and writes, but I figured I'd drop to maybe 1/2 or 1/3 of the Raid 0's performance, which would be quite acceptable and still capable of saturating all my links. Instead, my writes dropped to about 30-50MB/s while reads still maxed out. Running ATTO gave roughly the same numbers as a direct file copy.
Single drive: 110-150MB/s read and write
Full array: 30-50MB/s write, 300-700MB/s read
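For scale, here's the back-of-the-envelope math I was working from (my own assumption about full-stripe write behavior on a hardware controller, not a measured limit of the M5014):

```python
# Rough sequential-throughput ceilings, assuming large writes land as
# full stripes (both parity blocks computed from in-flight data, so no
# read-modify-write penalty). Per-drive speed is my measured 110 MB/s floor.

def raid6_full_stripe_write(drives: int, per_drive_mbps: float) -> float:
    """Idealized RAID 6 sequential write ceiling: N - 2 data spindles."""
    return (drives - 2) * per_drive_mbps

def raid0_write(drives: int, per_drive_mbps: float) -> float:
    """Idealized RAID 0 sequential write ceiling: all N spindles stripe data."""
    return drives * per_drive_mbps

print(raid6_full_stripe_write(10, 110))  # 880.0 -> new 10-drive RAID 6 ceiling
print(raid0_write(8, 110))               # 880.0 -> old 8-drive RAID 0 ceiling
```

In other words, even the pessimistic estimate puts the Raid 6 array in the same ballpark as the old Raid 0, which is why 30-50MB/s feels like something is misconfigured rather than an inherent Raid 6 cost.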
My questions are:
Is this sort of performance loss with Raid 6 expected?
If not, what's the bottleneck? The M5014, since it only has 256MB of cache? Too few drives? The OS? The Raid 6 configuration? The fact that I live-migrated it from Raid 0? The CPU?
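On the cache question, the first thing I plan to check is the write cache policy; I've read that on these LSI-based cards a logical drive silently falling back to Write Through (e.g., while the BBU is relearning) collapses Raid 5/6 sequential writes to numbers like mine. A sketch with MegaCli, since the card is flashed to a 9260 (commands from memory, so verify flags against your MegaCli version before running):

```shell
# Show the current cache policy for all logical drives on all adapters
MegaCli -LDGetProp -Cache -LAll -aAll

# Show BBU state; a failed or relearning BBU can force Write Through
MegaCli -AdpBbuCmd -GetBbuStatus -aAll

# Enable write-back caching (honors BBU state)
MegaCli -LDSetProp WB -LAll -aAll

# Keep write-back even with a bad/relearning BBU (data-loss risk on power cut)
MegaCli -LDSetProp -CachedBadBBU -LAll -aAll
```

If the first command reports Write Through on the Raid 6 volume, that alone could explain the 30-50MB/s figure.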
I don't need IOPS, I don't need super mega speed, but I'd like to maximize available capacity while still keeping the links saturated on a single continuous file transfer.
Any insight would be appreciated.