SSD Endurance RAID 5 vs RAID 10

Discussion in 'RAID Controllers and Host Bus Adapters' started by Myth, Jun 13, 2018.

  1. Myth

    Myth Member

    Joined:
    Feb 27, 2018
    Messages:
    56
    Likes Received:
    2
    Hi Guys,

    I'm a storage manufacturer and I'm trying to figure out the write endurance benefits of using Samsung 970 EVO M.2 drives. Each drive is 2TB and is rated for 1200TBW.

    I've read online that RAID 5 and 6 are harder on SSD lifespans because they create parity information that is spread across all the drives. I've also read that erasure coding eats into SSD life, although I must admit I don't know what erasure coding is or how it works.

    So basically I'm trying to decide how much life I will gain if I use RAID 10 instead of RAID 5 or 6. I've read that RAID 10 is better for SSDs because you don't have to store parity information, but I get kind of confused because the parity data isn't that big. We move media files on our servers, terabytes of data daily. So if I've got 12 M.2 drives and each drive can take up to 1200TBW, the total lifespan of the array would be something like 14,400TB, or 14.4PB - correct?

    Like if I copy a 1TB folder onto the M.2 array, it will spread that 1TB across all 12 drives, meaning each drive writes about 84GB per day. Is that correct? And what's the difference between RAID 6 and RAID 10? Would RAID 6 write about 100GB per drive because of the added parity?

    I'm just trying to see exactly how much endurance I would gain by using RAID 10 vs RAID 6 or RAID 5. The performance of any of these levels is fine with our Linux-based software RAID, so I'm not concerned about performance, only endurance, and based on that number I can evaluate whether the lost storage space is worth the longevity.

    Also, please check my math to make sure I understand how the M.2 RAID divides the data across the disks, because it certainly doesn't copy the full 1TB to each of the 12 drives, does it?

    Best,
    Myth

    So if my math is correct, the 12 M.2 drives can write 7TB per day for five years. Am I thinking about this correctly? Does having more SSDs in the array actually increase endurance?
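    Here's my arithmetic as a quick Python sketch (assuming writes spread evenly across all 12 drives, and ignoring parity/mirroring and write amplification for now):

```python
# Rough endurance math for 12 x 2TB drives, each rated 1200 TBW.
# Assumes writes spread evenly; ignores parity and write amplification.
DRIVES = 12
TBW_PER_DRIVE = 1200  # TB of host writes each drive is rated for

array_tbw = DRIVES * TBW_PER_DRIVE
print(array_tbw)                    # 14400 TB = 14.4 PB total

# Copying 1 TB (1000 GB) onto the array:
print(round(1000 / DRIVES, 1))      # 83.3 GB written per drive

# Sustaining 7 TB/day of writes:
per_drive_daily = 7 / DRIVES        # TB/day landing on each drive
print(round(TBW_PER_DRIVE / per_drive_daily / 365, 1))  # 5.6 years
```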
     
    #1
    Last edited: Jun 13, 2018
  2. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,219
    Likes Received:
    272
    12 devices

    raid 10:
    - read & write to 6 devices
    - "repeat" every operation on the mirror
    - write 1tb to raid 10: 170.7 gb per device (1tb / 6 devices)

    raid 5:
    - you have read & write data + parity
    - storage efficiency: 11 devices for data, 1 for parity
    - write 1tb to raid 5: 93gb per device (1tb / 11 devices)

    raid 6:
    - you have read & write data + parity
    - storage efficiency: 10 devices for data, 2 for parity
    - write 1tb to raid 6: 102gb per device (1tb / 10 devices)
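    The same numbers as a quick sketch (using 1 TB = 1024 GB, as above; assumes even striping and ignores any read-modify-write overhead):

```python
# Per-device writes when 1 TB (1024 GB) of host data lands on 12 devices,
# assuming even striping and ignoring read-modify-write overhead.
def gb_per_device(total_gb, data_devices):
    return round(total_gb / data_devices, 1)

TOTAL_GB = 1024
print(gb_per_device(TOTAL_GB, 6))   # raid 10: 170.7 GB (6 mirrored pairs)
print(gb_per_device(TOTAL_GB, 11))  # raid 5:   93.1 GB (11 data devices)
print(gb_per_device(TOTAL_GB, 10))  # raid 6:  102.4 GB (10 data devices)
```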

    Talking about what is "better" depends on what you need.
    Raid 5 will have the lowest writes per device, but puts your data at risk if a second device fails before a rebuild finishes. With raid 6 you increase availability, but you have to write more per device than raid 5.
    Raid 10 will write more per device but has the best read and write performance (mbyte/s) because you don't have to calculate parity.

    What workload do you have for such a raid?
    Asking because the evo drives do not offer power-loss protection and drop in performance under sustained workloads.
     
    #2
  3. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    556
    Likes Received:
    180
    Don't forget that parity RAID levels like 5 and 6 do a read-modify-write of the whole stripe rather than only updating specific blocks (although this can vary depending on the RAID implementation), so as a rule of thumb RAID 5 and 6 will have higher write amplification than RAID 1 or 10 - particularly on more random workloads. Whether this is an issue given your workload and typical drive lifetimes is another matter, of course.
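    To put a rough number on it, here's a toy worst-case model where a small write forces the whole stripe to be rewritten (the chunk size and write sizes below are made-up examples; real implementations often do a narrower per-chunk read-modify-write instead):

```python
# Toy worst-case model: a partial-stripe write forces the implementation
# to rewrite the entire data portion of the stripe (RAID 5/6 full-stripe RMW).
# Real implementations often update only the touched chunk plus parity.
def stripe_rmw_amplification(write_kb, chunk_kb, data_devices):
    """Ratio of bytes written to devices vs. bytes written by the host."""
    stripe_kb = chunk_kb * data_devices        # data portion of one stripe
    touched_kb = min(write_kb, stripe_kb)
    return stripe_kb / touched_kb

# A 4 KiB random write into a RAID 6 with 10 data devices and 512 KiB chunks:
print(stripe_rmw_amplification(4, 512, 10))     # 1280.0x amplification
# A 5 MiB write that fills the stripe exactly has none:
print(stripe_rmw_amplification(5120, 512, 10))  # 1.0
```

    Large sequential media copies mostly hit the full-stripe case, which is why this matters far more for random workloads.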
     
    #3
  4. Myth

    Myth Member

    Joined:
    Feb 27, 2018
    Messages:
    56
    Likes Received:
    2
    @i386 thanks for your response.

    My workload is usually offloading large media files from camera card readers onto these drives, then hooking them up to multi-editor stations. So a bunch of simultaneous 4K streams to the storage array and/or fast offloading of camera cards with tons of data - maybe 8TB a day sometimes.

    So it actually seems like RAID 10 would exhaust the drives much faster, since it's only striping across 6 of them and then mirroring that data onto the other 6?

    We have extremely high read/writes. We stream 4K media to video editors. It's not that much random IO, but it can be if the array is heavily fragmented, which it usually is. So if these M.2 drives' performance degrades over time, that would be a problem. We usually run on UPSes, so I'm not that concerned about power-loss protection, but yes, it would be a nice feature.

    The interesting thing about these drives is that they have incredible speeds and they are cost-effective - about $800 per drive for 2TB, with over 2,000MBps read/write speeds per drive. I don't know how much they would degrade, but if they really do deliver 1200TBW they should last a very long time. So in RAID 10 they should be able to write about 4TB a day with no write amplification.

    @EffrafaxOfWug Yes, this write amplification is something I'm very interested in. Since the SSD manufacturers quote a TBW lifespan number, how does write amplification affect that number? According to @i386 's post, RAID 10 will actually die faster, and that's the logic I'm following too. But as you mentioned, and as I've read across many boards, parity RAID adds write amplification. Mathematically, though, it seems like RAID 10 will last less time than RAID 6, simply because the per-drive writes are higher with RAID 10: there are fewer drives to spread the data across.

    How can I measure the lifespan cost of this write amplification, and how does the SMART data record it? Like, if a drive dies because of write amplification but its counter still shows 600TBW left, will I still be covered by the warranty? Or are you saying that write amplification adds significantly more writes than RAID 10's mirroring does?
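    On the SMART side: for NVMe drives like the 970 EVO, `smartctl -A` reports a "Data Units Written" counter, and per the NVMe spec one data unit is 1,000 blocks of 512 bytes. A quick conversion sketch (the counter reading below is made up for illustration):

```python
# Convert the NVMe SMART "Data Units Written" counter (as shown by
# `smartctl -A /dev/nvme0`) into TB. Per the NVMe spec, one data unit
# is 1,000 blocks of 512 bytes = 512,000 bytes. The drive counts what
# it actually receives, so RAID-generated parity/mirror writes to that
# device are already included.
def data_units_to_tb(data_units_written):
    return data_units_written * 512_000 / 1e12

# Hypothetical counter reading of 400,000,000 data units:
print(round(data_units_to_tb(400_000_000), 1))  # 204.8 TB of the 1200 TBW rating
```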

    Sorry for being so confused on this topic.

    Back to the performance degradation issue. We have used Micron SATA SSDs in the past, and their write speeds decreased to something like 50MBps per drive. We have about 18 of these 1TB Micron SSDs and they write around 350MBps striped together in RAID 0. I'm hoping these Samsung EVOs don't do that, or they will be worthless. But I'm hopeful Samsung won't let us down like Micron did. What are your thoughts?

    PS. Oh, and yes, RAID 10 is faster with these HighPoint controller cards and Windows, but if I use the Asus Hyper Kit on a Linux machine, I can use our commercial RAID software, which calculates parity at the speed of light. So RAID 5 or 6 is just as fast as, or even faster than, RAID 10.

    And as a side note, have you guys ever tried RAID 10 with storage pools in Server 2016? The write speeds drop from like 8,000MBps to 800MBps - crazy slow. Yuck.
     
    #4
    Last edited: Jun 14, 2018
  5. nk215

    nk215 Active Member

    Joined:
    Oct 6, 2015
    Messages:
    273
    Likes Received:
    70
    Raid 10 will die faster. Most RAID 5 implementations these days only update the specific blocks touched, which is why RAID 5/6 has become very fast.

    RAID 6 has the overhead of Reed-Solomon writes.

    In terms of life, RAID 0 would last the longest, then RAID 5, then 6, then 1/10. The larger the number of drives, the more true that is.

    That said, it's next to impossible to wear out an SSD with writes. Many (most) drives exceed their rating by a large margin.

    I strongly recommend enterprise drives to prevent the slowdown you've seen with the Micron SSDs. I've experienced the slowdown on my Micron SSDs too, so they are now my storage drives.
     
    #5
  6. Myth

    Myth Member

    Joined:
    Feb 27, 2018
    Messages:
    56
    Likes Received:
    2
    Also, what would be an enterprise-quality M.2 drive with similar stats?
     
    #6