Storage server - RAID 6, 10 or Storage Spaces? Which way to go nowadays?


blublub

Member
Dec 17, 2017
Hi everyone!

I need to replace our storage server, which holds about 13TB of medical images (PACS server).
The server isn't mission critical since we also have an enterprise PACS, but it works as a secondary backup, image router and VPN access point - so losing all the data would actually suck ;).

At the moment we have a 7 disk RAID 6 via an Adaptec RAID controller.

So the question is which way to go for a storage server of about 30-40TB - RAID 6 + hot spare, RAID 10, or Storage Spaces with ReFS!?
When we built the old server, Storage Spaces wasn't really an option, and I'm not sure it is now, since I have never worked with it and it seems rather complicated...

Additional info:
Writes: approx. 6GB/day
Reads: approx. 15-30GB/day

Read speed is the important part; write speed not so much.
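For rough context, here is my own back-of-the-envelope math (assuming the daily volume is spread over an 8-hour workday - these are averages only, what really matters is burst read speed when a study is pulled):

```python
# Back-of-the-envelope: sustained throughput implied by the daily volume.
# Assumes the traffic is spread evenly over an 8-hour workday.
GB = 1000**3                    # decimal gigabytes

writes_per_day_gb = 6
reads_per_day_gb = 30           # upper end of the 15-30GB estimate
workday_seconds = 8 * 3600

write_rate = writes_per_day_gb * GB / workday_seconds / 1e6
read_rate = reads_per_day_gb * GB / workday_seconds / 1e6

print(f"average write rate: {write_rate:.2f} MB/s")   # ~0.21 MB/s
print(f"average read rate:  {read_rate:.2f} MB/s")    # ~1.04 MB/s
```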

Thx for any help with this matter!
 

StammesOpfer

Active Member
Mar 15, 2016
Given your workload, Storage Spaces would be fine. You still have to decide whether you want dual parity spaces (RAID 6-like) or mirrored spaces (RAID 10-like). Assuming you are sticking with Windows, either option is acceptable. If other OSes are an option, then there is a lot more to talk about. A RAID card is one more thing to buy/fail/replace; with Storage Spaces, if the server died you could plug all those disks into another computer and the pool would just work.
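To put rough capacity numbers on the two layouts (just a sketch - the 8TB disk size is an example, and real Storage Spaces columns/slabs plus filesystem reserve eat a bit more):

```python
# Rough usable capacity: dual parity (RAID 6-like) vs two-way mirror
# (RAID 10-like). Ignores slab/column overhead; 8TB disks are an example.
disk_tb = 8

for n_disks in range(6, 13):
    dual_parity_tb = (n_disks - 2) * disk_tb      # two disks' worth of parity
    mirror_tb = (n_disks // 2) * disk_tb          # half the raw capacity
    print(f"{n_disks:2d} x {disk_tb}TB -> dual parity {dual_parity_tb}TB, mirror {mirror_tb}TB")
```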

I haven't dealt with it in a while, but I know back in Server 2012 R2, parity spaces running ReFS had really terrible write speeds - 25ish MB/s - unless you added write-back caching. That wouldn't be hard to live with given your write load, but it's something to keep in the back of your head.
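Even at that speed your write load is tiny - a quick sanity check (using the roughly remembered 25 MB/s figure):

```python
# How long 6GB of daily writes would take at the old ~25 MB/s parity speed.
daily_writes_gb = 6
parity_write_mb_s = 25          # roughly remembered 2012 R2 figure

minutes = daily_writes_gb * 1000 / parity_write_mb_s / 60
print(f"~{minutes:.0f} minutes to write {daily_writes_gb}GB at {parity_write_mb_s} MB/s")
# -> about 4 minutes per day, so sustained write speed isn't the bottleneck here
```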
 

darkconz

Member
Jun 6, 2013
Just want to comment on reads from parity as well. I had my security camera vdisk on a parity space, and reviewing the footage was a nightmare: it would take minutes to seek to a specific point in time, and playback skipped frames.

When I switched the vdisk to mirror, everything went smoothly.


Sent from my iPhone using Tapatalk
 

blublub

Member
Dec 17, 2017
StammesOpfer said:
Given your workload, Storage Spaces would be fine. You still have to decide whether you want dual parity spaces (RAID 6-like) or mirrored spaces (RAID 10-like). [...]
Yeah, Windows is a must since all the programs we use are Windows-based and no user besides myself can deal with Linux (I run a NAS with ZFS on OMV at home).

Storage Spaces doesn't sound too good with that write performance; I'll have to look into that. Otherwise it does sound good, though I think I would need an HBA instead of a RAID controller.

As for RAID 6 vs 10, my understanding is that the more drives you have, the better/safer RAID 10 gets - the break-even point is rumored to be around 10+ drives (I think it is very hard to tell).
So for me a RAID 6 "should" be enough as long as I use drives with a low URE rate (1 in 10^15 bits rather than 1 in 10^14).
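Here is the rough math behind that URE reasoning as I understand it (a simplified sketch that assumes independent errors; the 32TB read size is just an example close to a full-array rebuild):

```python
# Chance of hitting at least one unrecoverable read error (URE) while
# reading a whole array during a rebuild. Simplified: errors independent.
def p_at_least_one_ure(read_tb, ure_rate_bits):
    bits_read = read_tb * 1e12 * 8
    return 1 - (1 - 1 / ure_rate_bits) ** bits_read

for rate in (1e14, 1e15):
    p = p_at_least_one_ure(32, rate)
    print(f"URE spec 1 per {rate:.0e} bits: {p:.0%} chance over a 32TB read")
# 1e14 -> ~92%, 1e15 -> ~23%; hence the preference for 10^15 drives (or dual parity)
```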

Do you know if ReFS on top of a RAID 6 is a good idea in order to avoid bit-rot?


darkconz said:
Just want to comment on reads from parity as well. I had my security camera vdisk on a parity space, and reviewing the footage was a nightmare [...]
Wow, but I am pretty sure there has to be a reason for that behavior. I have never had such issues with parity arrays yet.
 

moblaw

Member
Jun 23, 2017
ReFS on RAID should be fine
I have to poke at this. It's right in the sense that "it's fine", but in reality the "self-healing" ability of the ReFS architecture isn't exposed to the OS the same way as with an HBA or a drive attached directly to the motherboard. So there is no bit-rot detection.

I do not think you get the benefits of ReFS when using it on top of a RAID card running RAID 6. In most cases the RAID card has data-consistency checks of its own that deal with parity errors, mismatches, bad blocks, etc.

And I remember reading that quotas and some other Windows features are disabled when using ReFS.

Feel free to correct me if I'm wrong.
 

i386

Well-Known Member
Mar 18, 2016
Germany
ReFS requires you to enable the integrity streams feature for bit-rot detection. Depending on the underlying storage, it can fix/restore the file or it will just return an error.
ReFS integrity streams
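If the underlying storage can't hand ReFS a good copy, you only get the error. As a crude illustration of that detect-only idea (this is not the ReFS mechanism, just an application-level hash manifest you could re-verify later):

```python
# NOT ReFS integrity streams - just an application-level illustration of
# "detect silent corruption": store SHA-256 hashes once, re-check later.
import hashlib, json, pathlib, sys

def hash_file(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    return {str(p): hash_file(p) for p in pathlib.Path(root).rglob("*") if p.is_file()}

if __name__ == "__main__":
    root = sys.argv[1]                                  # directory to check
    manifest = pathlib.Path("manifest.json")
    current = build_manifest(root)
    if manifest.exists():                               # compare against last run
        for path, digest in json.loads(manifest.read_text()).items():
            if current.get(path, digest) != digest:
                print("changed or corrupted:", path)
    manifest.write_text(json.dumps(current, indent=2))  # save for next run
```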

I use RAID 6 (+ SSD caching) for bulk storage and (striped) mirrors of SSDs when performance is required.
In RAID 6 any two devices can fail before data loss occurs, while in RAID 10 "only" one mirror (both disks of a pair) needs to fail for data to be lost.
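For a sense of scale on that (a simplified calculation assuming two simultaneous random failures; eight disks is just an example):

```python
# Two simultaneous random disk failures: RAID 6 always survives,
# RAID 10 only dies if both failures land in the same mirror pair.
from math import comb

n_disks = 8                            # example array size
mirror_pairs = n_disks // 2
two_disk_combos = comb(n_disks, 2)     # ways to pick the two failed disks

p_raid10_loss = mirror_pairs / two_disk_combos
print("RAID 6 : 0% of two-disk failures lose data")
print(f"RAID 10: {p_raid10_loss:.0%} of two-disk failures lose data")   # 4/28 ~ 14%
```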
 

Aestr

Well-Known Member
Oct 22, 2014
Seattle
@blublub I work in the PACS field although on the vendor side. A few thoughts/questions:

- What sort of performance do you require? If this is used as a backup/archive server, how many concurrent pulls do you expect at peak hours? Does your PACS have decent prefetching to limit ad-hoc pulls? As I'm sure you're aware, radiologists don't like waiting (who does?), so even with your low daily volume you want to make sure users aren't waiting too long for their studies.
- Is this a solution you intend to build yourself or purchase from a vendor? You're obviously more aware of your needs, and budgets dictate a lot of this, but even if this isn't mission critical you've pointed out it's important, so there's a lot of value in having 24x7 coverage and same/next-day part replacement compared to you needing to scramble to source parts. It also takes the crosshairs off of you if there are problems with the solution.
 

blublub

Member
Dec 17, 2017
Aestr said:
@blublub I work in the PACS field although on the vendor side. A few thoughts/questions: [...]
Hi Aestr,
Sorry, I somehow missed your post almost 2 years ago. The last PACS worked OK for what it was supposed to do: large volume at low cost. Performance was OK considering the expectations, but not mind-blowing - we ended up with a RAID 10 of eight 6TB HDDs on NTFS with some filesystem tweaks.

We are currently changing the PACS vendor for various reasons and got new HW in the process.
 

blublub

Member
Dec 17, 2017
If you are doing HW RAID I would recommend using NTFS.

Chris
Wow Chris,
you are everywhere :) - this is my old thread from 2017, but the topic is actually still valid. I have an SSD RAID 6 and was about to use ReFS, but I'm currently struggling with the decision because I have read quite a few posts about data loss with ReFS in 2018 - that really troubles me.
The metadata integrity streams that make chkdsk obsolete are really nice, though.