Cheapest option for high-speed 5-6TB storage under SATA?


denywinarto

Active Member
Aug 11, 2016
I'm using 2x 1TB WD Black drives in a RAID 0 setup for a diskless server.
Because of the nature of the diskless environment, I need to upgrade them to faster drives...
Problem is, I'm also going to need twice that capacity, about 4-5TB.
I was going to use NVMe, but it's too expensive, and then I also remembered that SATA speed maxes out at around 550 MB/s..
So technically 4 Samsung SSDs in RAID 0 should be enough?
Or, given the speed limit of SATA, 4 cheaper 1TB SSDs in RAID 0?
Is that the cheapest option if I want to max out SATA speed as well?
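For reference, a quick back-of-the-envelope on where that SATA ceiling comes from (a sketch: SATA III signals at 6 Gbit/s with 8b/10b encoding, so the theoretical payload ceiling is ~600 MB/s, and real drives top out around 530-560 MB/s):

```python
# SATA III line rate is 6 Gbit/s, but 8b/10b encoding transmits 10 bits
# on the wire for every 8 bits of payload data.
line_rate_bps = 6e9
encoding_efficiency = 8 / 10
usable_mb_per_s = line_rate_bps * encoding_efficiency / 8 / 1e6  # ~600 MB/s theoretical
print(f"SATA III theoretical ceiling: {usable_mb_per_s:.0f} MB/s")
```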
 

i386

Well-Known Member
Mar 18, 2016
Germany
"it depends"
Samsung consumer or enterprise ssds?
Read or write intensive/mixed workloads?
"Cheap"/"expensive", what's your budget?
Do you have limited space/drive bays? (for tiered storage -> cheap hdds for capacity, optane ssd as cache for performance)
 

denywinarto

Active Member
Aug 11, 2016
"it depends"
Samsung consumer or enterprise ssds?
Read or write intensive/mixed workloads?
"Cheap"/"expensive", what's your budget?
Do you have limited space/drive bays? (for tiered storage -> cheap hdds for capacity, optane ssd as cache for performance)
Consumer I guess, the 850 series is available here if I'm not wrong..
It's read intensive; clients read huge files through iSCSI traffic.
Well, preferably under $1k I guess, lower is better,
since I also don't want anything above the SATA speed limit.
I'd considered Optane before, but I read on forums it doesn't support caching for non-boot drives.
 

i386

Well-Known Member
Mar 18, 2016
Germany
I'd considered Optane before, but I read on forums it doesn't support caching for non-boot drives.
That's what the consumer (m.2) optanes are for.
I was thinking of the optane 900p for slog (zfs) or wbc/journal disk (storage spaces) usage.
 

denywinarto

Active Member
Aug 11, 2016
That's what the consumer (m.2) optanes are for.
I was thinking of the optane 900p for slog (zfs) or wbc/journal disk (storage spaces) usage.
My diskless program runs on Win Server 2012, so it's probably only possible with Storage Spaces? I have no experience with Storage Spaces though.. so you mean we can pair an Optane 900p with, say, a 10TB HDD to boost its r/w performance?
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Going from two platter discs in RAID0 to a massive array of SSDs is... well, a very large jump. A grand doesn't buy you a whole lot of SSD storage (and I'll confess to being fatally allergic to RAID0 so I'd never recommend sticking with it).

What's the access pattern of the data like? How many clients are there, and do they access the file(s) randomly or sequentially? What chassis and OS are you limited by?
 

denywinarto

Active Member
Aug 11, 2016
Going from two platter discs in RAID0 to a massive array of SSDs is... well, a very large jump. A grand doesn't buy you a whole lot of SSD storage (and I'll confess to being fatally allergic to RAID0 so I'd never recommend sticking with it).

What's the access pattern of the data like? How many clients are there, and do they access the file(s) randomly or sequentially? What chassis and OS are you limited by?
30 clients, Win Server 2012.
I'm using a regular ATX motherboard.

It's based on PXE boot.. kinda similar to ESXi I guess. CCBoot separates the client image drive from this so-called "gamedisk" (D drive), which serves as upgradable content for the admin to manage from the server side.. now this gamedisk requires a lot of space due to huge game content..

CCBoot actually has a built-in caching method using RAM/SSD.. unfortunately raided HDDs alone don't give enough speed for modern AAA games, because of their large size.

I was thinking of using a large HDD paired with RAM or NVMe as cache, but I haven't found a reliable method for this..
 

K D

Well-Known Member
Dec 24, 2016
Is the server connected via 10g/40g? If your network is only gig then it may not matter.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
So is this like an internet café sorta deal where you can load up all the computers with a network-accessible drive in order for them to run games from...? Are all 30 clients going to be doing the same thing at the same time? I guess depending on the game you're either going to have the clients reading entire files into local memory (best case) or constantly streaming reads (worst case).

Personally I think you'd want summat like a RAID10 of HDDs and an SSD cache in front of it to handle the random IO - with 30 clients, even if they all read sequentially it starts to look an awful lot like random IO by the time it gets to the server. In terms of putting an SSD cache in front of some spinning discs, Windows isn't my forte in that regard. What are you using for the RAID controller currently? Is using cheap'n'cheerful (since that seems to be your MO) Intel RST a possibility?

I'll add the caveat though that a couple of spindles can only go so far ultimately. Ideally you're going to want to do a perfmon trace of your server during one of these sessions and see what the usage profile is like, and then you'll be better able to judge what sort of IO you might need to aim for.
 

denywinarto

Active Member
Aug 11, 2016
Is the server connected via 10g/40g? If your network is only gig then it may not matter.
The server is using a quad NIC to an LACP-enabled smart switch at the moment.
In the future, if necessary, I might upgrade it to 10G, as I have another 10G-based setup that has been running for quite some time.
I think another CCBoot user said having a switch with a better buffer rate is also a factor.
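For what it's worth, a rough bandwidth budget for that quad-gigabit LACP setup (a sketch using the thread's numbers of 4 links and 30 clients; note that LACP hashes each client's flow onto a single link, so no single client ever sees more than one link's worth of bandwidth):

```python
# Aggregate and fair-share bandwidth over 4x 1GbE with LACP.
# LACP balances per-flow, so one client is still capped at ~125 MB/s.
links, clients = 4, 30
link_mb_per_s = 1e9 / 8 / 1e6          # ~125 MB/s per gigabit link
aggregate = links * link_mb_per_s      # total across all clients
fair_share = aggregate / clients       # each client's share if all read at once
print(f"aggregate {aggregate:.0f} MB/s, fair share {fair_share:.1f} MB/s per client")
```

So even before touching the disks, the network caps the server at roughly 500 MB/s total on this setup, which is worth weighing against any SSD upgrade.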
 

denywinarto

Active Member
Aug 11, 2016
So is this like an internet café sorta deal where you can load up all the computers with a network-accessible drive in order for them to run games from...? Are all 30 clients going to be doing the same thing at the same time? I guess depending on the game you're either going to have the clients reading entire files into local memory (best case) or constantly streaming reads (worst case).

Personally I think you'd want summat like a RAID10 of HDDs and an SSD cache in front of it to handle the random IO - with 30 clients, even if they all read sequentially it starts to look an awful lot like random IO by the time it gets to the server. In terms of putting an SSD cache in front of some spinning discs, Windows isn't my forte in that regard. What are you using for the RAID controller currently? Is using cheap'n'cheerful (since that seems to be your MO) Intel RST a possibility?

I'll add the caveat though that a couple of spindles can only go so far ultimately. Ideally you're going to want to do a perfmon trace of your server during one of these sessions and see what the usage profile is like, and then you'll be better able to judge what sort of IO you might need to aim for.
Well, I haven't figured out 100% how CCBoot works.
What I know is it requires 4 drives:
image (for the client image), writeback, cache, and gamedisk.
Writeback is responsible for clients' writing activity..
I suppose it's stored (perhaps partially) temporarily in RAM, since from my experience using higher-speed RAM (3000+ MHz) seems to give better performance.
I'm using the built-in RAID software from Win Server 2012; CMIIW, but I think I read somewhere that hardware RAID doesn't benefit much over software RAID?
The problem that I mentioned before was solved by some other CCBoot users by changing the gamedisk from HDD to SSD. So improving gamedisk read/write should solve my issue.. but I will check with perfmon during crowded hours.
 

denywinarto

Active Member
Aug 11, 2016
I'm considering 4 FireCuda drives @ 2TB each.
Technically with RAID 5 I'll be getting
6TB + 3 times the read speed + 1 drive of fault tolerance.
Amazon reviewers got around 150-180 MB/s read speed. 3 times that should get me a little below the SATA 3 max speed limit, no?
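Sanity-checking those figures (a sketch; it assumes the 150-180 MB/s per-drive sequential reads reported by reviewers and the usual (n-1)x scaling for capacity and large sequential reads on a 4-drive RAID 5):

```python
# 4x 2TB drives in RAID 5: one drive's worth of capacity goes to parity,
# and large sequential reads scale roughly with the three data drives.
drives, size_tb = 4, 2
per_drive_read = (150, 180)  # MB/s, from the Amazon reviews cited above
capacity_tb = (drives - 1) * size_tb
read_lo, read_hi = ((drives - 1) * r for r in per_drive_read)
print(f"{capacity_tb} TB usable, ~{read_lo}-{read_hi} MB/s sequential read")
```

One caveat: each drive sits on its own SATA port, so the ~550 MB/s per-link limit applies to individual drives, not to the array's aggregate throughput.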
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
The >100 MB/s speeds are only for sequential IO. Random IO on a hard drive is normally an order of magnitude slower than sequential.
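To put rough numbers on that (a sketch with typical 7200 rpm assumptions: ~8 ms combined seek plus rotational latency, 64 KB random reads):

```python
# A spinning disk pays a full seek + rotational delay per random read,
# so random throughput = IOPS x block size, far below its sequential rate.
avg_latency_s = 0.008              # ~8 ms per random access (typical 7200 rpm)
block_kb = 64
iops = 1 / avg_latency_s           # ~125 random IOPS per drive
random_mb_per_s = iops * block_kb / 1024
sequential_mb_per_s = 160          # typical outer-track sequential rate
print(f"random ~{random_mb_per_s:.1f} MB/s vs sequential ~{sequential_mb_per_s} MB/s")
```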
 

rune-san

Member
Feb 7, 2014
I'm considering 4 FireCuda drives @ 2TB each.
Technically with RAID 5 I'll be getting
6TB + 3 times the read speed + 1 drive of fault tolerance.
Amazon reviewers got around 150-180 MB/s read speed. 3 times that should get me a little below the SATA 3 max speed limit, no?
Do not use FireCudas in RAID. Their firmware is focused on standalone desktop work, and their write flush commits vary heavily. When they're put in RAID, the array becomes the lowest common denominator: it's only as fast as whichever drive is last to commit its stripe. It's a bad idea.
 

denywinarto

Active Member
Aug 11, 2016
Do not use FireCudas in RAID. Their firmware is focused on standalone desktop work, and their write flush commits vary heavily. When they're put in RAID, the array becomes the lowest common denominator: it's only as fast as whichever drive is last to commit its stripe. It's a bad idea.
Any thoughts about black WD?
 

rune-san

Member
Feb 7, 2014
There's no problem using most Western Digital drives in basic RAID 0 or RAID 1: Support for WD desktop drives in a RAID 0 or RAID 1 configuration | WD Support

That said, I've never heard of issues with WD Black drives in any kind of RAID. They should work fine, though Western Digital Red drives would be a superior choice for compatibility. If you need the high performance of Black drives with the compatibility of Red drives, get Red Pro. At $165, 4TB Red Pro drives don't cost that much more than their non-Pro brethren and are often cheaper than the Black drives.