Simple RAID0/Stripe using Windows Storage Spaces on Windows Server 2016/1709 very slow?


RamGuy

Member
Apr 7, 2015
I'm having some trouble with 2x Seagate IronWolf 10TB disks in a simple RAID0, aka "Stripe", configuration using Windows Storage Spaces on Windows Server 2016 v1709. This server is used for Plex and torrents. Nothing fancy, but it seems like the RAID array is not up to the task.

With a 540/540Mbit internet connection, I'm having issues with the torrent client not being able to write to the disk fast enough, resulting in "disk overloaded" messages.

The downloads max out at about 68MB/s, so I have a hard time understanding why this is such a big problem at times. Even a single disk should be able to handle that load, so two in RAID0 shouldn't really have any issues, should they?

The issue seems to occur more often when there are multiple downloads going at the same time, so it's obviously not handling multiple simultaneous writes without dropping performance.


Is this to be expected? Due to the limited amount of RAM, and because the torrent client tends to crash when you start playing around with its caching settings, letting the client itself handle caching of disk writes seems like a no-go. But I suppose Storage Spaces itself should be doing some kind of write caching, or no?

I could of course double the amount of RAM in the server and dedicate a few gigabytes to disk caching, but I don't really know how to manage disk caching in Storage Spaces; I can't seem to locate any options for it. All I can find are options for adding an SSD cache to RAID5/6.
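
As far as I can tell, the closest thing to a knob here is the write-back cache size on the virtual disk itself, and it seems to be fixed when the space is created. A rough PowerShell sketch of checking it and recreating the stripe with a bigger cache (pool and disk names are placeholders):

    # Show the current write-back cache size on each virtual disk
    Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize

    # WriteCacheSize can apparently only be chosen at creation time, so a
    # bigger cache means recreating the simple (striped) space, e.g.:
    New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Stripe" `
        -ResiliencySettingName Simple -NumberOfColumns 2 `
        -UseMaximumSize -WriteCacheSize 8GB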
 

i386

Well-Known Member
Mar 18, 2016
Germany
Do you use ReFS? It's a CoW (copy-on-write) filesystem like ZFS, and with a bunch of random, small(er) writes you will be limited by the HDDs very fast (7,200rpm drives max out at roughly 120 IOPS).
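
As a rough sanity check on that number (assuming about 4ms average seek, which is in the right ballpark for a 7,200rpm NAS drive):

    avg rotational latency = 0.5 * (60s / 7200) ~= 4.2ms
    IOPS ~= 1000ms / (avg seek + avg rotational) ~= 1000 / (4 + 4.2) ~= 120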
 

Evan

Well-Known Member
Jan 6, 2016
Torrents really end up demanding a lot of random IO. An SSD is the way to go: download to it, then move the files to bulk storage. At the very least, use an SSD for active downloads.
 

RamGuy

Member
Apr 7, 2015
The issue with caching the downloads on an SSD and then moving them afterwards is that I get really limited in terms of folder and file structures. If I want to automate this process within the client, it only allows me to set a single "move to" folder, whereas I mostly rely on RSS and have torrents saved to various folders depending on their content.
 

RamGuy

Member
Apr 7, 2015
Never mind, I found a way to handle the moving of completed downloads. I will try using an SSD as a download cache.
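
For reference: utorrent's "Move completed downloads to" option combined with "Append the torrent's label" covers the simple case, and for anything fancier its "Run this program when a torrent finishes" hook can call a script. A minimal PowerShell sketch with made-up paths, wired up with the %D (directory) and %L (label) tokens:

    # MoveCompleted.ps1 - called by the client as:
    #   powershell -File MoveCompleted.ps1 "%D" "%L"
    param(
        [string]$DownloadDir,   # directory of the finished download (%D)
        [string]$Label          # the torrent's label (%L)
    )

    # Made-up destination root on the IronWolf stripe; one folder per label
    $dest = Join-Path 'D:\Media' $Label
    New-Item -ItemType Directory -Path $dest -Force | Out-Null
    Move-Item -LiteralPath $DownloadDir -Destination $dest
    # NB: for torrents still seeding, the client must be pointed at the new
    # location afterwards, or it will lose track of the files.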
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I assume you're using utorrent as your download client? There was another thread here a while back concerning "disk overloaded" problems with utorrent; it seems to be a common problem. Personally I've had much better luck with transmission, which is much more lightweight and does things like batched writes and file preallocation to minimise the amount of random IO hitting the disc.

Secondly - I assume you're aware that 540Mb/s is more or less the same as 68MB/s (540 / 8 bits per byte = 67.5MB/s)?
 

RamGuy

Member
Apr 7, 2015
Sorry for my late reply. Office 365 suddenly decided to mark the notification emails as junk/spam.

I'm using utorrent and I prefer to keep it that way, due to the awesome RSS downloader and the fact that I have over 600 torrents seeding, so it would be a lot of hassle to re-add them all in another client. I'm fully aware that 540Mbps ≈ 68MBps; I was simply questioning why a RAID0 array consisting of 2x Seagate IronWolf 10TB drives would struggle with 68MB/s writes.

I will be changing things up. I actually have an Intel 520 Series 240GB SSD as a cache drive in Storage Spaces, so one would figure it shouldn't have issues, as it already has 1GB of caching on an SSD... But I've now attached 2x Intel 320 Series 160GB SSDs in RAID0 and use them as the download/cache drive for active downloads before they get automatically moved to the appropriate folder on the IronWolfs. It seems to be working.


I've also ordered an LSI 9285CV-8E with 1GB of DDR3 cache and a battery backup unit, plus another 16GB of DDR3 ECC RAM, so I will try moving away from Storage Spaces to hardware RAID with RAM caching and see whether that is fast and good enough, or whether I have to keep relying on a dedicated SSD cache for the downloads. I guess the LSI RAID controller can also do SSD caching, but one would think 1GB of RAM cache should be enough?
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
RamGuy said:
I'm using utorrent and I prefer to keep it that way, due to the awesome RSS downloader and the fact that I have over 600 torrents seeding, so it would be a lot of hassle to re-add them all in another client.
In terms of automation there are a dozen utilities that'll support automated downloading of torrent feeds, and transmission certainly supports a "watch" dir directive (i.e. "drop a torrent file in here and I'll automatically download it"), so it can be handled automagically.
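
For example, in transmission-daemon's settings.json (paths are made up); the incomplete-dir options also give you the SSD-staging behaviour discussed above, with finished downloads moved to bulk storage automatically:

    {
      "download-dir": "/mnt/ironwolf/complete",
      "incomplete-dir": "/mnt/ssd/incomplete",
      "incomplete-dir-enabled": true,
      "watch-dir": "/mnt/ssd/watch",
      "watch-dir-enabled": true
    }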

RamGuy said:
I'm fully aware that 540Mbps ≈ 68MBps; I was simply questioning why a RAID0 array consisting of 2x Seagate IronWolf 10TB drives would struggle with 68MB/s writes.
Depends entirely on the access pattern; that array would almost certainly net you writes of at least 200MB/s, but only sequentially. Add any degree of random IO and your throughput will drop off pretty sharply. From reading other threads about this issue, utorrent writes in such a way that there seems to be far more random IO than needed (i.e. it either doesn't make effective use of its cache or doesn't batch up writes correctly), although you'd probably want to set up a monitor in perfmon or similar to verify that.
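
A quick way to check from PowerShell rather than the perfmon GUI (counter names as on an English-locale install); lots of small writes per second with a climbing queue length would confirm the random-IO theory:

    # Sample the write pattern on all physical disks for 30 seconds
    Get-Counter -Counter @(
        '\PhysicalDisk(*)\Avg. Disk Queue Length',
        '\PhysicalDisk(*)\Avg. Disk Bytes/Write',
        '\PhysicalDisk(*)\Disk Writes/sec'
    ) -SampleInterval 1 -MaxSamples 30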

Regardless, writing to a dedicated SSD will obviate the download problem, assuming the platter-based array is capable of keeping up with the reads whilst seeding.
 

IT33513

New Member
Mar 14, 2018
UK
I was thinking of using Storage Spaces, but I gave up after reading about its poor performance.
When it comes to Storage Spaces in combination with S2D (Storage Spaces Direct), I would only recommend giving it a shot if we are talking about 4 nodes or more.

It's just not willing to work well in 2-3 node configurations.