[Solved] Unusually low RAID 10 throughput, any ideas?


s3ntro

Member
Apr 25, 2016
Hi all,

I have a home-built storage server running 12 HGST and Toshiba drives in RAID 10. The RAID controller is an Intel RS3DC080 connected directly to four of the drives; the remaining eight are connected via a RES2SV240 expander card and a single cable back to the RS3DC080. I have the array configured for sequential workloads, and I've seen it do over 1 GB/s. However, it's now capped at just over 800 MB/s and I have no idea why. The drives are mostly enterprise HGST SATA drives, either 5 or 6 TB, all 7200 RPM with 128 MB of cache. The Toshibas are consumer X300s but with the same specs.

Yesterday I connected three SSDs to the RES2SV240 and configured them as a RAID 0. It hit the exact same speed - just over 800 MB/s. I previously had the same configuration and drives in my personal PC and was hitting 1400-1500 MB/s. All of this testing is local via CrystalDiskMark, not over the network (though the server is connected via 10 gig, hence my interest in getting over 1 GB/s). The OS is Windows 10 Pro, the processor is an Intel 4770K, and there's 16 GB of RAM. The only other thing running is a Plex server, but it's rarely used. It also has a GTX 970 in it for occasional game streaming.

I thought perhaps I had a failing hard drive before I plugged in the SSDs. SAS2 should have plenty of bandwidth to hit 1 GB/s, as the link back to the controller is 4x 6 Gb/s and each individual drive gets its own 6 Gb/s lane. Each drive is good for about 160-180 MB/s of real-world sequential throughput, so write speeds should be over 900 MB/s. Again, I've seen it eclipse 1 GB/s on sequential workloads, but I don't test it often enough to know when (or why) it dropped to 800.
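Rough math behind that 900+ MB/s figure (just a back-of-the-envelope sketch, assuming the 12 drives act as 6 mirrored pairs and using the 160-180 MB/s per-drive number above):

    # Rough RAID 10 sequential estimate: writes hit every mirrored pair,
    # so usable write bandwidth is roughly one drive's worth per pair.
    drives = 12
    pairs = drives // 2
    per_drive_mb_s = (160, 180)   # assumed real-world sequential per spindle

    low, high = (pairs * s for s in per_drive_mb_s)
    print(f"expected sequential write: {low}-{high} MB/s")   # ~960-1080 MB/s

So a hard ceiling right around 800 MB/s doesn't look like the disks themselves.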

I'm waiting on additional cables to see if that's the problem, and I'll also try moving the SSDs to the controller card instead of the expander card. In the meantime, do any of you have any ideas or suggestions? Thank you all in advance!
 

bonox

Member
Feb 23, 2021
A couple of thoughts:

1. Are the drives clean or dirty? If they're fragmented, even sequential transfers will drop substantially.
2. Home-built screams lower end (and your CPU points that way) - perhaps you don't have enough PCIe lanes to service the controller (especially since you've got a video card plugged in there), or you're CPU limited.
3. Older IR-mode cards are fairly well known to be IOPS-restricted or limited in queue depth, and can artificially throttle a collection of SSDs.

Since you're getting the same limit across rust and flash, I'm betting the card itself is the limitation, probably the available PCIe bandwidth. It's an x8 card, and something like a B85 chipset motherboard only has 8 lanes in total to share between your video card, controller, 10 Gb NIC and everything else you've got. If the CPU has all of its lanes going to the video card, then the chipset powers everything else with only 8 lanes, meaning that if you've got an 8-lane disk controller, a 4-lane NIC and some other single-lane device like an onboard NIC, you may find it serving only 1 PCIe lane to each device (because it won't do something like a 4/2/2 split) - which is about what your performance indicates for PCIe 3.
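To put rough numbers on that (a quick sketch using the nominal per-lane PCIe rates, not anything measured on your box):

    # Nominal PCIe bandwidth per lane, per direction, before protocol overhead:
    #   gen 2: 5 GT/s with 8b/10b encoding    -> 500 MB/s per lane
    #   gen 3: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane
    def lane_bandwidth_mb_s(gen: int, lanes: int) -> float:
        gt_s, eff = {2: (5e9, 8 / 10), 3: (8e9, 128 / 130)}[gen]
        return gt_s * eff / 8 / 1e6 * lanes

    print(lane_bandwidth_mb_s(3, 1))   # ~985 MB/s theoretical; ~800 MB/s in practice
    print(lane_bandwidth_mb_s(3, 8))   # ~7880 MB/s - what the HBA should see at x8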
 

s3ntro

Member
Apr 25, 2016
bonox said:
A couple of thoughts:

1. Are the drives clean or dirty? If they're fragmented, even sequential transfers will drop substantially.
2. Home-built screams lower end (and your CPU points that way) - perhaps you don't have enough PCIe lanes to service the controller (especially since you've got a video card plugged in there), or you're CPU limited.
3. Older IR-mode cards are fairly well known to be IOPS-restricted or limited in queue depth, and can artificially throttle a collection of SSDs.

Since you're getting the same limit across rust and flash, I'm betting the card itself is the limitation, probably the available PCIe bandwidth. It's an x8 card, and something like a B85 chipset motherboard only has 8 lanes in total to share between your video card, controller, 10 Gb NIC and everything else you've got. If the CPU has all of its lanes going to the video card, then the chipset powers everything else with only 8 lanes, meaning that if you've got an 8-lane disk controller, a 4-lane NIC and some other single-lane device like an onboard NIC, you may find it serving only 1 PCIe lane to each device - which is about what your performance indicates for PCIe 3.
Thank you for taking the time to look into this and respond. I've previously run it with the 970 out (relying solely on the onboard video) and I definitely hit those 1 GB/s transfers. As I said before, I don't test it often enough to know for sure when it dropped, but I think there's some merit to testing this - and it's easy to do!

The drives were definitely clean and freshly formatted. I'm also more inclined to think it's a system limitation rather than the drives. And yes, it's low-end, but it makes a great file server, backup location and Plex transcoder!

Thanks again for the input.
 

s3ntro

Member
Apr 25, 2016
was that gigabyte per second sustained or just a cache hit?
Sustained as far as I can remember.

I pulled the box out and the RAID card was indeed in a PCI 2x slot. I switched it over to the 3x slot that the 970 was in. Tested the SSDs and they came back at over 1 GB/s - and that's through the expander card. Success! Tested the spinners: 630 MB/s. Worse than before. It's a good thing my hair is short. Just so strange. I have a few new cables showing up tomorrow and can mess with ports. I really thought the PCIe link was it, though.
 

bonox

Member
Feb 23, 2021
ok, good luck

I've no idea what a 2x or 3x PCI slot is. Despite the spec, I've only ever seen 1-, 4-, 8- and 16-lane slots, and the same goes for cards in non-native slots - e.g. if you have 2 lanes available, a card will always seem to default to only 1 lane, not 2. If you have 5 available lanes on the bus, you'll get 4 applied to the card even if it's a 16x card sitting in an 8x slot, etc.

But I might be about to learn something new. I suspect you mean the second or third motherboard slot, though, and not a slot with that many lanes.
 

s3ntro

Member
Apr 25, 2016
bonox said:
ok, good luck

I've no idea what a 2x or 3x PCI slot is. Despite the spec, I've only ever seen 1-, 4-, 8- and 16-lane slots, and the same goes for cards in non-native slots - e.g. if you have 2 lanes available, a card will always seem to default to only 1 lane, not 2. If you have 5 available lanes on the bus, you'll get 4 applied to the card even if it's a 16x card sitting in an 8x slot, etc.

But I might be about to learn something new. I suspect you mean the second or third motherboard slot, though, and not a slot with that many lanes.
You're right - I think the numbering was the slot, not the PCIe lane count. I believe the top slot (slot 3) was x16 and the ones after that x8. The RS3DC080 is an x8 card anyway, so going to the x16 slot wouldn't be any help unless I had a lane restriction due to other devices. With the 970 gone, that doesn't appear to be the case.

Thanks again for your thoughts. I'm shocked I didn't catch the x8/x16 vs. slot 2/3 thing. Just moving too quickly.
 

s3ntro

Member
Apr 25, 2016
Found the problem: the Windows allocation unit size was set at the default 4096 bytes rather than 1 MB, even though the RAID setup had it at 1 MB. I've formatted this thing 20 times but must've missed that last time. With it set appropriately, I'm now pulling sequential transfers just over or under 1 GB/s again on 4 GB test files, so I'm happy.
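In case anyone else hits this, here's a quick way I'd sanity-check the allocation unit size without reformatting - a rough Python sketch using the Win32 GetDiskFreeSpaceW call via ctypes; the D: drive letter is just a placeholder for whatever your array's volume is:

    import ctypes

    # Query the volume's cluster geometry via the Win32 API.
    sectors_per_cluster = ctypes.c_ulong(0)
    bytes_per_sector = ctypes.c_ulong(0)
    free_clusters = ctypes.c_ulong(0)
    total_clusters = ctypes.c_ulong(0)

    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p("D:\\"),           # placeholder: your array's drive letter
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )
    if ok:
        cluster = sectors_per_cluster.value * bytes_per_sector.value
        print(f"allocation unit size: {cluster} bytes")   # 1048576 = 1 MB, 4096 = default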