Poor performance PERC 5/i, PE1950 II? Ideas?


rcmcdonald91

New Member
Feb 5, 2015
I am trying to troubleshoot some poor performance that I am observing from my PERC 5/i in a Dell PowerEdge 1950 (2 x 3.5" 1TB Hitachi A7K1000). I have already ordered a new BBU, as my old one was pretty much dead.

This server only has two 3.5" drive bays, so the only configuration I can really do is RAID1. I don't need a ton of local space, but this is an ESXi host that runs two VMs (a PXE server and a 2012R2 RDS host).

What is the optimal configuration? Write Back + Adaptive Read Ahead? What about stripe size?

The two 1TB drives I have in there currently aren't the best 1TB drives by any means. They are just spares I had lying around to throw into this box. Individually, these 1TB drives benchmark around 70MB/s sequential read/write.
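For a quick sanity check of that single-drive figure outside of ATTO, a rough sequential-read benchmark can be sketched in a few lines of Python. This is a generic illustration, not tied to the PERC or ESXi; the path argument is an assumption and would be a raw device node or a large test file on whatever box you can attach the drive to.

```python
import time


def seq_read_mbps(path, block_size=1024 * 1024, total_bytes=256 * 1024 * 1024):
    """Rough sequential read throughput of a file or device, in MB/s.

    `path` is hypothetical -- point it at a raw device node (e.g. a
    /dev/sdX on Linux) or at a large file; small cached files will
    report unrealistically high numbers.
    """
    read = 0
    start = time.monotonic()
    # buffering=0 avoids Python-level buffering; OS page cache may still apply.
    with open(path, "rb", buffering=0) as f:
        while read < total_bytes:
            chunk = f.read(block_size)
            if not chunk:  # stop early at end of file/device
                break
            read += len(chunk)
    elapsed = max(time.monotonic() - start, 1e-9)
    return (read / (1024 * 1024)) / elapsed
```

On a healthy 7.2k drive this should land in the same ballpark as the ~70MB/s quoted above; a RAID1 volume reading far below that points at the controller or cache policy rather than the disks.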

However, running ATTO I get the following result:

(See below)

I don't need 1TB of local datastore. My plan is to throw in two 300GB 15k SAS drives for the local store and if I need to run any other VMs, I will store them elsewhere.

I'm running the latest Dell customized ESXi 5.5u2 image and the latest firmware/BIOS/etc.
 

Attachments

andrewbedia

Well-Known Member
Jan 11, 2013
I don't think you quite understand RAID1. You're not going to get faster performance out of it than you would with just normal single drives. If anything, it could be slower. There are no stripe sizes. The same data is written to both drives in the same locations.
 

rcmcdonald91

Quote (andrewbedia): "Also, the BBU isn't going to do anything for you, I don't think."
I understand RAID1. My concern is that my performance isn't even at the level of a single drive. RAID1 should offer at least the performance of the slowest drive, and maybe a bit more on reads, since both drives can be read from simultaneously. Writes, on the other hand, will always be bottlenecked by the slowest drive in the mirror.

Clearly the cache and BBU are important--the BBU exists to protect the cache (this is volatile memory we are talking about). Falling back to write-through kills my write performance, so the cache is clearly doing something. The failing battery raised an error in OMSA, and the PERC 5/i subsequently fell back to write-through.

So here's the logic I'm trying to work through here:

1) These 7.2k drives aren't the fastest contenders on the block. They are barely pushing 70MB/s sequential individually, let alone after you consider any virtualization overhead.
2) This server only supports 2 x 3.5" drives, and I only have the budget for mechanical drives, so let's forget SSDs here. The best drives I can get for high IO/throughput are 15k SAS drives (15k.7 Seagate Cheetah). If I threw in 2x300GB, that would give me more than enough storage for the 2 VMs I run. Anything else I will just run over my SAN.
3) My only real options are the PERC 5/i (which I already have) or the PERC 6/i. The PERC H700 is (correct me if I'm wrong) the "next-gen" PERC 6/i. Even if I upgraded cards to try to squeeze as much throughput as I can out of this setup, at some point the mechanical aspects of the drives themselves will bottleneck a card upgrade. And I imagine that this bottleneck will happen even with the PERC 5/i. Do you agree or disagree? Simply put, all else being equal, 2x300GB 15k SAS drives in RAID1 on a PERC 5/i should get the same throughput as on a PERC 6/i or a PERC H700.
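The bottleneck argument in point 3 is simple enough to sketch as arithmetic. The figures here are rough assumptions on my part, not benchmarks: ~70MB/s sequential for these 7.2k drives, roughly ~150MB/s for a 15k.7 Cheetah, and around ~300MB/s usable per 3Gb/s SAS lane on this generation of controller.

```python
def bottleneck_mbps(drive_seq_mbps, link_mbps):
    """Which side limits a RAID1 mirror: the drive or the controller link?

    In RAID1 the array's sequential rate is essentially the single-drive
    rate, so the drive is the bottleneck whenever it is slower than the
    per-drive link. Figures are illustrative assumptions, not measured.
    """
    return min(drive_seq_mbps, link_mbps)


# Both drive generations sit well below a ~300MB/s 3Gb/s SAS lane, so the
# drives, not the controller, set the RAID1 throughput either way -- which
# is the argument for why a PERC 6/i or H700 shouldn't change the numbers.
seven_k = bottleneck_mbps(70, 300)    # 7.2k SATA: drive-limited at ~70MB/s
fifteen_k = bottleneck_mbps(150, 300) # 15k SAS: drive-limited at ~150MB/s
```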