The new 9207-8i cards arrived and were installed last week. Server 2008 R2 and IOMeter are ready for testing, but our older primary VM host crashed and I haven't had a chance to work with the cards since. After that was fixed, I found four dead drives in our primary NAS pair (replicated). Hopefully later this week I can get some work done.
DBA, message me your config settings.
I believe that you mean my IOMeter settings. Here they are:
IOMeter configuration for SSD-oriented testing:
Run the IOMeter UI and configure it as shown below.
Test Setup tab:
All default values except:
"Number of Workers to Spawn Automatically" = 1, or optionally one per physical CPU socket (not the default, which is one worker per CPU core). Try both options and compare the results.
"Ramp Up Time" - 0 seconds is fine if you are testing the HBA itself. If you are testing overall system performance, 60 seconds or up to 60 minutes is appropriate - this gives the system time to get into "steady state" before you start collecting metrics.
"Run Time" - I use five minutes for quick card/HBA tests and 30 minutes to 5 hours for system performance tests.
Results Display tab:
"Results Since" = Start of Test, since we want overall results rather than a snapshot.
"Update Frequency" = 10 seconds; the exact value isn't critical.
Access Specifications tab:
Each test will have its own configuration here - see below.
Network Targets tab:
You won't be testing the network, so there is no need to touch this tab.
Disk Targets tab:
"Maximum Disk Size" = 24,000,000 sectors, which works out to about 12 GB per disk assuming 512-byte sectors. Set this large enough that the test data won't fit into your RAID card's cache, or into OS memory if you are testing a filesystem.
"# of Outstanding I/Os" = 32. This is a good starting point for an LSI HBA. Most systems show maximum performance somewhere between 16 and 256.
"Write IO Data Pattern" = Pseudo Random.
Note: Control-click to select more than one disk to test at once.
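To sanity-check the "Maximum Disk Size" value against your own hardware, the sector math is simple enough to script. This is a sketch, not part of the original post; the 1 GiB cache and 8 GiB RAM figures are placeholder assumptions, so substitute your system's actual numbers.

```python
# Pick a "Maximum Disk Size" (in sectors) large enough that the test
# data cannot fit in controller cache or OS memory.
# The 1 GiB cache and 8 GiB RAM figures are hypothetical examples.

SECTOR_SIZE = 512  # bytes; check your drives -- some use 4096-byte sectors

def sectors_for_bytes(target_bytes, sector_size=SECTOR_SIZE):
    """Smallest sector count that covers target_bytes (ceiling division)."""
    return -(-target_bytes // sector_size)

# The post's value: 24,000,000 sectors at 512 bytes each
test_bytes = 24_000_000 * SECTOR_SIZE
print(test_bytes / 1e9)  # ~12.3 GB, matching the "about 12 GB" figure

# Example: size the working set to twice a 1 GiB cache + 8 GiB RAM
cache_plus_ram = (1 + 8) * 2**30
print(sectors_for_bytes(2 * cache_plus_ram))  # -> 37748736 sectors
```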
Read IOPS Test:
4 KB transfers, 100% of specification, 100% read, 100% random for SSD testing. All other settings default.
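When reading the results of a small-block test like this, IOPS and throughput are just two views of the same number, related by the transfer size. A quick sketch; the 100,000 IOPS figure below is purely illustrative, not a measured result:

```python
# Convert an IOPS figure to throughput for a given transfer size.
# The 100,000 IOPS value is a made-up example, not a result from the post.

def iops_to_mib_per_s(iops, transfer_bytes=4096):
    """Throughput in MiB/s implied by an IOPS rate at a fixed block size."""
    return iops * transfer_bytes / 2**20

print(iops_to_mib_per_s(100_000))  # 4 KiB reads at 100k IOPS -> 390.625 MiB/s
```

The same conversion explains why the 1 MB throughput test below reports few IOPS but high MB/s.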
Read Throughput test:
1 MB transfers, 100% of specification, 100% read, 100% random. Sometimes you'll see greater throughput with 2 MB or 4 MB transfers, but my use case involves 1 MB transfers, so that's what I use.
RAID Read test:
64 KB (or whatever your RAID chunk size is), 100% of specification, 100% read, 100% random.
RAID Write test:
64 KB (or whatever your RAID chunk size is), 100% of specification, 100% write, 100% random.
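The 64 KB figure above is per chunk; for parity RAID it can also help to know the full-stripe size (chunk size times the number of data disks), since writes that are a multiple of it avoid read-modify-write penalties. A sketch under an assumed geometry; the 8-disk RAID5 example is hypothetical, not from the post:

```python
# Full-stripe size for a parity array: chunk_size * number of data disks.
# The 8-disk RAID5 geometry below is a hypothetical example.

def full_stripe_bytes(chunk_kb, total_disks, parity_disks=1):
    """Bytes in one full stripe, excluding parity."""
    data_disks = total_disks - parity_disks
    return chunk_kb * 1024 * data_disks

print(full_stripe_bytes(64, 8) // 1024)  # 64 KB chunks, 8-disk RAID5 -> 448 KB
```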
Configure IOMeter as above with the exception of the access specifications, which vary per test. For each test, configure an access specification and then run that test. You should create additional test specifications to match your anticipated disk usage, including mixed read/write tests.
In my methodology, I first test the system as JBOD, which gives me a good idea of my raw maximum throughput. For this, I connect several SSDs as separate volumes, leaving them initialized but unformatted (to avoid OS caching), and then control-click all of them in the Disk Targets tab. That way I'm testing all of the SSDs in parallel without testing any RAID implementation.
Once you are satisfied with your raw throughput, you can re-test with the disks in whatever RAID setup you have chosen to see how well your RAID implementation is performing.
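One way to put a number on "how well your RAID implementation is performing" is to express RAID throughput as a fraction of the summed per-disk JBOD throughput from the earlier run. A hedged sketch; every MB/s figure below is a placeholder, to be replaced with your own IOMeter results:

```python
# Compare measured RAID throughput against the raw JBOD ceiling.
# All throughput numbers here are hypothetical placeholders --
# substitute the values from your own IOMeter runs.

jbod_mb_s = [510, 505, 498, 512]  # per-disk JBOD throughput (example values)
raid_mb_s = 1650                  # same disks in the chosen RAID layout

ceiling = sum(jbod_mb_s)
efficiency = raid_mb_s / ceiling
print(f"RAID achieves {efficiency:.0%} of the {ceiling} MB/s JBOD ceiling")
```

A large gap between the two numbers points at the RAID implementation (parity overhead, chunk misalignment, controller limits) rather than the drives themselves.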