My C6100 is 4 nodes, each with dual six-core 2.80GHz X5660s and between 24 and 48GB of RAM. I have the base LSI RAID controller offered by Dell, the LSI1068e. I had 3 spare Vertex 4 drives, so I made a 3-drive RAID0 array (no hacking up my backplane on this one; other than this node, the other 3 nodes are in production).
Running 2008R2, 12 cores, 24GB RAM, and IOMeter, using the original test posted here:
Benchmarking your Disk I/O | Technodrone. I'm seeing much more modest numbers (depending on the test: 550-1050MB/s and about 45K-60K IOPS). Granted, this is the OS disk, and I'm running Windows Update applying 92 updates on this node while I run the tests...
What tool did you use for your testing? If it was IOMeter and you can send me the test file, I can do a proper apples-to-apples comparison.
-hak
I'm not surprised that you are seeing lower throughput - the LSI1068e is an older chip. It's good to see that it's faster than the motherboard SATA ports, which max out at around 650-700MB/s combined.
For my throughput and max IOPS testing I use IOMeter. I don't have a test file saved - I just enter my standard setup each time. Start with all default values in IOMeter and then:
Test Setup tab. All default values except:
Run time: 1 minute if testing the card, 10 minutes if testing the disks
Ramp-up time: 1 minute
Number of workers: 1 except when testing very large systems
Results Display tab:
All default values are OK, or update the view every 5-10 seconds.
Access Specification tab:
Create a new specification with a 1MB transfer size, 100% random, 100% reads
Otherwise all default values
Network Targets tab:
All default values (aka empty)
Disk Targets tab:
Select the server (the top-level entry), not the individual worker.
Maximum Disk Size: 12,000,000 sectors (which is around 6GB)
# of Outstanding I/Os: 32 (this is the queue depth)
Pattern: pseudo random
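As a quick sanity check on that Maximum Disk Size figure, assuming the usual 512-byte sector size:

```python
# Convert IOMeter's Maximum Disk Size (given in sectors) to a file size,
# assuming 512-byte sectors.
SECTOR_SIZE = 512
max_disk_sectors = 12_000_000

test_file_bytes = max_disk_sectors * SECTOR_SIZE
print(f"{test_file_bytes / 10**9:.2f} GB")   # decimal gigabytes
print(f"{test_file_bytes / 2**30:.2f} GiB")  # binary gibibytes
```

So "around 6GB" works out to 6.14 GB decimal (about 5.72 GiB) - big enough to blow past most controller caches.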
For IOPS, create a new access specification with 100% reads, 100% random, and a 4KB transfer size (instead of the 1MB above)
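The two specs measure the same disks in different regimes, and the relationship is just throughput = IOPS x transfer size. As a back-of-envelope illustration (these are hak's quoted figures from above, not IOMeter output):

```python
def iops_to_mb_per_s(iops, transfer_bytes):
    """Throughput (in decimal MB/s) implied by an IOPS rate at a given transfer size."""
    return iops * transfer_bytes / 10**6

# 4KB random-read spec: high IOPS, modest throughput
print(iops_to_mb_per_s(60_000, 4096))      # ~245.8 MB/s
# 1MB random-read spec: modest IOPS, high throughput
print(iops_to_mb_per_s(1_000, 1_000_000))  # 1000 MB/s
```

That's why you need both specs: a drive can look great on one number and ordinary on the other.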
Also, if you are testing the maximum throughput of some disks and/or a card (as opposed to a complete formatted array), just leave each disk as JBOD. When you are in IOMeter, use control-click on the Disk Targets tab to select all of the disks at once. IOMeter will queue up transfers to each disk separately, driving the absolute maximum possible throughput.
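For a JBOD test like that, a rough ceiling is the sum of the per-port link rates. Assuming the LSI1068e's ports run at SATA 3Gb/s with 8b/10b encoding (a hypothetical back-of-envelope, not a measured spec sheet number):

```python
# SATA 3Gb/s link: 3e9 bits/s, 8b/10b encoding leaves 80% for payload,
# 8 bits per byte -> per-port payload ceiling in MB/s.
link_mb_per_s = 3_000_000_000 * 0.8 / 8 / 10**6  # 300 MB/s per port
drives = 3
print(drives * link_mb_per_s)  # ~900 MB/s aggregate ceiling
```

That ~900MB/s ceiling for three drives lines up reasonably with the 550-1050MB/s range hak reported.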
Lastly: if you can, initialize the drives but do not create filesystems on them. In Windows, the drives should not appear with drive letters and should show up as "unallocated" when viewed in Disk Management. By using raw drives, the OS has no opportunity to skew your results with its own caching. My HP MSA array, for example, will test at 4GB/s against formatted drives (because of OS caching) and 1GB/s with raw drives.