Starting to work towards configuring the DL180 into my rack/daily use configuration. The goal is to have it house a 12-14 drive ZFS pool as a "backup" pool for my other storage and use the CPU/memory resources for VMs.
Before I finalize this I wanted to do some testing of the disk performance with the backplane/expander compared to a similar configuration without it. The configuration is a bit contrived and the bench is not formalized. It does directly compare two options I have for my "backup" array so for me it is valid.
Test configuration:
Common in both "before" and "after":
- Debian Linux, 2.6 kernel (Proxmox 3.0)
- ZFS on Linux (ZoL) release 0.6.1
- 11x Hitachi 7K3000 2TB spinny disks configured as RAIDZ2 (it's 11 rather than 12 'cuz I had to leave one slot in the C6100 chassis for a system disk in the "before" test)
- M1015 HBA in IT mode
- CPU is dual L5638.
Before:
- 1 node C6100, 48GB memory, 8 drives cabled to M1015, 3 drives to on-board SATA-II ports
- System disk is 60GB Intel 330 SSD on MB sata. Mounted in drive cage since there is no room for "ghetto" inside the C6100.
- Result: bonnie++ block reads > 1,000MB/s, block writes > 800MB/s (unfortunately I can't read my own notes...)
After:
- DL180 G6, 96GB memory, 11 drives on backplane expander with single 4-lane 8087 cable to M1015
- System disk 60GB Vertex-1 currently sitting loose inside the chassis (not final config)
- Result: bonnie++ block reads 628MB/s, block writes 523MB/s
To make sure the ZFS pool was identical I built the pool on the C6100, did the test, and then moved the drives and did a "zpool import" on the HP.
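For anyone wanting to repeat the build/move procedure, it boils down to a few zpool commands. This is just a sketch, not my exact commands: the pool name "backup" and the sdb..sdl device names are placeholders, and the RUN=echo guard makes it a dry run that only prints what it would do.

```shell
#!/bin/sh
# Sketch of the pool build / bench / move procedure described above.
# Pool name and device names are placeholders -- in practice use
# /dev/disk/by-id names so the pool imports cleanly after the move.
RUN=echo   # dry run; set RUN= (empty) to actually execute

DISKS="sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl"   # 11 data disks

# On the C6100: build the RAIDZ2 pool and run the benchmark
$RUN zpool create backup raidz2 $DISKS
$RUN bonnie++ -d /backup -u root

# Cleanly export, power down, and move the drives to the DL180...
$RUN zpool export backup

# ...then on the DL180, scan by-id and re-import the identical pool
$RUN zpool import -d /dev/disk/by-id backup
```

The export/import is what guarantees the "after" test ran against a byte-identical pool rather than a freshly created one.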
It is significantly slower than cabling drives directly to the M1015/MB ports. Initially I'll attribute it to the expander, but there could be other things going on too. Though slower, it is certainly fast enough since the array will just be holding backups from the other systems while the CPU/memory is used for VMs as part of the lab.
The DL180's backplane refuses to light the power light for SATA drives. Activity lights work fine (except one slot that appears burnt out). I had an older SAS drive and plugged it in just for grins... power light lights up fine. Curious, though at the end of the day the activity light is way more useful. It's still way better than the C6100, where you get power lights as long as the node "normally" associated with the drive bay is powered on, but no activity lights at all when you deviate from the "normal" drive-bay/node mapping.
Also, there is no staggered spinup through the expander. On the C6100 the 8 drives on the M1015 staggered nicely at startup, but on the DL180 through the expander all 12 drives spun up together with the expected surge in power draw (I have 12 drives mounted even though only 11 were used in the test). For those of you who bought the units with the 460W PSU this might be worth keeping an eye on. I'm using a 750W so it doesn't bother me.
Finally, when I was moving the drives from the C6100 trays to the HP trays I was struck by how much better designed the HP trays seemed to be. I tracked drive temps throughout the bonnie++ runs and a subsequent "zpool scrub". The ambient sensor from my APC is mounted near the top-front of the rack and logged about 30C-31C throughout. Drive temps ranged from 28C-31C throughout the "after" test; they regularly hit 38C-39C when mounted in the C6100. While the fan noise from the HP is just a little louder than my modified fans on the C6100, the HP fans appear to be doing a MUCH better job of cooling the drives. Also noted that system temps in the C6100 dropped 10C when the drives were pulled...
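If anyone wants to do similar temp tracking, it can be approximated with a small loop over smartmontools' smartctl. The device names and log path here are hypothetical examples, not my actual setup:

```shell
#!/bin/sh
# Hypothetical drive-temp logger: one CSV line per disk per poll,
# pulling SMART attribute 194 (Temperature_Celsius) via smartctl.
# Device names and log path are examples only.
DISKS="sdb sdc sdd"               # substitute your pool's members
LOG="${LOG:-/tmp/drivetemps.csv}"

poll_once() {
    for d in $DISKS; do
        # Column 10 of the attribute line is the raw (degrees C) value
        t=$(smartctl -A "/dev/$d" 2>/dev/null \
            | awk '/Temperature_Celsius/ {print $10}')
        echo "$(date +%s),$d,${t:-NA}" >> "$LOG"
    done
}

# e.g. poll once a minute during a bonnie++ run or scrub:
# while :; do poll_once; sleep 60; done
```

Unreadable or missing disks get logged as NA rather than breaking the loop, which is handy when a bay is empty mid-test.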