I just got things up and running with OI and napp-it.
Everything is going great, and with my raidz of (4) 7K3000's I am getting anywhere from 350-400+ MB/sec write speed. I'm very happy with that.
The thing that has me concerned, however, is the behavior the array exhibits during writes. I have the 4 drives on an LSI2008 controller, and when I have a sustained copy running over the network, the drives all seem to make a quick (less than 1 s) write (which is audible with these drives; I'm used to that), go quiet for about 5 seconds, then make another quick write, and so on for as long as the copy runs.
Initially I chalked this up to the fact that incoming transfers are limited to around 100 MB/sec by the GigE interface, and since the array can write so much faster (4x or more), it simply drains its write cache far faster than the data comes in. This seems like a perfectly reasonable explanation of what's going on, but I'm hoping someone can confirm it or offer an alternative.
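For what it's worth, the numbers above are at least roughly self-consistent. A back-of-the-envelope sketch, using only the figures from this post (~100 MB/s GigE ingest, ~400 MB/s sustained array write speed, ~5 s quiet gap between bursts; all assumed, not measured precisely):

```python
# Sanity check of the "cache drains faster than GigE fills it" idea.
# All inputs are assumptions pulled from the post, not measurements.

ingest_mb_s = 100   # GigE-limited incoming transfer rate (assumed)
flush_mb_s = 400    # sustained array write rate (assumed)
quiet_s = 5         # observed gap between audible write bursts

buffered_mb = ingest_mb_s * quiet_s   # data cached between flushes
burst_s = buffered_mb / flush_mb_s    # time needed to drain it to disk

print(f"{buffered_mb} MB buffered, drained in about {burst_s:.2f} s")
```

With those inputs, roughly 500 MB accumulates between flushes and drains in a bit over a second, which is in the same ballpark as the short audible bursts described above.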
As I said, I haven't seen any detrimental performance issues; quite the opposite in fact--the performance is excellent. I just want to make sure there isn't something I'm missing that's putting unnecessary wear on my disks that could cause a premature failure.
Setup: oi-151 under ESXi / 8GB VRAM / (4)7K3000 / LSISAS2008(passthrough) / Intel 1000 NIC (passthrough)
Any insight from anyone on this?