Hi Matt,
I am about to embark on Zoneminder under CentOS. What kind of drive system (RAID?) works well? I have about 15 Panasonic cameras, and have dedicated 8 x 3TB drives for this project.
For my setup, I just have a pair of 7200 rpm drives in software RAID-1. And in fact, that's a recent upgrade; it used to be just a single 5400 rpm drive.
However, my setup is meaningless to you. To plan how much storage and IO you need, you really have to do the math, and it's not that precise...
It comes down to resolution, bit-depth and frame rate. Bit-depth is what it sounds like: the number of bits per pixel. 32bpp, 24bpp, and 16bpp are common values. Divide this by 8 to get the bytes per pixel (i.e. 4, 3, or 2 bytes per pixel).
Resolution is the number of pixels per frame. This is mostly a function of what your camera supports. I have mine set lower than what my cameras actually support, because it's good enough for me.
Frame rate is how many frames (pictures) are captured per unit of time, usually expressed per second, i.e. "FPS" (frames per second).
So, to use one of my cameras as an example:
Bit-depth: 32bpp = 4bytes/pixel
Resolution: 1280x720 (i.e. "720p")
Frame rate: 12 FPS
So every second of video from this camera requires 4 x 1280 x 720 x 12 = about 42 MB, which would also mean a storage system that can sustain 42 MB/s of write speed.
What?! OK, in actuality, Zoneminder is saving JPG files, which are compressed images. So in practice, you generally get much better numbers than the theoretical worst case presented here.
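To make the worst-case arithmetic explicit, here it is as a tiny calculation (just the raw numbers, nothing Zoneminder-specific):

```python
# Raw, uncompressed bandwidth for one camera:
# bytes/pixel x pixels/frame x frames/second
bytes_per_pixel = 32 // 8          # 32bpp -> 4 bytes per pixel
width, height = 1280, 720          # "720p"
fps = 12

bytes_per_second = bytes_per_pixel * width * height * fps
print(bytes_per_second / 2**20)    # -> 42.1875, i.e. about 42 MB/s
```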
To give you some real numbers, I looked at a few events for this camera:
17 seconds = 53 MB
13 seconds = 40 MB
10 seconds = 19 MB
In practice, that's 2-3 MB/sec. But the specs I showed above don't tell the whole story, as how much compression you get from JPG conversion is highly dependent on the scene itself. Intuitively, if there's a lot of redundancy in an image, it will compress better, but if it's highly variable, then it will not compress that well.
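Dividing those event sizes by their durations gives the effective write rate:

```python
# (duration in seconds, size in MB) for the three events above
events = [(17, 53), (13, 40), (10, 19)]
for secs, mb in events:
    print(f"{mb / secs:.1f} MB/s")   # -> 3.1, 3.1, 1.9
```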
Oh, also: in practice, you generally don't do continuous recording, but rather activate some kind of motion detection trigger. That saves a ton of space and IO load. My cameras actually support two streams: a high-resolution main stream, and a low-resolution sub-stream. I use the main stream for motion detection (and capture only when my criteria are met), and the sub-stream for continuous recording (as a backup).
That's one limitation of ZM: it stores "videos" as a sequence of JPG images, so it tends to be CPU- and I/O-heavy (it was written back in the day when MPEG was the norm). Last I read, they were working on a new version that deals in video natively, which should improve performance considerably. For now, I have a little script that creates actual h.264/mkv videos from the JPGs (using ffmpeg), which radically improves the space efficiency.
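My actual script isn't reproduced here, but a minimal sketch of the idea looks something like this. The event path, fps, and frame-naming pattern are assumptions (ZoneMinder has historically named frames like 00001-capture.jpg), and it assumes ffmpeg is on your PATH:

```python
# Sketch only: build and run an ffmpeg command that turns one event's
# JPG frames into an h.264 MKV file.
import subprocess

def build_ffmpeg_cmd(event_dir, fps=12, out_name="event.mkv"):
    # Frame-name pattern is an assumption based on ZM's usual naming.
    return ["ffmpeg",
            "-framerate", str(fps),                  # match the capture rate
            "-i", f"{event_dir}/%05d-capture.jpg",   # JPG frame sequence
            "-c:v", "libx264",                       # h.264 encoder
            "-pix_fmt", "yuv420p",                   # widely compatible format
            f"{event_dir}/{out_name}"]

def event_to_mkv(event_dir, fps=12):
    subprocess.run(build_ffmpeg_cmd(event_dir, fps), check=True)
```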
Think of it this way: say you have only one camera, but it supports 4k resolution at 60fps (like big-budget movie studio quality). That's 3840 × 2160 × 60 = 497,664,000 pixels per second.
Compared to 16 cameras running at 1280x720 and 15fps = 221,184,000 pixels per second. Roughly half the bandwidth, storage and CPU, but 16x as many cameras.
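That pixel-rate comparison is easy to check:

```python
# Pixels per second = width x height x fps (x number of cameras)
one_4k_cam = 3840 * 2160 * 60         # single 4k camera at 60fps
sixteen_720p = 16 * 1280 * 720 * 15   # sixteen 720p cameras at 15fps

print(one_4k_cam)                          # -> 497664000
print(sixteen_720p)                        # -> 221184000
print(f"{sixteen_720p / one_4k_cam:.2f}")  # -> 0.44, roughly half
```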
Hope that helps!