Hi everyone,
I have a rather old system (2009) lying idle and figured I might make it more useful by adding NAS functionality to it. It runs a 64-bit RedHat 6.0 Linux variant and the specs are quite dated by now: an Athlon II X2 and 8 GB of DDR2 800 MHz RAM. That's the motherboard's maximum capacity, so anything requiring more would mandate a motherboard upgrade and obviously some new RAM sticks too.
It does, however, have quite a few SATA ports, and I'm wondering how far I could go with a ZFS pool without having to upgrade most of the components in the system.
From the start I should say that the pool would be used mostly for storage, so no read/write-intensive activities are envisioned. Concurrent access is estimated at 1-2 users at a time, and the files (mostly backups) would be accessed over Ethernet (100 Mbps). High transfer speed is not a requirement as long as it doesn't drop under 10 Mbps. Optionally, a Samba service would run on the host system to facilitate access.
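For the Samba part, I picture nothing more than a minimal share definition along these lines (the share name, path and user names are placeholders, nothing is decided yet):

```
# /etc/samba/smb.conf -- minimal sketch; [backups], /tank/backups
# and the user list are placeholders
[backups]
    path = /tank/backups
    valid users = user1 user2
    read only = no
    browseable = yes
```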
I want to stretch things a little bit by grouping 3 x 3 TB disks into a RAID-Z array, which should give me around 5.5 TB of usable storage. In addition, a "normal" HDD (~1 TB) would serve as the system and applications disk.
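For what it's worth, the 5.5 figure comes from the fact that single-parity RAID-Z spends roughly one disk on parity, plus the conversion from the decimal terabytes on the drive labels to the binary TiB most tools report. A quick sanity check (ignoring ZFS metadata overhead):

```python
# Estimate usable capacity of a single-parity RAID-Z vdev.
# With n identical disks, roughly one disk's worth of space goes to parity.
# Drive vendors use decimal TB (10**12 bytes); tools report TiB (2**40 bytes).

def raidz1_usable_tib(n_disks: int, disk_tb: float) -> float:
    """Rough usable capacity in TiB, ignoring metadata and slop space."""
    raw_bytes = (n_disks - 1) * disk_tb * 10**12
    return raw_bytes / 2**40

print(round(raidz1_usable_tib(3, 3.0), 2))  # ~5.46 TiB
```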
According to references on the web, such as the FreeBSD Handbook chapter on ZFS (20.2. The Z File System (ZFS)), 1 GB of RAM per 1 TB of storage is recommended; however, it's not clear to me whether that refers to usable storage (5.5 TB, so 8 GB of RAM should be fine) or raw storage (a bit south of 9 TB, which goes over the recommended value). I've also read that people have managed to run ZFS pools on less than the recommended specs without running into issues, as long as their read/write requirements were modest. It's also sometimes noted that the recommendation is more of a best practice than a hard requirement.
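If RAM does turn out to be tight, my understanding is that on ZFS on Linux the ARC can be capped via a module parameter so the cache never eats into memory needed elsewhere. Something like this (the 4 GiB value is just an example, not something I've tuned):

```
# /etc/modprobe.d/zfs.conf
# Cap the ZFS ARC at 4 GiB (value is in bytes); applied at module load.
options zfs zfs_arc_max=4294967296
```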
I'm also fairly familiar with another NAS system (an N40L), also using a RAID-Z array and also with 8 GB RAM, and I have noticed (via the free command) that most of the memory shows as available (about 7/8 of capacity) when idle and doesn't drop under 4 GB free when disk transfers (either to or from the array) take place. Again, this is also a low-usage system.
I guess all I'm wondering about is whether the filesystem would crash and burn on an underspecced system, or whether the worst I could expect is a performance penalty - I would prefer to avoid any crashes and/or data loss.
Oh, and I might also squeeze an always-on virtual machine onto the same system, but it would get at most 1 GB of RAM and would run off the system disk.
What do you guys think?