(This turned out to be a long thread start...sorry...)
I'm redoing my servers at home and looking at maybe redoing the storage too.
Today I run ESXi on two hosts with periodic backups to external storage, because I never really found a way to do it properly (VM backups via Unitrends).
Media storage is/was a virtualized napp-it RAIDZ2 zpool of 8x4TB + a 200GB S3700; external storage has been 4x4TB in an HP MicroServer.
VM storage was a mix, most of it on a 500GB hardware RAID10 (which died; separate thread...).
Now I'm migrating to Proxmox for several reasons: I prefer Debian, it's "free", I could run ZoL, etc.
But what I'm looking at now is changing my storage too. ZFS has been very good to me, really no problems whatsoever, but I'd like to see what would be smart in my situation in terms of backup and availability (and obviously labbing, testing, learning).
My nodes would be:
1. SM mobo, E5-1620v1, 96GB RAM (this is where the zpool is today)
2. SM mobo, E5-2640v1, 32GB RAM (On the way, Thanks @T_Minus )
3. SM mobo, Atom C2750, 32GB RAM (This one I planned to sell, but then I had the idea to put it at a remote site instead)
My storage today is around 10-12TB, mostly media, and I could trim it down. My issue with running ZFS on the backup node is that the pool would be almost full, well over the recommended fill level for a zpool.
So, the real question: Would Ceph be a valid direction to go for my storage needs? The way I see it, I'd spread 4x4TB across the nodes (plus SSDs for journals?), keeping everything in sync and effectively giving me a backup at the remote location (which I'd have to connect to via... VPN?).
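If I went that way, I imagine the per-node bootstrap would look roughly like this via Proxmox's pveceph wrapper. Device names and the cluster network below are placeholders for my layout, and I'm writing the flags from memory, so this is a sketch to sanity-check against the docs, not a recipe:

```shell
#!/bin/sh
# Rough sketch of a per-node Ceph bootstrap through pveceph.
# /dev/sdb, /dev/sdc and 10.10.10.0/24 are placeholders; verify the exact
# flags against the pveceph man page before running anything.
CEPH_NET="10.10.10.0/24"    # dedicated cluster network (assumed)
OSD_DEV="/dev/sdb"          # one of the 4TB spinners
JOURNAL_DEV="/dev/sdc"      # SSD partition for the journal

# Collect the commands as text instead of running them, so they can be
# reviewed first (none of these are safe to fire blindly).
CMDS="pveceph install
pveceph init --network $CEPH_NET
pveceph createmon
pveceph createosd $OSD_DEV -journal_dev $JOURNAL_DEV"
echo "$CMDS"
```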
Alternative two would be a zfs send or pve-zsync setup. I've never used zfs send/receive, but people seem to like it. Since this is mostly media, though, I'd really like an actual filesystem to browse on the backup, not just snapshots. Is restoring from snapshots a fairly simple task?
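For reference, my understanding is that the zfs send route would look roughly like the sketch below (dataset names, snapshot labels and the host are placeholders for whatever I end up with). The part that answers my own browsing worry: the receiving side is a normal mounted dataset, so the latest replicated state is directly browsable, and older snapshots show up under the hidden .zfs/snapshot directory:

```shell
#!/bin/sh
# Sketch of incremental replication with zfs send/receive.
# tank/media, backup/media, backuphost and the snapshot names are placeholders.
TODAY=$(date +%Y%m%d)
PREV="tank/media@20160101"   # whatever the last replicated snapshot was

# Build the commands as strings first so they can be eyeballed before running.
SNAP_CMD="zfs snapshot tank/media@$TODAY"
SEND_CMD="zfs send -i $PREV tank/media@$TODAY | ssh backuphost zfs receive -F backup/media"
echo "$SNAP_CMD"
echo "$SEND_CMD"
# When it looks right:  eval "$SNAP_CMD" && eval "$SEND_CMD"
```

Restoring then seems to be either zfs rollback on the backup pool, or just copying files back out of .zfs/snapshot/<name>/ like any other directory.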
Alternative three is some nightly rsync script, which could work. But it's not as fancy, is it?
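The rsync variant would be something like this in a nightly cron job (source path, destination and host are placeholders):

```shell
#!/bin/sh
# Sketch of a nightly rsync mirror; paths and host are placeholders.
SRC="/tank/media/"                 # trailing slash: copy contents, not the dir
DEST="backuphost:/backup/media/"
# -a          keep permissions/ownership/times
# --delete    mirror deletions so the backup matches the source
# --partial   resume interrupted transfers of large media files
RSYNC_CMD="rsync -a --delete --partial $SRC $DEST"
echo "$RSYNC_CMD"
# Cron entry to run it at 03:00:  0 3 * * * /usr/local/bin/media-backup.sh
```

Dead simple, and the backup is a plain browsable directory tree, but no point-in-time history like snapshots would give.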
I'm probably overthinking all of this, but it's nice to learn!