What backup software for backing up physical Linux servers?


mobycl1ck

Member
Feb 20, 2022
56
9
8
As the title says, I am interested to learn what solutions you guys use, whether in a homelab or an enterprise environment.
For the moment, I am using Synology ABB on a test server. It runs scheduled backups with its agent, and I recovered a backup from it successfully. Still, I have some reservations about promoting it to production: there is not much support, and some things are either not well documented or not implemented.
I am open to suggestions.
Thanks

:edited for spelling errors
 
Last edited:

louie1961

Active Member
May 15, 2023
371
164
43
Rsync to back up my Synology to a Pi-based NAS as a second onsite copy, and Rclone to back up to AWS Glacier as a third/offsite copy of my data.
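
For anyone wanting to copy this setup, a rough sketch of the two jobs (paths, the remote name, and the bucket are placeholders, adjust to taste):

# Second onsite copy: mirror the Synology share to the Pi NAS over SSH
rsync -aHAX --delete /volume1/data/ backupuser@pi-nas:/mnt/backup/data/

# Third/offsite copy: push the same data to S3 with the Glacier storage class
# (assumes an rclone remote called "aws" is already configured)
rclone sync /volume1/data aws:my-backup-bucket/data --s3-storage-class GLACIER --transfers 8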
 

NerdAshes

Active Member
Jan 6, 2024
110
70
28
Eastside of Westside Washington
I used to be a StorageCraft partner - it's now ArcServe. SPX was an awesome system that cost a fortune. I got to bill monthly for it, so I loved it as a partner. If money is not a concern, it is (was), in my opinion, the best DRaaS (Disaster Recovery as a Service). We could back up everything as often as we liked (every 1 min) and fail over from bare metal to a VM in 2-ish minutes (users wouldn't even notice the server redirection). Once the bare metal was restored, we'd move the system back from the VM like nothing had happened. That was some sexy "Business Continuity". All with deduplication, incremental backups, and automated backup testing of local and remote files. My customer base LOVED it. Ransomware news reports sold it for me. Good times. I'm not sure what the ArcServe team adds to or removes from the SPX system though - I retired before that merger/acquisition. Oh, and they had good swag/food at conferences.

Datto, Azure, etc are other expensive, effective DRaaS options.

For free, today, for Linux servers, I'd probably go with Proxmox Backup Server. It really only handles Debian-based Linux bare-metal servers, containers, and VMs, but it has many of the same features as StorageCraft SPX when combined with a Proxmox Virtual Environment server.
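
For a bare-metal Debian box the client side is just a couple of commands - roughly like this, from memory, with the repository user, host, and datastore as placeholders:

# Install the client (packaged for Debian-based hosts)
apt install proxmox-backup-client

# Back up the root filesystem as a pxar archive to a PBS datastore;
# by default it stays on one filesystem unless you add --include-dev
export PBS_REPOSITORY="backup@pbs@pbs.example.com:datastore1"
proxmox-backup-client backup root.pxar:/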

I've also had success with free Duplicati and free/paid Veeam, and I've successfully done things like Rsync, Syncthing, Robocopy, etc. Even Windows 7 Backup works well for Windows PCs (even Win 11), and Windows Server Backup is actually good too! Honestly, every OS has a free, robust solution either baked in or freely installable with simple CLI cut/paste options. I just like an all-in-one, single-pane-of-glass solution, so I usually go beyond what is simple and offered directly.

I also used to be a Synology partner. They have really picked up their backup game and I really like what they are doing. They have several packages that back up in several different ways, and one or more of those packages should meet the needs of almost any SMB/home lab. However, I find the cost of entry into Synology, along with their new drive certification program, less attractive than just rolling your own server.

If I'm not using a DRaaS solution, then I typically like to have local PCs back up to a file server in the same OS family (keeps it simple) that a hypervisor can access (to bring up downed systems while they are repaired). That file server then has its files backed up to another local server (one that focuses on server backups), and that other server has its files synced to a provider located in a different, non-local region (for mega disasters).

Today, I'm setting up a TrueNAS system. I've never used it, so for giggles I'm going to dip my toes.
 

Stephan

Well-Known Member
Apr 21, 2017
1,085
845
113
Germany
Used StorageCraft ShadowProtect for Windows up until the 3.5/4.0 days in places. It's been a while. Acronis has sadly also gone to shit. Guess all those new developers had to justify their salary somehow, so they blew up the code base a hundredfold and nobody stopped them. These days I script Drive Snapshot using PsExec and Blat - no more bloat. (Use the correct older version of Blat; a senseless rewrite broke umlaut handling.)

One main attraction of VMs is crash-consistent backups. For physical machines, the super-tier is not needing them at all, because deployment is fully automated. If you don't have a snapshot-capable filesystem like ZFS, have no databases like Postgres, and are sure that little or nothing important is changing while you run it, you might try a dirty tar with the correct options to capture the root fs and pipe it via ssh to another host:

tar --exclude=/swapfile --use-compress-program=/usr/bin/lbzip2 --one-file-system --numeric-owner --acls --xattrs --xattrs-include='*' --totals -cpSf- / | ssh user@host "cat > backup.tar.bz2"
 
  • Like
Reactions: nexox

Stephan

Well-Known Member
Apr 21, 2017
1,085
845
113
Germany
Forgot to mention how to restore, e.g. to a new disk:

tar --xattrs --xattrs-include='*' --acls -xvjpf backup.tar.bz2 -C /mnt/newdisk

If this is supposed to boot, you have to restore the boot configuration of course. With Grub some grub-install and grub-mkconfig, with UEFI and systemd-boot some bootctl plus check for changed PARTUUID in /boot/loader/entries/ and /etc/fstab.

For UEFI, backup command should probably read "... / /boot | ..." to also catch the FAT16 UEFI partition. Grub can read ext4 fine. Restore means setup GPT partition table, mkfs for FAT and your main partition. Then mount main partition, mkdir /boot in this partition, then mount into that directory the boot partition, as an example. Then untar. Then chroot into the installation (Arch has arch-chroot command, handy), then "bootctl --path=/boot install", then check PARTUUIDs if present and update them from new values gleaned through blkid.
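
To make the order of operations concrete, a rough, untested sketch for a fresh disk /dev/sdb with systemd-boot (device names are examples, adjust for your layout):

# New GPT: one EFI system partition, one root partition, then filesystems
sgdisk -n1:0:+512M -t1:ef00 -n2:0:0 -t2:8300 /dev/sdb
mkfs.vfat /dev/sdb1
mkfs.ext4 /dev/sdb2

# Mount root, create /boot inside it, mount the ESP there, then untar
mount /dev/sdb2 /mnt/newdisk
mkdir /mnt/newdisk/boot
mount /dev/sdb1 /mnt/newdisk/boot
tar --xattrs --xattrs-include='*' --acls -xvjpf backup.tar.bz2 -C /mnt/newdisk

# Chroot in, reinstall the boot loader, then update PARTUUIDs from blkid
arch-chroot /mnt/newdisk                   # or bind-mount /dev /proc /sys and chroot
bootctl --path=/boot install
blkid /dev/sdb2                            # new value for /etc/fstab and /boot/loader/entries/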

Why lbzip2? Because up until recently it struck a nice balance between compression speed (using all cores) and compression ratio, and everybody has bzip2 to uncompress. Everybody. With zstd this has changed, but sometimes I just like the hits.
 

mobycl1ck

Member
Feb 20, 2022
56
9
8
A little update.
Implemented Veeam Backup and Synology ABB, as the licences were already paid and those services weren't being used. I also managed to implement a Bacula server for additional, redundant regular backups.
Now, after reading your responses, I realize how advanced your solutions are and how basic and clunky mine are.
 
  • Like
Reactions: dswartz

finno

New Member
Apr 22, 2023
11
4
3
Maryland
People have already mentioned rsync for backup. Rsnapshot is a good tool that wraps rsync to give you backups with versioning (hourly/daily/weekly/monthly/whatever).
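
The setup is basically one config file plus cron entries - something like this sketch, where paths, hosts, and retention counts are just examples:

# /etc/rsnapshot.conf (fields must be TAB-separated)
snapshot_root   /mnt/backup/rsnapshot/
retain  hourly  6
retain  daily   7
retain  weekly  4
retain  monthly 6
backup  root@server:/etc/       server/
backup  root@server:/home/      server/

# crontab entries; rsnapshot rotates the hourly/daily/... levels itself
0 */4 * * *   /usr/bin/rsnapshot hourly
30 3  * * *   /usr/bin/rsnapshot daily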
 

dswartz

Active Member
Jul 14, 2011
611
79
28
A little update.
Implemented Veeam Backup and Synology ABB, as the licences were already paid and those services weren't being used. I also managed to implement a Bacula server for additional, redundant regular backups.
Now, after reading your responses, I realize how advanced your solutions are and how basic and clunky mine are.
I've used the Veeam agent for Linux and have been satisfied.
 

gea

Well-Known Member
Dec 31, 2010
3,489
1,371
113
DE
Run everything in a VM, then zfs snapshot & send.
Only remaining problem:
ZFS send is based on ZFS snaps, and those are like a sudden power loss. This can compromise the filesystem consistency of a VM. A VM must be offline to be safe, or you need additional methods like integrating safe ESXi snapshots into the ZFS snaps.
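
The replication side itself is simple - roughly like this, with pool/dataset names and the target host as placeholders:

# Snapshot the dataset holding the VMs, then send it incrementally offsite
zfs snapshot tank/vmstore@2024-06-01
zfs send -i tank/vmstore@2024-05-31 tank/vmstore@2024-06-01 | ssh backup@target zfs receive -F backup/vmstore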
 

acquacow

Well-Known Member
Feb 15, 2017
811
454
63
43
Only remaining problem:
ZFS send is based on ZFS snaps, and those are like a sudden power loss. This can compromise the filesystem consistency of a VM. A VM must be offline to be safe, or you need additional methods like integrating safe ESXi snapshots into the ZFS snaps.
You can absolutely freeze a local VM filesystem prior to a snap to ensure consistency.

Look up the fsfreeze docs.
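
Inside the guest it's just a freeze/thaw pair around the snapshot on the host - a sketch, with the mount point and dataset names as examples:

# In the guest: flush and block writes to the data filesystem
fsfreeze --freeze /var/lib/data

# On the host: take the ZFS snapshot while the guest FS is quiesced
zfs snapshot tank/vmstore@consistent

# Back in the guest: resume writes
fsfreeze --unfreeze /var/lib/data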
 

mobycl1ck

Member
Feb 20, 2022
56
9
8
Set up a Bacula server with the community packages (13.03) and added AWS Cloud Storage support.
What FileSet do you have set up for a bare-metal backup/restore?
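
For reference, this is roughly the direction I'm leaning - an untested sketch, and the name and excludes are just placeholders:

# bacula-dir.conf fragment: whole root, ACLs/xattrs on, pseudo-filesystems excluded
FileSet {
  Name = "LinuxBareMetal"
  Include {
    Options {
      signature = MD5
      compression = GZIP
      onefs = no
      aclsupport = yes
      xattrsupport = yes
    }
    File = /
  }
  Exclude {
    File = /proc
    File = /sys
    File = /dev
    File = /run
    File = /tmp
  }
}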
 

BackupProphet

Well-Known Member
Jul 2, 2014
1,286
850
113
Stavanger, Norway
intellistream.ai
Only remaining problem:
ZFS send is based on ZFS snaps, and those are like a sudden power loss. This can compromise the filesystem consistency of a VM. A VM must be offline to be safe, or you need additional methods like integrating safe ESXi snapshots into the ZFS snaps.
That is why we have journaling filesystems, so total file system corruption should rarely happen. And if it does happen, the previous snapshot is most likely fine.
I also do this with a live, running PostgreSQL, as long as both the data and the WAL are in a single snapshot. Note that I also have a hot standby, just in case. Also, if you have a PostgreSQL database running on multiple pools (fast, slow, history data), you can have a hot standby that runs on a single (slow) pool and take the snapshots from it.
Running a traditional full backup takes too long, and you spend a lot of system resources.
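
The key bit is that the snapshot covering data and WAL is taken atomically. With data and WAL on child datasets of one parent, a recursive snapshot does that in a single transaction - a sketch, with dataset names as examples:

# tank/pg/data and tank/pg/wal are snapshotted atomically as one group
zfs snapshot -r tank/pg@nightly

# Replicate the whole tree to the backup box
zfs send -R -i tank/pg@lastnight tank/pg@nightly | ssh backup@target zfs receive -F backup/pg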
 

gea

Well-Known Member
Dec 31, 2010
3,489
1,371
113
DE
That is why we have journaling filesystems, so total file system corruption should rarely happen. And if it does happen, the previous snapshot is most likely fine.
Journaling only helps if atomic writes (like writing a data block plus updating its metadata, or writing a RAID stripe to all disks or both disks of a mirror) complete fully, and that is not guaranteed on a sudden crash during a write unless the filesystem is Copy on Write and can handle such a situation - but only for itself, not for VM guest filesystems on top. A guest filesystem must take care of its own consistency.

Journaling can only reduce the risk; this is why ZFS was developed.
 
  • Like
Reactions: unwind-protect

BackupProphet

Well-Known Member
Jul 2, 2014
1,286
850
113
Stavanger, Norway
intellistream.ai
Not really; a journal is an append-only write log, there are no in-place modifications.

To go more into the details, with some help from ChatGPT:
When ext4 is formatted with journaling enabled, it creates a dedicated area in the filesystem where it can record transactions. This journal acts as a kind of "black box" that logs each file system operation.

Before ext4 writes changes to the disk (like creating, deleting, or modifying files), it first logs each operation as a transaction in the journal. These transactions include all the necessary information to either complete the operation fully or roll it back entirely.

Ext4 ensures that the journaling is done in a way that the journal updates are written to the disk before the actual data blocks and metadata get written. This is crucial because it allows the filesystem to maintain a consistent state by ensuring that no data is written before its corresponding journal entry.

On system startup, if the system detects that there was an unexpected shutdown (like a power loss), it checks the journal for any transactions marked as in-progress. If any are found, ext4 uses the information in the journal to either:
- complete the transaction, if all required data for the transaction is present and correct in the journal, or
- roll back the transaction, restoring the filesystem to the last known consistent state before the transaction began.
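
If you want to see what mode a given ext4 filesystem is actually running in (the default, data=ordered, only journals metadata; data=journal pushes file data through the journal too), a couple of quick checks - the device name is just an example:

dumpe2fs -h /dev/sda2 | grep -i journal    # confirm the filesystem has a journal at all
dmesg | grep EXT4-fs                       # the mount message names the active data mode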
 

gea

Well-Known Member
Dec 31, 2010
3,489
1,371
113
DE
It is all about reducing risks.

In the binary world there is no parallelism; every IO is ultimately sequential. Whenever a write transaction needs to update more than one address, you have dependent writes that must all complete for the whole IO to be valid - it does not matter whether it is a journal write or any other IO. Such a dependent write is called an atomic write. Incomplete atomic writes can mean a damaged filesystem, and ext4 simply cannot guarantee atomic writes.

This is why Copy on Write filesystems came up - ZFS, and, based on its ideas, btrfs and ReFS. Now every atomic write is a new write, not a modification of current structures. Only if the atomic write (in the case of ZFS, the write of a ZFS data block) completes fully does it become valid; otherwise the former disk state remains valid. In the end this reduces possible filesystem problems by an order of magnitude compared to ext4 or NTFS.

But even ZFS must at some point change some pointers... a crash at exactly the wrong moment during a write may damage even ZFS.

It is all about reducing risks.
Accept risks or reduce risks, that is the choice.

Do external backups.