But that's the thing I don't get. There is nothing you can set up with root privileges that can't be removed with those same privileges. Assume semi-competent malware takes control of your backup machine. What will protect those files? It can simply keep an encrypted copy of those files locally and then delete all the backup data.
A) Most malware isn't that complicated. The stuff that looks for things on the network is going to be going after common things, off the shelf (both consumer and pro) systems, open SMB shares, etc.
B) Short of an unpatched vulnerability, an attacker has to get through whatever login protection the remote system has. A decent combination: SSH-key-only login, good passwords, MFA, no direct root login, giving your "backup sync" users no full shell, not using the default SSH key name or location on the client system, keeping ALL of the remote login users out of sudoers, and making the password to elevate to root/admin DIFFERENT from any of the remote login passwords. Or, the ultimate paranoia: admin login not possible from the client system at all.
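Most of that lands in a few sshd_config lines. This is an illustrative sketch, not my exact config (the `backupsync` user name and wrapper path are made up):

```
# /etc/ssh/sshd_config (illustrative excerpt)
PermitRootLogin no                 # no direct root login
PasswordAuthentication no          # SSH-key-only login
AuthenticationMethods publickey

# Lock down the backup-sync account specifically
Match User backupsync
    AllowTcpForwarding no
    X11Forwarding no
    PermitTTY no
    ForceCommand /usr/local/bin/backup-wrapper.sh   # hypothetical wrapper, not a full shell
```

For MFA on the human accounts you'd typically stack methods, e.g. `AuthenticationMethods publickey,keyboard-interactive` with a PAM module behind it.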
C) You are generally NOT going to deal with a direct hack or super sophisticated/targeted malware, and the less your system looks like everyone else's, the fewer attacks will hit you "fully". ZFS snapshots from a desktop replicated to another machine is super niche for consumers, and somewhat niche even for business.
D) Your NAS is and always should be only PART of your backup solution. 3-2-1 rule or as close as you can get.
Here's a reasonably paranoid system layout that is fairly close to what I actually run:
1) Client systems ideally run ZFS, take snapshots, and ZFS send to the NAS. This is managed with sanoid and syncoid where possible to automate snapshots, snapshot retention, and syncing. I have limited permissions delegated to the user that runs sanoid, and limited permissions for the syncoid user and the REMOTE syncoid user (no root needed for any of it). The syncoid user CANNOT delete snapshots, datasets, etc. Sanoid on the destination machine runs a different retention pattern to keep the snapshot count down (no need to keep hourlies for months).
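The permission split is done with ZFS's built-in delegation (`zfs allow`). A sketch of roughly what that looks like — the pool/dataset and user names here are examples, not my actual layout:

```
# sanoid user on the client: may create and prune local snapshots
zfs allow -u sanoid snapshot,destroy,mount tank/home

# syncoid sending user on the client: may send, but NOT destroy
zfs allow -u syncoid send,snapshot,hold tank/home

# remote syncoid user on the NAS: may receive, but NOT destroy
zfs allow -u syncoid-recv receive,create,mount backup/clients
```

The key property: even if the sending credentials are stolen, `zfs destroy` on the NAS side fails for that user, so existing snapshots survive.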
2) On the NAS, root cannot SSH in, and users can only SSH in with SSH key pairs. The users that run automated backups via scripts (not the normal user accounts for me/humans) can't even get a normal shell and are not in sudoers.
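One way to do the "no normal shell" part is pinning the automation key to a forced command in authorized_keys. Illustrative sketch (dataset name, wrapper, and key are placeholders):

```
# ~backupuser/.ssh/authorized_keys on the NAS (illustrative)
# The key can only run the wrapper; no PTY, no forwarding, no shell.
command="/usr/local/bin/recv-only.sh",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA...key... syncoid@desktop
```

In practice syncoid needs a small set of commands (listing snapshots, `zfs receive`), so the hypothetical `recv-only.sh` would validate `$SSH_ORIGINAL_COMMAND` against an allowlist rather than hardcoding one command.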
3) This system itself syncs to ANOTHER machine running ZFS that is "warm storage". It is off except once a week, when it boots and pulls snapshots via ZFS send/receive, using a user that logs into the middle machine (again limited permissions, no sudo). This machine allows NO remote access, not even from the middle machine.
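The important detail is pull, not push: the cold side initiates, so no credential on the NAS can reach the warm box. A sketch of the weekly job as it might look with syncoid (host, dataset, and key paths are examples):

```
#!/bin/sh
# Weekly pull on the warm-storage box; the NAS never logs in here.
syncoid --recursive --no-sync-snap \
    --sshkey /home/pull/.ssh/id_ed25519_pull \
    pull@nas.internal:backup/clients warm/clients

# Power back down when the replication finishes
shutdown -h now
```

`--no-sync-snap` keeps syncoid from creating its own snapshots on the source, so sanoid's retention policy stays the single source of truth.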
4) That 2nd "NAS" is not actually network attached: direct fiber between the two machines, plus fiber to the main switch, but only outbound traffic to the local network on a VLAN so it can report job status.
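The "outbound only" rule is enforced at the switch/VLAN level, but you can mirror it on the host too. A minimal nftables sketch of that posture (interface names and specifics assumed, not taken from my setup):

```
# /etc/nftables.conf sketch: drop everything inbound except replies
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept   # replies to our own outbound traffic
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
}
```

Belt and suspenders: even if the VLAN config is wrong, nothing can initiate a connection to the box.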
5) (Not done yet) Power the 3rd system exclusively from a DC-DC PSU fed by an LFP battery that is only charged while the machine is disconnected (power surge/lightning isolation paranoia level).