I have a Proxmox + Debian + Docker server and I'm looking to set up my backups so that they get backed up (DUH) to my Linux PC whenever it comes online on the local network.
I'm not sure what's best: backing up locally and having something else handle the copying, how to have those backups run only if they haven't run in a while regardless of whether the PC is available, or whether the PC should run the logic or the server should keep control over it.
Mostly I don't want to waste space on my server because it's limited...
I don't know the what and I don't know the how right now, so any input is appreciated.
I see syncthing being recommended, and like, it's fine.
But keep in mind it's NOT a backup tool, it's a syncing tool.
If something happens to the data on your client, for example, it will happily sync and overwrite your Linux box's copy with junk, or if you delete something, it'll vanish in both places.
It's not a replacement for recoverable-in-a-disaster backups, and you should make sure you've got a copy somewhere that isn't subject to the client nuking it if something goes wrong.
I use syncthing to copy important files between PC, phone and Proxmox server. Syncthing can be set up with file versioning so it keeps old versions of files.
Only the Proxmox server is properly backed up though, to a Proxmox Backup Server running in a VM on said Proxmox server. The encrypted backup files are copied to Backblaze using rclone.
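The rclone step is basically just this (remote name and paths are made up, not my actual setup; the files are already encrypted by PBS, so a plain copy is fine):

```
# "b2" is a made-up rclone remote created beforehand with: rclone config
rclone copy /mnt/datastore/pbs b2:my-backup-bucket/pbs \
    --transfers 4 --log-level INFO
```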
Not sure if this is what you are looking for, but it works for me.
TLDR: syncthing for copies between local machines, and Proxmox Backup Server plus Backblaze for proper backups.
Since space is a major concern, maybe have a look at borg and possibly something like borgmatic on top for easier configuration. Borg does deduplicated backups, so you could do even hourly ones if you wanted without too much extra space depending on how many you want to keep. You'd need to run a borg server wherever you want to store your backups so it's not a simple rsync over ssh situation but that's the price you pay for the extra niceties.
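To give a feel for it, a bare-bones borg version might look something like this (hostname and paths are made up; borgmatic just wraps the same steps in a config file):

```
# One-time: create a deduplicated, encrypted repo on the PC (hostname/path made up).
borg init --encryption=repokey-blake2 ssh://you@linux-pc.lan/srv/borg/server

# Each run: create an archive; unchanged chunks are deduplicated, so frequent runs stay cheap.
borg create --stats --compression zstd \
    ssh://you@linux-pc.lan/srv/borg/server::'{hostname}-{now:%Y-%m-%d_%H:%M}' \
    /etc /home /var/lib/docker/volumes

# Thin out old archives so hourly backups don't pile up forever.
borg prune --keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://you@linux-pc.lan/srv/borg/server
```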
Run your backup process every few minutes: if both machines are online, it sends the latest snapshot; if not, it exits. You could reduce your snapshots to once a day, so it only does the data transfer once per day, and any extra runs just succeed without doing anything.
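A rough sketch of that loop, assuming ZFS and made-up host/dataset names (swap in whatever snapshot tool you actually use; the very first run would need a plain full send, which is left out here):

```
#!/usr/bin/env bash
# Try-often, transfer-only-what's-missing sketch; everything here is made up.
set -euo pipefail

TARGET_HOST="linux-pc.lan"     # made-up hostname of the PC
DATASET="rpool/data"           # made-up local dataset to back up
REMOTE_DS="backup/proxmox"     # made-up dataset on the PC that receives it

# Bail out quietly if the PC isn't on the network right now.
ping -c1 -W2 "$TARGET_HOST" >/dev/null 2>&1 || exit 0

# Newest snapshot on the server vs. newest one the PC already has.
latest_local=$(zfs list -H -t snapshot -d 1 -o name -s creation "$DATASET" | tail -n1)
latest_remote=$(ssh "$TARGET_HOST" zfs list -H -t snapshot -d 1 -o name -s creation "$REMOTE_DS" | tail -n1 | cut -d@ -f2)

# Nothing to do if the PC is already up to date.
[ "${latest_local#*@}" = "$latest_remote" ] && exit 0

# Send every snapshot between the last one the PC has and the newest one.
zfs send -I "@$latest_remote" "$latest_local" | ssh "$TARGET_HOST" zfs receive -F "$REMOTE_DS"
```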
The downside of this approach is that you need to set up alerting for when a snapshot doesn't get backed up within a window, rather than alerting on a single backup process failing (i.e. if the server doesn't have a successful backup within a week, create an alert).
You could also do the same thing with rclone VFS mounts with full caching, so you could write locally even when the network is offline, and when you get online it would transfer (but you need to set up your server-side alerting again).
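The mount side of that would be something along these lines (remote name and mount point invented):

```
# With --vfs-cache-mode full, writes land in the local cache immediately and
# get uploaded whenever the remote is actually reachable.
rclone mount pc-backups: /mnt/pc-backups \
    --vfs-cache-mode full \
    --vfs-cache-max-size 10G \
    --daemon
```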
As to how, I'd probably use zfs send | receive, any built-in functionality on a CoW filesystem, rsnapshot, rclone or just syncthing. As to when, I'd probably hack something with systemd triggers (e.g. on network connection, send all remaining incremental snapshots). But this would only be needed in some cases (e.g. not using syncthing ;p)
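If you go the systemd route on the server, the closest simple approximation of "on network connection" is a timer that just retries, paired with a script that exits quietly when the PC isn't reachable; roughly (unit names and script path invented):

```
# /etc/systemd/system/send-snapshots.service   (names and paths invented)
[Unit]
Description=Send outstanding backup snapshots to the PC when it is reachable

[Service]
Type=oneshot
ExecStart=/usr/local/bin/send-snapshots.sh

# /etc/systemd/system/send-snapshots.timer
[Timer]
OnBootSec=5min
OnUnitActiveSec=15min
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now send-snapshots.timer.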
Cron + rsync is always a bulletproof solution. Simple bash script that runs every X minutes to sync to a network target. It wouldn't need to run only when the machine comes online as you mentioned; it can just retry on a schedule and skip when the target isn't reachable.
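Something along these lines, with invented paths and hostname (the crontab entry on the server calls the script):

```
#!/usr/bin/env bash
# Invented paths/hostname; save as e.g. /usr/local/bin/backup-sync.sh and add a
# crontab entry like:  */15 * * * * /usr/local/bin/backup-sync.sh
set -euo pipefail

TARGET_HOST="linux-pc.lan"                        # made-up PC hostname
DEST="you@${TARGET_HOST}:/srv/backups/server/"    # made-up destination path

# Skip this run quietly if the PC isn't reachable; cron tries again later.
ping -c1 -W2 "$TARGET_HOST" >/dev/null 2>&1 || exit 0

# -a keeps permissions/timestamps, --delete mirrors removals (so this is a
# mirror, not a versioned backup), --partial lets interrupted transfers resume.
rsync -a --delete --partial /srv/backup-staging/ "$DEST"
```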