Replies: 2 comments 1 reply
Hi! That's an excellent idea, and it's exactly what I need. However, it raises permission-management issues: running restic remotely means running it as root to back up system directories without restriction, and for security reasons, allowing remote root SSH access is out of the question. It would require a very robust design!
Thank you @jcsogo for opening this discussion. This is something I have planned since the beginning, but we are looking at a mid-to-long-term implementation. I wanted to start working on it recently, but the volume of community requests forced me to shift focus to more urgent (if perhaps less important) things, and bug reports are piling up.

My idea is a controller/agent system. You'd install a main controller (the ZeroByte UI) somewhere on your network, and on each machine you want to back up you'd install a small headless agent that runs in the background. The controller would send backup instructions to the agents over a websocket, and the agents would report progress and status back through the same channel. This would achieve exactly what you've described: the upload would be performed directly from where your data lives, without going through the controller. It requires a lot of work, but it'll come later this year!
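To make the controller/agent exchange concrete, a minimal sketch of what the websocket messages might look like. Everything here (message types, field names) is a hypothetical illustration of the "instructions out, progress back" flow, not ZeroByte's actual protocol:

```python
import json

# Hypothetical message shapes for the controller <-> agent websocket.
# None of these field names come from ZeroByte; they only illustrate
# the flow described above: the controller sends an instruction, the
# agent streams progress back over the same connection.

def make_backup_instruction(job_id, paths, repo):
    """Controller -> agent: start a backup of `paths` into `repo`."""
    return json.dumps({
        "type": "backup",
        "job_id": job_id,
        "paths": paths,
        "repository": repo,
    })

def make_progress_report(job_id, bytes_done, bytes_total):
    """Agent -> controller: periodic progress for a running job."""
    return json.dumps({
        "type": "progress",
        "job_id": job_id,
        "bytes_done": bytes_done,
        "bytes_total": bytes_total,
        "percent": round(100 * bytes_done / bytes_total, 1),
    })

# Example round trip over an imagined websocket:
instruction = make_backup_instruction("job-42", ["/home", "/etc"], "s3:bucket/repo")
report = make_progress_report("job-42", 500_000_000, 2_000_000_000)
print(json.loads(report)["percent"])  # 25.0
```

The key property of this design is that only small control messages cross the controller link; the bulk data goes straight from agent to repository.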
First off, I want to say I absolutely love ZeroByte! The UI is fantastic, and the concept of a centralized backup server where I can manage all my backups is exactly what I needed. Having a reliable CLI fallback if the ZeroByte instance itself crashes is the cherry on top. Excellent work! 👏
I wanted to open a discussion about remote backup performance because I've run into something that's been bugging me. The centralized server model is brilliant for management, but there's a tradeoff: backup operations have to pull data remotely rather than running locally on the source host. This means performance gets constrained not just by network bandwidth, but by protocol overhead as well.
An example: I'm backing up to a cloud provider using SFTP as the remote volume protocol. A 2 GB incremental snapshot takes over 10 minutes to complete—despite neither saturating my internet connection nor maxing out the remote CPU. There's clearly significant protocol overhead happening here.
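Putting a number on it (using the 2 GB / 10 minute figures from above; the interpretation at the end is an inference, not a measurement):

```python
# Back-of-the-envelope throughput for the snapshot described above:
# 2 GB transferred in just over 10 minutes (figures from the post).
size_bytes = 2 * 10**9   # 2 GB, decimal
duration_s = 10 * 60     # 10 minutes

mb_per_s = size_bytes / duration_s / 10**6
mbit_per_s = mb_per_s * 8

print(f"{mb_per_s:.1f} MB/s ~= {mbit_per_s:.0f} Mbit/s")
# ~3.3 MB/s (~27 Mbit/s). A link that is nowhere near saturated at
# that rate points at per-request latency (SFTP's request/response
# round trips) rather than raw bandwidth as the bottleneck.
```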
I've been thinking through possible solutions. Switching to something more performant like NFS doesn't really fit my use case: I need to back up several scattered directories (`/home`, `/opt`, `/etc`) without creating a separate volume for each, and the data would still need to travel remote host → ZeroByte server → repository anyway. Running ZeroByte locally on each host defeats the entire purpose of centralized management; without that, it's just another backup tool with a nice UI.

But what about executing restic locally on the remote host, similar to how rsync works over SSH? ZeroByte could coordinate by SSH-ing into remote hosts, running restic operations locally, and streaming results directly to the repository. This has been discussed in the restic community for 10+ years (restic/restic#299) without much progress. Maybe the solution isn't for restic to implement it, but for ZeroByte, as the orchestration layer, to do it.
This could give us the best of both worlds: centralized management and monitoring through ZeroByte's UI, plus local execution performance where data goes directly from source to repository. You'd keep the flexibility to backup arbitrary directory structures without the protocol overhead.
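A rough sketch of what that orchestration might look like. The host name, repository URL, and helper function are all hypothetical placeholders; `restic -r <repo> backup <paths> --json` is real restic command syntax, but nothing here is an actual ZeroByte implementation:

```python
import shlex

def build_remote_backup_cmd(host, repo, paths, user="backup"):
    """Build an `ssh` command that runs restic *on* the remote host,
    so data flows source -> repository without a middle hop.
    Hypothetical helper; the SSH account and repo URL are placeholders.
    The repo password would come from RESTIC_PASSWORD or a secret
    store on the remote side, never a command-line literal."""
    remote = "restic -r {} backup {} --json".format(
        shlex.quote(repo),
        " ".join(shlex.quote(p) for p in paths),
    )
    return ["ssh", f"{user}@{host}", remote]

cmd = build_remote_backup_cmd(
    "web01.example.com",
    "s3:s3.amazonaws.com/my-bucket/repo",
    ["/home", "/opt", "/etc"],
)
print(" ".join(cmd))
# subprocess.run(cmd, check=True) would execute it; parsing restic's
# --json output line by line would feed the UI's progress display.
```

Regarding the root-access concern raised earlier in the thread: one commonly cited mitigation is a dedicated non-root SSH user plus `setcap cap_dac_read_search=+ep` on the remote restic binary (a Linux capabilities technique mentioned in restic's documentation), which lets restic read system directories without granting full root login.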
I'd love to hear what others think about exploring this direction. Is this feasible? Would there be interest in something like this?