Hacker News

Is this a technical limitation? I'm not seeing anything in the docs that forces you to do a full backup after N incremental backups (maybe I missed this).


There is no technical limitation. You can do as many incremental backups as you want and are not forced to do a full backup after the initial one. As a practical matter, though, I've found it's a good idea to do an occasional full backup because the metadata can get out of sync (especially if something goes wrong, like losing your connection mid-way through an incremental backup).


In addition to the other reasons given, there's a reliability problem in relying on an ever-growing chain of incremental backups: you can only recover your latest state if every increment in the chain is uncorrupted.

I would much rather use an incremental backup that records the diffs in reverse, so that the latest state is stored explicitly, and the earlier versions are recovered through a chain of reverse diffs.

Duplicity's security goals make it hard to implement this kind of reverse diff structure.
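To make the reliability argument concrete, here is a minimal sketch (not duplicity's or rdiff-backup's actual format; it uses whole-file snapshots as stand-ins for real binary diffs) contrasting a forward incremental chain, where restoring the latest state requires replaying every increment, with a reverse-diff chain, where the latest state is stored whole:

```python
def restore_forward(full, increments):
    """Forward chain: replay every increment on top of the full backup.
    A single corrupt link anywhere in the chain breaks recovery of the
    LATEST state, which is usually the one you care about most."""
    state = dict(full)
    for inc in increments:              # inc maps path -> new content,
        for path, content in inc.items():  # or None for a deletion
            if content is None:
                state.pop(path, None)
            else:
                state[path] = content
    return state

def restore_reverse(latest, reverse_diffs, steps_back=0):
    """Reverse chain: the latest state is stored explicitly; older states
    are reached by applying reverse diffs. Corruption in an old diff only
    costs you that old version, never the latest."""
    state = dict(latest)
    for rd in reverse_diffs[:steps_back]:  # diffs[0] undoes the newest change
        for path, content in rd.items():
            if content is None:
                state.pop(path, None)
            else:
                state[path] = content
    return state
```

With a forward chain, `restore_forward` must walk every increment even to get the current state; with a reverse chain, `steps_back=0` returns the latest state with no diff application at all.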


You can do as many incrementals as you want, but you can't delete any.

So if you want to expire old backups after a while, you must first do a new full backup and then delete the entire old full+incrementals chain.

rdiff-backup stores diffs in reverse, so you can always easily delete the oldest ones.

Duplicity archives and encrypts all its files, so it really doesn't have much choice in the matter. rdiff-backup doesn't encrypt anything (it can't: it needs the full file on the other end for rsync to diff against).
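The pruning difference can be sketched in a few lines. This is a hypothetical in-memory structure, not rdiff-backup's actual on-disk layout, but it shows why expiring history under a reverse-diff scheme is just truncating the tail, with no new full backup required:

```python
class ReverseChain:
    """Toy reverse-diff backup store: the current state (the "mirror")
    is kept whole, and diffs[0] undoes the newest change."""

    def __init__(self, latest):
        self.latest = dict(latest)
        self.diffs = []

    def backup(self, new_state):
        """Record a new state and store a reverse diff back to the old one."""
        rd = {}
        for path in set(self.latest) | set(new_state):
            if self.latest.get(path) != new_state.get(path):
                rd[path] = self.latest.get(path)  # None == "didn't exist"
        self.diffs.insert(0, rd)
        self.latest = dict(new_state)

    def expire(self, keep):
        """Drop the oldest history. The mirror and the newer diffs are
        untouched; a forward chain can't prune its oldest link without
        rewriting the full backup it hangs off of."""
        del self.diffs[keep:]
```

For example, after two backups `len(c.diffs)` is 2; `c.expire(1)` silently discards the oldest version while `c.latest` and the remaining diff stay exactly as they were.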


You can do many, many incrementals, but the restoration process, especially from a remote source, is incredibly slow.


It's not that bad. I do it from a SheevaPlug over a 10 Mbit DSL link for a few dozen gigs regularly; it's as fast as the DSL will go.



