Re: [Question #150909]: reverse diffs
Question #150909 on Duplicity changed:
https://answers.launchpad.net/duplicity/+question/150909
Status: Open => Answered
edso proposed the following answer:
On 01.04.2011 16:40, ceg wrote:
> Question #150909 on Duplicity changed:
> https://answers.launchpad.net/duplicity/+question/150909
>
> Status: Answered => Open
>
> ceg is still having a problem:
> uh, so duplicity is no chunky bacon. ;-)
>
> Doing unnecessary full backups of hundreds of GB when only a couple of MB have changed is just too unfortunate.
> But having all data stored in a single data stream that causes everything behind a single corruption to be lost...
Well, not everything; just the changes that ended up in that volume.
There is still some hassle to deal with, though, because duplicity will
hiccup if gpg dies on a corrupted file.
Ultimately this is a legacy from the earliest days of duplicity, and the
maintainer is still looking for a new concept, as well as the time to
implement it. See
http://duplicity.nongnu.org/new_format.html
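
For illustration, here is a minimal sketch (not duplicity's own code; the
volume naming and passphrase setup are assumptions) of how you could check
which encrypted volumes are still readable:

    import glob, subprocess

    # Hypothetical volume names; duplicity's actual naming may differ.
    for vol in sorted(glob.glob("duplicity-*.vol*.difftar.gpg")):
        # gpg exits nonzero if the volume is corrupted or cannot be
        # decrypted; assumes the passphrase is supplied by gpg-agent.
        ok = subprocess.run(
            ["gpg", "--batch", "--decrypt", "--output", "/dev/null", vol],
            capture_output=True,
        ).returncode == 0
        print(("OK  " if ok else "BAD ") + vol)

Volumes that fail here are exactly the ones whose changes are lost; the
rest of the chain stays usable.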
> Well, many thanks edso for pointing that out!
np
>
> Shouldn't duplicity chunk the files to volumes first and then compress
> and gpg them?
That's what it does; gpg also handles the compression.
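
Conceptually, the per-volume step looks roughly like the sketch below (a
simplification, not duplicity's actual implementation; gpg compresses by
default before encrypting, so no separate gzip pass is needed):

    import subprocess

    def encrypt_volume(tar_chunk, out_path, passphrase):
        # gpg compresses the plain tar chunk itself before encrypting.
        # GnuPG 2.1+ may additionally need --pinentry-mode loopback
        # for --passphrase to work in batch mode.
        subprocess.run(
            ["gpg", "--batch", "--symmetric",
             "--passphrase", passphrase,
             "--compress-algo", "zlib",
             "--output", out_path, tar_chunk],
            check=True,
        )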
>
> Well, probably just switch from using .tar to .dar, for various reasons.
> Dar seems much better than .tar and already supports redundancy
> checksumming, etc.
>
I will look into it; still, it won't help, because we need redundancy
checksumming for the encrypted files, not for their contents.
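
As a stopgap until the format changes, recovery data can be generated over
the encrypted volumes themselves, e.g. with par2 (the volume pattern below
is an assumption):

    import glob, subprocess

    # Add ~10% recovery data per encrypted volume so small damaged
    # regions can be repaired in place with "par2 repair".
    for vol in glob.glob("duplicity-*.difftar.gpg"):
        subprocess.run(["par2", "create", "-r10", vol], check=True)

That protects the .gpg files on the backend without touching their
contents.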
..ede/duply.net