Re: SHA mismatch bug
I will get it into the trunk this afternoon. Thanks for the catch!
...Ken
On Mon, Jun 13, 2011 at 12:23 PM, Michael Terry <mike@xxxxxxxxxxx> wrote:
> Ken, any comment on feasibility of this patch, perhaps in time for
> this weekend's release?
> -mt
>
> On 7 June 2011 12:31, Michael Terry <mike@xxxxxxxxxxx> wrote:
> > I have a potential fix... It's a bit hackish, but seems to work.
> > Basically, when we resume, we just always redo the last block. There
> > could be a more elegant patch, where we download the last block and
> > check its hash, but this was easier.
> >
> > Ken, I can do the download-block-check-hash patch if ya like. I'm
> > also curious if you think there would be any side effects of redoing
> > the last block, with either version of the patch.
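
For illustration, a minimal sketch of the "redo the last block" idea; the
volume bookkeeping below is made up, not duplicity's actual restart code:

    def resume_volume(remote_volumes):
        """Pick the volume number to start (re)uploading from.

        remote_volumes is a sorted list of volume numbers already on the
        backend.  Instead of trusting the newest one, which may be a
        partial upload, we resume *at* it rather than after it, so a
        truncated volume simply gets overwritten with a complete copy.
        """
        if not remote_volumes:
            return 1
        return remote_volumes[-1]  # redo the last volume, not [-1] + 1
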
> >
> > The simple patch is attached as well as a testing script to reproduce
> > the bug reliably.
> >
> > -mt
> >
> >
> > On 7 June 2011 10:09, Michael Terry <mike@xxxxxxxxxxx> wrote:
> >> Hello! I was looking at
> >> https://bugs.launchpad.net/deja-dup/+bug/487720 again, which I'll
> >> summarize. If something goes wrong when backing up, a partial file
> >> can get left on the backup target. This partial file confuses
> >> duplicity: its sha1 sum doesn't match what duplicity expects for the
> >> complete file, and the user gets "Invalid data - SHA1 hash mismatch".
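
As a quick illustration of the failure: the sha1 recorded for the complete
volume can never match a truncated copy (the sketch below is generic, not
duplicity's code):

    import hashlib

    def sha1_of(path):
        """Return the SHA1 hex digest of a file, read in chunks."""
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(64 * 1024), b""):
                digest.update(block)
        return digest.hexdigest()

    # expected_sha1 comes from the manifest written at backup time.  If the
    # upload was interrupted, the file on the target is shorter than the
    # original volume, sha1_of() returns a different digest, and duplicity
    # reports "Invalid data - SHA1 hash mismatch".
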
> >>
> >> Now, it was originally thought this only affected ssh. More
> >> accurately, I think it affects at least the gio and local backends.
> >> The original report was from Deja Dup using the gio backend. Various
> >> me-too reports on the bug have discussed smb (I know that reporter was
> >> using gio), ftp, and s3.
> >>
> >> For gio, a temporary file will only be used if the target file already
> >> exists. Otherwise it just writes directly to the target. Local
> >> (file://) targets always just write directly.
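
For the local (file://) case at least, a write-to-temp-then-rename pattern
would avoid ever leaving a partial target; a generic sketch, not the actual
backend code:

    import os

    def put_atomically(source_path, target_path):
        """Copy source to target without exposing a partial target file.

        Data is written to a temporary name next to the target and only
        renamed into place once fully written and synced, so an interrupted
        copy leaves either the old file or no file, never a truncated one.
        """
        tmp_path = target_path + ".part"
        with open(source_path, "rb") as src, open(tmp_path, "wb") as dst:
            for block in iter(lambda: src.read(64 * 1024), b""):
                dst.write(block)
            dst.flush()
            os.fsync(dst.fileno())
        os.rename(tmp_path, target_path)  # atomic on the same filesystem
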
> >>
> >> A naive approach would be to add an extra layer to backend.py that
> >> deletes any partial target file that may have been written if an
> >> exception occurs during a write. But that wouldn't cover all cases,
> >> like the computer shutting down or duplicity receiving a kill signal.
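
A sketch of what that extra layer might look like; the put()/delete()
backend interface below is simplified and assumed, not duplicity's real
backend API:

    def put_with_cleanup(backend, source_path, remote_name):
        """Upload through a backend, deleting any partial remote file on failure.

        Assumes the backend object has put() and delete() methods; this only
        helps when the failure raises an exception inside this process.
        """
        try:
            backend.put(source_path, remote_name)
        except Exception:
            try:
                backend.delete(remote_name)  # best effort: drop the partial file
            except Exception:
                pass  # the cleanup itself failed; nothing more to do here
            raise
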
> >>
> >> Would it make sense to augment the resume code to check the sha1sum of
> >> the most recent chunk and restart that chunk if the sums don't match?
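
Roughly, that resume check could look like the sketch below; backend.get()
and the manifest lookup are assumed names, not actual duplicity calls:

    import hashlib
    import os
    import tempfile

    def last_volume_matches(backend, remote_name, expected_sha1):
        """Download the most recent volume and compare its SHA1 to the manifest.

        Returns True if the hash matches (safe to resume after this volume)
        and False if it does not (that volume should be redone).
        """
        local_copy = os.path.join(tempfile.mkdtemp(), remote_name)
        backend.get(remote_name, local_copy)
        digest = hashlib.sha1()
        with open(local_copy, "rb") as f:
            for block in iter(lambda: f.read(64 * 1024), b""):
                digest.update(block)
        return digest.hexdigest() == expected_sha1
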
> >>
> >> -mt
> >>
> >
>