duplicity-team team mailing list archive

Re: [Question #404018]: Forced disconnect of my internet connection during backup

Question #404018 on Duplicity changed:
https://answers.launchpad.net/duplicity/+question/404018

    Status: Open => Answered

edso proposed the following answer:
On 12.11.2016 16:37, nils wrote:
> New question #404018 on Duplicity:
> https://answers.launchpad.net/duplicity/+question/404018
> 
> Hi there,

hey Nils, long time no read ;)

> I'm facing an interesting situation and I'm sure that I'm not alone:
> I have a backup set that comprises far more data than my internet connection can handle within 24 hours (the set is about 200 GB, which should take roughly a week). 
> Unfortunately, my ISP disconnects my internet connection every 24 hours and assigns me a new IP. The backends I've tried so far (SSH and Dropbox) cannot handle the closed socket (even though connectivity is back after a few seconds). 
> I tried quite a few things but in the end failed. So, I have some questions:
> 1) Does it somehow harm the quality of the backup if I start the backup process over manually (or via a bash script) 20 times? I don't find it a good solution to resume the backup so often, but currently I see no other option. I would really appreciate your opinion on that.

it shouldn't, although resuming always carries a minimal risk that
wouldn't be there otherwise. i suggest you do regular verify runs to
make sure that your backups are in good shape.
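
e.g. a minimal sketch of such a rerun loop (the source path and sftp
target are placeholders, adjust them to your setup):

  #!/bin/bash
  # rerun until duplicity exits 0; each new run resumes the interrupted backup
  until duplicity /home/nils sftp://user@host//backup/nils; do
      echo "backup interrupted, retrying in 60s" >&2
      sleep 60
  done
  # afterwards compare the archive against the local files
  duplicity verify sftp://user@host//backup/nils /home/nils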

> 2) Are there or will there be any backends that can handle such a
> situation? In principle, it's pretty simple. The backend "only" would
> have to start over authentication and reconnect completely in case of a
> permanent error (at least trying this in case of a permanent error would
> be very useful).

not as such. resuming only happens when duplicity, on a new backup run,
detects that the conditions are right to resume.
however, what backends can do is retry, and you may fine-tune the retry
behaviour via --num-retries. the delay between retries is currently
hardcoded as 30s in
http://bazaar.launchpad.net/~duplicity-team/duplicity/0.8-series/view/head:/duplicity/backend.py#L400
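
for example (the sftp target is a placeholder), 20 retries at 30s each
bridge roughly 10 minutes of downtime:

  duplicity --num-retries 20 /home/nils sftp://user@host//backup/nils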

> 3) Is anybody here encountering the same problem and maybe found a
> different solution that I did not yet think of?

probably. the usual workaround mentioned on the ml for issues like
that is to back up to a local file:// target and then use rsync, or
some other means, to sync it to the backend of your choice. this way
the backup process does not get interrupted by uplink issues.
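
roughly like this (local and remote paths are placeholders):

  # 1. back up to a local target, unaffected by uplink drops
  duplicity /home/nils file:///var/backups/nils
  # 2. push the volumes to the remote; --partial lets rsync pick up
  #    interrupted files instead of restarting them on reconnect
  rsync -av --partial /var/backups/nils/ user@host:/backup/nils/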

have fun ..ede/duply.net

-- 
You received this question notification because your team duplicity-team
is an answer contact for Duplicity.