Re: [Question #404018]: Forced disconnect of my internet connection during backup
Question #404018 on Duplicity changed:
https://answers.launchpad.net/duplicity/+question/404018
edso proposed the following answer:
On 13.11.2016 16:17, nils wrote:
> nils posted a new comment:
>
>> Hi there,
>
> hey Nils, long time no read ;)
>
> Nils: Indeed, currently a huge amount of my spare time somehow ends up
> going into duplicity :-)
>
>> I'm facing an interesting situation and I'm sure I'm not alone:
>> I have a backup set that comprises far more data than my internet connection can handle within 24 hours (the set is about 200 GB, which should take roughly a week).
>> Unfortunately, my ISP disconnects my internet connection every 24 hours and assigns me a new IP, and the backends I've tried so far (SSH and Dropbox) cannot handle the closed socket (even though connectivity is back after a few seconds).
>> I tried quite a few things but failed in the end. So, I have some questions:
>> 1) Does it somehow harm the quality of the backup if I start the backup process over manually (or via a bash script) 20 times? I don't find it a good solution to resume the backup so often, but currently I see no other option. I would really appreciate your opinion on that.
>
> it shouldn't, although resuming always carries a minimal risk that
> wouldn't be there otherwise. i suggest you do regular verify runs to
> make sure that your backups are in good shape (see the sketch below).
>
> Nils: OK, I guess, I'll consider going this way then.
>
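re 1): for reference, a restart wrapper along these lines should do. this is an untested sketch; SRC and TARGET are placeholders for your own source dir and backend URL:

    #!/bin/bash
    SRC=/data
    TARGET=sftp://user@host/backup

    # re-run duplicity until it exits cleanly; every fresh run is a new
    # process that authenticates from scratch, so a dropped socket only
    # costs the current attempt
    until duplicity "$SRC" "$TARGET"; do
        echo "duplicity exited non-zero, retrying in 60s..." >&2
        sleep 60
    done

    # verify afterwards, as suggested above, to catch resume glitches
    duplicity verify "$TARGET" "$SRC"
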
>> 2) Are there or will there be any backends that can handle such a
>> situation? In principle, it's pretty simple: the backend would "only"
>> have to restart authentication and reconnect completely in case of a
>> permanent error (at least attempting this on a permanent error would
>> be very useful).
>
> not as such. resuming is only done when duplicity detects, on a new backup run, that the conditions are right to resume.
> however, what backends can do is retry, and you may fine-tune the retry behaviour via --num-retries (example below). the delay between attempts is currently hardcoded as 30s in http://bazaar.launchpad.net/~duplicity-team/duplicity/0.8-series/view/head:/duplicity/backend.py#L400
>
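for illustration, something like this (the paths and sftp URL are placeholders):

    # retry each failing transfer up to 20 times; the pause between
    # attempts is the hardcoded 30s mentioned above
    duplicity --num-retries 20 /data sftp://user@host/backup
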
> Nils: I already played around with the timeouts and the number of
> retries, but that does not help as (with the forced disconnect) the
> complete socket is gone. The backends would have to go one step further
> and repeat authentication against the backend server, but they don't do
> that :-(
pretty sure there are some that do, e.g. WebDAV or pexpect+ssh.
if you need it, consider patching the backends or filing a bug report
wrt. this issue.
>> 3) Is anybody here encountering the same problem and has maybe found
>> a different solution that I did not yet think of?
>
> probably. the usual workaround mentioned on the ml for issues like
> that is to back up to a local file:// target and use rsync or some
> other means to sync it to the backend of your choice. this way the
> backup process does not get interrupted by uplink issues (sketch below).
>
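for anyone else reading along, that workaround boils down to something like this (untested sketch; paths and host are placeholders):

    # 1) back up to a local directory -- no network, so no interruptions
    duplicity /data file:///mnt/staging

    # 2) mirror the resulting archive files to the remote; --partial lets
    #    rsync resume an interrupted transfer instead of restarting it
    rsync -av --partial /mnt/staging/ user@host:backups/
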
> Nils: Yes, I read that. For me that does not work as it would consume
> disk space that I don't have :-(
>
makes sense.. ede/duply.net