duplicity-team team mailing list archive
Message #01169
Re: [Question #193846]: These backup times seem very excessive
Question #193846 on Duplicity changed:
https://answers.launchpad.net/duplicity/+question/193846
edso proposed the following answer:
On 22.04.2012 22:30, Kai-Alexander Ude wrote:
> Question #193846 on Duplicity changed:
> https://answers.launchpad.net/duplicity/+question/193846
>
> Kai-Alexander Ude proposed the following answer:
> Installed the last version of duplicity. Still the same problems :-(
OK, please check, as Ken proposed, whether your system is swapping because
duplicity's processes are filling up your RAM.
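On Linux this can be checked while the backup runs, for example with the procps tools (assuming they are installed):

```shell
# Nonzero si/so (swap-in/swap-out) columns while duplicity is running
# indicate the system is actively swapping:
vmstat 1 3

# Overall memory and swap usage at a glance:
free -m
```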
> Upload from temp dir to the target is still not the problem, 100 MB/s.
> But creating temp files in /var/tmp is very slow.
Did you make sure that the filesystem holding /var/tmp is not the issue here?
>
> Currently incremental backup is running.
> The temp file size is increasing in 200kb/s steps.
> However, some temp files are packed in less than 5 minutes.
Maybe it gets sporadically slow?
> Maybe it could be a problem caused by compression.
> Is there an alternative compression method?
Duplicity leaves compression to GnuPG. If you add the appropriate
--gpg-options you can change the algorithm and level.
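As a sketch (the source path and target URL are placeholders; --gpg-options passes the quoted string straight through to gpg):

```shell
# Disable gpg's compression entirely (useful when the data is
# already compressed, or when CPU is the bottleneck):
duplicity --gpg-options="--compress-algo=none" /srv/data sftp://backup@host/target

# Or pick a cheaper algorithm and level instead of the default:
duplicity --gpg-options="--compress-algo=zlib --compress-level=1" /srv/data sftp://backup@host/target
```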
BTW, it seems that nearly your whole backup changes on incrementals. For files such as database dumps we usually suggest not compressing them beforehand, as compression makes it impossible for librsync to detect only the changed portions within a file; compression is then done during the backup instead.
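To illustrate why pre-compressed dumps defeat delta detection, here is a standalone sketch using Python's zlib on synthetic data (not duplicity or librsync itself, but the same effect applies):

```python
import zlib

# Two synthetic "database dumps" that differ in a single early record.
dump_v1 = b"1,alice,100\n" * 5000
dump_v2 = b"1,alice,999\n" + b"1,alice,100\n" * 4999

# Uncompressed, the change stays local: everything after the first
# 12-byte record is byte-identical, so a delta tool like librsync
# can reuse almost the whole file.
assert dump_v1[12:] == dump_v2[12:]

# Compressed, the early change ripples through the entire stream
# (different coding tables, match offsets, checksum), so a byte-range
# comparison finds almost nothing in common to reuse.
z1, z2 = zlib.compress(dump_v1), zlib.compress(dump_v2)
print(len(z1), len(z2))    # small outputs, but the streams differ
print(z1[-8:] == z2[-8:])  # even the tails no longer match
```

This is why the advice is to dump uncompressed and let the backup compress: the delta step then sees the localized change, and compression still happens once, during the backup.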
..ede/duply.net
--
You received this question notification because you are a member of
duplicity-team, which is an answer contact for Duplicity.