
duplicity-team team mailing list archive

Re: [Question #193846]: These backup times seem very excessive

 

Question #193846 on Duplicity changed:
https://answers.launchpad.net/duplicity/+question/193846

Kai-Alexander Ude proposed the following answer:
I installed version 0.6.18 three days ago.

What I did so far:
- installed 0.6.18 and have been doing backups (full / incr.) with this version
-> since the upgrade, CPU load is never more than 2.00
-> before the upgrade, CPU load was about 5 and higher

- changed config: volsize from 1000 to 250 ... and back
-> nothing changed

- changed config: temp dir from /var/tmp (my default) to /tmp and to a 1 Gb/s network share ... and back to default
-> nothing changed

- changed config: gpg options to '--compress-algo=bzip2 --bzip2-compress-level=9'; this config line had been commented out the whole time
-> still waiting for the result, but ...
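For reference, the tweaks above correspond to lines like these in the duply profile's conf file (a sketch assuming a standard duply setup; the profile path and the habit of appending --volsize via DUPL_PARAMS are assumptions, not taken from the original post):

```shell
# Excerpt from ~/.duply/<profile>/conf (sketch only; values from the steps above)

# Pass bzip2 compression options through to gpg
GPG_OPTS='--compress-algo=bzip2 --bzip2-compress-level=9'

# Volume size in MB (tried 250 instead of the previous 1000)
VOLSIZE=250
DUPL_PARAMS="$DUPL_PARAMS --volsize $VOLSIZE"

# Temp dir used by duplicity (tried /tmp instead of the default /var/tmp)
TEMP_DIR=/tmp
```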

... I cautiously think that since I upgraded to the newest version there isn't really an issue any more (except the CPU (over)load). I think there are no problems with duplicity, and no problems with the hard drive, RAM, etc.
Kenneth could well be right: 1.5 million files are a lot. Mainly small files, mail and a lot of CMS web pages.

Currently I have configured one duply profile for everything I would like to back up (server config, mail and web).
Anything else is excluded. Maybe optimizing the backup process could be helpful.
Does anyone have experience with structuring backup processes around duplicity?
I googled for that, but found only small examples ...
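One possible way to optimize, given the file count: split the single profile into several smaller per-area profiles (each with its own conf and exclude list under ~/.duply/) and run them at staggered times. A cron sketch, where the profile names "mail", "web" and "sysconf" are hypothetical:

```
# /etc/cron.d/duply-backups (sketch; profile names are examples, not from the post)
30 1 * * *  root  /usr/bin/duply mail backup
30 2 * * *  root  /usr/bin/duply web backup
30 3 * * *  root  /usr/bin/duply sysconf backup
```

Smaller profiles keep each duplicity run's file scan and signature set smaller, and a failure in one area does not block the others.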

I will try my best. Trial and error :-)
Thank you for your help and time.
I'll keep you informed ...

-- 
You received this question notification because you are a member of
duplicity-team, which is an answer contact for Duplicity.