
Re: [Question #271336]: Parallelizing compression and gpg

 

The ThreadZip package is not what we would use.  We need to parallelize
gpg itself, since gpg is what does the compression as well as the
encryption.  I don't have time to work on this at the moment, but it is
on the plan for someday.
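
To illustrate what that would mean, here is a rough sketch only, nothing
duplicity actually does today: split the volume data into chunks and run
several gpg processes side by side.  The chunk names and the "backup-key"
recipient below are made up.

    # Rough sketch: encrypt a handful of chunk files in parallel,
    # one gpg process per chunk.  Assumes gpg is on the PATH and the
    # chunk files already exist; "backup-key" is a placeholder key id.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def encrypt(path):
        # gpg compresses and encrypts in a single pass; -o names the output.
        subprocess.check_call(
            ["gpg", "--batch", "--yes", "--recipient", "backup-key",
             "--encrypt", "-o", path + ".gpg", path])
        return path + ".gpg"

    chunks = ["vol1.part1", "vol1.part2", "vol1.part3"]
    with ThreadPoolExecutor(max_workers=4) as pool:
        encrypted = list(pool.map(encrypt, chunks))

Each gpg process compresses and encrypts its own chunk, so several cores
get used at once.  The real work is wiring something like that into
duplicity's volume handling, which is why it hasn't happened yet.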

At the moment, the best we can do is to use --asynchronous-upload to make
the upload and gpg steps go at the same time.
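
For example (the source path and backend URL are just placeholders):

    duplicity --asynchronous-upload /data sftp://user@backuphost/duplicity

That lets the next volume be compressed and encrypted while the previous
one is still uploading.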

...Ken


On Sun, Sep 13, 2015 at 4:57 AM, alex <question271336@xxxxxxxxxxxxxxxxxxxxx>
wrote:

> New question #271336 on Duplicity:
> https://answers.launchpad.net/duplicity/+question/271336
>
> Dear duplicity developers,
>
> I have been using duplicity for a while now to back up very large files
> (>500 GB) on really fast machines (Xeon CPUs, SSDs, and RAID10).
>
> Wondering about duplicity's speed, I noticed that it only utilizes one
> CPU core and that the storage idles at about 20-30 MB/s.
>
> What about parallelizing duplicity's compression and gpg usage? I had a
> look at https://code.google.com/p/threadzip/ and it doesn't seem to be
> very complex. I am a programmer but do not dare to implement it myself
> because of my lack of experience in Python.
>
> So, is someone willing to implement this? I can test it. GZIP
> parallelization would be a good start for me.
>
> Regards
> Alex
>
