
duplicity-team team mailing list archive

[Question #271336]: Parallelizing compression and gpg


New question #271336 on Duplicity:
https://answers.launchpad.net/duplicity/+question/271336

Dear duplicity developers,

I have been using duplicity for a while now to back up very large files (>500 GB) on really fast machines (Xeon CPUs, SSDs, and RAID10).

Wondering about duplicity's speed, I saw that it only utilizes one CPU core, and the storage sits nearly idle at about 20-30 MB/s.

What about parallelizing duplicity's compression and GPG steps? I had a look at https://code.google.com/p/threadzip/ and it doesn't seem to be very complex. I am a programmer, but I don't dare to implement it myself because of my lack of experience in Python.
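For illustration, here is a minimal sketch of the chunked approach threadzip takes, using only the Python standard library. This is not duplicity code; the file paths and chunk size are placeholder assumptions. Each chunk is gzip-compressed in a separate worker process, and the resulting gzip members are concatenated, which still yields a stream that plain gunzip can decompress:

    # Sketch of threadzip-style parallel gzip compression (not duplicity code).
    # Assumptions: input/output paths come from the command line, chunk size is arbitrary.
    import gzip
    import sys
    from concurrent.futures import ProcessPoolExecutor

    CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB per worker; tune as needed


    def read_chunks(path):
        # Yield fixed-size chunks of the input file.
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                yield chunk


    def main(src, dst):
        with ProcessPoolExecutor() as pool, open(dst, "wb") as out:
            # pool.map preserves input order, so the concatenated gzip
            # members decompress back to the original byte stream.
            for member in pool.map(gzip.compress, read_chunks(src)):
                out.write(member)


    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])

Since duplicity already splits a backup into volumes, I imagine the GPG step could be parallelized in a similar way by encrypting several volumes concurrently, but that is just a guess on my part.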

So, is someone willing to implement this? I can test it. Parallelizing GZIP would be a good start for me.

Regards
Alex


