
duplicity-team team mailing list archive

Re: [Question #265766]: High outbound bandwidth usage


Question #265766 on Duplicity changed:
https://answers.launchpad.net/duplicity/+question/265766

Description changed to:
Hi,

I've been using duplicity (through duply) for quite some time, sending backups to a remote FTP server. Recently I've been running out of disk space there, so I tried moving to S3, which I expected to be quite cheap, since it's mostly sending data and hardly ever requesting any.
However, after storing the initial backup (about 35 GB), I found that it had already consumed all of my free-tier outbound bandwidth (15 GB). Three very small incremental backups later, it had used another 16 GB.
It also used 5,290 PUT/POST requests (which I assume is due to the multiprocessing option) and 5,011 GET requests, which I suppose are causing all the outbound traffic.
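A rough back-of-envelope check of those numbers (assuming the roughly 31 GB of billed outbound traffic really is produced by the 5,011 GET requests):

```shell
# Average outbound payload per GET request, using the figures from above.
# Assumption: 15 GB (free tier) + 16 GB (after the incrementals) ~= 31 GB
# of outbound traffic, all attributable to the 5,011 GETs.
outbound_gb=31
get_requests=5011
awk -v gb="$outbound_gb" -v n="$get_requests" \
    'BEGIN { printf "%.1f MB per GET\n", gb * 1024 / n }'
```

That works out to several megabytes per GET on average, i.e. roughly the size of a backup volume rather than a small metadata read.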

So, I'm not very familiar with how exactly S3 works, but I thought this
wouldn't be normal behaviour for S3 applications.

My question is basically whether this is expected behaviour and whether
there is any way to prevent duplicity from using so much bandwidth while
uploading backups.

Thanks in advance.

André

# duplicity --version
duplicity 0.7.02

# status
Start duply v1.5.5.4, time is 2015-04-23 16:27:13.
Using profile '/root/.duply/twc_backup'.
Using installed duplicity version 0.7.02, python 2.7.3, gpg 1.4.11 (Home: ~/.gnupg), awk 'mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan', bash '4.2.25(1)-release (x86_64-pc-linux-gnu)'.
Test - En/Decryption skipped. (GPG disabled)

--- Start running command STATUS at 16:27:13.274 ---
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Tue Apr 21 09:42:04 2015
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /root/.cache/duplicity/duply_twc_backup

Found 0 secondary backup chains.

Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Tue Apr 21 09:42:04 2015
Chain end time: Thu Apr 23 05:00:02 2015
Number of contained backup sets: 4
Total number of contained volumes: 718
 Type of backup set:                            Time:      Num volumes:
                Full         Tue Apr 21 09:42:04 2015               703
         Incremental         Wed Apr 22 05:00:01 2015                 1
         Incremental         Wed Apr 22 09:57:22 2015                 1
         Incremental         Thu Apr 23 05:00:02 2015                13
-------------------------
No orphaned or incomplete backup sets found.
--- Finished state OK at 16:27:14.742 - Runtime 00:00:01.467 ---


EDIT:

I'm sorry, I forgot to show how exactly I use duplicity.

# this is my duply profile
TARGET='s3://s3-eu-west-1.amazonaws.com/<bucket>/<folder>'
TARGET_USER=$AWS_ACCESS_KEY_ID
TARGET_PASS=$AWS_SECRET_ACCESS_KEY
SOURCE='/home'
MAX_AGE=60D
MAX_FULL_BACKUPS=2
MAX_FULLBKP_AGE=30D
DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE --s3-european-buckets --s3-use-new-style --s3-use-rrs --s3-use-multiprocessing "

# and this is how I call it from my cron
duply twc_backup backup >/dev/null
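For reference, the profile above roughly corresponds to the following direct duplicity invocation (a sketch only: duply assembles the actual command line, and I'm assuming it exports TARGET_USER/TARGET_PASS as the AWS credential variables and appends DUPL_PARAMS verbatim; the bucket path is a placeholder as in the profile):

```shell
# Credentials as duplicity's S3 (boto) backend expects them.
export AWS_ACCESS_KEY_ID='<access key>'
export AWS_SECRET_ACCESS_KEY='<secret key>'

# Roughly what "duply twc_backup backup" runs under the hood.
duplicity \
    --full-if-older-than 30D \
    --s3-european-buckets --s3-use-new-style \
    --s3-use-rrs --s3-use-multiprocessing \
    /home 's3://s3-eu-west-1.amazonaws.com/<bucket>/<folder>'
```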

-- 
You received this question notification because you are a member of
duplicity-team, which is an answer contact for Duplicity.