
duplicity-team team mailing list archive

[Merge] lp:~mgorse/duplicity/0.8-series into lp:duplicity/0.7-series

 

Mgorse has proposed merging lp:~mgorse/duplicity/0.8-series into lp:duplicity/0.7-series.

Requested reviews:
  duplicity-team (duplicity-team)

For more details, see:
https://code.launchpad.net/~mgorse/duplicity/0.8-series/+merge/363564


-- 
The attached diff has been truncated due to its size.
Your team duplicity-team is requested to review the proposed merge of lp:~mgorse/duplicity/0.8-series into lp:duplicity/0.7-series.
=== modified file '.bzrignore'
--- .bzrignore	2017-07-11 14:55:38 +0000
+++ .bzrignore	2019-02-22 19:07:43 +0000
@@ -2,10 +2,12 @@
 *.log
 .DS_Store
 ._.DS_Store
+.cache
 .eggs
 .idea
 .project
 .pydevproject
+.pytest_cache
 .settings
 .tox
 __pycache__
@@ -18,4 +20,5 @@
 random_seed
 testfiles
 testing/gnupg/.gpg-v21-migrated
+testing/gnupg/S.*
 testing/gnupg/private-keys-v1.d

=== added file 'AUTHORS'
--- AUTHORS	1970-01-01 00:00:00 +0000
+++ AUTHORS	2019-02-22 19:07:43 +0000
@@ -0,0 +1,36 @@
+Duplicity Authors
+-----------------
+
+- Aaron Whitehouse <aaron@xxxxxxxxxxxxxxxxxx>
+- Alexander Zangerl <az@xxxxxxxxxxxxx>
+- Andrea Grandi <a.grandi@xxxxxxxxx>
+- Ben Escoto <ben@xxxxxxxxxxx>
+- Carlos Abalde <carlos.abalde@xxxxxxxxx>
+- Dmitry Nezhevenko <dion@xxxxxxxxxxx>
+- Edgar Soldin <edgar.soldin@xxxxxx>
+- Eric EJ Johnson <ej.johnson@xxxxxxxxxxxxx>
+- Fabian Topfstedt <topfstedt@xxxxxxxxxxxxxxxxxxx>
+- Frank J. Tobin <ftobin@xxxxxxxxxxxxxxx>
+- Germar Reitze <germar.reitze@xxxxxxxxx>
+- Gu1
+- Havard Gulldahl
+- Henrique Carvalho Alves <hcarvalhoalves@xxxxxxxxx>
+- Hynek Schlawack
+- Ian Barton <ian@xxxxxxxxxxxxxx>
+- J.P. Krauss <jkrauss@xxxxxxxxxxxxx>
+- jno <jno@xxxxxxxxx>
+- Kenneth Loafman <kenneth@xxxxxxxxxxx>
+- Marcel Pennewiss <opensource@xxxxxxxxxxxx>
+- Matthew Bentley
+- Matthieu Huin <mhu@xxxxxxxxxxxx>
+- Michael Stapelberg <stapelberg+duplicity@xxxxxxxxxx>
+- Michael Terry <mike@xxxxxxxxxxx>
+- Peter Schuller <peter.schuller@xxxxxxxxxxxx>
+- Roman Yepishev <rye@xxxxxxxxxxxxxxx>
+- Scott McKenzie <noizyland@xxxxxxxxx>
+- Stefan Breunig <stefan-duplicity@xxxxxxxxxxx>
+- Steve Tynor <steve.tynor@xxxxxxxxx>
+- Thomas Harning Jr <harningt@xxxxxxxxx>
+- Tomas Vondra (Launchpad id: tomas-v)
+- Xavier Lucas <xavier.lucas@xxxxxxxxxxxx>
+- Yigal Asnis

=== modified file 'CHANGELOG'
--- CHANGELOG	2018-12-17 17:10:11 +0000
+++ CHANGELOG	2019-02-22 19:07:43 +0000
@@ -198,6 +198,7 @@
 * Copy.com is gone so remove copycombackend.py.
 * Merged in lp:~xlucas/duplicity/swift-multibackend-bug
   - Fix a bug when swift backend is used in a multibackend configuration.
+<<<<<<< TREE
 * Merged in lp:~duplicity-team/duplicity/po-updates
 
 
@@ -306,6 +307,196 @@
   - Revert log.Error to log.Warn, as it was prior to the merge in rev 1224,
     as this was affecting other applications (e.g. deja dup; Bug #1605939).
 * Merged in lp:~duplicity-team/duplicity/po-updates
+=======
+* Fixed problem in dist/makedist when building on Mac where AppleDouble
+  files were being created in the tarball.  See:
+  https://superuser.com/questions/61185/why-do-i-get-files-like-foo-in-my-tarball-on-os-x
+* Merged in lp:~dawgfoto/duplicity/replicate
+  - Add integration test for newly added replicate command.
+  - Also see https://code.launchpad.net/~dawgfoto/duplicity/replicate/+merge/322836.
+* Merged in lp:~xlucas/duplicity/multibackend-prefix-affinity
+  - Support prefix affinity in multibackend.
+* Merged in lp:~xlucas/duplicity/pca-backend
+  - Add support for OVH Public Cloud Archive backend.
+* Fixed PEP8 and 2to3 issues.
+* Patched in lp:~dawgfoto/duplicity/skip_sync_collection_status
+  - collection-status should not sync metadata
+  - up-to-date local metadata is not needed as collection-status is
+    generated from remote file list
+  - syncing metadata might require to download several GBs
+* Fixed slowness in 'collection-status' by basing the status on the
+  remote system only.  The local cache is treated as empty.
+* Fixed encrypted remote manifest handling to merely put out a non-fatal
+  error message and continue if the private key is not available.
+* Merged in lp:~mterry/duplicity/giobackend-display-name
+  - giobackend: handle a wider variety of gio backends by making less assumptions;
+    in particular, this fixes the google-drive: backend
+* Fixed bug #1709047 with suggestion from Gary Hasson
+  - fixed so default was to use original filename
+* Fixed PEP8 errors in bin/duplicity
+* Merged in lp:~mterry/duplicity/gio_child_for_display_name
+  - gio: be slightly more correct and get child GFiles based on display name
+* Fixed bug #1711905 with suggestion from Schneider
+  - log.Warn was invoked with log.warn in webdavbackend.py
+* Merged in lp:~mterry/duplicity/gpg-tag-versions
+  - Support gpg version numbers that have tags on them.
+  - This can happen if you build gpg from git trunk (e.g. 2.1.15-beta20). Or if you run
+    against the freedesktop flatpak runtime (e.g. 2.1.14-unknown).
+* Fixed bug #1394386 with new module megabackend.py from Tomas Vondra
+  - uses megatools from https://megatools.megous.com/ instead of mega.py library
+    which has been deprecated
+  - fixed copyright and PEP8 issues
+  - replaced subprocess.call() with self.subprocess_popen() to standardize
+* Fixed bug #1538333 Assertion error in manifest.py: assert filecount == ...
+  - Made sure to never pass .part files as true manifest files
+  - Changed assert to log.Error to warn about truncated/corrupt filelist
+  - Added unit test to make sure detection works
+  - Note: while this condition is serious, it will not affect the basic backup and restore
+    functions.  Interactive options like --list-files-changed and --file-changed will not
+    work correctly for this backup set, so it is advised to run a full backup as soon as
+    possible after this error occurs.
+* Fixed bug #1638033 Remove leading slash on --file-to-restore
+  - code already used rstrip('/') so change to just strip('/')
+* Fixed bug introduced in new megabackend.py where process_commandline()
+  takes a string not a list.  Now it takes both.
+* Updated web page for new megabackend requirements.
+* Merged in lp:~mterry/duplicity/more-decode-issues
+  - Here's some fixes for another couple UnicodeDecodeErrors.
+  - The duplicity/dup_time.py fixes when a user passes a utf8 date string (or a string with bogus
+    utf8 characters, but they have to really try to do that). This is bug 1334436.
+  - The bin/duplicity change from str(e) to util.uexc(e) fixes bug 1324188.
+  - The rest of the changes (util.exception_traceback and bin/duplicity changes to use it) are to
+    make the printing of exceptions prettier. Without this, if you see a French exception, you see
+    "accept\xe9es" instead of "acceptées".
+  - You can test all of these changes in one simple line:
+    $ LANGUAGE=fr duplicity remove-older-than $'accept\xffées'
+* Fix backend.py to allow string, list, and tuple types to support megabackend.py.
+* Fixed bug #1715650 with patch from Mattheww S
+  - Fix to make duplicity attempt a get first, then create, a container
+    in order to support container ACLs.
+* Fixed bug #1714663 "Volume signed by XXXXXXXXXXXXXXXX, not XXXXXXXX"
+  - Normalized comparison length to min length of compared keys before comparison
+  - Avoids comparing mix of short, long, or fingerprint size keys.
+* Patched in lp:~mterry/duplicity/rename-dep
+  - Make rename command a dependency for LP build
+* Fixed bug #1654756 with new b2backend.py module from Vincent Rouille
+  - Faster (big files are uploaded in chunks)
+  - Added upload progress reporting support
+* Fixed bug #1448094 with patch from Wolfgang Rohdewald
+  - Don't log incremental deletes for chains that have no incrementals
+* Fixed bug #1724144 "--gpg-options unused with some commands"
+  - Add --gpg-options to get version run command
+* Fixed bug #1720159 - Cannot allocate memory with large manifest file since 0.7.03
+  - filelist is not read if --file-changed option in collection-status not present
+  - This will keep memory usage lower in non collection-status operations
+* Fixed bug #1723890 with patch from Killian Lackhove
+  - Fixes error handling in pydrivebackend.py
+* Fixed bug #1730902 GPG Error Handling
+  - use util.ufn() not str() to handle encoding
+* Fixed bug #1733057 AttributeError: 'GPGError' object has no attribute 'decode'
+  - Replaced call to util.ufn() with call to util.uexc().  Stupid typo!
+* More fixes for Unicode handling
+  - Default to 'utf-8' if sys.getfilesystemencoding() returns 'ascii' or None
+  - Fixed bug #1386373 with suggestion from Eugene Morozov
+* Merged in lp:~crosser/duplicity/fix-oauth-flow
+  - Fixed bug #1638236 "BackendException with oauth2client 4.0.0"
+* Merged in lp:~crosser/duplicity/dpbx-fix-file-listing
+  - Fixed bug #1639664 "Dropbox support needs to be updated for Dropbox SDK v7.1"
+* Merged in lp:~crosser/duplicity/fix-small-file-upload
+  - Fixed small file upload changes made in Dropbox SDK v7.1
+* Converted to use pytest instead of unittest (setup.py test is now discouraged)
+  - We use @pytest.mark.nocapture to mark the tests (gpg) that require
+    no capture of file streams (currently 10 tests).
+  - The rest of the tests are run normally
+* More pytest changes
+  - Use requirements.txt for dependencies
+  - Run unit tests first, then functional
+  - Some general cleanup
+* Merged in lp:~aaron-whitehouse/duplicity/08-ufn-to-uc_name
+  - Replace util.ufn(path.name) with path.uc_name throughout.
+* Merged in lp:~aaron-whitehouse/duplicity/08-ufn-to-fsdecode
+  - Change util.fsdecode to use "replace" instead of "ignore" (matching behaviour of util.ufn)
+  - Replace all uses of ufn with fsdecode
+  - Make backend.tobytes use util.fsencode rather than reimplementing
+* Reduce dependencies on backend libraries
+  - Moved backend imports into backend class __init__ method
+  - Surrounded imports with try/except to allow better errors
+  - Put all library dependencies in requirements.txt
+* Merged in lp:~dawgfoto/duplicity/fixup1251
+  - Avoid redundant replication of already present backup sets.
+  - Fixed by adding back BackupSet.__eq__ which was accidentally(?) removed in 1251.
+* Merged in lp:~dawgfoto/duplicity/fixup1252
+  * only check decryptable remote manifests
+    - fixup of revision 1252 which introduces a non-fatal error message (see #1729796)
+    - for backups the GPG private key and/or its password are typically not available
+    - also avoid interactive password queries through e.g. gpg agent
+* Fixed bug #1768954 with patch from Max Hallden
+  - Add AZURE_ENDPOINT_SUFFIX environ variable to allow setting to non-U.S. servers
+* Fixed bug #1717935 with suggestion from strainu
+  - Use urllib.quote_plus() to properly quote pathnames passed via URL
+* Merged in lp:~aaron-whitehouse/duplicity/08-pycodestyle
+  - Tox changes to accommodate new pycodestyle version warnings.
+    Ignored W504 for now and marked as a TODO.
+    Marked W503 as a permanent ignore, as it is preferred to the (mutually exclusive) W504 under PEP8.
+  - Marked various regex strings as raw strings to avoid the new W605 "invalid escape sequence".
+* Merged in lp:~aaron-whitehouse/duplicity/08-unadorned-strings
+  - Added new script to find unadorned strings (testing/find_unadorned_strings.py python_file)
+    which prints all unadorned strings in a .py file.
+  - Added a new test to test_code.py that checks across all files for unadorned strings and gives
+    an error if any are found (most files are in an ignore list at this stage, but this will allow
+    us to incrementally remove the exceptions as we adorn the strings in each file).
+  - Adorn string literals in test_code.py with u/b
+* Fixed bug #1780617 Test fail when GnuPG >= 2.2.8
+  - Relevant change in GnuPG 2.2.8: https://dev.gnupg.org/T3981
+  - Added '--ignore-mdc-error' to all gpg calls made.
+* Merged in lp:~aaron-whitehouse/duplicity/08-adorn-strings
+  - Adorning string literals (normally to make these unicode), in support of a transition to Python 3.
+    See https://blueprints.launchpad.net/duplicity/+spec/adorn-string-literals
+  - Adorn string in duplicity/globmatch.py.
+  - Adorn strings in testing/unit/test_globmatch.py
+  - Adorn strings in selection.py
+  - Adorn strings in functional/test_selection.py and unit/test_selection.py
+  - Remove ignores for these files in test_code.py
+* Added function to fix unadorned strings (testing/fix_unadorned_strings.py)
+  - Fixes by inserting 'u' before token string
+  - Solves 99.9% of the use cases we have
+  - Fix unadorned strings to unicode in bin/duplicity and bin/rdiffdir
+  - Add import for __future__.print_function to find_unadorned_strings.py
+* Fixed unadorned strings to unicode in duplicity/backends/*
+  - Some fixup due to shifting indentation not matching PEP8.
+* Reverted back to rev 1317 and reimplemented revs 1319 to 1322
+* Adorned strings in testing/, testing/functional/, and testing/unit
+* Added AUTHORS file listing all copyright claimants in headers
+* Merged in lp:~mgorse/duplicity/0.8-series
+  - Adorn some duplicity/*.py strings. I've avoided submitting anything that I think might require
+    significant discussion; I think that reviewing will be easier this way. Mostly annotated strings
+    as unicode, except for librsync.py.
+* Merged in lp:~qsantos/duplicity/fix-unmatched-rule-error
+  - There are actually two commits: the first fixes a very minor detail in the README regarding the
+    Python version that should be used; the second fixes the way exceptions are handled when an
+    incorrect rule is specified, and display the nice error message rather than an obscure stack trace.
+* Merged in lp:~mgorse/duplicity/0.8-series
+  - Adorn some remaining strings
+* Merged in lp:~mgorse/duplicity/0.8-series
+  - Run futurize --stage1, and adjust so that tests still pass.
+* Fixed bug #1797797 with patch from Bas Hulsken
+  - use bytes instead of unicode for '/' in filenames
+* Merged in lp:~mcuelenaere/duplicity/duplicity
+  - Make sure we don't load files completely into memory when transferring them from/to
+    the remote WebDAV endpoint.
+* Fixed bug #1798206 and bug #1798504
+  - Made paramiko a global with import during __init__ so it would
+    not be loaded unless needed.
+* Merged in lp:~mgorse/duplicity/0.8-series
+  - First pass at a python 3 port.
+* Fixed bug #1803896 with patch from slawekbunka
+  - Add __enter__ and __exit__ to B2ProgressListener
+* Merged in lp:~okrasz/duplicity/duplicity
+  - Add --azure-blob-tier that specifies storage tier
+    (Hot,Cool,Archive) used for uploaded files.
+* Merged in lp:~vam9/duplicity/0.8-series-s3-kms-support
+  - Added s3 kms server side encryption support with kms-grants support.
+>>>>>>> MERGE-SOURCE
 
 
 New in v0.7.08 (2016/07/02)
@@ -346,8 +537,21 @@
   - Set line length error length to 120 (matching tox.ini) for PEP8 and
     fixed E501(line too long) errors.
 * Merged in lp:~duplicity-team/duplicity/po-updates
+<<<<<<< TREE
 * Fix bug using 40-char sign keys, from Richard McGraw on mail list
   - Remove truncation of argument and adjust comments
+=======
+* Merged in lp:~aaron-whitehouse/duplicity/08-unicode
+  - Many strings have been changed to unicode for better handling of international
+    characters and to make the transition to Python 3 significantly easier, primarily
+    on the 'local' side of duplicity (selection, commandline arguments etc) rather
+    than any backends etc.
+* Fixes so pylint 1.8.1 does not complain about missing conditional imports.
+  - Fix dpbxbackend so that imports require instantiation of the class.
+  - Added pylint: disable=import-error to a couple of conditional imports
+* Merged in lp:~excitablesnowball/duplicity/s3-onezone-ia
+  - Add option --s3-use-onezone-ia S3 One Zone Infrequent Access Storage
+>>>>>>> MERGE-SOURCE
 
 
 New in v0.7.07.1 (2016/04/19)

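For context on the pytest conversion noted in the entries above, @pytest.mark.nocapture is a custom marker. The following is a minimal sketch of how such a marker is registered and selected; the conftest.py hook and test name are hypothetical and this is not duplicity's actual test configuration:

    # conftest.py (illustrative sketch)
    def pytest_configure(config):
        # Register the custom marker so pytest does not warn about unknown markers.
        config.addinivalue_line(
            "markers", "nocapture: gpg test that must run without captured file streams")

    # test_gpg_example.py (hypothetical test module)
    import pytest

    @pytest.mark.nocapture
    def test_gpg_roundtrip():
        pass  # gpg needs direct access to the terminal file streams here

The marked tests can then be run separately with capture disabled (pytest -m nocapture -s) while the remaining tests run normally (pytest -m "not nocapture").
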
=== modified file 'Changelog.GNU'
--- Changelog.GNU	2018-12-17 17:10:11 +0000
+++ Changelog.GNU	2019-02-22 19:07:43 +0000
@@ -1,3 +1,4 @@
+<<<<<<< TREE
 2018-10-17  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
 
     * Fixed bug #1798206 and bug #1798504
@@ -224,6 +225,330 @@
       - giobackend: handle a wider variety of gio backends by making less assumptions;
         in particular, this fixes the google-drive: backend
 
+=======
+2019-01-25  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~vam9/duplicity/0.8-series-s3-kms-support
+      - Added s3 kms server side encryption support with kms-grants support.
+
+2019-01-06  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~okrasz/duplicity/duplicity
+      - Add --azure-blob-tier that specifies storage tier
+        (Hot,Cool,Archive) used for uploaded files.
+
+2018-12-27  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1803896 with patch from slawekbunka
+      - Add __enter__ and __exit__ to B2ProgressListener
+
+2018-12-23  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~mgorse/duplicity/0.8-series
+      - First pass at a python 3 port.
+
+2018-10-17  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1798206 and bug #1798504
+      - Made paramiko a global with import during __init__ so it would
+        not be loaded unless needed.
+
+2018-12-15  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~mcuelenaere/duplicity/duplicity
+      - Make sure we don't load files completely into memory when transferring them from/to
+        the remote WebDAV endpoint.
+
+2018-10-16  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~mgorse/duplicity/0.8-series
+      - Run futurize --stage1, and adjust so that tests still pass.
+    * Fixed bug #1797797 with patch from Bas Hulsken
+      - use bytes instead of unicode for '/' in filenames
+
+2018-10-11  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~mgorse/duplicity/0.8-series
+      - Adorn some remaining strings
+
+2018-08-17  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~qsantos/duplicity/fix-unmatched-rule-error
+      - There are actually two commits: the first fixes a very minor detail in the README regarding the
+        Python version that should be used; the second fixes the way exceptions are handled when an
+        incorrect rule is specified, and display the nice error message rather than an obscure stack trace.
+
+2018-08-17  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~mgorse/duplicity/0.8-series
+      - Adorn some duplicity/*.py strings. I've avoided submitting anything that I think might require
+        significant discussion; I think that reviewing will be easier this way. Mostly annotated strings
+        as unicode, except for librsync.py.
+
+2018-08-01  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Added AUTHORS file listing all copyright claimants in headers
+
+2018-07-26  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Adorned strings in testing/, testing/functional/, and testing/unit
+
+2018-07-24  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Reverted back to rev 1317 and reimplemented revs 1319 to 1322
+
+2018-07-23  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed unadorned strings to unicode in duplicity/backends/*
+      - Some fixup due to shifting indentation not matching PEP8.
+
+2018-07-22  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Added function to fix unadorned strings (testing/fix_unadorned_strings.py)
+      - Fixes by inserting 'u' before token string
+      - Solves 99.9% of the use cases we have
+      - Fix unadorned strings to unicode in bin/duplicity and bin/rdiffdir
+      - Add import for __future__.print_function to find_unadorned_strings.py
+
+2018-07-20  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~aaron-whitehouse/duplicity/08-adorn-strings
+      - Adorning string literals (normally to make these unicode), in support of a transition to Python 3.
+        See https://blueprints.launchpad.net/duplicity/+spec/adorn-string-literals
+      - Adorn string in duplicity/globmatch.py.
+      - Adorn strings in testing/unit/test_globmatch.py
+      - Adorn strings in selection.py
+      - Adorn strings in functional/test_selection.py and unit/test_selection.py
+      - Remove ignores for these files in test_code.py
+
+2018-07-08  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1780617 Test fail when GnuPG >= 2.2.8
+      - Relevant change in GnuPG 2.2.8: https://dev.gnupg.org/T3981
+      - Added '--ignore-mdc-error' to all gpg calls made.
+
+2018-07-07  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~excitablesnowball/duplicity/s3-onezone-ia
+      - Add option --s3-use-onezone-ia S3 One Zone Infrequent Access Storage
+
+2018-06-09  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~aaron-whitehouse/duplicity/08-pycodestyle
+      - Tox changes to accommodate new pycodestyle version warnings.
+        Ignored W504 for now and marked as a TODO.
+        Marked W503 as a permanent ignore, as it is preferred to the (mutually exclusive) W504 under PEP8.
+      - Marked various regex strings as raw strings to avoid the new W605 "invalid escape sequence".
+    * Merged in lp:~aaron-whitehouse/duplicity/08-unadorned-strings
+      - Added new script to find unadorned strings (testing/find_unadorned_strings.py python_file)
+        which prints all unadorned strings in a .py file.
+      - Added a new test to test_code.py that checks across all files for unadorned strings and gives
+        an error if any are found (most files are in an ignore list at this stage, but this will allow
+        us to incrementally remove the exceptions as we adorn the strings in each file).
+      - Adorn string literals in test_code.py with u/b
+
+2018-05-07  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1768954 with patch from Max Hallden
+      - Add AZURE_ENDPOINT_SUFFIX environ variable to allow setting to non-U.S. servers
+    * Fixed bug #1717935 with suggestion from strainu
+      - Use urllib.quote_plus() to properly quote pathnames passed via URL
+
+2018-05-01  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~dawgfoto/duplicity/fixup1252
+      * only check decryptable remote manifests
+        - fixup of revision 1252 which introduces a non-fatal error message (see #1729796)
+        - for backups the GPG private key and/or its password are typically not available
+        - also avoid interactive password queries through e.g. gpg agent
+
+2018-01-21  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~dawgfoto/duplicity/fixup1251
+      - Avoid redundant replication of already present backup sets.
+      - Fixed by adding back BackupSet.__eq__ which was accidentally(?) removed in 1251.
+
+2017-12-24  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Reduce dependencies on backend libraries
+      - Moved backend imports into backend class __init__ method
+      - Surrounded imports with try/except to allow better errors
+      - Put all library dependencies in requirements.txt
+
+2017-12-22  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~aaron-whitehouse/duplicity/08-ufn-to-fsdecode
+      - Change util.fsdecode to use "replace" instead of "ignore" (matching behaviour of util.ufn)
+      - Replace all uses of ufn with fsdecode
+      - Make backend.tobytes use util.fsencode rather than reimplementing
+
+2017-12-20  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixes so pylint 1.8.1 does not complain about missing conditional imports.
+      - Fix dpbxbackend so that imports require instantiation of the class.
+      - Added pylint: disable=import-error to a couple of conditional imports
+
+2017-12-14  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~aaron-whitehouse/duplicity/08-ufn-to-uc_name
+      - Replace util.ufn(path.name) with path.uc_name throughout.
+
+2017-12-13  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * More pytest changes
+      - Use requirements.txt for dependencies
+      - Run unit tests first, then functional
+      - Some general cleanup
+
+2017-12-12  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Converted to use pytest instead of unittest (setup.py test is now discouraged)
+      - We use @pytest.mark.nocapture to mark the tests (gpg) that require
+        no capture of file streams (currently 10 tests).
+      - The rest of the tests are run normally
+
+2017-12-03  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~aaron-whitehouse/duplicity/08-unicode
+      - Many strings have been changed to unicode for better handling of international
+        characters and to make the transition to Python 3 significantly easier, primarily
+        on the 'local' side of duplicity (selection, commandline arguments etc) rather
+        than any backends etc.
+
+2017-11-28  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~crosser/duplicity/fix-small-file-upload
+      - Fixed small file upload changes made in Dropbox SDK v7.1
+
+2017-11-25  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~crosser/duplicity/fix-oauth-flow
+      - Fixed bug #1638236 "BackendException with oauth2client 4.0.0"
+    * Merged in lp:~crosser/duplicity/dpbx-fix-file-listing
+      - Fixed bug #1639664 "Dropbox support needs to be updated for Dropbox SDK v7.1"
+
+2017-11-23  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * More fixes for Unicode handling
+      - Default to 'utf-8' if sys.getfilesystemencoding() returns 'ascii' or None
+      - Fixed bug #1386373 with suggestion from Eugene Morozov
+
+2017-11-18  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1733057 AttributeError: 'GPGError' object has no attribute 'decode'
+      - Replaced call to util.ufn() with call to util.uexc().  Stupid typo!
+
+2017-11-09  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1730902 GPG Error Handling
+      - use util.ufn() not str() to handle encoding
+
+2017-11-01  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1723890 with patch from Killian Lackhove
+      - Fixes error handling in pydrivebackend.py
+
+2017-10-31  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1720159 - Cannot allocate memory with large manifest file since 0.7.03
+      - filelist is not read if --file-changed option in collection-status not present
+      - This will keep memory usage lower in non collection-status operations
+
+2017-10-26  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1448094 with patch from Tomáš Zvala
+      - Don't log incremental deletes for chains that have no incrementals
+    * Fixed bug #1724144 "--gpg-options unused with some commands"
+      - Add --gpg-options to get version run command
+
+2017-10-16  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1654756 with new b2backend.py module from Vincent Rouille
+      - Faster (big files are uploaded in chunks)
+      - Added upload progress reporting support
+
+2017-10-12  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Patched in lp:~mterry/duplicity/rename-dep
+      - Make rename command a dependency for LP build
+
+2017-09-22  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1714663 "Volume signed by XXXXXXXXXXXXXXXX, not XXXXXXXX"
+      - Normalized comparison length to min length of compared keys before comparison
+      - Avoids comparing mix of short, long, or fingerprint size keys.
+
+2017-09-13  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1715650 with patch from Mattheww S
+      - Fix to make duplicity attempt a get first, then create, a container
+        in order to support container ACLs.
+
+2017-09-07  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~mterry/duplicity/more-decode-issues
+      - Here's some fixes for another couple UnicodeDecodeErrors.
+      - The duplicity/dup_time.py fixes when a user passes a utf8 date string (or a string with bogus
+        utf8 characters, but they have to really try to do that). This is bug 1334436.
+      - The bin/duplicity change from str(e) to util.uexc(e) fixes bug 1324188.
+      - The rest of the changes (util.exception_traceback and bin/duplicity changes to use it) are to
+        make the printing of exceptions prettier. Without this, if you see a French exception, you see
+        "accept\xe9es" instead of "acceptées".
+      - You can test all of these changes in one simple line:
+        $ LANGUAGE=fr duplicity remove-older-than $'accept\xffées'
+    * Fix backend.py to allow string, list, and tuple types to support megabackend.py.
+
+2017-09-06  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug introduced in new megabackend.py where process_commandline()
+      takes a string not a list.  Now it takes both.
+    * Updated web page for new megabackend requirements.
+
+2017-09-01  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1538333 Assertion error in manifest.py: assert filecount == ...
+      - Made sure to never pass .part files as true manifest files
+      - Changed assert to log.Error to warn about truncated/corrupt filelist
+      - Added unit test to make sure detection works
+      - Note: while this condition is serious, it will not affect the basic backup and restore
+        functions.  Interactive options like --list-files-changed and --file-changed will not
+        work correctly for this backup set, so it is advised to run a full backup as soon as
+        possible after this error occurs.
+    * Fixed bug #1638033 Remove leading slash on --file-to-restore
+      - code already used rstrip('/') so change to just strip('/')
+
+2017-08-29  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1394386 with new module megabackend.py from Tomas Vondra
+      - uses megatools from https://megatools.megous.com/ instead of mega.py library
+        which has been deprecated
+      - fixed copyright and PEP8 issues
+      - replaced subprocess.call() with self.subprocess_popen() to standardize
+
+2017-08-28  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1711905 with suggestion from Schneider
+      - log.Warn was invoked with log.warn in webdavbackend.py
+    * Merged in lp:~mterry/duplicity/gpg-tag-versions
+      - Support gpg version numbers that have tags on them.
+      - This can happen if you build gpg from git trunk (e.g. 2.1.15-beta20). Or if you run
+        against the freedesktop flatpak runtime (e.g. 2.1.14-unknown).
+
+2017-08-15  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Fixed bug #1709047 with suggestion from Gary Hasson
+      - fixed so default was to use original filename
+    * Fixed PEP8 errors in bin/duplicity
+    * Merged in lp:~mterry/duplicity/gio_child_for_display_name
+      - gio: be slightly more correct and get child GFiles based on display name
+
+2017-08-06  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+
+    * Merged in lp:~mterry/duplicity/giobackend-display-name
+      - giobackend: handle a wider variety of gio backends by making less assumptions;
+        in particular, this fixes the google-drive: backend
+
+>>>>>>> MERGE-SOURCE
 2017-07-20  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
 
     * Fixed encrypted remote manifest handling to merely put out a non-fatal

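The unadorned-string check described in the entries above scans .py files for string literals that lack a u/b prefix. Below is a minimal sketch of such a scan using Python's standard tokenize module; it is illustrative only and does not reproduce the actual testing/find_unadorned_strings.py:

    import sys
    import token
    import tokenize

    def report_unadorned_strings(path):
        # Print every STRING token whose literal has no u/b prefix.
        with open(path) as source:
            for tok in tokenize.generate_tokens(source.readline):
                tok_type, text, (line, _col) = tok[0], tok[1], tok[2]
                if tok_type == token.STRING and text.lstrip("rR")[:1] in ("'", '"'):
                    print("%s:%d unadorned string: %s" % (path, line, text))

    if __name__ == "__main__":
        report_unadorned_strings(sys.argv[1])

A companion test (as described for test_code.py) can then fail whenever a file outside the ignore list still reports unadorned strings.
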
=== modified file 'README'
--- README	2017-08-06 21:10:28 +0000
+++ README	2019-02-22 19:07:43 +0000
@@ -19,7 +19,11 @@
 
 REQUIREMENTS:
 
+<<<<<<< TREE
  * Python v2.6 or later
+=======
+ * Python v2.7
+>>>>>>> MERGE-SOURCE
  * librsync v0.9.6 or later
  * GnuPG for encryption
  * fasteners 0.14.1 or later for concurrency locking

=== modified file 'bin/duplicity'
--- bin/duplicity	2018-09-28 13:55:53 +0000
+++ bin/duplicity	2019-02-22 19:07:43 +0000
@@ -27,6 +27,16 @@
 # Please send mail to me or the mailing list if you find bugs or have
 # any suggestions.
 
+<<<<<<< TREE
+=======
+from builtins import filter
+from builtins import next
+from builtins import map
+from builtins import range
+from builtins import object
+import duplicity.errors
+import copy
+>>>>>>> MERGE-SOURCE
 import gzip
 import os
 import sys
@@ -37,6 +47,12 @@
 import statvfs
 import resource
 import re
+<<<<<<< TREE
+=======
+import resource
+from os import statvfs
+import sys
+>>>>>>> MERGE-SOURCE
 import threading
 from datetime import datetime
 import fasteners
@@ -64,15 +80,19 @@
 from duplicity import progress
 
 
+<<<<<<< TREE
 if '--pydevd' in sys.argv or os.getenv('PYDEVD', None):
     import pydevd  # @UnresolvedImport
+=======
+if u'--pydevd' in sys.argv or os.getenv(u'PYDEVD', None):
+    import pydevd  # pylint: disable=import-error
+>>>>>>> MERGE-SOURCE
     pydevd.settrace()
     # In a dev environment the path is screwed so fix it.
     base = sys.path.pop(0)
     base = base.split(os.path.sep)[:-1]
     base = os.path.sep.join(base)
     sys.path.insert(0, base)
-# end remote debugger startup
 
 # If exit_val is not None, exit with given value at end.
 exit_val = None
@@ -83,12 +103,13 @@
     # in non-ascii characters.
     import getpass
     import locale
-    message = message.encode(locale.getpreferredencoding(), 'replace')
+    if sys.version_info.major == 2:
+        message = message.encode(locale.getpreferredencoding(), u'replace')
     return getpass.getpass(message)
 
 
 def get_passphrase(n, action, for_signing=False):
-    """
+    u"""
     Check to make sure passphrase is indeed needed, then get
     the passphrase from environment, from gpg-agent, or user
 
@@ -109,9 +130,9 @@
     # First try the environment
     try:
         if for_signing:
-            return os.environ['SIGN_PASSPHRASE']
+            return os.environ[u'SIGN_PASSPHRASE']
         else:
-            return os.environ['PASSPHRASE']
+            return os.environ[u'PASSPHRASE']
     except KeyError:
         pass
 
@@ -120,16 +141,16 @@
     if (for_signing and
             (globals.gpg_profile.sign_key in globals.gpg_profile.recipients or
              globals.gpg_profile.sign_key in globals.gpg_profile.hidden_recipients) and
-             'PASSPHRASE' in os.environ):  # noqa
-        log.Notice(_("Reuse configured PASSPHRASE as SIGN_PASSPHRASE"))
-        return os.environ['PASSPHRASE']
+             u'PASSPHRASE' in os.environ):  # noqa
+        log.Notice(_(u"Reuse configured PASSPHRASE as SIGN_PASSPHRASE"))
+        return os.environ[u'PASSPHRASE']
     # if one encryption key is also the signing key assume that the passphrase is identical
     if (not for_signing and
             (globals.gpg_profile.sign_key in globals.gpg_profile.recipients or
              globals.gpg_profile.sign_key in globals.gpg_profile.hidden_recipients) and
-             'SIGN_PASSPHRASE' in os.environ):  # noqa
-        log.Notice(_("Reuse configured SIGN_PASSPHRASE as PASSPHRASE"))
-        return os.environ['SIGN_PASSPHRASE']
+             u'SIGN_PASSPHRASE' in os.environ):  # noqa
+        log.Notice(_(u"Reuse configured SIGN_PASSPHRASE as PASSPHRASE"))
+        return os.environ[u'SIGN_PASSPHRASE']
 
     # Next, verify we need to ask the user
 
@@ -140,34 +161,34 @@
 
     # no passphrase if --no-encryption or --use-agent
     if not globals.encryption or globals.use_agent:
-        return ""
+        return u""
 
     # these commands don't need a password
-    elif action in ["collection-status",
-                    "list-current",
-                    "remove-all-but-n-full",
-                    "remove-all-inc-of-but-n-full",
-                    "remove-old",
+    elif action in [u"collection-status",
+                    u"list-current",
+                    u"remove-all-but-n-full",
+                    u"remove-all-inc-of-but-n-full",
+                    u"remove-old",
                     ]:
-        return ""
+        return u""
 
     # for a full backup, we don't need a password if
     # there is no sign_key and there are recipients
-    elif (action == "full" and
+    elif (action == u"full" and
           (globals.gpg_profile.recipients or globals.gpg_profile.hidden_recipients) and not
           globals.gpg_profile.sign_key and not globals.restart):
-        return ""
+        return u""
 
     # for an inc backup, we don't need a password if
     # there is no sign_key and there are recipients
-    elif (action == "inc" and
+    elif (action == u"inc" and
           (globals.gpg_profile.recipients or globals.gpg_profile.hidden_recipients) and not
           globals.gpg_profile.sign_key and not globals.restart):
-        return ""
+        return u""
 
     # Finally, ask the user for the passphrase
     else:
-        log.Info(_("PASSPHRASE variable not set, asking user."))
+        log.Info(_(u"PASSPHRASE variable not set, asking user."))
         use_cache = True
         while 1:
             # ask the user to enter a new passphrase to avoid an infinite loop
@@ -182,29 +203,29 @@
                     if use_cache and globals.gpg_profile.signing_passphrase:
                         pass1 = globals.gpg_profile.signing_passphrase
                     else:
-                        pass1 = getpass_safe(_("GnuPG passphrase for signing key:") + " ")
+                        pass1 = getpass_safe(_(u"GnuPG passphrase for signing key:") + u" ")
                 else:
                     if use_cache and globals.gpg_profile.passphrase:
                         pass1 = globals.gpg_profile.passphrase
                     else:
-                        pass1 = getpass_safe(_("GnuPG passphrase:") + " ")
+                        pass1 = getpass_safe(_(u"GnuPG passphrase:") + u" ")
 
             if n == 1:
                 pass2 = pass1
             elif for_signing:
-                pass2 = getpass_safe(_("Retype passphrase for signing key to confirm: "))
+                pass2 = getpass_safe(_(u"Retype passphrase for signing key to confirm: "))
             else:
-                pass2 = getpass_safe(_("Retype passphrase to confirm: "))
+                pass2 = getpass_safe(_(u"Retype passphrase to confirm: "))
 
             if not pass1 == pass2:
-                log.Log(_("First and second passphrases do not match!  Please try again."),
+                log.Log(_(u"First and second passphrases do not match!  Please try again."),
                         log.WARNING, force_print=True)
                 use_cache = False
                 continue
 
             if not pass1 and not (globals.gpg_profile.recipients or
                                   globals.gpg_profile.hidden_recipients) and not for_signing:
-                log.Log(_("Cannot use empty passphrase with symmetric encryption!  Please try again."),
+                log.Log(_(u"Cannot use empty passphrase with symmetric encryption!  Please try again."),
                         log.WARNING, force_print=True)
                 use_cache = False
                 continue
@@ -213,7 +234,7 @@
 
 
 def dummy_backup(tarblock_iter):
-    """
+    u"""
     Fake writing to backend, but do go through all the source paths.
 
     @type tarblock_iter: tarblock_iter
@@ -224,7 +245,7 @@
     """
     try:
         # Just spin our wheels
-        while tarblock_iter.next():
+        while next(tarblock_iter):
             pass
     except StopIteration:
         pass
@@ -233,7 +254,7 @@
 
 
 def restart_position_iterator(tarblock_iter):
-    """
+    u"""
     Fake writing to backend, but do go through all the source paths.
     Stop when we have processed the last file and block from the
     last backup.  Normal backup will proceed at the start of the
@@ -249,7 +270,7 @@
     last_block = globals.restart.last_block
     try:
         # Just spin our wheels
-        iter_result = tarblock_iter.next()
+        iter_result = next(tarblock_iter)
         while iter_result:
             if (tarblock_iter.previous_index == last_index):
                 # If both the previous index and this index are done, exit now
@@ -261,23 +282,23 @@
                 if last_block and tarblock_iter.previous_block > last_block:
                     break
             if tarblock_iter.previous_index > last_index:
-                log.Warn(_("File %s complete in backup set.\n"
-                           "Continuing restart on file %s.") %
+                log.Warn(_(u"File %s complete in backup set.\n"
+                           u"Continuing restart on file %s.") %
                          (util.uindex(last_index), util.uindex(tarblock_iter.previous_index)),
                          log.ErrorCode.restart_file_not_found)
                 # We went too far! Stuff the data back into place before restarting
                 tarblock_iter.queue_index_data(iter_result)
                 break
-            iter_result = tarblock_iter.next()
+            iter_result = next(tarblock_iter)
     except StopIteration:
-        log.Warn(_("File %s missing in backup set.\n"
-                   "Continuing restart on file %s.") %
+        log.Warn(_(u"File %s missing in backup set.\n"
+                   u"Continuing restart on file %s.") %
                  (util.uindex(last_index), util.uindex(tarblock_iter.previous_index)),
                  log.ErrorCode.restart_file_not_found)
 
 
 def write_multivol(backup_type, tarblock_iter, man_outfp, sig_outfp, backend):
-    """
+    u"""
     Encrypt volumes of tarblock_iter and write to backend
 
     backup_type should be "inc" or "full" and only matters here when
@@ -297,7 +318,7 @@
     """
 
     def get_indicies(tarblock_iter):
-        """Return start_index and end_index of previous volume"""
+        u"""Return start_index and end_index of previous volume"""
         start_index, start_block = tarblock_iter.recall_index()
         if start_index is None:
             start_index = ()
@@ -313,6 +334,7 @@
         return start_index, start_block, end_index, end_block
 
     def validate_block(orig_size, dest_filename):
+<<<<<<< TREE
         """
         Compare the remote size to the local one to ensure the transfer went
         through.
@@ -331,13 +353,19 @@
                         util.escape(dest_filename),
                         orig_size))
             time.sleep(1 + (attempt * 0.5))
+=======
+        info = backend.query_info([dest_filename])[dest_filename]
+        size = info[u'size']
+        if size is None:
+            return  # error querying file
+>>>>>>> MERGE-SOURCE
         if size != orig_size:
-            code_extra = "%s %d %d" % (util.escape(dest_filename), orig_size, size)
-            log.FatalError(_("File %s was corrupted during upload.") % util.ufn(dest_filename),
+            code_extra = u"%s %d %d" % (util.escape(dest_filename), orig_size, size)
+            log.FatalError(_(u"File %s was corrupted during upload.") % util.fsdecode(dest_filename),
                            log.ErrorCode.volume_wrong_size, code_extra)
 
     def put(tdp, dest_filename, vol_num):
-        """
+        u"""
         Retrieve file size *before* calling backend.put(), which may (at least
         in case of the localbackend) rename the temporary file to the target
         instead of copying.
@@ -351,7 +379,7 @@
         return putsize
 
     def validate_encryption_settings(backup_set, manifest):
-        """
+        u"""
         When restarting a backup, we have no way to verify that the current
         passphrase is the same as the one used for the beginning of the backup.
         This is because the local copy of the manifest is unencrypted and we
@@ -366,8 +394,8 @@
                                         encrypted=globals.encryption,
                                         gzipped=globals.compression)
         if vol1_filename != backup_set.volume_name_dict[1]:
-            log.FatalError(_("Restarting backup, but current encryption "
-                             "settings do not match original settings"),
+            log.FatalError(_(u"Restarting backup, but current encryption "
+                             u"settings do not match original settings"),
                            log.ErrorCode.enryption_mismatch)
 
         # Settings are same, let's check passphrase itself if we are encrypted
@@ -389,7 +417,7 @@
         validate_encryption_settings(globals.restart.last_backup, mf)
         mf.fh = man_outfp
         last_block = globals.restart.last_block
-        log.Notice(_("Restarting after volume %s, file %s, block %s") %
+        log.Notice(_(u"Restarting after volume %s, file %s, block %s") %
                    (globals.restart.start_vol,
                     util.uindex(globals.restart.last_index),
                     globals.restart.last_block))
@@ -442,7 +470,7 @@
         # Add volume information to manifest
         vi = manifest.VolumeInfo()
         vi.set_info(vol_num, *get_indicies(tarblock_iter))
-        vi.set_hash("SHA1", gpg.get_hash("SHA1", tdp))
+        vi.set_hash(u"SHA1", gpg.get_hash(u"SHA1", tdp))
         mf.add_volume_info(vi)
 
         # Checkpoint after each volume so restart has a place to restart.
@@ -459,14 +487,14 @@
                                                         (tdp, dest_filename, vol_num)))
 
         # Log human-readable version as well as raw numbers for machine consumers
-        log.Progress(_('Processed volume %d') % vol_num, diffdir.stats.SourceFileSize)
+        log.Progress(_(u'Processed volume %d') % vol_num, diffdir.stats.SourceFileSize)
         # Snapshot (serialize) progress now as a Volume has been completed.
         # This is always the last restore point when it comes to restart a failed backup
         if globals.progress:
             progress.tracker.snapshot_progress(vol_num)
 
         # for testing purposes only - assert on inc or full
-        assert globals.fail_on_volume != vol_num, "Forced assertion for testing at volume %d" % vol_num
+        assert globals.fail_on_volume != vol_num, u"Forced assertion for testing at volume %d" % vol_num
 
     # Collect byte count from all asynchronous jobs; also implicitly waits
     # for them all to complete.
@@ -480,7 +508,7 @@
 
 
 def get_man_fileobj(backup_type):
-    """
+    u"""
     Return a fileobj opened for writing, save results as manifest
 
     Save manifest in globals.archive_dir gzipped.
@@ -492,7 +520,7 @@
     @rtype: fileobj
     @return: fileobj opened for writing
     """
-    assert backup_type == "full" or backup_type == "inc"
+    assert backup_type == u"full" or backup_type == u"inc"
 
     part_man_filename = file_naming.get(backup_type,
                                         manifest=True,
@@ -511,7 +539,7 @@
 
 
 def get_sig_fileobj(sig_type):
-    """
+    u"""
     Return a fileobj opened for writing, save results as signature
 
     Save signatures in globals.archive_dir gzipped.
@@ -523,7 +551,7 @@
     @rtype: fileobj
     @return: fileobj opened for writing
     """
-    assert sig_type in ["full-sig", "new-sig"]
+    assert sig_type in [u"full-sig", u"new-sig"]
 
     part_sig_filename = file_naming.get(sig_type,
                                         gzipped=False,
@@ -542,8 +570,13 @@
 
 
 def full_backup(col_stats):
+<<<<<<< TREE
     """
     Do full backup of directory to backend, using archive_dir
+=======
+    u"""
+    Do full backup of directory to backend, using archive_dir_path
+>>>>>>> MERGE-SOURCE
 
     @type col_stats: CollectionStatus object
     @param col_stats: collection status
@@ -568,11 +601,11 @@
         bytes_written = dummy_backup(tarblock_iter)
         col_stats.set_values(sig_chain_warning=None)
     else:
-        sig_outfp = get_sig_fileobj("full-sig")
-        man_outfp = get_man_fileobj("full")
+        sig_outfp = get_sig_fileobj(u"full-sig")
+        man_outfp = get_man_fileobj(u"full")
         tarblock_iter = diffdir.DirFull_WriteSig(globals.select,
                                                  sig_outfp)
-        bytes_written = write_multivol("full", tarblock_iter,
+        bytes_written = write_multivol(u"full", tarblock_iter,
                                        man_outfp, sig_outfp,
                                        globals.backend)
 
@@ -600,7 +633,7 @@
 
 
 def check_sig_chain(col_stats):
-    """
+    u"""
     Get last signature chain for inc backup, or None if none available
 
     @type col_stats: CollectionStatus object
@@ -608,17 +641,17 @@
     """
     if not col_stats.matched_chain_pair:
         if globals.incremental:
-            log.FatalError(_("Fatal Error: Unable to start incremental backup.  "
-                             "Old signatures not found and incremental specified"),
+            log.FatalError(_(u"Fatal Error: Unable to start incremental backup.  "
+                             u"Old signatures not found and incremental specified"),
                            log.ErrorCode.inc_without_sigs)
         else:
-            log.Warn(_("No signatures found, switching to full backup."))
+            log.Warn(_(u"No signatures found, switching to full backup."))
         return None
     return col_stats.matched_chain_pair[0]
 
 
 def print_statistics(stats, bytes_written):
-    """
+    u"""
     If globals.print_statistics, print stats after adding bytes_written
 
     @rtype: void
@@ -626,13 +659,18 @@
     """
     if globals.print_statistics:
         diffdir.stats.TotalDestinationSizeChange = bytes_written
-        logstring = diffdir.stats.get_stats_logstring(_("Backup Statistics"))
+        logstring = diffdir.stats.get_stats_logstring(_(u"Backup Statistics"))
         log.Log(logstring, log.NOTICE, force_print=True)
 
 
 def incremental_backup(sig_chain):
+<<<<<<< TREE
     """
     Do incremental backup of directory to backend, using archive_dir
+=======
+    u"""
+    Do incremental backup of directory to backend, using archive_dir_path
+>>>>>>> MERGE-SOURCE
 
     @rtype: void
     @return: void
@@ -643,7 +681,7 @@
             time.sleep(2)
             dup_time.setcurtime()
             assert dup_time.curtime != dup_time.prevtime, \
-                "time not moving forward at appropriate pace - system clock issues?"
+                u"time not moving forward at appropriate pace - system clock issues?"
 
     if globals.progress:
         progress.tracker = progress.ProgressTracker()
@@ -663,12 +701,12 @@
                                          sig_chain.get_fileobjs())
         bytes_written = dummy_backup(tarblock_iter)
     else:
-        new_sig_outfp = get_sig_fileobj("new-sig")
-        new_man_outfp = get_man_fileobj("inc")
+        new_sig_outfp = get_sig_fileobj(u"new-sig")
+        new_man_outfp = get_man_fileobj(u"inc")
         tarblock_iter = diffdir.DirDelta_WriteSig(globals.select,
                                                   sig_chain.get_fileobjs(),
                                                   new_sig_outfp)
-        bytes_written = write_multivol("inc", tarblock_iter,
+        bytes_written = write_multivol(u"inc", tarblock_iter,
                                        new_man_outfp, new_sig_outfp,
                                        globals.backend)
 
@@ -694,7 +732,7 @@
 
 
 def list_current(col_stats):
-    """
+    u"""
     List the files current in the archive (examining signature only)
 
     @type col_stats: CollectionStatus object
@@ -707,18 +745,18 @@
     sig_chain = col_stats.get_signature_chain_at_time(time)
     path_iter = diffdir.get_combined_path_iter(sig_chain.get_fileobjs(time))
     for path in path_iter:
-        if path.difftype != "deleted":
+        if path.difftype != u"deleted":
             user_info = u"%s %s" % (dup_time.timetopretty(path.getmtime()),
-                                    util.ufn(path.get_relative_path()))
-            log_info = "%s %s %s" % (dup_time.timetostring(path.getmtime()),
-                                     util.escape(path.get_relative_path()),
-                                     path.type)
+                                    util.fsdecode(path.get_relative_path()))
+            log_info = u"%s %s %s" % (dup_time.timetostring(path.getmtime()),
+                                      util.escape(path.get_relative_path()),
+                                      path.type)
             log.Log(user_info, log.INFO, log.InfoCode.file_list,
                     log_info, True)
 
 
 def restore(col_stats):
-    """
+    u"""
     Restore archive in globals.backend to globals.local_path
 
     @type col_stats: CollectionStatus object
@@ -732,23 +770,23 @@
     if not patchdir.Write_ROPaths(globals.local_path,
                                   restore_get_patched_rop_iter(col_stats)):
         if globals.restore_dir:
-            log.FatalError(_("%s not found in archive - no files restored.")
-                           % (util.ufn(globals.restore_dir)),
+            log.FatalError(_(u"%s not found in archive - no files restored.")
+                           % (util.fsdecode(globals.restore_dir)),
                            log.ErrorCode.restore_dir_not_found)
         else:
-            log.FatalError(_("No files found in archive - nothing restored."),
+            log.FatalError(_(u"No files found in archive - nothing restored."),
                            log.ErrorCode.no_restore_files)
 
 
 def restore_get_patched_rop_iter(col_stats):
-    """
+    u"""
     Return iterator of patched ROPaths of desired restore data
 
     @type col_stats: CollectionStatus object
     @param col_stats: collection status
     """
     if globals.restore_dir:
-        index = tuple(globals.restore_dir.split("/"))
+        index = tuple(globals.restore_dir.split(b"/"))
     else:
         index = ()
     time = globals.restore_time or dup_time.curtime
@@ -761,7 +799,7 @@
     cur_vol = [0]
 
     def get_fileobj_iter(backup_set):
-        """Get file object iterator from backup_set contain given index"""
+        u"""Get file object iterator from backup_set contain given index"""
         manifest = backup_set.get_manifest()
         volumes = manifest.get_containing_volumes(index)
         for vol_num in volumes:
@@ -769,10 +807,10 @@
                                           backup_set.volume_name_dict[vol_num],
                                           manifest.volume_info_dict[vol_num])
             cur_vol[0] += 1
-            log.Progress(_('Processed volume %d of %d') % (cur_vol[0], num_vols),
+            log.Progress(_(u'Processed volume %d of %d') % (cur_vol[0], num_vols),
                          cur_vol[0], num_vols)
 
-    if hasattr(globals.backend, 'pre_process_download'):
+    if hasattr(globals.backend, u'pre_process_download'):
         file_names = []
         for backup_set in backup_setlist:
             manifest = backup_set.get_manifest()
@@ -787,7 +825,7 @@
 
 
 def restore_get_enc_fileobj(backend, filename, volume_info):
-    """
+    u"""
     Return plaintext fileobj from encrypted filename on backend
 
     If volume_info is set, the hash of the file will be checked,
@@ -799,25 +837,25 @@
     tdp = dup_temp.new_tempduppath(parseresults)
     backend.get(filename, tdp)
 
-    """ verify hash of the remote file """
+    u""" verify hash of the remote file """
     verified, hash_pair, calculated_hash = restore_check_hash(volume_info, tdp)
     if not verified:
-        log.FatalError("%s\n %s\n %s\n %s\n" %
-                       (_("Invalid data - %s hash mismatch for file:") %
+        log.FatalError(u"%s\n %s\n %s\n %s\n" %
+                       (_(u"Invalid data - %s hash mismatch for file:") %
                         hash_pair[0],
-                        util.ufn(filename),
-                        _("Calculated hash: %s") % calculated_hash,
-                        _("Manifest hash: %s") % hash_pair[1]),
+                        util.fsdecode(filename),
+                        _(u"Calculated hash: %s") % calculated_hash,
+                        _(u"Manifest hash: %s") % hash_pair[1]),
                        log.ErrorCode.mismatched_hash)
 
-    fileobj = tdp.filtered_open_with_delete("rb")
+    fileobj = tdp.filtered_open_with_delete(u"rb")
     if parseresults.encrypted and globals.gpg_profile.sign_key:
         restore_add_sig_check(fileobj)
     return fileobj
 
 
 def restore_check_hash(volume_info, vol_path):
-    """
+    u"""
     Check the hash of vol_path path against data in volume_info
 
     @rtype: boolean
@@ -828,12 +866,12 @@
         calculated_hash = gpg.get_hash(hash_pair[0], vol_path)
         if calculated_hash != hash_pair[1]:
             return False, hash_pair, calculated_hash
-    """ reached here, verification passed """
+    u""" reached here, verification passed """
     return True, hash_pair, calculated_hash
 
 
 def restore_add_sig_check(fileobj):
-    """
+    u"""
     Require signature when closing fileobj matches sig in gpg_profile
 
     @rtype: void
@@ -843,8 +881,9 @@
             isinstance(fileobj.fileobj, gpg.GPGFile)), fileobj
 
     def check_signature():
-        """Thunk run when closing volume file"""
+        u"""Thunk run when closing volume file"""
         actual_sig = fileobj.fileobj.get_signature()
+<<<<<<< TREE
         actual_sig = "None" if actual_sig is None else actual_sig
         sign_key = globals.gpg_profile.sign_key
         sign_key = "None" if sign_key is None else sign_key
@@ -852,13 +891,22 @@
         if actual_sig[ofs:] != sign_key[ofs:]:
             log.FatalError(_("Volume was signed by key %s, not %s") %
                            (actual_sig[ofs:], sign_key[ofs:]),
+=======
+        actual_sig = u"None" if actual_sig is None else actual_sig
+        sign_key = globals.gpg_profile.sign_key
+        sign_key = u"None" if sign_key is None else sign_key
+        ofs = -min(len(actual_sig), len(sign_key))
+        if actual_sig[ofs:] != sign_key[ofs:]:
+            log.FatalError(_(u"Volume was signed by key %s, not %s") %
+                           (actual_sig[ofs:], sign_key[ofs:]),
+>>>>>>> MERGE-SOURCE
                            log.ErrorCode.unsigned_volume)
 
     fileobj.addhook(check_signature)
 
 
 def verify(col_stats):
-    """
+    u"""
     Verify files, logging differences
 
     @type col_stats: CollectionStatus object
@@ -882,17 +930,17 @@
         total_count += 1
     # Unfortunately, ngettext doesn't handle multiple number variables, so we
     # split up the string.
-    log.Notice(_("Verify complete: %s, %s.") %
-               (ngettext("%d file compared",
-                         "%d files compared", total_count) % total_count,
-                ngettext("%d difference found",
-                         "%d differences found", diff_count) % diff_count))
+    log.Notice(_(u"Verify complete: %s, %s.") %
+               (ngettext(u"%d file compared",
+                         u"%d files compared", total_count) % total_count,
+                ngettext(u"%d difference found",
+                         u"%d differences found", diff_count) % diff_count))
     if diff_count >= 1:
         exit_val = 1
 
 
 def cleanup(col_stats):
-    """
+    u"""
     Delete the extraneous files in the current backend
 
     @type col_stats: CollectionStatus object
@@ -904,13 +952,13 @@
     ext_local, ext_remote = col_stats.get_extraneous(globals.extra_clean)
     extraneous = ext_local + ext_remote
     if not extraneous:
-        log.Warn(_("No extraneous files found, nothing deleted in cleanup."))
+        log.Warn(_(u"No extraneous files found, nothing deleted in cleanup."))
         return
 
-    filestr = u"\n".join(map(util.ufn, extraneous))
+    filestr = u"\n".join(map(util.fsdecode, extraneous))
     if globals.force:
-        log.Notice(ngettext("Deleting this file from backend:",
-                            "Deleting these files from backend:",
+        log.Notice(ngettext(u"Deleting this file from backend:",
+                            u"Deleting these files from backend:",
                             len(extraneous)) + u"\n" + filestr)
         if not globals.dry_run:
             col_stats.backend.delete(ext_remote)
@@ -920,14 +968,14 @@
                 except Exception:
                     pass
     else:
-        log.Notice(ngettext("Found the following file to delete:",
-                            "Found the following files to delete:",
+        log.Notice(ngettext(u"Found the following file to delete:",
+                            u"Found the following files to delete:",
                             len(extraneous)) + u"\n" + filestr + u"\n" +
-                   _("Run duplicity again with the --force option to actually delete."))
+                   _(u"Run duplicity again with the --force option to actually delete."))
 
 
 def remove_all_but_n_full(col_stats):
-    """
+    u"""
     Remove backup files older than the last n full backups.
 
     @type col_stats: CollectionStatus object
@@ -944,7 +992,7 @@
 
 
 def remove_old(col_stats):
-    """
+    u"""
     Remove backup files older than globals.remove_time from backend
 
     @type col_stats: CollectionStatus object
@@ -956,25 +1004,25 @@
     assert globals.remove_time is not None
 
     def set_times_str(setlist):
-        """Return string listing times of sets in setlist"""
-        return "\n".join([dup_time.timetopretty(s.get_time()) for s in setlist])
+        u"""Return string listing times of sets in setlist"""
+        return u"\n".join([dup_time.timetopretty(s.get_time()) for s in setlist])
 
     def chain_times_str(chainlist):
-        """Return string listing times of chains in chainlist"""
-        return "\n".join([dup_time.timetopretty(s.end_time) for s in chainlist])
+        u"""Return string listing times of chains in chainlist"""
+        return u"\n".join([dup_time.timetopretty(s.end_time) for s in chainlist])
 
     req_list = col_stats.get_older_than_required(globals.remove_time)
     if req_list:
-        log.Warn("%s\n%s\n%s" %
-                 (_("There are backup set(s) at time(s):"),
+        log.Warn(u"%s\n%s\n%s" %
+                 (_(u"There are backup set(s) at time(s):"),
                   set_times_str(req_list),
-                  _("Which can't be deleted because newer sets depend on them.")))
+                  _(u"Which can't be deleted because newer sets depend on them.")))
 
     if (col_stats.matched_chain_pair and
             col_stats.matched_chain_pair[1].end_time < globals.remove_time):
-        log.Warn(_("Current active backup chain is older than specified time.  "
-                   "However, it will not be deleted.  To remove all your backups, "
-                   "manually purge the repository."))
+        log.Warn(_(u"Current active backup chain is older than specified time.  "
+                   u"However, it will not be deleted.  To remove all your backups, "
+                   u"manually purge the repository."))
 
     chainlist = col_stats.get_chains_older_than(globals.remove_time)
 
@@ -985,13 +1033,13 @@
                          (isinstance(x, collections.BackupChain) and x.incset_list))
 
     if not chainlist:
-        log.Notice(_("No old backup sets found, nothing deleted."))
+        log.Notice(_(u"No old backup sets found, nothing deleted."))
         return
     if globals.force:
-        log.Notice(ngettext("Deleting backup chain at time:",
-                            "Deleting backup chains at times:",
+        log.Notice(ngettext(u"Deleting backup chain at time:",
+                            u"Deleting backup chains at times:",
                             len(chainlist)) +
-                   "\n" + chain_times_str(chainlist))
+                   u"\n" + chain_times_str(chainlist))
         # Add signature files too, since they won't be needed anymore
         chainlist += col_stats.get_signature_chains_older_than(globals.remove_time)
         chainlist.reverse()  # save oldest for last
@@ -1000,28 +1048,147 @@
             # incrementals one and not full
             if globals.remove_all_inc_of_but_n_full_mode:
                 if isinstance(chain, collections.SignatureChain):
-                    chain_desc = _("Deleting any incremental signature chain rooted at %s")
+                    chain_desc = _(u"Deleting any incremental signature chain rooted at %s")
                 else:
-                    chain_desc = _("Deleting any incremental backup chain rooted at %s")
+                    chain_desc = _(u"Deleting any incremental backup chain rooted at %s")
             else:
                 if isinstance(chain, collections.SignatureChain):
-                    chain_desc = _("Deleting complete signature chain %s")
+                    chain_desc = _(u"Deleting complete signature chain %s")
                 else:
-                    chain_desc = _("Deleting complete backup chain %s")
+                    chain_desc = _(u"Deleting complete backup chain %s")
             log.Notice(chain_desc % dup_time.timetopretty(chain.end_time))
             if not globals.dry_run:
                 chain.delete(keep_full=globals.remove_all_inc_of_but_n_full_mode)
         col_stats.set_values(sig_chain_warning=None)
     else:
-        log.Notice(ngettext("Found old backup chain at the following time:",
-                            "Found old backup chains at the following times:",
+        log.Notice(ngettext(u"Found old backup chain at the following time:",
+                            u"Found old backup chains at the following times:",
                             len(chainlist)) +
-                   "\n" + chain_times_str(chainlist) + "\n" +
-                   _("Rerun command with --force option to actually delete."))
-
-
+                   u"\n" + chain_times_str(chainlist) + u"\n" +
+                   _(u"Rerun command with --force option to actually delete."))
+
+
+<<<<<<< TREE
+=======
+def replicate():
+    u"""
+    Replicate backup files from one remote to another, possibly encrypting or adding parity.
+
+    @rtype: void
+    @return: void
+    """
+    action = u"replicate"
+    time = globals.restore_time or dup_time.curtime
+    src_stats = collections.CollectionsStatus(globals.src_backend, None, action).set_values(sig_chain_warning=None)
+    tgt_stats = collections.CollectionsStatus(globals.backend, None, action).set_values(sig_chain_warning=None)
+
+    src_list = globals.src_backend.list()
+    tgt_list = globals.backend.list()
+
+    src_chainlist = src_stats.get_signature_chains(local=False, filelist=src_list)[0]
+    tgt_chainlist = tgt_stats.get_signature_chains(local=False, filelist=tgt_list)[0]
+    src_chainlist.sort(key=lambda chain: chain.start_time)
+    tgt_chainlist.sort(key=lambda chain: chain.start_time)
+    if not src_chainlist:
+        log.Notice(_(u"No old backup sets found."))
+        return
+    for src_chain in src_chainlist:
+        try:
+            tgt_chain = list(filter(lambda chain: chain.start_time == src_chain.start_time, tgt_chainlist))[0]
+        except IndexError:
+            tgt_chain = None
+
+        tgt_sigs = list(map(file_naming.parse, tgt_chain.get_filenames())) if tgt_chain else []
+        for src_sig_filename in src_chain.get_filenames():
+            src_sig = file_naming.parse(src_sig_filename)
+            if not (src_sig.time or src_sig.end_time) < time:
+                continue
+            try:
+                tgt_sigs.remove(src_sig)
+                log.Info(_(u"Signature %s already replicated") % (src_sig_filename,))
+                continue
+            except ValueError:
+                pass
+            if src_sig.type == u'new-sig':
+                dup_time.setprevtime(src_sig.start_time)
+            dup_time.setcurtime(src_sig.time or src_sig.end_time)
+            log.Notice(_(u"Replicating %s.") % (src_sig_filename,))
+            fileobj = globals.src_backend.get_fileobj_read(src_sig_filename)
+            filename = file_naming.get(src_sig.type, encrypted=globals.encryption, gzipped=globals.compression)
+            tdp = dup_temp.new_tempduppath(file_naming.parse(filename))
+            tmpobj = tdp.filtered_open(mode=u'wb')
+            util.copyfileobj(fileobj, tmpobj)  # decrypt, compress, (re)-encrypt
+            fileobj.close()
+            tmpobj.close()
+            globals.backend.put(tdp, filename)
+            tdp.delete()
+
+    src_chainlist = src_stats.get_backup_chains(filename_list=src_list)[0]
+    tgt_chainlist = tgt_stats.get_backup_chains(filename_list=tgt_list)[0]
+    src_chainlist.sort(key=lambda chain: chain.start_time)
+    tgt_chainlist.sort(key=lambda chain: chain.start_time)
+    for src_chain in src_chainlist:
+        try:
+            tgt_chain = list(filter(lambda chain: chain.start_time == src_chain.start_time, tgt_chainlist))[0]
+        except IndexError:
+            tgt_chain = None
+
+        tgt_sets = tgt_chain.get_all_sets() if tgt_chain else []
+        for src_set in src_chain.get_all_sets():
+            if not src_set.get_time() < time:
+                continue
+            try:
+                tgt_sets.remove(src_set)
+                log.Info(_(u"Backupset %s already replicated") % (src_set.remote_manifest_name,))
+                continue
+            except ValueError:
+                pass
+            if src_set.type == u'inc':
+                dup_time.setprevtime(src_set.start_time)
+            dup_time.setcurtime(src_set.get_time())
+            rmf = src_set.get_remote_manifest()
+            mf_filename = file_naming.get(src_set.type, manifest=True)
+            mf_tdp = dup_temp.new_tempduppath(file_naming.parse(mf_filename))
+            mf = manifest.Manifest(fh=mf_tdp.filtered_open(mode=u'wb'))
+            for i, filename in src_set.volume_name_dict.items():
+                log.Notice(_(u"Replicating %s.") % (filename,))
+                fileobj = restore_get_enc_fileobj(globals.src_backend, filename, rmf.volume_info_dict[i])
+                filename = file_naming.get(src_set.type, i, encrypted=globals.encryption, gzipped=globals.compression)
+                tdp = dup_temp.new_tempduppath(file_naming.parse(filename))
+                tmpobj = tdp.filtered_open(mode=u'wb')
+                util.copyfileobj(fileobj, tmpobj)  # decrypt, compress, (re)-encrypt
+                fileobj.close()
+                tmpobj.close()
+                globals.backend.put(tdp, filename)
+
+                vi = copy.copy(rmf.volume_info_dict[i])
+                vi.set_hash(u"SHA1", gpg.get_hash(u"SHA1", tdp))
+                mf.add_volume_info(vi)
+
+                tdp.delete()
+
+            mf.fh.close()
+            # incremental GPG writes hang on close, so do any encryption here at once
+            mf_fileobj = mf_tdp.filtered_open_with_delete(mode=u'rb')
+            mf_final_filename = file_naming.get(src_set.type,
+                                                manifest=True,
+                                                encrypted=globals.encryption,
+                                                gzipped=globals.compression)
+            mf_final_tdp = dup_temp.new_tempduppath(file_naming.parse(mf_final_filename))
+            mf_final_fileobj = mf_final_tdp.filtered_open(mode=u'wb')
+            util.copyfileobj(mf_fileobj, mf_final_fileobj)  # compress, encrypt
+            mf_fileobj.close()
+            mf_final_fileobj.close()
+            globals.backend.put(mf_final_tdp, mf_final_filename)
+            mf_final_tdp.delete()
+
+    globals.src_backend.close()
+    globals.backend.close()
+
+
+>>>>>>> MERGE-SOURCE
 def sync_archive():
-    """
+    u"""
     Synchronize local archive manifest file and sig chains to remote archives.
     Copy missing files from remote to local as needed to make sure the local
     archive is synchronized to remote storage.
@@ -1029,10 +1196,10 @@
     @rtype: void
     @return: void
     """
-    suffixes = [".g", ".gpg", ".z", ".gz", ".part"]
+    suffixes = [u".g", u".gpg", u".z", u".gz", u".part"]
 
     def get_metafiles(filelist):
-        """
+        u"""
         Return metafiles of interest from the file list.
         Files of interest are:
           sigtar - signature files
@@ -1053,7 +1220,7 @@
                 continue
             if pr.encrypted:
                 need_passphrase = True
-            if pr.type in ["full-sig", "new-sig"] or pr.manifest:
+            if pr.type in [u"full-sig", u"new-sig"] or pr.manifest:
                 base, ext = os.path.splitext(fn)
                 if ext not in suffixes:
                     base = fn
@@ -1064,20 +1231,20 @@
         return metafiles, partials, need_passphrase
 
     def copy_raw(src_iter, filename):
-        """
+        u"""
         Copy data from src_iter to file at fn
         """
-        file = open(filename, "wb")
+        file = open(filename, u"wb")
         while True:
             try:
-                data = src_iter.next().data
+                data = src_iter.__next__().data
             except StopIteration:
                 break
             file.write(data)
         file.close()
 
     def resolve_basename(fn):
-        """
+        u"""
         @return: (parsedresult, local_name, remote_name)
         """
         pr = file_naming.parse(fn)
@@ -1094,44 +1261,47 @@
     def remove_local(fn):
         del_name = globals.archive_dir.append(fn).name
 
-        log.Notice(_("Deleting local %s (not authoritative at backend).") %
-                   util.ufn(del_name))
+        log.Notice(_(u"Deleting local %s (not authoritative at backend).") %
+                   util.fsdecode(del_name))
         try:
             util.ignore_missing(os.unlink, del_name)
         except Exception as e:
-            log.Warn(_("Unable to delete %s: %s") % (util.ufn(del_name),
-                                                     util.uexc(e)))
+            log.Warn(_(u"Unable to delete %s: %s") % (util.fsdecode(del_name),
+                                                      util.uexc(e)))
 
     def copy_to_local(fn):
-        """
+        u"""
         Copy remote file fn to local cache.
         """
-        class Block:
-            """
+        class Block(object):
+            u"""
             Data block to return from SrcIter
             """
 
             def __init__(self, data):
                 self.data = data
 
-        class SrcIter:
-            """
+        class SrcIter(object):
+            u"""
             Iterate over source and return Block of data.
             """
 
             def __init__(self, fileobj):
                 self.fileobj = fileobj
 
-            def next(self):
+            def __next__(self):
                 try:
                     res = Block(self.fileobj.read(self.get_read_size()))
                 except Exception:
-                    if hasattr(self.fileobj, 'name'):
+                    if hasattr(self.fileobj, u'name'):
                         name = self.fileobj.name
+                        # name may be a path
+                        if hasattr(name, u'name'):
+                            name = name.name
                     else:
                         name = None
-                    log.FatalError(_("Failed to read %s: %s") %
-                                   (util.ufn(name), sys.exc_info()),
+                    log.FatalError(_(u"Failed to read %s: %s") %
+                                   (util.fsdecode(name), sys.exc_info()),
                                    log.ErrorCode.generic)
                 if not res.data:
                     self.fileobj.close()
@@ -1142,9 +1312,9 @@
                 return 128 * 1024
 
             def get_footer(self):
-                return ""
+                return b""
 
-        log.Notice(_("Copying %s to local cache.") % util.ufn(fn))
+        log.Notice(_(u"Copying %s to local cache.") % util.fsdecode(fn))
 
         pr, loc_name, rem_name = resolve_basename(fn)
 
@@ -1169,8 +1339,8 @@
     # we have the list of metafiles on both sides. remote is always
     # authoritative. figure out which are local spurious (should not
     # be there) and missing (should be there but are not).
-    local_keys = local_metafiles.keys()
-    remote_keys = remote_metafiles.keys()
+    local_keys = list(local_metafiles.keys())
+    remote_keys = list(remote_metafiles.keys())
 
     local_missing = []
     local_spurious = []
@@ -1192,32 +1362,32 @@
 
     # finally finish the process
     if not local_missing and not local_spurious:
-        log.Notice(_("Local and Remote metadata are synchronized, no sync needed."))
+        log.Notice(_(u"Local and Remote metadata are synchronized, no sync needed."))
     else:
         local_missing.sort()
         local_spurious.sort()
         if not globals.dry_run:
-            log.Notice(_("Synchronizing remote metadata to local cache..."))
+            log.Notice(_(u"Synchronizing remote metadata to local cache..."))
             if local_missing and (rem_needpass or loc_needpass):
                 # password for the --encrypt-key
-                globals.gpg_profile.passphrase = get_passphrase(1, "sync")
+                globals.gpg_profile.passphrase = get_passphrase(1, u"sync")
             for fn in local_spurious:
                 remove_local(fn)
-            if hasattr(globals.backend, 'pre_process_download'):
+            if hasattr(globals.backend, u'pre_process_download'):
                 globals.backend.pre_process_download(local_missing)
             for fn in local_missing:
                 copy_to_local(fn)
         else:
             if local_missing:
-                log.Notice(_("Sync would copy the following from remote to local:") +
-                           u"\n" + u"\n".join(map(util.ufn, local_missing)))
+                log.Notice(_(u"Sync would copy the following from remote to local:") +
+                           u"\n" + u"\n".join(map(util.fsdecode, local_missing)))
             if local_spurious:
-                log.Notice(_("Sync would remove the following spurious local files:") +
-                           u"\n" + u"\n".join(map(util.ufn, local_spurious)))
+                log.Notice(_(u"Sync would remove the following spurious local files:") +
+                           u"\n" + u"\n".join(map(util.fsdecode, local_spurious)))
 
 
 def check_last_manifest(col_stats):
-    """
+    u"""
     Check consistency and hostname/directory of last manifest
 
     @type col_stats: CollectionStatus object
@@ -1226,14 +1396,15 @@
     @rtype: void
     @return: void
     """
-    if not col_stats.all_backup_chains:
-        return
+    assert col_stats.all_backup_chains
     last_backup_set = col_stats.all_backup_chains[-1].get_last()
-    last_backup_set.check_manifests()
+    # check remote manifest only if we can decrypt it (see #1729796)
+    check_remote = not globals.encryption or globals.gpg_profile.passphrase
+    last_backup_set.check_manifests(check_remote=check_remote)
 
 
 def check_resources(action):
-    """
+    u"""
     Check for sufficient resources:
       - temp space for volume build
       - enough max open files
@@ -1245,7 +1416,7 @@
     @rtype: void
     @return: void
     """
-    if action in ["full", "inc", "restore"]:
+    if action in [u"full", u"inc", u"restore"]:
         # Make sure we have enough resources to run
         # First check disk space in temp area.
         tempfile, tempname = tempdir.default().mkstemp()
@@ -1255,18 +1426,18 @@
         try:
             stats = os.statvfs(tempfs)
         except Exception:
-            log.FatalError(_("Unable to get free space on temp."),
+            log.FatalError(_(u"Unable to get free space on temp."),
                            log.ErrorCode.get_freespace_failed)
         # Calculate space we need for at least 2 volumes of full or inc
         # plus about 30% of one volume for the signature files.
-        freespace = stats[statvfs.F_FRSIZE] * stats[statvfs.F_BAVAIL]
+        freespace = stats.f_frsize * stats.f_bavail
         needspace = (((globals.async_concurrency + 1) * globals.volsize) +
                      int(0.30 * globals.volsize))
         if freespace < needspace:
-            log.FatalError(_("Temp space has %d available, backup needs approx %d.") %
+            log.FatalError(_(u"Temp space has %d available, backup needs approx %d.") %
                            (freespace, needspace), log.ErrorCode.not_enough_freespace)
         else:
-            log.Info(_("Temp has %d available, backup will use approx %d.") %
+            log.Info(_(u"Temp has %d available, backup will use approx %d.") %
                      (freespace, needspace))
 
         # Some environments like Cygwin run with an artificially
@@ -1274,29 +1445,30 @@
         try:
             soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
         except resource.error:
-            log.FatalError(_("Unable to get max open files."),
+            log.FatalError(_(u"Unable to get max open files."),
                            log.ErrorCode.get_ulimit_failed)
         maxopen = min([l for l in (soft, hard) if l > -1])
         if maxopen < 1024:
-            log.FatalError(_("Max open files of %s is too low, should be >= 1024.\n"
-                             "Use 'ulimit -n 1024' or higher to correct.\n") % (maxopen,),
+            log.FatalError(_(u"Max open files of %s is too low, should be >= 1024.\n"
+                             u"Use 'ulimit -n 1024' or higher to correct.\n") % (maxopen,),
                            log.ErrorCode.maxopen_too_low)
 
 
 def log_startup_parms(verbosity=log.INFO):
-    """
+    u"""
     log Python, duplicity, and system versions
     """
     log.Log(u'=' * 80, verbosity)
     log.Log(u"duplicity $version ($reldate)", verbosity)
-    log.Log(u"Args: %s" % util.ufn(' '.join(sys.argv)), verbosity)
+    u_args = (util.fsdecode(arg) for arg in sys.argv)
+    log.Log(u"Args: %s" % u' '.join(u_args), verbosity)
     log.Log(u' '.join(platform.uname()), verbosity)
     log.Log(u"%s %s" % (sys.executable or sys.platform, sys.version), verbosity)
     log.Log(u'=' * 80, verbosity)
 
 
-class Restart:
-    """
+class Restart(object):
+    u"""
     Class to aid in restart of inc or full backup.
     Instance in globals.restart if restart in progress.
     """
@@ -1313,10 +1485,10 @@
 
     def setParms(self, last_backup):
         if last_backup.time:
-            self.type = "full"
+            self.type = u"full"
             self.time = last_backup.time
         else:
-            self.type = "inc"
+            self.type = u"inc"
             self.end_time = last_backup.end_time
             self.start_time = last_backup.start_time
         # We start one volume back in case we weren't able to finish writing
@@ -1329,22 +1501,22 @@
         if (mf_len != self.start_vol) or not (mf_len and self.start_vol):
             if self.start_vol == 0:
                 # upload of 1st vol failed, clean and restart
-                log.Notice(_("RESTART: The first volume failed to upload before termination.\n"
-                             "         Restart is impossible...starting backup from beginning."))
+                log.Notice(_(u"RESTART: The first volume failed to upload before termination.\n"
+                             u"         Restart is impossible...starting backup from beginning."))
                 self.last_backup.delete()
                 os.execve(sys.argv[0], sys.argv, os.environ)
             elif mf_len - self.start_vol > 0:
                 # upload of N vols failed, fix manifest and restart
-                log.Notice(_("RESTART: Volumes %d to %d failed to upload before termination.\n"
-                             "         Restarting backup at volume %d.") %
+                log.Notice(_(u"RESTART: Volumes %d to %d failed to upload before termination.\n"
+                             u"         Restarting backup at volume %d.") %
                            (self.start_vol + 1, mf_len, self.start_vol + 1))
                 for vol in range(self.start_vol + 1, mf_len + 1):
                     mf.del_volume_info(vol)
             else:
                 # this is an 'impossible' state, remove last partial and restart
-                log.Notice(_("RESTART: Impossible backup state: manifest has %d vols, remote has %d vols.\n"
-                             "         Restart is impossible ... duplicity will clean off the last partial\n"
-                             "         backup then restart the backup from the beginning.") %
+                log.Notice(_(u"RESTART: Impossible backup state: manifest has %d vols, remote has %d vols.\n"
+                             u"         Restart is impossible ... duplicity will clean off the last partial\n"
+                             u"         backup then restart the backup from the beginning.") %
                            (mf_len, self.start_vol))
                 self.last_backup.delete()
                 os.execve(sys.argv[0], sys.argv, os.environ)
@@ -1356,14 +1528,14 @@
 
 
 def main():
-    """
+    u"""
     Start/end here
     """
     # per bug https://bugs.launchpad.net/duplicity/+bug/931175
     # duplicity crashes when PYTHONOPTIMIZE is set, so check
     # and refuse to run if it is set.
-    if 'PYTHONOPTIMIZE' in os.environ:
-        log.FatalError(_("""
+    if u'PYTHONOPTIMIZE' in os.environ:
+        log.FatalError(_(u"""
 PYTHONOPTIMIZE in the environment causes duplicity to fail to
 recognize its own backups.  Please remove PYTHONOPTIMIZE from
 the environment and rerun the backup.
@@ -1384,12 +1556,16 @@
     # determine what action we're performing and process command line
     action = commandline.ProcessCommandLine(sys.argv[1:])
 
+<<<<<<< TREE
     globals.lockpath = os.path.join(globals.archive_dir.name, "lockfile")
+=======
+    globals.lockpath = os.path.join(globals.archive_dir_path.name, b"lockfile")
+>>>>>>> MERGE-SOURCE
     globals.lockfile = fasteners.process_lock.InterProcessLock(globals.lockpath)
-    log.Debug(_("Acquiring lockfile %s") % globals.lockpath)
+    log.Debug(_(u"Acquiring lockfile %s") % globals.lockpath)
     if not globals.lockfile.acquire(blocking=False):
         log.FatalError(
-            "Another duplicity instance is already running with this archive directory\n",
+            u"Another duplicity instance is already running with this archive directory\n",
             log.ErrorCode.user_error)
         log.shutdown()
         sys.exit(2)
@@ -1415,7 +1591,11 @@
     check_resources(action)
 
     # check archive synch with remote, fix if needed
+<<<<<<< TREE
     if action not in ["collection-status"]:
+=======
+    if action not in [u"collection-status", u"replicate"]:
+>>>>>>> MERGE-SOURCE
         sync_archive()
 
     # get current collection status
@@ -1426,29 +1606,29 @@
     while True:
         # if we have to clean up the last partial, then col_stats are invalidated
         # and we have to start the process all over again until clean.
-        if action in ["full", "inc", "cleanup"]:
+        if action in [u"full", u"inc", u"cleanup"]:
             last_full_chain = col_stats.get_last_backup_chain()
             if not last_full_chain:
                 break
             last_backup = last_full_chain.get_last()
             if last_backup.partial:
-                if action in ["full", "inc"]:
+                if action in [u"full", u"inc"]:
                     # set restart parms from last_backup info
                     globals.restart = Restart(last_backup)
                     # (possibly) reset action
                     action = globals.restart.type
                     # reset the time strings
-                    if action == "full":
+                    if action == u"full":
                         dup_time.setcurtime(globals.restart.time)
                     else:
                         dup_time.setcurtime(globals.restart.end_time)
                         dup_time.setprevtime(globals.restart.start_time)
                     # log it -- main restart heavy lifting is done in write_multivol
-                    log.Notice(_("Last %s backup left a partial set, restarting." % action))
+                    log.Notice(_(u"Last %s backup left a partial set, restarting." % action))
                     break
                 else:
                     # remove last partial backup and get new collection status
-                    log.Notice(_("Cleaning up previous partial %s backup set, restarting." % action))
+                    log.Notice(_(u"Cleaning up previous partial %s backup set, restarting." % action))
                     last_backup.delete()
                     col_stats = collections.CollectionsStatus(globals.backend,
                                                               globals.archive_dir,
@@ -1460,12 +1640,13 @@
     # OK, now we have a stable collection
     last_full_time = col_stats.get_last_full_backup_time()
     if last_full_time > 0:
-        log.Notice(_("Last full backup date:") + " " + dup_time.timetopretty(last_full_time))
+        log.Notice(_(u"Last full backup date:") + u" " + dup_time.timetopretty(last_full_time))
     else:
-        log.Notice(_("Last full backup date: none"))
-    if not globals.restart and action == "inc" and last_full_time < globals.full_force_time:
-        log.Notice(_("Last full backup is too old, forcing full backup"))
-        action = "full"
+        log.Notice(_(u"Last full backup date: none"))
+    if not globals.restart and action == u"inc" and globals.full_force_time is not None and \
+       last_full_time < globals.full_force_time:
+        log.Notice(_(u"Last full backup is too old, forcing full backup"))
+        action = u"full"
     log.PrintCollectionStatus(col_stats)
 
     os.umask(0o77)
@@ -1473,24 +1654,38 @@
     # get the passphrase if we need to based on action/options
     globals.gpg_profile.passphrase = get_passphrase(1, action)
 
-    if action == "restore":
+    if action == u"restore":
         restore(col_stats)
-    elif action == "verify":
+    elif action == u"verify":
         verify(col_stats)
-    elif action == "list-current":
+    elif action == u"list-current":
         list_current(col_stats)
+<<<<<<< TREE
     elif action == "collection-status":
         log.PrintCollectionStatus(col_stats, True)
     elif action == "cleanup":
+=======
+    elif action == u"collection-status":
+        if not globals.file_changed:
+            log.PrintCollectionStatus(col_stats, True)
+        else:
+            log.PrintCollectionFileChangedStatus(col_stats, globals.file_changed, True)
+    elif action == u"cleanup":
+>>>>>>> MERGE-SOURCE
         cleanup(col_stats)
-    elif action == "remove-old":
+    elif action == u"remove-old":
         remove_old(col_stats)
-    elif action == "remove-all-but-n-full" or action == "remove-all-inc-of-but-n-full":
+    elif action == u"remove-all-but-n-full" or action == u"remove-all-inc-of-but-n-full":
         remove_all_but_n_full(col_stats)
-    elif action == "sync":
+    elif action == u"sync":
         sync_archive()
+<<<<<<< TREE
+=======
+    elif action == u"replicate":
+        replicate()
+>>>>>>> MERGE-SOURCE
     else:
-        assert action == "inc" or action == "full", action
+        assert action == u"inc" or action == u"full", action
         # the passphrase for full and inc is used by --sign-key
         # the sign key can have a different passphrase than the encrypt
         # key, therefore request a passphrase
@@ -1508,11 +1703,11 @@
             if (globals.gpg_profile.signing_passphrase and
                     globals.gpg_profile.passphrase != globals.gpg_profile.signing_passphrase):
                 log.FatalError(_(
-                    "When using symmetric encryption, the signing passphrase "
-                    "must equal the encryption passphrase."),
+                    u"When using symmetric encryption, the signing passphrase "
+                    u"must equal the encryption passphrase."),
                     log.ErrorCode.user_error)
 
-        if action == "full":
+        if action == u"full":
             full_backup(col_stats)
         else:  # attempt incremental
             sig_chain = check_sig_chain(col_stats)
@@ -1524,7 +1719,7 @@
                     # only ask for a passphrase if there was a previous backup
                     if col_stats.all_backup_chains:
                         globals.gpg_profile.passphrase = get_passphrase(1, action)
-                    check_last_manifest(col_stats)  # not needed for full backup
+                        check_last_manifest(col_stats)  # not needed for full backups
                 incremental_backup(sig_chain)
     globals.backend.close()
     log.shutdown()
@@ -1533,7 +1728,7 @@
 
 
 def with_tempdir(fn):
-    """
+    u"""
     Execute function and guarantee cleanup of tempdir is called
 
     @type fn: callable function
@@ -1547,8 +1742,13 @@
     finally:
         tempdir.default().cleanup()
 
+<<<<<<< TREE
 
 if __name__ == "__main__":
+=======
+
+if __name__ == u"__main__":
+>>>>>>> MERGE-SOURCE
     try:
 
         #         import cProfile
@@ -1577,7 +1777,7 @@
 
     except KeyboardInterrupt as e:
         # No traceback, just get out
-        log.Info(_("INT intercepted...exiting."))
+        log.Info(_(u"INT intercepted...exiting."))
         util.release_lockfile()
         sys.exit(4)
 
@@ -1585,8 +1785,13 @@
         # For gpg errors, don't show an ugly stack trace by
         # default. But do with sufficient verbosity.
         util.release_lockfile()
+<<<<<<< TREE
         log.Info(_("GPG error detail: %s")
                  % util.exception_traceback())
+=======
+        log.Info(_(u"GPG error detail: %s")
+                 % util.exception_traceback())
+>>>>>>> MERGE-SOURCE
         log.FatalError(u"%s: %s" % (e.__class__.__name__, e.args[0]),
                        log.ErrorCode.gpg_failed,
                        e.__class__.__name__)
@@ -1595,8 +1800,13 @@
         util.release_lockfile()
         # For user errors, don't show an ugly stack trace by
         # default. But do with sufficient verbosity.
+<<<<<<< TREE
         log.Info(_("User error detail: %s")
                  % util.exception_traceback())
+=======
+        log.Info(_(u"User error detail: %s")
+                 % util.exception_traceback())
+>>>>>>> MERGE-SOURCE
         log.FatalError(u"%s: %s" % (e.__class__.__name__, util.uexc(e)),
                        log.ErrorCode.user_error,
                        e.__class__.__name__)
@@ -1605,15 +1815,24 @@
         util.release_lockfile()
         # For backend errors, don't show an ugly stack trace by
         # default. But do with sufficient verbosity.
+<<<<<<< TREE
         log.Info(_("Backend error detail: %s")
                  % util.exception_traceback())
+=======
+        log.Info(_(u"Backend error detail: %s")
+                 % util.exception_traceback())
+>>>>>>> MERGE-SOURCE
         log.FatalError(u"%s: %s" % (e.__class__.__name__, util.uexc(e)),
                        log.ErrorCode.user_error,
                        e.__class__.__name__)
 
     except Exception as e:
         util.release_lockfile()
+<<<<<<< TREE
         if "Forced assertion for testing" in util.uexc(e):
+=======
+        if u"Forced assertion for testing" in util.uexc(e):
+>>>>>>> MERGE-SOURCE
             log.FatalError(u"%s: %s" % (e.__class__.__name__, util.uexc(e)),
                            log.ErrorCode.exception,
                            e.__class__.__name__)
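
For orientation, the replicate() action added above reads each signature file and
backup volume from a source backend, pipes it through a temporary path
(decrypting, recompressing and optionally re-encrypting), and uploads the result
to the target backend. A minimal sketch of invoking it, assuming the 0.8-style
"duplicity replicate <source_url> <target_url>" command syntax; the URLs, paths
and key ID below are placeholders, not part of this merge:

    # Hypothetical invocation of the replicate action sketched above.
    # Assumes "duplicity replicate <source_url> <target_url>"; the file:// and
    # sftp:// URLs and the key ID are placeholders only.
    import subprocess

    subprocess.check_call([
        "duplicity", "replicate",
        "--encrypt-key", "DEADBEEF",        # re-encrypt volumes for the target
        "file:///var/backups/duplicity",    # source backend holding the backup
        "sftp://user@example.com/offsite",  # target backend to mirror into
    ])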

=== modified file 'bin/duplicity.1'
--- bin/duplicity.1	2019-01-26 16:37:54 +0000
+++ bin/duplicity.1	2019-02-22 19:07:43 +0000
@@ -739,6 +739,12 @@
 recovery.
 
 .TP
+.BI "--s3-use-onezone-ia"
+Store volumes using One Zone - Infrequent Access when uploading to Amazon S3.
+This storage is similar to Standard - Infrequent Access, but only stores object
+data in one Availability Zone.
+
+.TP
 .BI "--s3-use-multiprocessing"
 Allow multipart volume uploads to S3 through multiprocessing. This option
 requires Python 2.6 and can be used to make uploads to S3 more efficient.
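
As a rough illustration of the S3 options described in this hunk, a backup
invocation might look like the following; the bucket URL and key ID are
placeholders and the exact URL scheme depends on the configured boto backend:

    # Illustrative only: store volumes as One Zone - Infrequent Access objects
    # and allow multipart uploads via multiprocessing.  Bucket and key ID are
    # placeholders.
    import subprocess

    subprocess.check_call([
        "duplicity",
        "--s3-use-onezone-ia",
        "--s3-use-multiprocessing",
        "--encrypt-key", "DEADBEEF",
        "/home/user/data",
        "s3+http://my-bucket/duplicity",
    ])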
@@ -773,6 +779,32 @@
 when uploading to S3 to ensure you kill connections to slow S3 endpoints.
 
 .TP
+<<<<<<< TREE
+=======
+.BI "--azure-blob-tier"
+Standard storage tier used for backup files (Hot|Cool|Archive).
+
+.TP
+.BI "--azure-max-single-put-size"
+Specify the number of the largest supported upload size where the Azure
+library makes only one put call. If the content size is known and below this
+value the Azure library will only perform one put request to upload one block.
+The number is expected to be in bytes.
+
+.TP
+.BI "--azure-max-block-size"
+Specify the block size, in bytes, used by the Azure library to upload
+blobs when they are split into multiple blocks.
+The maximum block size the service supports is 104857600 (100MiB) and the
+default is 4194304 (4MiB).
+
+.TP
+.BI "--azure-max-connections"
+Specify the maximum number of connections used to transfer one blob to Azure
+when the blob size exceeds 64MB. The default value is 2.
+
+.TP
+>>>>>>> MERGE-SOURCE
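
Taken together, the Azure options above might be combined roughly as follows;
the container name, key ID and sizes are placeholders chosen only to show the
units (bytes):

    # Illustrative only: Azure backend with the tuning options described above.
    import subprocess

    subprocess.check_call([
        "duplicity",
        "--azure-blob-tier", "Cool",                           # Hot | Cool | Archive
        "--azure-max-single-put-size", str(32 * 1024 * 1024),  # one-put cutoff, bytes
        "--azure-max-block-size", str(4 * 1024 * 1024),        # per-block size, bytes
        "--azure-max-connections", "4",                        # connections per large blob
        "--encrypt-key", "DEADBEEF",
        "/home/user/data",
        "azure://my-container",
    ])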
 .BI "--scp-command " command
 .B (only ssh pexpect backend with --use-scp enabled)
 The
@@ -1411,11 +1443,11 @@
 and
 .BI --exclude-filelist ,
 options also introduce file selection conditions.  They direct
-duplicity to read in a file, each line of which is a file
-specification, and to include or exclude the matching files.  Lines
-are separated by newlines or nulls, depending on whether the
---null-separator switch was given.  Each line in the filelist will be
-interpreted as a globbing pattern the way
+duplicity to read in a text file (either ASCII or UTF-8), each line
+of which is a file specification, and to include or exclude the
+matching files.  Lines are separated by newlines or nulls, depending
+on whether the --null-separator switch was given.  Each line in the
+filelist will be interpreted as a globbing pattern the way
 .BI --include
 and
 .BI --exclude
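
A filelist, then, is just a plain text file of globbing patterns, one per line
(null-separated if --null-separator is given). A small sketch of building one
and handing it to duplicity; the paths are placeholders and the trailing
--exclude '**' simply drops everything the filelist does not match:

    # Illustrative only: write a small include filelist and use it for a backup.
    import subprocess

    patterns = u"\n".join([
        u"/home/user/Documents",
        u"/home/user/Photos/**.jpg",
    ])
    with open("/tmp/backup-filelist.txt", "w") as f:
        f.write(patterns + u"\n")

    subprocess.check_call([
        "duplicity",
        "--include-filelist", "/tmp/backup-filelist.txt",
        "--exclude", "**",
        "/home/user",
        "file:///var/backups/duplicity",
    ])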

=== modified file 'bin/rdiffdir'
--- bin/rdiffdir	2015-05-01 13:56:13 +0000
+++ bin/rdiffdir	2019-02-22 19:07:43 +0000
@@ -25,6 +25,8 @@
 # Please send mail to me or the mailing list if you find bugs or have
 # any suggestions.
 
+from __future__ import print_function
+from builtins import str
 import sys
 import getopt
 import gzip
@@ -50,110 +52,110 @@
 
 
 def parse_cmdline_options(arglist):
-    """Parse argument list"""
+    u"""Parse argument list"""
     global gzip_compress, select_opts, select_files, sig_fileobj
 
     def sel_fl(filename):
-        """Helper function for including/excluding filelists below"""
+        u"""Helper function for including/excluding filelists below"""
         try:
-            return open(filename, "r")
+            return open(filename, u"r")
         except IOError:
-            log.FatalError(_("Error opening file %s") % util.ufn(filename))
+            log.FatalError(_(u"Error opening file %s") % util.fsdecode(filename))
 
     try:
-        optlist, args = getopt.getopt(arglist, "v:Vz",
-                                      ["gzip-compress", "exclude=", "exclude-device-files",
-                                       "exclude-filelist=", "exclude-filelist-stdin",
-                                       "exclude-globbing-filelist", "exclude-other-filesystems",
-                                       "exclude-regexp=", "include=", "include-filelist=",
-                                       "include-filelist-stdin", "include-globbing-filelist",
-                                       "include-regexp=", "max-blocksize", "null-separator",
-                                       "verbosity=", "write-sig-to=", "ignore-errors"])
+        optlist, args = getopt.getopt(arglist, u"v:Vz",
+                                      [u"gzip-compress", u"exclude=", u"exclude-device-files",
+                                       u"exclude-filelist=", u"exclude-filelist-stdin",
+                                       u"exclude-globbing-filelist", u"exclude-other-filesystems",
+                                       u"exclude-regexp=", u"include=", u"include-filelist=",
+                                       u"include-filelist-stdin", u"include-globbing-filelist",
+                                       u"include-regexp=", u"max-blocksize", u"null-separator",
+                                       u"verbosity=", u"write-sig-to=", u"ignore-errors"])
     except getopt.error as e:
-        command_line_error("Bad command line option: %s" % (str(e),))
+        command_line_error(u"Bad command line option: %s" % (str(e),))
 
     for opt, arg in optlist:
-        if opt == "--gzip_compress" or opt == "-z":
+        if opt == u"--gzip_compress" or opt == u"-z":
             gzip_compress = 1
-        elif (opt == "--exclude" or opt == "--exclude-regexp" or
-              opt == "--include" or opt == "--include-regexp"):
+        elif (opt == u"--exclude" or opt == u"--exclude-regexp" or
+              opt == u"--include" or opt == u"--include-regexp"):
             select_opts.append((opt, arg))
-        elif (opt == "--exclude-device-files" or
-              opt == "--exclude-other-filesystems"):
+        elif (opt == u"--exclude-device-files" or
+              opt == u"--exclude-other-filesystems"):
             select_opts.append((opt, None))
-        elif (opt == "--exclude-filelist" or opt == "--include-filelist" or
-              opt == "--exclude-globbing-filelist" or
-              opt == "--include-globbing-filelist"):
+        elif (opt == u"--exclude-filelist" or opt == u"--include-filelist" or
+              opt == u"--exclude-globbing-filelist" or
+              opt == u"--include-globbing-filelist"):
             select_opts.append((opt, arg))
             select_files.append(sel_fl(arg))
-        elif opt == "--exclude-filelist-stdin":
-            select_opts.append(("--exclude-filelist", "standard input"))
-            select_files.append(sys.stdin)
-        elif opt == "--include-filelist-stdin":
-            select_opts.append(("--include-filelist", "standard input"))
-            select_files.append(sys.stdin)
-        elif opt == "--max-blocksize":
+        elif opt == u"--exclude-filelist-stdin":
+            select_opts.append((u"--exclude-filelist", u"standard input"))
+            select_files.append(sys.stdin)
+        elif opt == u"--include-filelist-stdin":
+            select_opts.append((u"--include-filelist", u"standard input"))
+            select_files.append(sys.stdin)
+        elif opt == u"--max-blocksize":
             globals.max_blocksize = int(arg)
-        elif opt == "--null-separator":
+        elif opt == u"--null-separator":
             globals.null_separator = 1
-        elif opt == "-V":
-            print "rdiffdir", str(globals.version)
+        elif opt == u"-V":
+            print(u"rdiffdir", str(globals.version))
             sys.exit(0)
-        elif opt == "-v" or opt == "--verbosity":
+        elif opt == u"-v" or opt == u"--verbosity":
             log.setverbosity(int(arg))
-        elif opt == "--write-sig-to" or opt == "--write-signature-to":
-            sig_fileobj = get_fileobj(arg, "wb")
-        elif opt == "--ignore-errors":
+        elif opt == u"--write-sig-to" or opt == u"--write-signature-to":
+            sig_fileobj = get_fileobj(arg, u"wb")
+        elif opt == u"--ignore-errors":
             globals.ignore_errors = 1
         else:
-            command_line_error("Unknown option %s" % opt)
+            command_line_error(u"Unknown option %s" % opt)
 
     return args
 
 
 def command_line_error(message):
-    """Indicate a command line error and exit"""
-    sys.stderr.write("Error: %s\n" % (message,))
-    sys.stderr.write("See the rdiffdir manual page for instructions\n")
+    u"""Indicate a command line error and exit"""
+    sys.stderr.write(u"Error: %s\n" % (message,))
+    sys.stderr.write(u"See the rdiffdir manual page for instructions\n")
     sys.exit(1)
 
 
 def check_does_not_exist(filename):
-    """Exit with error message if filename already exists"""
+    u"""Exit with error message if filename already exists"""
     try:
         os.lstat(filename)
     except OSError:
         pass
     else:
-        log.FatalError(_("File %s already exists, will not "
-                         "overwrite.") % util.ufn(filename))
+        log.FatalError(_(u"File %s already exists, will not "
+                         u"overwrite.") % util.fsdecode(filename))
 
 
 def get_action(args):
-    """Figure out the main action from the arguments"""
+    u"""Figure out the main action from the arguments"""
     def require_args(num, upper_bound_too=True):
         if len(args) - 1 < num:
-            command_line_error("Too few arguments")
+            command_line_error(u"Too few arguments")
         elif upper_bound_too and len(args) - 1 > num:
-            command_line_error("Too many arguments")
+            command_line_error(u"Too many arguments")
 
     if not args:
-        command_line_error("No arguments found")
+        command_line_error(u"No arguments found")
     command = args[0]
-    if command == "sig" or command == "signature":
-        require_args(2)
-        command = "sig"
-    elif command == "tar":
-        require_args(2)
-    elif command == "delta":
+    if command == u"sig" or command == u"signature":
+        require_args(2)
+        command = u"sig"
+    elif command == u"tar":
+        require_args(2)
+    elif command == u"delta":
         require_args(3, False)
-    elif command == "patch":
+    elif command == u"patch":
         require_args(2)
     return command, args[1:]
 
 
 def get_selection(filename):
-    """Return selection iter starting at path with arguments applied"""
+    u"""Return selection iter starting at path with arguments applied"""
     global select_opts, select_files
     sel = selection.Select(path.Path(filename))
     sel.ParseArgs(select_opts, select_files)
@@ -161,20 +163,20 @@
 
 
 def get_fileobj(filename, mode):
-    """Get file object or stdin/stdout from filename"""
-    if mode == "r" or mode == "rb":
-        if filename == "-":
+    u"""Get file object or stdin/stdout from filename"""
+    if mode == u"r" or mode == u"rb":
+        if filename == u"-":
             fp = sys.stdin
         else:
             fp = open(filename, mode)
-    elif mode == "w" or mode == "wb":
-        if filename == "-":
+    elif mode == u"w" or mode == u"wb":
+        if filename == u"-":
             fp = sys.stdout
         else:
             check_does_not_exist(filename)
             fp = open(filename, mode)
     else:
-        assert 0, "Unknown mode " + str(mode)
+        assert 0, u"Unknown mode " + str(mode)
 
     if gzip_compress:
         return gzip.GzipFile(None, fp.mode, 9, fp)
@@ -183,19 +185,19 @@
 
 
 def write_sig(dirname, outfp):
-    """Write signature of dirname into file object outfp"""
+    u"""Write signature of dirname into file object outfp"""
     diffdir.write_block_iter(diffdir.DirSig(get_selection(dirname)), outfp)
 
 
 def write_delta(dirname, sig_infp, outfp):
-    """Write delta to fileobj outfp, reading from dirname and sig_infp"""
+    u"""Write delta to fileobj outfp, reading from dirname and sig_infp"""
     delta_iter = diffdir.DirDelta(get_selection(dirname), sig_infp)
     diffdir.write_block_iter(delta_iter, outfp)
     assert not outfp.close()
 
 
 def write_delta_and_sig(dirname, sig_infp, outfp, sig_outfp):
-    """Write delta and also signature of dirname"""
+    u"""Write delta and also signature of dirname"""
     sel = get_selection(dirname)
     delta_iter = diffdir.DirDelta_WriteSig(sel, sig_infp, sig_outfp)
     diffdir.write_block_iter(delta_iter, outfp)
@@ -203,48 +205,48 @@
 
 
 def patch(dirname, deltafp):
-    """Patch dirname, reading delta tar from deltafp"""
+    u"""Patch dirname, reading delta tar from deltafp"""
     patchdir.Patch(path.Path(dirname), deltafp)
 
 
 def write_tar(dirname, outfp):
-    """Store dirname into a tarfile, write to outfp"""
+    u"""Store dirname into a tarfile, write to outfp"""
     diffdir.write_block_iter(diffdir.DirFull(get_selection(dirname)), outfp)
 
 
 def write_tar_and_sig(dirname, outfp, sig_outfp):
-    """Write tar of dirname to outfp, signature of same to sig_outfp"""
+    u"""Write tar of dirname to outfp, signature of same to sig_outfp"""
     full_iter = diffdir.DirFull_WriteSig(get_selection(dirname), sig_outfp)
     diffdir.write_block_iter(full_iter, outfp)
 
 
 def main():
-    """Start here"""
+    u"""Start here"""
     log.setup()
     args = parse_cmdline_options(sys.argv[1:])
     action, file_args = get_action(args)
-    if action == "sig":
-        write_sig(file_args[0], get_fileobj(file_args[1], "wb"))
-    elif action == "delta":
-        sig_infp = [get_fileobj(fname, "rb") for fname in file_args[0:-2]]
-        delta_outfp = get_fileobj(file_args[-1], "wb")
+    if action == u"sig":
+        write_sig(file_args[0], get_fileobj(file_args[1], u"wb"))
+    elif action == u"delta":
+        sig_infp = [get_fileobj(fname, u"rb") for fname in file_args[0:-2]]
+        delta_outfp = get_fileobj(file_args[-1], u"wb")
         if sig_fileobj:
             write_delta_and_sig(file_args[-2], sig_infp,
                                 delta_outfp, sig_fileobj)
         else:
             write_delta(file_args[-2], sig_infp, delta_outfp)
-    elif action == "patch":
-        patch(file_args[0], get_fileobj(file_args[1], "rb"))
-    elif action == "tar":
+    elif action == u"patch":
+        patch(file_args[0], get_fileobj(file_args[1], u"rb"))
+    elif action == u"tar":
         if sig_fileobj:
             write_tar_and_sig(file_args[0],
-                              get_fileobj(file_args[1], "wb"),
+                              get_fileobj(file_args[1], u"wb"),
                               sig_fileobj)
         else:
-            write_tar(file_args[0], get_fileobj(file_args[1], "wb"))
+            write_tar(file_args[0], get_fileobj(file_args[1], u"wb"))
     else:
-        command_line_error("Bad command " + action)
-
-
-if __name__ == "__main__":
+        command_line_error(u"Bad command " + action)
+
+
+if __name__ == u"__main__":
     main()
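
Reading get_action() above, rdiffdir accepts sig, tar, delta and patch commands
with positional file arguments. A small end-to-end sketch under those
assumptions; directory and file names are placeholders:

    # Illustrative only: signature, delta, patch round trip with rdiffdir.
    import subprocess

    # Write a signature of the current state of a directory.
    subprocess.check_call(["rdiffdir", "sig", "/srv/data", "/tmp/data.sigtar"])

    # Later, compute a delta of the (changed) directory against that signature.
    subprocess.check_call(["rdiffdir", "delta", "/tmp/data.sigtar",
                           "/srv/data", "/tmp/data.difftar"])

    # Apply the delta to a copy of the original directory to bring it up to date.
    subprocess.check_call(["rdiffdir", "patch", "/srv/data.copy", "/tmp/data.difftar"])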

=== modified file 'debian/control'
--- debian/control	2017-10-12 16:33:19 +0000
+++ debian/control	2019-02-22 19:07:43 +0000
@@ -11,6 +11,7 @@
                pylint,
                python-dev,
                python-fasteners,
+               python-future,
                python-mock,
                python-pexpect,
                python-setuptools,
@@ -28,6 +29,7 @@
          ${shlibs:Depends},
          gnupg,
          python-fasteners,
+         python-future,
          python-pexpect,
 Suggests: ncftp,
           python-boto,

=== added directory 'docs'
=== added file 'docs/Makefile.OTHER'
--- docs/Makefile.OTHER	1970-01-01 00:00:00 +0000
+++ docs/Makefile.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line.
+SPHINXOPTS    =
+SPHINXBUILD   = sphinx-build
+SPHINXPROJ    = duplicity-src8
+SOURCEDIR     = .
+BUILDDIR      = _build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
\ No newline at end of file

=== added file 'docs/conf.py.OTHER'
--- docs/conf.py.OTHER	1970-01-01 00:00:00 +0000
+++ docs/conf.py.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,187 @@
+# -*- coding: utf-8 -*-
+#
+# Configuration file for the Sphinx documentation builder.
+#
+# This file does only contain a selection of the most common options. For a
+# full list see the documentation:
+# http://www.sphinx-doc.org/en/master/config
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+# import os
+# import sys
+# sys.path.insert(0, u'/Users/ken/workspace/duplicity-src8')
+
+
+# -- Project information -----------------------------------------------------
+
+project = u'duplicity-src8'
+copyright = u'2018, Author'
+author = u'Author'
+
+# The short X.Y version
+version = u''
+# The full version, including alpha/beta/rc tags
+release = u''
+
+
+# -- General configuration ---------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+    'sphinx.ext.autodoc',
+    'sphinx.ext.viewcode',
+    'sphinx.ext.todo',
+]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+# source_suffix = ['.rst', '.md']
+source_suffix = '.rst'
+
+# The master toctree document.
+master_doc = 'index'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = 'en'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path .
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'alabaster'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further.  For a list of options available for each theme, see the
+# documentation.
+#
+# html_theme_options = {}
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# Custom sidebar templates, must be a dictionary that maps document names
+# to template names.
+#
+# The default sidebars (for documents that don't match any pattern) are
+# defined by theme itself.  Builtin themes are using these templates by
+# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
+# 'searchbox.html']``.
+#
+# html_sidebars = {}
+
+
+# -- Options for HTMLHelp output ---------------------------------------------
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'duplicity-src8doc'
+
+
+# -- Options for LaTeX output ------------------------------------------------
+
+latex_elements = {
+    # The paper size ('letterpaper' or 'a4paper').
+    #
+    # 'papersize': 'letterpaper',
+
+    # The font size ('10pt', '11pt' or '12pt').
+    #
+    # 'pointsize': '10pt',
+
+    # Additional stuff for the LaTeX preamble.
+    #
+    # 'preamble': '',
+
+    # Latex figure (float) alignment
+    #
+    # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+#  author, documentclass [howto, manual, or own class]).
+latex_documents = [
+    (master_doc, 'duplicity-src8.tex', u'duplicity-src8 Documentation',
+     u'Author', 'manual'),
+]
+
+
+# -- Options for manual page output ------------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+    (master_doc, 'duplicity-src8', u'duplicity-src8 Documentation',
+     [author], 1)
+]
+
+
+# -- Options for Texinfo output ----------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+#  dir menu entry, description, category)
+texinfo_documents = [
+    (master_doc, 'duplicity-src8', u'duplicity-src8 Documentation',
+     author, 'duplicity-src8', 'One line description of project.',
+     'Miscellaneous'),
+]
+
+
+# -- Options for Epub output -------------------------------------------------
+
+# Bibliographic Dublin Core info.
+epub_title = project
+epub_author = author
+epub_publisher = author
+epub_copyright = copyright
+
+# The unique identifier of the text. This can be a ISBN number
+# or the project homepage.
+#
+# epub_identifier = ''
+
+# A unique identification for the text.
+#
+# epub_uid = ''
+
+# A list of files that should not be packed into the epub file.
+epub_exclude_files = ['search.html']
+
+
+# -- Extension configuration -------------------------------------------------
+
+# -- Options for todo extension ----------------------------------------------
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = True
\ No newline at end of file

=== added file 'docs/duplicity.backends.adbackend.rst'
--- docs/duplicity.backends.adbackend.rst	1970-01-01 00:00:00 +0000
+++ docs/duplicity.backends.adbackend.rst	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.backends.adbackend module
+===================================
+
+.. automodule:: duplicity.backends.adbackend
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.backends.jottacloudbackend.rst'
--- docs/duplicity.backends.jottacloudbackend.rst	1970-01-01 00:00:00 +0000
+++ docs/duplicity.backends.jottacloudbackend.rst	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.backends.jottacloudbackend module
+===========================================
+
+.. automodule:: duplicity.backends.jottacloudbackend
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.backends.pcabackend.rst'
--- docs/duplicity.backends.pcabackend.rst	1970-01-01 00:00:00 +0000
+++ docs/duplicity.backends.pcabackend.rst	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.backends.pcabackend module
+====================================
+
+.. automodule:: duplicity.backends.pcabackend
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.backends.pyrax_identity.hubic.rst.OTHER'
--- docs/duplicity.backends.pyrax_identity.hubic.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.backends.pyrax_identity.hubic.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.backends.pyrax\_identity.hubic module
+===============================================
+
+.. automodule:: duplicity.backends.pyrax_identity.hubic
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.backends.pyrax_identity.rst.OTHER'
--- docs/duplicity.backends.pyrax_identity.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.backends.pyrax_identity.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,17 @@
+duplicity.backends.pyrax\_identity package
+==========================================
+
+Submodules
+----------
+
+.. toctree::
+
+   duplicity.backends.pyrax_identity.hubic
+
+Module contents
+---------------
+
+.. automodule:: duplicity.backends.pyrax_identity
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.backends.rst.OTHER'
--- docs/duplicity.backends.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.backends.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,52 @@
+duplicity.backends package
+==========================
+
+Subpackages
+-----------
+
+.. toctree::
+
+    duplicity.backends.pyrax_identity
+
+Submodules
+----------
+
+.. toctree::
+
+   duplicity.backends.adbackend
+   duplicity.backends.azurebackend
+   duplicity.backends.b2backend
+   duplicity.backends.botobackend
+   duplicity.backends.cfbackend
+   duplicity.backends.dpbxbackend
+   duplicity.backends.gdocsbackend
+   duplicity.backends.giobackend
+   duplicity.backends.hsibackend
+   duplicity.backends.hubicbackend
+   duplicity.backends.imapbackend
+   duplicity.backends.jottacloudbackend
+   duplicity.backends.lftpbackend
+   duplicity.backends.localbackend
+   duplicity.backends.mediafirebackend
+   duplicity.backends.megabackend
+   duplicity.backends.multibackend
+   duplicity.backends.ncftpbackend
+   duplicity.backends.onedrivebackend
+   duplicity.backends.par2backend
+   duplicity.backends.pcabackend
+   duplicity.backends.pydrivebackend
+   duplicity.backends.rsyncbackend
+   duplicity.backends.ssh_paramiko_backend
+   duplicity.backends.ssh_pexpect_backend
+   duplicity.backends.swiftbackend
+   duplicity.backends.sxbackend
+   duplicity.backends.tahoebackend
+   duplicity.backends.webdavbackend
+
+Module contents
+---------------
+
+.. automodule:: duplicity.backends
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.backends.ssh_paramiko_backend.rst.OTHER'
--- docs/duplicity.backends.ssh_paramiko_backend.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.backends.ssh_paramiko_backend.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.backends.ssh\_paramiko\_backend module
+================================================
+
+.. automodule:: duplicity.backends.ssh_paramiko_backend
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.backends.ssh_pexpect_backend.rst.OTHER'
--- docs/duplicity.backends.ssh_pexpect_backend.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.backends.ssh_pexpect_backend.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.backends.ssh\_pexpect\_backend module
+===============================================
+
+.. automodule:: duplicity.backends.ssh_pexpect_backend
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.cached_ops.rst.OTHER'
--- docs/duplicity.cached_ops.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.cached_ops.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.cached\_ops module
+============================
+
+.. automodule:: duplicity.cached_ops
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.compilec.rst'
--- docs/duplicity.compilec.rst	1970-01-01 00:00:00 +0000
+++ docs/duplicity.compilec.rst	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.compilec module
+=========================
+
+.. automodule:: duplicity.compilec
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.dup_temp.rst.OTHER'
--- docs/duplicity.dup_temp.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.dup_temp.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.dup\_temp module
+==========================
+
+.. automodule:: duplicity.dup_temp
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.dup_threading.rst.OTHER'
--- docs/duplicity.dup_threading.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.dup_threading.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.dup\_threading module
+===============================
+
+.. automodule:: duplicity.dup_threading
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.dup_time.rst.OTHER'
--- docs/duplicity.dup_time.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.dup_time.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.dup\_time module
+==========================
+
+.. automodule:: duplicity.dup_time
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.file_naming.rst.OTHER'
--- docs/duplicity.file_naming.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.file_naming.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+duplicity.file\_naming module
+=============================
+
+.. automodule:: duplicity.file_naming
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/duplicity.rst.OTHER'
--- docs/duplicity.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/duplicity.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,53 @@
+duplicity package
+=================
+
+Subpackages
+-----------
+
+.. toctree::
+
+    duplicity.backends
+
+Submodules
+----------
+
+.. toctree::
+
+   duplicity.asyncscheduler
+   duplicity.backend
+   duplicity.cached_ops
+   duplicity.collections
+   duplicity.commandline
+   duplicity.compilec
+   duplicity.diffdir
+   duplicity.dup_temp
+   duplicity.dup_threading
+   duplicity.dup_time
+   duplicity.errors
+   duplicity.file_naming
+   duplicity.filechunkio
+   duplicity.globals
+   duplicity.globmatch
+   duplicity.gpg
+   duplicity.gpginterface
+   duplicity.lazy
+   duplicity.librsync
+   duplicity.log
+   duplicity.manifest
+   duplicity.patchdir
+   duplicity.path
+   duplicity.progress
+   duplicity.robust
+   duplicity.selection
+   duplicity.statistics
+   duplicity.tarfile
+   duplicity.tempdir
+   duplicity.util
+
+Module contents
+---------------
+
+.. automodule:: duplicity
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/index.rst.OTHER'
--- docs/index.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/index.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,23 @@
+.. duplicity-src8 documentation master file, created by
+   sphinx-quickstart on Tue Jul 24 09:22:22 2018.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+Welcome to duplicity-src8's documentation!
+==========================================
+
+.. toctree::
+   :maxdepth: 4
+   :caption: Contents:
+
+   duplicity
+   setup
+   testing
+
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`

=== added file 'docs/make.bat.OTHER'
--- docs/make.bat.OTHER	1970-01-01 00:00:00 +0000
+++ docs/make.bat.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,36 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=.
+set BUILDDIR=_build
+set SPHINXPROJ=duplicity-src8
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+	echo.
+	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+	echo.installed, then set the SPHINXBUILD environment variable to point
+	echo.to the full path of the 'sphinx-build' executable. Alternatively you
+	echo.may add the Sphinx directory to PATH.
+	echo.
+	echo.If you don't have Sphinx installed, grab it from
+	echo.http://sphinx-doc.org/
+	exit /b 1
+)
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+
+:end
+popd

=== added file 'docs/setup.rst'
--- docs/setup.rst	1970-01-01 00:00:00 +0000
+++ docs/setup.rst	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+setup module
+============
+
+.. automodule:: setup
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.find_unadorned_strings.rst'
--- docs/testing.find_unadorned_strings.rst	1970-01-01 00:00:00 +0000
+++ docs/testing.find_unadorned_strings.rst	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.find\_unadorned\_strings module
+=======================================
+
+.. automodule:: testing.find_unadorned_strings
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.fix_unadorned_strings.rst'
--- docs/testing.fix_unadorned_strings.rst	1970-01-01 00:00:00 +0000
+++ docs/testing.fix_unadorned_strings.rst	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.fix\_unadorned\_strings module
+======================================
+
+.. automodule:: testing.fix_unadorned_strings
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.rst.OTHER'
--- docs/testing.functional.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,25 @@
+testing.functional package
+==========================
+
+Submodules
+----------
+
+.. toctree::
+
+   testing.functional.test_badupload
+   testing.functional.test_cleanup
+   testing.functional.test_final
+   testing.functional.test_log
+   testing.functional.test_rdiffdir
+   testing.functional.test_replicate
+   testing.functional.test_restart
+   testing.functional.test_selection
+   testing.functional.test_verify
+
+Module contents
+---------------
+
+.. automodule:: testing.functional
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.test_badupload.rst.OTHER'
--- docs/testing.functional.test_badupload.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.test_badupload.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.functional.test\_badupload module
+=========================================
+
+.. automodule:: testing.functional.test_badupload
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.test_cleanup.rst.OTHER'
--- docs/testing.functional.test_cleanup.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.test_cleanup.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.functional.test\_cleanup module
+=======================================
+
+.. automodule:: testing.functional.test_cleanup
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.test_final.rst.OTHER'
--- docs/testing.functional.test_final.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.test_final.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.functional.test\_final module
+=====================================
+
+.. automodule:: testing.functional.test_final
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.test_log.rst.OTHER'
--- docs/testing.functional.test_log.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.test_log.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.functional.test\_log module
+===================================
+
+.. automodule:: testing.functional.test_log
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.test_rdiffdir.rst.OTHER'
--- docs/testing.functional.test_rdiffdir.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.test_rdiffdir.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.functional.test\_rdiffdir module
+========================================
+
+.. automodule:: testing.functional.test_rdiffdir
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.test_replicate.rst'
--- docs/testing.functional.test_replicate.rst	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.test_replicate.rst	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.functional.test\_replicate module
+=========================================
+
+.. automodule:: testing.functional.test_replicate
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.test_restart.rst.OTHER'
--- docs/testing.functional.test_restart.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.test_restart.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.functional.test\_restart module
+=======================================
+
+.. automodule:: testing.functional.test_restart
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.test_selection.rst.OTHER'
--- docs/testing.functional.test_selection.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.test_selection.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.functional.test\_selection module
+=========================================
+
+.. automodule:: testing.functional.test_selection
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.functional.test_verify.rst.OTHER'
--- docs/testing.functional.test_verify.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.functional.test_verify.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.functional.test\_verify module
+======================================
+
+.. automodule:: testing.functional.test_verify
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.manual.rst'
--- docs/testing.manual.rst	1970-01-01 00:00:00 +0000
+++ docs/testing.manual.rst	2019-02-22 19:07:43 +0000
@@ -0,0 +1,10 @@
+testing.manual package
+======================
+
+Module contents
+---------------
+
+.. automodule:: testing.manual
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.rst.OTHER'
--- docs/testing.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,29 @@
+testing package
+===============
+
+Subpackages
+-----------
+
+.. toctree::
+
+    testing.functional
+    testing.manual
+    testing.overrides
+    testing.unit
+
+Submodules
+----------
+
+.. toctree::
+
+   testing.find_unadorned_strings
+   testing.fix_unadorned_strings
+   testing.test_code
+
+Module contents
+---------------
+
+.. automodule:: testing
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.test_code.rst.OTHER'
--- docs/testing.test_code.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.test_code.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.test\_code module
+=========================
+
+.. automodule:: testing.test_code
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_backend.rst.OTHER'
--- docs/testing.unit.test_backend.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_backend.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_backend module
+=================================
+
+.. automodule:: testing.unit.test_backend
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_backend_instance.rst.OTHER'
--- docs/testing.unit.test_backend_instance.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_backend_instance.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_backend\_instance module
+===========================================
+
+.. automodule:: testing.unit.test_backend_instance
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_collections.rst.OTHER'
--- docs/testing.unit.test_collections.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_collections.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_collections module
+=====================================
+
+.. automodule:: testing.unit.test_collections
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_diffdir.rst.OTHER'
--- docs/testing.unit.test_diffdir.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_diffdir.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_diffdir module
+=================================
+
+.. automodule:: testing.unit.test_diffdir
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_dup_temp.rst.OTHER'
--- docs/testing.unit.test_dup_temp.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_dup_temp.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_dup\_temp module
+===================================
+
+.. automodule:: testing.unit.test_dup_temp
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_dup_time.rst.OTHER'
--- docs/testing.unit.test_dup_time.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_dup_time.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_dup\_time module
+===================================
+
+.. automodule:: testing.unit.test_dup_time
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_file_naming.rst.OTHER'
--- docs/testing.unit.test_file_naming.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_file_naming.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_file\_naming module
+======================================
+
+.. automodule:: testing.unit.test_file_naming
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_globmatch.rst.OTHER'
--- docs/testing.unit.test_globmatch.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_globmatch.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_globmatch module
+===================================
+
+.. automodule:: testing.unit.test_globmatch
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_gpg.rst.OTHER'
--- docs/testing.unit.test_gpg.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_gpg.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_gpg module
+=============================
+
+.. automodule:: testing.unit.test_gpg
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_gpginterface.rst.OTHER'
--- docs/testing.unit.test_gpginterface.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_gpginterface.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_gpginterface module
+======================================
+
+.. automodule:: testing.unit.test_gpginterface
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_lazy.rst.OTHER'
--- docs/testing.unit.test_lazy.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_lazy.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_lazy module
+==============================
+
+.. automodule:: testing.unit.test_lazy
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_manifest.rst.OTHER'
--- docs/testing.unit.test_manifest.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_manifest.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_manifest module
+==================================
+
+.. automodule:: testing.unit.test_manifest
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_patchdir.rst.OTHER'
--- docs/testing.unit.test_patchdir.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_patchdir.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_patchdir module
+==================================
+
+.. automodule:: testing.unit.test_patchdir
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_path.rst.OTHER'
--- docs/testing.unit.test_path.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_path.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_path module
+==============================
+
+.. automodule:: testing.unit.test_path
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_selection.rst.OTHER'
--- docs/testing.unit.test_selection.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_selection.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_selection module
+===================================
+
+.. automodule:: testing.unit.test_selection
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_statistics.rst.OTHER'
--- docs/testing.unit.test_statistics.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_statistics.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_statistics module
+====================================
+
+.. automodule:: testing.unit.test_statistics
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_tarfile.rst.OTHER'
--- docs/testing.unit.test_tarfile.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_tarfile.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_tarfile module
+=================================
+
+.. automodule:: testing.unit.test_tarfile
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== added file 'docs/testing.unit.test_tempdir.rst.OTHER'
--- docs/testing.unit.test_tempdir.rst.OTHER	1970-01-01 00:00:00 +0000
+++ docs/testing.unit.test_tempdir.rst.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,7 @@
+testing.unit.test\_tempdir module
+=================================
+
+.. automodule:: testing.unit.test_tempdir
+    :members:
+    :undoc-members:
+    :show-inheritance:

=== modified file 'duplicity/__init__.py'
--- duplicity/__init__.py	2014-04-16 20:45:09 +0000
+++ duplicity/__init__.py	2019-02-22 19:07:43 +0000
@@ -19,5 +19,9 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+import sys
 import gettext
-gettext.install('duplicity', unicode=True, names=['ngettext'])
+if sys.version_info.major >= 3:
+    gettext.install(u'duplicity', names=[u'ngettext'])
+else:
+    gettext.install(u'duplicity', names=[u'ngettext'], unicode=True)
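
On both interpreters gettext.install() binds _ and ngettext into the builtins
namespace, which is what lets the ported modules below call _() without
importing it. A rough, self-contained sketch of that behaviour (the message
strings are only examples):

    # Illustrative sketch only -- not part of the proposed diff.
    import gettext
    gettext.install(u'duplicity', names=[u'ngettext'])    # installs _ and ngettext as builtins

    print(_(u"instantiating at concurrency %d") % 2)      # falls back to the untranslated string
    print(ngettext(u"%d volume", u"%d volumes", 3) % 3)   # plural-aware lookup with the same fallback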

=== modified file 'duplicity/_librsyncmodule.c'
--- duplicity/_librsyncmodule.c	2017-08-06 21:10:28 +0000
+++ duplicity/_librsyncmodule.c	2019-02-22 19:07:43 +0000
@@ -89,7 +89,11 @@
   rs_buffers_t buf;
   rs_result result;
 
+#if PY_MAJOR_VERSION >= 3
+  if (!PyArg_ParseTuple(args, "y#:cycle", &inbuf, &inbuf_length))
+#else
   if (!PyArg_ParseTuple(args, "s#:cycle", &inbuf, &inbuf_length))
+#endif
     return NULL;
 
   buf.next_in = inbuf;
@@ -105,7 +109,11 @@
     return NULL;
   }
 
+#if PY_MAJOR_VERSION >= 3
+  return Py_BuildValue("(ily#)", (result == RS_DONE),
+#else
   return Py_BuildValue("(ils#)", (result == RS_DONE),
+#endif
                        (long)inbuf_length - (long)buf.avail_in,
                        outbuf, RS_JOB_BLOCKSIZE - (long)buf.avail_out);
 }
@@ -169,7 +177,11 @@
   rs_buffers_t buf;
   rs_result result;
 
+#if PY_MAJOR_VERSION >= 3
+  if (!PyArg_ParseTuple(args,"y#:new_deltamaker", &sig_string, &sig_length))
+#else
   if (!PyArg_ParseTuple(args,"s#:new_deltamaker", &sig_string, &sig_length))
+#endif
     return NULL;
 
   dm = PyObject_New(_librsync_DeltaMakerObject, &_librsync_DeltaMakerType);
@@ -222,7 +234,11 @@
   rs_buffers_t buf;
   rs_result result;
 
+#if PY_MAJOR_VERSION >= 3
+  if (!PyArg_ParseTuple(args, "y#:cycle", &inbuf, &inbuf_length))
+#else
   if (!PyArg_ParseTuple(args, "s#:cycle", &inbuf, &inbuf_length))
+#endif
     return NULL;
 
   buf.next_in = inbuf;
@@ -237,7 +253,11 @@
     return NULL;
   }
 
+#if PY_MAJOR_VERSION >= 3
+  return Py_BuildValue("(ily#)", (result == RS_DONE),
+#else
   return Py_BuildValue("(ils#)", (result == RS_DONE),
+#endif
                        (long)inbuf_length - (long)buf.avail_in,
                        outbuf, RS_JOB_BLOCKSIZE - (long)buf.avail_out);
 }
@@ -351,7 +371,11 @@
   rs_buffers_t buf;
   rs_result result;
 
+#if PY_MAJOR_VERSION >= 3
+  if (!PyArg_ParseTuple(args, "y#:cycle", &inbuf, &inbuf_length))
+#else
   if (!PyArg_ParseTuple(args, "s#:cycle", &inbuf, &inbuf_length))
+#endif
     return NULL;
 
   buf.next_in = inbuf;
@@ -366,7 +390,11 @@
     return NULL;
   }
 
+#if PY_MAJOR_VERSION >= 3
+  return Py_BuildValue("(ily#)", (result == RS_DONE),
+#else
   return Py_BuildValue("(ils#)", (result == RS_DONE),
+#endif
                        (long)inbuf_length - (long)buf.avail_in,
                        outbuf, RS_JOB_BLOCKSIZE - (long)buf.avail_out);
 }
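
The "s#" -> "y#" switches above mean that, when built for Python 3, the
extension exchanges bytes objects with its callers instead of str. A hedged
sketch of the distinction the Python-side callers have to respect (the
variable names are placeholders, not the module's API):

    # Illustrative sketch only -- names are placeholders.
    import sys

    text = u"one block of file data"
    if sys.version_info.major >= 3:
        buf = text.encode(u"utf-8")   # "y#" accepts only bytes-like input; passing str raises TypeError
    else:
        buf = text                    # "s#" on Python 2 accepts plain (byte) strings
    # buf is the kind of value a caller would feed into the sig/delta/patch cycle() loops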

=== modified file 'duplicity/asyncscheduler.py'
--- duplicity/asyncscheduler.py	2015-01-31 23:30:49 +0000
+++ duplicity/asyncscheduler.py	2019-02-22 19:07:43 +0000
@@ -20,11 +20,15 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-"""
+u"""
 Asynchronous job scheduler, for concurrent execution with minimalistic
 dependency guarantees.
 """
 
+from future import standard_library
+standard_library.install_aliases()
+from builtins import object
+import gettext
 import duplicity
 from duplicity import log
 from duplicity.dup_threading import require_threading
@@ -36,8 +40,8 @@
 threading = duplicity.dup_threading.threading_module()
 
 
-class AsyncScheduler:
-    """
+class AsyncScheduler(object):
+    u"""
     Easy-to-use scheduler of function calls to be executed
     concurrently. A very simple dependency mechanism exists in the
     form of barriers (see insert_barrier()).
@@ -59,14 +63,14 @@
     """
 
     def __init__(self, concurrency):
-        """
+        u"""
         Create an asynchronous scheduler that executes jobs with the
         given level of concurrency.
         """
-        log.Info("%s: %s" % (self.__class__.__name__,
-                             _("instantiating at concurrency %d") %
-                             (concurrency)))
-        assert concurrency >= 0, "%s concurrency level must be >= 0" % (self.__class__.__name__,)
+        log.Info(u"%s: %s" % (self.__class__.__name__,
+                              _(u"instantiating at concurrency %d") %
+                              (concurrency)))
+        assert concurrency >= 0, u"%s concurrency level must be >= 0" % (self.__class__.__name__,)
 
         self.__failed = False  # has at least one task failed so far?
         self.__failed_waiter = None  # when __failed, the waiter of the first task that failed
@@ -79,10 +83,10 @@
 #                                                    # are not technically efficient.
 
         if concurrency > 0:
-            require_threading("concurrency > 0 (%d)" % (concurrency,))
+            require_threading(u"concurrency > 0 (%d)" % (concurrency,))
 
     def insert_barrier(self):
-        """
+        u"""
         Proclaim that any tasks scheduled prior to the call to this
         method MUST be executed prior to any tasks scheduled after the
         call to this method.
@@ -91,7 +95,7 @@
         barrier must be inserted in between to guarantee that A
         happens before B.
         """
-        log.Debug("%s: %s" % (self.__class__.__name__, _("inserting barrier")))
+        log.Debug(u"%s: %s" % (self.__class__.__name__, _(u"inserting barrier")))
         # With concurrency 0 it's a NOOP, and due to the special case in
         # task scheduling we do not want to append to the queue (will never
         # be popped).
@@ -102,7 +106,7 @@
             with_lock(self.__cv, _insert_barrier)
 
     def schedule_task(self, fn, params):
-        """
+        u"""
         Schedule the given task (callable, typically function) for
         execution. Pass the given parameters to the function when
         calling it. Returns a callable which can optionally be used
@@ -139,20 +143,20 @@
         if self.__concurrency == 0:
             # special case this to not require any platform support for
             # threading at all
-            log.Info("%s: %s" % (self.__class__.__name__,
-                     _("running task synchronously (asynchronicity disabled)")),
+            log.Info(u"%s: %s" % (self.__class__.__name__,
+                     _(u"running task synchronously (asynchronicity disabled)")),
                      log.InfoCode.synchronous_upload_begin)
 
             return self.__run_synchronously(fn, params)
         else:
-            log.Info("%s: %s" % (self.__class__.__name__,
-                     _("scheduling task for asynchronous execution")),
+            log.Info(u"%s: %s" % (self.__class__.__name__,
+                     _(u"scheduling task for asynchronous execution")),
                      log.InfoCode.asynchronous_upload_begin)
 
             return self.__run_asynchronously(fn, params)
 
     def wait(self):
-        """
+        u"""
         Wait for the scheduler to become entirely empty (i.e., all
         tasks having run to completion).
 
@@ -174,8 +178,8 @@
         def _waiter():
             return ret
 
-        log.Info("%s: %s" % (self.__class__.__name__,
-                 _("task completed successfully")),
+        log.Info(u"%s: %s" % (self.__class__.__name__,
+                 _(u"task completed successfully")),
                  log.InfoCode.synchronous_upload_done)
 
         return _waiter
@@ -185,19 +189,19 @@
 
         def check_pending_failure():
             if self.__failed:
-                log.Info("%s: %s" % (self.__class__.__name__,
-                         _("a previously scheduled task has failed; "
-                           "propagating the result immediately")),
+                log.Info(u"%s: %s" % (self.__class__.__name__,
+                         _(u"a previously scheduled task has failed; "
+                           u"propagating the result immediately")),
                          log.InfoCode.asynchronous_upload_done)
                 self.__failed_waiter()
-                raise AssertionError("%s: waiter should have raised an exception; "
-                                     "this is a bug" % (self.__class__.__name__,))
+                raise AssertionError(u"%s: waiter should have raised an exception; "
+                                     u"this is a bug" % (self.__class__.__name__,))
 
         def wait_for_and_register_launch():
             check_pending_failure()  # raise on fail
             while self.__worker_count >= self.__concurrency or self.__barrier:
                 if self.__worker_count == 0:
-                    assert self.__barrier, "barrier should be in effect"
+                    assert self.__barrier, u"barrier should be in effect"
                     self.__barrier = False
                     self.__cv.notifyAll()
                 else:
@@ -208,8 +212,8 @@
                 check_pending_failure()  # raise on fail
 
             self.__worker_count += 1
-            log.Debug("%s: %s" % (self.__class__.__name__,
-                                  _("active workers = %d") % (self.__worker_count,)))
+            log.Debug(u"%s: %s" % (self.__class__.__name__,
+                                   _(u"active workers = %d") % (self.__worker_count,)))
 
         # simply wait for an OK condition to start, then launch our worker. the worker
         # never waits on us, we just wait for them.
@@ -220,7 +224,7 @@
         return waiter
 
     def __start_worker(self, caller):
-        """
+        u"""
         Start a new worker.
         """
         def trampoline():
@@ -229,8 +233,8 @@
             finally:
                 def complete_worker():
                     self.__worker_count -= 1
-                    log.Debug("%s: %s" % (self.__class__.__name__,
-                                          _("active workers = %d") % (self.__worker_count,)))
+                    log.Debug(u"%s: %s" % (self.__class__.__name__,
+                                           _(u"active workers = %d") % (self.__worker_count,)))
                     self.__cv.notifyAll()
                 with_lock(self.__cv, complete_worker)
 
@@ -249,6 +253,6 @@
                         self.__cv.notifyAll()
                 with_lock(self.__cv, _signal_failed)
 
-            log.Info("%s: %s" % (self.__class__.__name__,
-                     _("task execution done (success: %s)") % succeeded),
+            log.Info(u"%s: %s" % (self.__class__.__name__,
+                     _(u"task execution done (success: %s)") % succeeded),
                      log.InfoCode.asynchronous_upload_done)
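
For reference, a short usage sketch of the AsyncScheduler API touched above.
The upload function and volume names are hypothetical, and log.setup() is
assumed to be available to initialise duplicity's logging; the scheduler calls
mirror the ones shown in the diff:

    # Illustrative sketch only -- not part of the proposed diff.
    from duplicity import log
    from duplicity.asyncscheduler import AsyncScheduler

    log.setup()                                # scheduler Info/Debug calls need logging initialised

    def upload(volume):                        # hypothetical task body
        print(u"uploading %s" % volume)

    sched = AsyncScheduler(concurrency=2)      # concurrency 0 runs every task synchronously
    waiters = [sched.schedule_task(upload, (v,)) for v in (u"vol1", u"vol2")]
    sched.insert_barrier()                     # tasks above must complete before tasks below start
    sched.schedule_task(upload, (u"vol3",))
    sched.wait()                               # block until all scheduled tasks have finished
    for waiter in waiters:
        waiter()                               # optionally collect a task's result, or re-raise its error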

=== modified file 'duplicity/backend.py'
--- duplicity/backend.py	2017-11-23 13:09:42 +0000
+++ duplicity/backend.py	2019-02-22 19:07:43 +0000
@@ -19,11 +19,16 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-"""
+u"""
 Provides a common interface to all backends and certain services
 intended to be used by the backends themselves.
 """
 
+from future import standard_library
+standard_library.install_aliases()
+from builtins import str
+from builtins import range
+from builtins import object
 import errno
 import os
 import sys
@@ -34,8 +39,9 @@
 import gettext
 import re
 import types
-import urllib
-import urlparse
+import urllib.request  # pylint: disable=import-error
+import urllib.parse  # pylint: disable=import-error
+import urllib.error  # pylint: disable=import-error
 
 from duplicity import dup_temp
 from duplicity import file_naming
@@ -56,7 +62,6 @@
 
 import duplicity.backends
 
-
 # todo: this should really NOT be done here
 socket.setdefaulttimeout(globals.timeout)
 
@@ -78,7 +83,7 @@
 
 
 def import_backends():
-    """
+    u"""
     Import files in the duplicity/backends directory where
     the filename ends in 'backend.py' and ignore the rest.
 
@@ -86,26 +91,26 @@
     @return: void
     """
     path = duplicity.backends.__path__[0]
-    assert path.endswith("duplicity/backends"), duplicity.backends.__path__
+    assert path.endswith(u"duplicity/backends"), duplicity.backends.__path__
 
     files = os.listdir(path)
     files.sort()
     for fn in files:
-        if fn.endswith("backend.py"):
+        if fn.endswith(u"backend.py"):
             fn = fn[:-3]
-            imp = "duplicity.backends.%s" % (fn,)
+            imp = u"duplicity.backends.%s" % (fn,)
             try:
                 __import__(imp)
-                res = "Succeeded"
+                res = u"Succeeded"
             except Exception:
-                res = "Failed: " + str(sys.exc_info()[1])
-            log.Log(_("Import of %s %s") % (imp, res), log.INFO)
+                res = u"Failed: " + str(sys.exc_info()[1])
+            log.Log(_(u"Import of %s %s") % (imp, res), log.INFO)
         else:
             continue
 
 
 def register_backend(scheme, backend_factory):
-    """
+    u"""
     Register a given backend factory responsible for URL:s with the
     given scheme.
 
@@ -120,18 +125,18 @@
     """
     global _backends
 
-    assert callable(backend_factory), "backend factory must be callable"
+    assert callable(backend_factory), u"backend factory must be callable"
 
     if scheme in _backends:
-        raise ConflictingScheme("the scheme %s already has a backend "
-                                "associated with it"
-                                "" % (scheme,))
+        raise ConflictingScheme(u"the scheme %s already has a backend "
+                                u"associated with it"
+                                u"" % (scheme,))
 
     _backends[scheme] = backend_factory
 
 
 def register_backend_prefix(scheme, backend_factory):
-    """
+    u"""
     Register a given backend factory responsible for URL:s with the
     given scheme prefix.
 
@@ -146,25 +151,25 @@
     """
     global _backend_prefixes
 
-    assert callable(backend_factory), "backend factory must be callable"
+    assert callable(backend_factory), u"backend factory must be callable"
 
     if scheme in _backend_prefixes:
-        raise ConflictingScheme("the prefix %s already has a backend "
-                                "associated with it"
-                                "" % (scheme,))
+        raise ConflictingScheme(u"the prefix %s already has a backend "
+                                u"associated with it"
+                                u"" % (scheme,))
 
     _backend_prefixes[scheme] = backend_factory
 
 
 def strip_prefix(url_string, prefix_scheme):
-    """
+    u"""
     strip the prefix from a string e.g. par2+ftp://... -> ftp://...
     """
-    return re.sub('(?i)^' + re.escape(prefix_scheme) + '\+', '', url_string)
+    return re.sub(r'(?i)^' + re.escape(prefix_scheme) + r'\+', r'', url_string)
 
 
 def is_backend_url(url_string):
-    """
+    u"""
     @return Whether the given string looks like a backend URL.
     """
     pu = ParsedUrl(url_string)
@@ -177,7 +182,7 @@
 
 
 def get_backend_object(url_string):
-    """
+    u"""
     Find the right backend class instance for the given URL, or return None
     if the given string looks like a local path rather than a URL.
 
@@ -189,12 +194,12 @@
     global _backends, _backend_prefixes
 
     pu = ParsedUrl(url_string)
-    assert pu.scheme, "should be a backend url according to is_backend_url"
+    assert pu.scheme, u"should be a backend url according to is_backend_url"
 
     factory = None
 
     for prefix in _backend_prefixes:
-        if url_string.startswith(prefix + '+'):
+        if url_string.startswith(prefix + u'+'):
             factory = _backend_prefixes[prefix]
             pu = ParsedUrl(strip_prefix(url_string, prefix))
             break
@@ -208,26 +213,26 @@
     try:
         return factory(pu)
     except ImportError:
-        raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1]))
+        raise BackendException(_(u"Could not initialize backend: %s") % str(sys.exc_info()[1]))
 
 
 def get_backend(url_string):
-    """
+    u"""
     Instantiate a backend suitable for the given URL, or return None
     if the given string looks like a local path rather than a URL.
 
     Raise InvalidBackendURL if the URL is not a valid URL.
     """
     if globals.use_gio:
-        url_string = 'gio+' + url_string
+        url_string = u'gio+' + url_string
     obj = get_backend_object(url_string)
     if obj:
         obj = BackendWrapper(obj)
     return obj
 
 
-class ParsedUrl:
-    """
+class ParsedUrl(object):
+    u"""
     Parse the given URL as a duplicity backend URL.
 
     Returns the data of a parsed URL with the same names as that of
@@ -242,7 +247,7 @@
 
         # Python < 2.6.5 still examines urlparse.uses_netloc when parsing urls,
         # so stuff our custom list in there before we parse.
-        urlparse.uses_netloc = uses_netloc
+        urllib.parse.uses_netloc = uses_netloc
 
         # While useful in some cases, the fact is that the urlparser makes
         # all the properties in the URL deferred or lazy.  This means that
@@ -250,49 +255,49 @@
         # problems here, so they will be caught early.
 
         try:
-            pu = urlparse.urlparse(url_string)
+            pu = urllib.parse.urlparse(url_string)
         except Exception:
-            raise InvalidBackendURL("Syntax error in: %s" % url_string)
+            raise InvalidBackendURL(u"Syntax error in: %s" % url_string)
 
         try:
             self.scheme = pu.scheme
         except Exception:
-            raise InvalidBackendURL("Syntax error (scheme) in: %s" % url_string)
+            raise InvalidBackendURL(u"Syntax error (scheme) in: %s" % url_string)
 
         try:
             self.netloc = pu.netloc
         except Exception:
-            raise InvalidBackendURL("Syntax error (netloc) in: %s" % url_string)
+            raise InvalidBackendURL(u"Syntax error (netloc) in: %s" % url_string)
 
         try:
             self.path = pu.path
             if self.path:
-                self.path = urllib.unquote(self.path)
+                self.path = urllib.parse.unquote(self.path)
         except Exception:
-            raise InvalidBackendURL("Syntax error (path) in: %s" % url_string)
+            raise InvalidBackendURL(u"Syntax error (path) in: %s" % url_string)
 
         try:
             self.username = pu.username
         except Exception:
-            raise InvalidBackendURL("Syntax error (username) in: %s" % url_string)
+            raise InvalidBackendURL(u"Syntax error (username) in: %s" % url_string)
         if self.username:
-            self.username = urllib.unquote(pu.username)
+            self.username = urllib.parse.unquote(pu.username)
         else:
             self.username = None
 
         try:
             self.password = pu.password
         except Exception:
-            raise InvalidBackendURL("Syntax error (password) in: %s" % url_string)
+            raise InvalidBackendURL(u"Syntax error (password) in: %s" % url_string)
         if self.password:
-            self.password = urllib.unquote(self.password)
+            self.password = urllib.parse.unquote(self.password)
         else:
             self.password = None
 
         try:
             self.hostname = pu.hostname
         except Exception:
-            raise InvalidBackendURL("Syntax error (hostname) in: %s" % url_string)
+            raise InvalidBackendURL(u"Syntax error (hostname) in: %s" % url_string)
 
         # init to None, overwrite with actual value on success
         self.port = None
@@ -300,21 +305,21 @@
             self.port = pu.port
         except Exception:  # not raised in python2.7+, just returns None
             # old style rsync://host::[/]dest, are still valid, though they contain no port
-            if not (self.scheme in ['rsync'] and re.search('::[^:]*$', self.url_string)):
-                raise InvalidBackendURL("Syntax error (port) in: %s A%s B%s C%s" %
-                                        (url_string, (self.scheme in ['rsync']),
-                                         re.search('::[^:]+$', self.netloc), self.netloc))
+            if not (self.scheme in [u'rsync'] and re.search(u'::[^:]*$', self.url_string)):
+                raise InvalidBackendURL(u"Syntax error (port) in: %s A%s B%s C%s" %
+                                        (url_string, (self.scheme in [u'rsync']),
+                                         re.search(u'::[^:]+$', self.netloc), self.netloc))
 
         # Our URL system uses two slashes more than urlparse's does when using
         # non-netloc URLs.  And we want to make sure that if urlparse is
         # assuming a netloc where we don't want one, we correct it.
         if self.scheme not in uses_netloc:
             if self.netloc:
-                self.path = '//' + self.netloc + self.path
-                self.netloc = ''
+                self.path = u'//' + self.netloc + self.path
+                self.netloc = u''
                 self.hostname = None
-            elif not self.path.startswith('//') and self.path.startswith('/'):
-                self.path = '//' + self.path
+            elif not self.path.startswith(u'//') and self.path.startswith(u'/'):
+                self.path = u'//' + self.path
 
         # This happens for implicit local paths.
         if not self.scheme:
@@ -322,33 +327,33 @@
 
         # Our backends do not handle implicit hosts.
         if self.scheme in uses_netloc and not self.hostname:
-            raise InvalidBackendURL("Missing hostname in a backend URL which "
-                                    "requires an explicit hostname: %s"
-                                    "" % (url_string))
+            raise InvalidBackendURL(u"Missing hostname in a backend URL which "
+                                    u"requires an explicit hostname: %s"
+                                    u"" % (url_string))
 
         # Our backends do not handle implicit relative paths.
-        if self.scheme not in uses_netloc and not self.path.startswith('//'):
-            raise InvalidBackendURL("missing // - relative paths not supported "
-                                    "for scheme %s: %s"
-                                    "" % (self.scheme, url_string))
+        if self.scheme not in uses_netloc and not self.path.startswith(u'//'):
+            raise InvalidBackendURL(u"missing // - relative paths not supported "
+                                    u"for scheme %s: %s"
+                                    u"" % (self.scheme, url_string))
 
     def geturl(self):
         return self.url_string
 
 
 def strip_auth_from_url(parsed_url):
-    """Return a URL from a urlparse object without a username or password."""
+    u"""Return a URL from a urlparse object without a username or password."""
 
-    clean_url = re.sub('^([^:/]+://)(.*@)?(.*)', r'\1\3', parsed_url.geturl())
+    clean_url = re.sub(u'^([^:/]+://)(.*@)?(.*)', r'\1\3', parsed_url.geturl())
     return clean_url
 
 
 def _get_code_from_exception(backend, operation, e):
     if isinstance(e, BackendException) and e.code != log.ErrorCode.backend_error:
         return e.code
-    elif hasattr(backend, '_error_code'):
+    elif hasattr(backend, u'_error_code'):
         return backend._error_code(operation, e) or log.ErrorCode.backend_error
-    elif hasattr(e, 'errno'):
+    elif hasattr(e, u'errno'):
         # A few backends return such errors (local, paramiko, etc)
         if e.errno == errno.EACCES:
             return log.ErrorCode.backend_permission_denied
@@ -372,7 +377,7 @@
                     raise e
                 except Exception as e:
                     # retry on anything else
-                    log.Debug(_("Backtrace of previous error: %s")
+                    log.Debug(_(u"Backtrace of previous error: %s")
                               % exception_traceback())
                     at_end = n == globals.num_retries
                     code = _get_code_from_exception(self.backend, operation, e)
@@ -383,22 +388,22 @@
                     if at_end and fatal:
                         def make_filename(f):
                             if isinstance(f, path.ROPath):
-                                return util.escape(f.name)
+                                return util.escape(f.uc_name)
                             else:
                                 return util.escape(f)
-                        extra = ' '.join([operation] + [make_filename(x) for x in args if x])
-                        log.FatalError(_("Giving up after %s attempts. %s: %s")
+                        extra = u' '.join([operation] + [make_filename(x) for x in args if (x and isinstance(x, str))])
+                        log.FatalError(_(u"Giving up after %s attempts. %s: %s")
                                        % (n, e.__class__.__name__,
                                           util.uexc(e)), code=code, extra=extra)
                     else:
-                        log.Warn(_("Attempt %s failed. %s: %s")
+                        log.Warn(_(u"Attempt %s failed. %s: %s")
                                  % (n, e.__class__.__name__, util.uexc(e)))
                     if not at_end:
                         if isinstance(e, TemporaryLoadException):
                             time.sleep(3 * globals.backend_retry_delay)  # wait longer before trying again
                         else:
                             time.sleep(globals.backend_retry_delay)  # wait a bit before trying again
-                        if hasattr(self.backend, '_retry_cleanup'):
+                        if hasattr(self.backend, u'_retry_cleanup'):
                             self.backend._retry_cleanup()
 
         return inner_retry
@@ -406,17 +411,17 @@
 
 
 class Backend(object):
-    """
+    u"""
     See README in backends directory for information on how to write a backend.
     """
     def __init__(self, parsed_url):
         self.parsed_url = parsed_url
 
-    """ use getpass by default, inherited backends may overwrite this behaviour """
+    u""" use getpass by default, inherited backends may overwrite this behaviour """
     use_getpass = True
 
     def get_password(self):
-        """
+        u"""
         Return a password for authentication purposes. The password
         will be obtained from the backend URL, the environment, by
         asking the user, or by some other method. When applicable, the
@@ -426,18 +431,18 @@
             return self.parsed_url.password
 
         try:
-            password = os.environ['FTP_PASSWORD']
+            password = os.environ[u'FTP_PASSWORD']
         except KeyError:
             if self.use_getpass:
-                password = getpass.getpass("Password for '%s@%s': " %
+                password = getpass.getpass(u"Password for '%s@%s': " %
                                            (self.parsed_url.username, self.parsed_url.hostname))
-                os.environ['FTP_PASSWORD'] = password
+                os.environ[u'FTP_PASSWORD'] = password
             else:
                 password = None
         return password
 
     def munge_password(self, commandline):
-        """
+        u"""
         Remove password from commandline by substituting the password
         found in the URL, if any, with a generic place-holder.
 
@@ -450,8 +455,13 @@
         else:
             return commandline
 
+<<<<<<< TREE
     def __subprocess_popen(self, args):
         """
+=======
+    def __subprocess_popen(self, args):
+        u"""
+>>>>>>> MERGE-SOURCE
         For internal use.
         Execute the given command line, interpreted as a shell command.
         Returns int Exitcode, string StdOut, string StdErr
@@ -464,17 +474,18 @@
 
         return p.returncode, stdout, stderr
 
-    """ a dictionary for breaking exceptions, syntax is
+    u""" a dictionary for breaking exceptions, syntax is
         { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """
     popen_breaks = {}
 
     def subprocess_popen(self, commandline):
-        """
+        u"""
         Execute the given command line with error check.
         Returns int Exitcode, string StdOut, string StdErr
 
         Raise a BackendException on failure.
         """
+<<<<<<< TREE
         import shlex
 
         if isinstance(commandline, (types.ListType, types.TupleType)):
@@ -488,20 +499,40 @@
         log.Info(_("Reading results of '%s'") % logstr)
 
         result, stdout, stderr = self.__subprocess_popen(args)
+=======
+        import shlex
+
+        if isinstance(commandline, (list, tuple)):
+            logstr = u' '.join(commandline)
+            args = commandline
+        else:
+            logstr = commandline
+            args = shlex.split(commandline)
+
+        logstr = self.munge_password(logstr)
+        log.Info(_(u"Reading results of '%s'") % logstr)
+
+        result, stdout, stderr = self.__subprocess_popen(args)
+>>>>>>> MERGE-SOURCE
         if result != 0:
             try:
                 ignores = self.popen_breaks[args[0]]
                 ignores.index(result)
-                """ ignore a predefined set of error codes """
-                return 0, '', ''
+                u""" ignore a predefined set of error codes """
+                return 0, u'', u''
             except (KeyError, ValueError):
+<<<<<<< TREE
                 raise BackendException("Error running '%s': returned %d, with output:\n%s" %
                                        (logstr, result, stdout + '\n' + stderr))
+=======
+                raise BackendException(u"Error running '%s': returned %d, with output:\n%s" %
+                                       (logstr, result, stdout.decode() + u'\n' + stderr.decode()))
+>>>>>>> MERGE-SOURCE
         return result, stdout, stderr
 
 
 class BackendWrapper(object):
-    """
+    u"""
     Represents a generic duplicity backend, capable of storing and
     retrieving files.
     """
@@ -510,15 +541,15 @@
         self.backend = backend
 
     def __do_put(self, source_path, remote_filename):
-        if hasattr(self.backend, '_put'):
-            log.Info(_("Writing %s") % util.ufn(remote_filename))
+        if hasattr(self.backend, u'_put'):
+            log.Info(_(u"Writing %s") % util.fsdecode(remote_filename))
             self.backend._put(source_path, remote_filename)
         else:
             raise NotImplementedError()
 
-    @retry('put', fatal=True)
+    @retry(u'put', fatal=True)
     def put(self, source_path, remote_filename=None):
-        """
+        u"""
         Transfer source_path (Path object) to remote_filename (string)
 
         If remote_filename is None, get the filename from the last
@@ -528,9 +559,9 @@
             remote_filename = source_path.get_filename()
         self.__do_put(source_path, remote_filename)
 
-    @retry('move', fatal=True)
+    @retry(u'move', fatal=True)
     def move(self, source_path, remote_filename=None):
-        """
+        u"""
         Move source_path (Path object) to remote_filename (string)
 
         Same as put(), but unlinks source_path in the process.  This allows the
@@ -538,41 +569,45 @@
         """
         if not remote_filename:
             remote_filename = source_path.get_filename()
-        if hasattr(self.backend, '_move'):
+        if hasattr(self.backend, u'_move'):
             if self.backend._move(source_path, remote_filename) is not False:
                 source_path.setdata()
                 return
         self.__do_put(source_path, remote_filename)
         source_path.delete()
 
-    @retry('get', fatal=True)
+    @retry(u'get', fatal=True)
     def get(self, remote_filename, local_path):
-        """Retrieve remote_filename and place in local_path"""
-        if hasattr(self.backend, '_get'):
+        u"""Retrieve remote_filename and place in local_path"""
+        if hasattr(self.backend, u'_get'):
             self.backend._get(remote_filename, local_path)
             local_path.setdata()
             if not local_path.exists():
-                raise BackendException(_("File %s not found locally after get "
-                                         "from backend") % util.ufn(local_path.name))
+                raise BackendException(_(u"File %s not found locally after get "
+                                         u"from backend") % local_path.uc_name)
         else:
             raise NotImplementedError()
 
-    @retry('list', fatal=True)
+    @retry(u'list', fatal=True)
     def list(self):
-        """
+        u"""
         Return list of filenames (byte strings) present in backend
         """
         def tobytes(filename):
-            "Convert a (maybe unicode) filename to bytes"
-            if isinstance(filename, unicode):
+            u"Convert a (maybe unicode) filename to bytes"
+            if isinstance(filename, str):
                 # There shouldn't be any encoding errors for files we care
                 # about, since duplicity filenames are ascii.  But user files
                 # may be in the same directory.  So just replace characters.
+<<<<<<< TREE
                 return filename.encode(globals.fsencoding, 'replace')
+=======
+                return util.fsencode(filename)
+>>>>>>> MERGE-SOURCE
             else:
                 return filename
 
-        if hasattr(self.backend, '_list'):
+        if hasattr(self.backend, u'_list'):
             # Make sure that duplicity internals only ever see byte strings
             # for filenames, no matter what the backend thinks it is talking.
             return [tobytes(x) for x in self.backend._list()]
@@ -580,26 +615,26 @@
             raise NotImplementedError()
 
     def delete(self, filename_list):
-        """
+        u"""
         Delete each filename in filename_list, in order if possible.
         """
-        assert not isinstance(filename_list, types.StringType)
-        if hasattr(self.backend, '_delete_list'):
+        assert not isinstance(filename_list, bytes)
+        if hasattr(self.backend, u'_delete_list'):
             self._do_delete_list(filename_list)
-        elif hasattr(self.backend, '_delete'):
+        elif hasattr(self.backend, u'_delete'):
             for filename in filename_list:
                 self._do_delete(filename)
         else:
             raise NotImplementedError()
 
-    @retry('delete', fatal=False)
+    @retry(u'delete', fatal=False)
     def _do_delete_list(self, filename_list):
         while filename_list:
             sublist = filename_list[:100]
             self.backend._delete_list(sublist)
             filename_list = filename_list[100:]
 
-    @retry('delete', fatal=False)
+    @retry(u'delete', fatal=False)
     def _do_delete(self, filename):
         self.backend._delete(filename)
 
@@ -614,15 +649,15 @@
     # Returned dictionary is guaranteed to contain a metadata dictionary for
     # each filename, and all metadata are guaranteed to be present.
     def query_info(self, filename_list):
-        """
+        u"""
         Return metadata about each filename in filename_list
         """
         info = {}
-        if hasattr(self.backend, '_query_list'):
+        if hasattr(self.backend, u'_query_list'):
             info = self._do_query_list(filename_list)
             if info is None:
                 info = {}
-        elif hasattr(self.backend, '_query'):
+        elif hasattr(self.backend, u'_query'):
             for filename in filename_list:
                 info[filename] = self._do_query(filename)
 
@@ -631,39 +666,39 @@
         for filename in filename_list:
             if filename not in info or info[filename] is None:
                 info[filename] = {}
-            for metadata in ['size']:
+            for metadata in [u'size']:
                 info[filename].setdefault(metadata, None)
 
         return info
 
-    @retry('query', fatal=False)
+    @retry(u'query', fatal=False)
     def _do_query_list(self, filename_list):
         info = self.backend._query_list(filename_list)
         if info is None:
             info = {}
         return info
 
-    @retry('query', fatal=False)
+    @retry(u'query', fatal=False)
     def _do_query(self, filename):
         try:
             return self.backend._query(filename)
         except Exception as e:
-            code = _get_code_from_exception(self.backend, 'query', e)
+            code = _get_code_from_exception(self.backend, u'query', e)
             if code == log.ErrorCode.backend_not_found:
-                return {'size': -1}
+                return {u'size': -1}
             else:
                 raise e
 
     def close(self):
-        """
+        u"""
         Close the backend, releasing any resources held and
         invalidating any file objects obtained from the backend.
         """
-        if hasattr(self.backend, '_close'):
+        if hasattr(self.backend, u'_close'):
             self.backend._close()
 
     def get_fileobj_read(self, filename, parseresults=None):
-        """
+        u"""
         Return fileobject opened for reading of filename on backend
 
         The file will be downloaded first into a temp file.  When the
@@ -671,14 +706,14 @@
         """
         if not parseresults:
             parseresults = file_naming.parse(filename)
-            assert parseresults, "Filename not correctly parsed"
+            assert parseresults, u"Filename not correctly parsed"
         tdp = dup_temp.new_tempduppath(parseresults)
         self.get(filename, tdp)
         tdp.setdata()
-        return tdp.filtered_open_with_delete("rb")
+        return tdp.filtered_open_with_delete(u"rb")
 
     def get_data(self, filename, parseresults=None):
-        """
+        u"""
         Retrieve a file from backend, process it, return contents.
         """
         fin = self.get_fileobj_read(filename, parseresults)

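For reference, the BackendWrapper changes above dispatch to optional _put, _get, _list, _delete and _query hooks on the concrete backend via hasattr() checks. A minimal sketch of a backend that satisfies that contract is shown below; the local-directory storage, the _path() helper and the 'sketch' scheme name are illustrative assumptions of this sketch, not part of the merge:

import os
import shutil

import duplicity.backend
from duplicity.errors import BackendException


class SketchBackend(duplicity.backend.Backend):
    u"""Toy backend storing volumes in a local directory."""

    def __init__(self, parsed_url):
        duplicity.backend.Backend.__init__(self, parsed_url)
        self.base = parsed_url.path  # assumption: path names a local directory
        if not os.path.isdir(self.base):
            raise BackendException(u"Directory %s does not exist" % self.base)

    def _path(self, filename):
        # duplicity internals may pass filenames as byte strings
        if isinstance(filename, bytes):
            filename = filename.decode(u'utf-8')
        return os.path.join(self.base, filename)

    def _put(self, source_path, remote_filename):
        shutil.copyfile(source_path.name, self._path(remote_filename))

    def _get(self, remote_filename, local_path):
        shutil.copyfile(self._path(remote_filename), local_path.name)

    def _list(self):
        return os.listdir(self.base)

    def _delete(self, filename):
        os.unlink(self._path(filename))

    def _query(self, filename):
        try:
            return {u'size': os.path.getsize(self._path(filename))}
        except OSError:
            return {u'size': -1}


duplicity.backend.register_backend(u'sketch', SketchBackend)
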
=== modified file 'duplicity/backends/__init__.py'
--- duplicity/backends/__init__.py	2009-07-15 14:02:09 +0000
+++ duplicity/backends/__init__.py	2019-02-22 19:07:43 +0000
@@ -19,7 +19,7 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-"""
+u"""
 Imports of backends should not be done directly in this module.  All
 backend imports are done via import_backends() in backend.py.  This
 file is only to instantiate the duplicity.backends module itself.

=== modified file 'duplicity/backends/_boto_multi.py'
--- duplicity/backends/_boto_multi.py	2016-06-28 21:03:46 +0000
+++ duplicity/backends/_boto_multi.py	2019-02-22 19:07:43 +0000
@@ -20,10 +20,15 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from __future__ import division
+from future import standard_library
+standard_library.install_aliases()
+from builtins import range
+from past.utils import old_div
 import os
 import sys
 import threading
-import Queue
+import queue
 import time
 import traceback
 
@@ -36,18 +41,18 @@
 from ._boto_single import BotoBackend as BotoSingleBackend
 from ._boto_single import get_connection
 
-BOTO_MIN_VERSION = "2.1.1"
+BOTO_MIN_VERSION = u"2.1.1"
 
 # Multiprocessing is not supported on *BSD
-if sys.platform not in ('darwin', 'linux2'):
+if sys.platform not in (u'darwin', u'linux2'):
     from multiprocessing import dummy as multiprocessing
-    log.Debug('Multiprocessing is not supported on %s, will use threads instead.' % sys.platform)
+    log.Debug(u'Multiprocessing is not supported on %s, will use threads instead.' % sys.platform)
 else:
     import multiprocessing
 
 
 class ConsumerThread(threading.Thread):
-    """
+    u"""
     A background thread that collects all written bytes from all
     the pool workers, and reports it to the progress module.
     Wakes up every second to check for termination
@@ -63,12 +68,12 @@
             try:
                 args = self.queue.get(True, 1)
                 progress.report_transfer(args[0], args[1])
-            except Queue.Empty as e:
+            except queue.Empty as e:
                 pass
 
 
 class BotoBackend(BotoSingleBackend):
-    """
+    u"""
     Backend for Amazon's Simple Storage System, (aka Amazon S3), though
     the use of the boto module, (http://code.google.com/p/boto/).
 
@@ -81,6 +86,10 @@
 
     def __init__(self, parsed_url):
         BotoSingleBackend.__init__(self, parsed_url)
+        try:
+            import boto
+        except ImportError:
+            raise
         self._setup_pool()
 
     def _setup_pool(self):
@@ -88,29 +97,28 @@
         if not number_of_procs:
             number_of_procs = multiprocessing.cpu_count()
 
-        if getattr(self, '_pool', False):
-            log.Debug("A process pool already exists. Destroying previous pool.")
+        if getattr(self, u'_pool', False):
+            log.Debug(u"A process pool already exists. Destroying previous pool.")
             self._pool.terminate()
             self._pool.join()
             self._pool = None
 
-        log.Debug("Setting multipart boto backend process pool to %d processes" % number_of_procs)
+        log.Debug(u"Setting multipart boto backend process pool to %d processes" % number_of_procs)
 
         self._pool = multiprocessing.Pool(processes=number_of_procs)
 
     def _close(self):
         BotoSingleBackend._close(self)
-        log.Debug("Closing pool")
+        log.Debug(u"Closing pool")
         self._pool.terminate()
         self._pool.join()
 
     def upload(self, filename, key, headers=None):
-        import boto
         chunk_size = globals.s3_multipart_chunk_size
 
         # Check minimum chunk size for S3
         if chunk_size < globals.s3_multipart_minimum_chunk_size:
-            log.Warn("Minimum chunk size is %d, but %d specified." % (
+            log.Warn(u"Minimum chunk size is %d, but %d specified." % (
                 globals.s3_multipart_minimum_chunk_size, chunk_size))
             chunk_size = globals.s3_multipart_minimum_chunk_size
 
@@ -119,11 +127,11 @@
         if bytes < chunk_size:
             chunks = 1
         else:
-            chunks = bytes / chunk_size
+            chunks = old_div(bytes, chunk_size)
             if (bytes % chunk_size):
                 chunks += 1
 
-        log.Debug("Uploading %d bytes in %d chunks" % (bytes, chunks))
+        log.Debug(u"Uploading %d bytes in %d chunks" % (bytes, chunks))
 
         mp = self.bucket.initiate_multipart_upload(key.key, headers, encrypt_key=globals.s3_use_sse)
 
@@ -143,7 +151,7 @@
                       queue]
             tasks.append(self._pool.apply_async(multipart_upload_worker, params))
 
-        log.Debug("Waiting for the pool to finish processing %s tasks" % len(tasks))
+        log.Debug(u"Waiting for the pool to finish processing %s tasks" % len(tasks))
         while tasks:
             try:
                 tasks[0].wait(timeout=globals.s3_multipart_max_timeout)
@@ -151,18 +159,18 @@
                     if tasks[0].successful():
                         del tasks[0]
                     else:
-                        log.Debug("Part upload not successful, aborting multipart upload.")
+                        log.Debug(u"Part upload not successful, aborting multipart upload.")
                         self._setup_pool()
                         break
                 else:
                     raise multiprocessing.TimeoutError
             except multiprocessing.TimeoutError:
-                log.Debug("%s tasks did not finish by the specified timeout,"
-                          "aborting multipart upload and resetting pool." % len(tasks))
+                log.Debug(u"%s tasks did not finish by the specified timeout,"
+                          u"aborting multipart upload and resetting pool." % len(tasks))
                 self._setup_pool()
                 break
 
-        log.Debug("Done waiting for the pool to finish processing")
+        log.Debug(u"Done waiting for the pool to finish processing")
 
         # Terminate the consumer thread, if any
         if globals.progress:
@@ -171,14 +179,14 @@
 
         if len(tasks) > 0 or len(mp.get_all_parts()) < chunks:
             mp.cancel_upload()
-            raise BackendException("Multipart upload failed. Aborted.")
+            raise BackendException(u"Multipart upload failed. Aborted.")
 
         return mp.complete_upload()
 
 
 def multipart_upload_worker(scheme, parsed_url, storage_uri, bucket_name, multipart_id,
                             filename, offset, bytes, num_retries, queue):
-    """
+    u"""
     Worker method for uploading a file chunk to S3 using multipart upload.
     Note that the file chunk is read into memory, so it's important to keep
     this number reasonably small.
@@ -186,29 +194,30 @@
 
     def _upload_callback(uploaded, total):
         worker_name = multiprocessing.current_process().name
-        log.Debug("%s: Uploaded %s/%s bytes" % (worker_name, uploaded, total))
+        log.Debug(u"%s: Uploaded %s/%s bytes" % (worker_name, uploaded, total))
         if queue is not None:
             queue.put([uploaded, total])  # Push data to the consumer thread
 
     def _upload(num_retries):
         worker_name = multiprocessing.current_process().name
-        log.Debug("%s: Uploading chunk %d" % (worker_name, offset + 1))
+        log.Debug(u"%s: Uploading chunk %d" % (worker_name, offset + 1))
         try:
             conn = get_connection(scheme, parsed_url, storage_uri)
             bucket = conn.lookup(bucket_name)
 
             for mp in bucket.list_multipart_uploads():
                 if mp.id == multipart_id:
-                    with FileChunkIO(filename, 'r', offset=offset * bytes, bytes=bytes) as fd:
+                    with FileChunkIO(filename, u'r', offset=offset * bytes, bytes=bytes) as fd:
                         start = time.time()
                         mp.upload_part_from_file(fd, offset + 1, cb=_upload_callback,
                                                  num_cb=max(2, 8 * bytes / (1024 * 1024))
                                                  )  # Max num of callbacks = 8 times x megabyte
                         end = time.time()
-                        log.Debug(("{name}: Uploaded chunk {chunk}"
-                                  "at roughly {speed} bytes/second").format(name=worker_name,
-                                                                            chunk=offset + 1,
-                                                                            speed=(bytes / max(1, abs(end - start)))))
+                        log.Debug((u"{name}: Uploaded chunk {chunk}"
+                                   u"at roughly {speed} bytes/second").format(name=worker_name,
+                                                                              chunk=offset + 1,
+                                                                              speed=(old_div(bytes, max(1,
+                                                                                             abs(end - start))))))
                     break
             conn.close()
             conn = None
@@ -217,12 +226,12 @@
         except Exception as e:
             traceback.print_exc()
             if num_retries:
-                log.Debug("%s: Upload of chunk %d failed. Retrying %d more times..." % (
+                log.Debug(u"%s: Upload of chunk %d failed. Retrying %d more times..." % (
                     worker_name, offset + 1, num_retries - 1))
                 return _upload(num_retries - 1)
-            log.Debug("%s: Upload of chunk %d failed. Aborting..." % (
+            log.Debug(u"%s: Upload of chunk %d failed. Aborting..." % (
                 worker_name, offset + 1))
             raise e
-        log.Debug("%s: Upload of chunk %d complete" % (worker_name, offset + 1))
+        log.Debug(u"%s: Upload of chunk %d complete" % (worker_name, offset + 1))
 
     return _upload(num_retries)

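The division changes above follow the standard future/past transition idiom: once "from __future__ import division" is in effect, "/" on two integers returns a float, so old_div() from past.utils is substituted wherever the old floor behaviour (here, the multipart chunk count) must be preserved. A small self-contained illustration, with made-up sizes:

from __future__ import division

from past.utils import old_div  # shipped with the 'future' package

size = 160 * 1024 * 1024       # a 160 MiB source file (example value)
chunk_size = 25 * 1024 * 1024  # 25 MiB multipart chunks (example value)

print(size / chunk_size)       # 6.4  -- true division now yields a float

chunks = old_div(size, chunk_size)  # 6  -- same result as Python 2's integer '/'
if size % chunk_size:
    chunks += 1                # account for the final partial chunk
print(chunks)                  # 7
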
=== modified file 'duplicity/backends/_boto_single.py'
--- duplicity/backends/_boto_single.py	2016-11-21 20:19:04 +0000
+++ duplicity/backends/_boto_single.py	2019-02-22 19:07:43 +0000
@@ -19,6 +19,9 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from __future__ import division
+from builtins import str
+from past.utils import old_div
 import os
 import time
 
@@ -28,13 +31,13 @@
 from duplicity.errors import FatalBackendException, BackendException
 from duplicity import progress
 
-BOTO_MIN_VERSION = "2.1.1"
+BOTO_MIN_VERSION = u"2.1.1"
 
 
 def get_connection(scheme, parsed_url, storage_uri):
     try:
         from boto.s3.connection import S3Connection
-        assert hasattr(S3Connection, 'lookup')
+        assert hasattr(S3Connection, u'lookup')
 
         # Newer versions of boto default to using
         # virtual hosting for buckets as a result of
@@ -73,11 +76,11 @@
             if cfs_supported:
                 calling_format = SubdomainCallingFormat()
             else:
-                log.FatalError("Use of new-style (subdomain) S3 bucket addressing was"
-                               "requested, but does not seem to be supported by the "
-                               "boto library. Either you need to upgrade your boto "
-                               "library or duplicity has failed to correctly detect "
-                               "the appropriate support.",
+                log.FatalError(u"Use of new-style (subdomain) S3 bucket addressing was"
+                               u"requested, but does not seem to be supported by the "
+                               u"boto library. Either you need to upgrade your boto "
+                               u"library or duplicity has failed to correctly detect "
+                               u"the appropriate support.",
                                log.ErrorCode.boto_old_style)
         else:
             if cfs_supported:
@@ -86,23 +89,23 @@
                 calling_format = None
 
     except ImportError:
-        log.FatalError("This backend (s3) requires boto library, version %s or later, "
-                       "(http://code.google.com/p/boto/)." % BOTO_MIN_VERSION,
+        log.FatalError(u"This backend (s3) requires boto library, version %s or later, "
+                       u"(http://code.google.com/p/boto/)." % BOTO_MIN_VERSION,
                        log.ErrorCode.boto_lib_too_old)
 
     if not parsed_url.hostname:
         # Use the default host.
         conn = storage_uri.connect(is_secure=(not globals.s3_unencrypted_connection))
     else:
-        assert scheme == 's3'
+        assert scheme == u's3'
         conn = storage_uri.connect(host=parsed_url.hostname, port=parsed_url.port,
                                    is_secure=(not globals.s3_unencrypted_connection))
 
-    if hasattr(conn, 'calling_format'):
+    if hasattr(conn, u'calling_format'):
         if calling_format is None:
-            log.FatalError("It seems we previously failed to detect support for calling "
-                           "formats in the boto library, yet the support is there. This is "
-                           "almost certainly a duplicity bug.",
+            log.FatalError(u"It seems we previously failed to detect support for calling "
+                           u"formats in the boto library, yet the support is there. This is "
+                           u"almost certainly a duplicity bug.",
                            log.ErrorCode.boto_calling_format)
         else:
             conn.calling_format = calling_format
@@ -110,12 +113,12 @@
     else:
         # Duplicity hangs if boto gets a null bucket name.
         # HC: Caught a socket error, trying to recover
-        raise BackendException('Boto requires a bucket name.')
+        raise BackendException(u'Boto requires a bucket name.')
     return conn
 
 
 class BotoBackend(duplicity.backend.Backend):
-    """
+    u"""
     Backend for Amazon's Simple Storage System, (aka Amazon S3), though
     the use of the boto module, (http://code.google.com/p/boto/).
 
@@ -127,42 +130,46 @@
     """
 
     def __init__(self, parsed_url):
-        import boto
-        from boto.s3.connection import Location
         duplicity.backend.Backend.__init__(self, parsed_url)
 
+        try:
+            import boto
+            from boto.s3.connection import Location
+        except ImportError:
+            raise
+
         assert boto.Version >= BOTO_MIN_VERSION
 
         # This folds the null prefix and all null parts, which means that:
         #  //MyBucket/ and //MyBucket are equivalent.
         #  //MyBucket//My///My/Prefix/ and //MyBucket/My/Prefix are equivalent.
-        self.url_parts = [x for x in parsed_url.path.split('/') if x != '']
+        self.url_parts = [x for x in parsed_url.path.split(u'/') if x != u'']
 
         if self.url_parts:
             self.bucket_name = self.url_parts.pop(0)
         else:
             # Duplicity hangs if boto gets a null bucket name.
             # HC: Caught a socket error, trying to recover
-            raise BackendException('Boto requires a bucket name.')
+            raise BackendException(u'Boto requires a bucket name.')
 
         self.scheme = parsed_url.scheme
 
         if self.url_parts:
-            self.key_prefix = '%s/' % '/'.join(self.url_parts)
+            self.key_prefix = u'%s/' % u'/'.join(self.url_parts)
         else:
-            self.key_prefix = ''
+            self.key_prefix = u''
 
         self.straight_url = duplicity.backend.strip_auth_from_url(parsed_url)
         self.parsed_url = parsed_url
 
         # duplicity and boto.storage_uri() have different URI formats.
         # boto uses scheme://bucket[/name] and specifies hostname on connect()
-        self.boto_uri_str = '://'.join((parsed_url.scheme[:2],
-                                        parsed_url.path.lstrip('/')))
+        self.boto_uri_str = u'://'.join((parsed_url.scheme[:2],
+                                         parsed_url.path.lstrip(u'/')))
         if globals.s3_european_buckets:
             self.my_location = Location.EU
         else:
-            self.my_location = ''
+            self.my_location = u''
         self.resetConnection()
         self._listed_keys = {}
 
@@ -176,8 +183,7 @@
         del self.storage_uri
 
     def resetConnection(self):
-        import boto
-        if getattr(self, 'conn', False):
+        if getattr(self, u'conn', False):
             self.conn.close()
         self.bucket = None
         self.conn = None
@@ -198,15 +204,15 @@
     def _put(self, source_path, remote_filename):
         if globals.s3_european_buckets:
             if not globals.s3_use_new_style:
-                raise FatalBackendException("European bucket creation was requested, but not new-style "
-                                            "bucket addressing (--s3-use-new-style)",
+                raise FatalBackendException(u"European bucket creation was requested, but not new-style "
+                                            u"bucket addressing (--s3-use-new-style)",
                                             code=log.ErrorCode.s3_bucket_not_style)
 
         if self.bucket is None:
             try:
                 self.bucket = self.conn.get_bucket(self.bucket_name)
             except Exception as e:
-                if "NoSuchBucket" in str(e):
+                if u"NoSuchBucket" in str(e):
                     self.bucket = self.conn.create_bucket(self.bucket_name,
                                                           location=self.my_location)
                 else:
@@ -215,30 +221,45 @@
         key = self.bucket.new_key(self.key_prefix + remote_filename)
 
         if globals.s3_use_rrs:
-            storage_class = 'REDUCED_REDUNDANCY'
+            storage_class = u'REDUCED_REDUNDANCY'
         elif globals.s3_use_ia:
-            storage_class = 'STANDARD_IA'
+            storage_class = u'STANDARD_IA'
+        elif globals.s3_use_onezone_ia:
+            storage_class = u'ONEZONE_IA'
         else:
-            storage_class = 'STANDARD'
-        log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
+            storage_class = u'STANDARD'
+        log.Info(u"Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
         if globals.s3_use_sse:
             headers = {
+                u'Content-Type': u'application/octet-stream',
+                u'x-amz-storage-class': storage_class,
+                u'x-amz-server-side-encryption': u'AES256'
+            }
+        elif globals.s3_use_sse_kms:
+            if globals.s3_kms_key_id is None:
+                raise FatalBackendException("S3 USE SSE KMS was requested, but key id not provided "
+                                            "require (--s3-kms-key-id)",
+                                            code=log.ErrorCode.s3_kms_no_id)
+            headers = {
                 'Content-Type': 'application/octet-stream',
                 'x-amz-storage-class': storage_class,
-                'x-amz-server-side-encryption': 'AES256'
+                'x-amz-server-side-encryption': 'aws:kms',
+                'x-amz-server-side-encryption-aws-kms-key-id': globals.s3_kms_key_id
             }
+            if globals.s3_kms_grant is not None:
+                headers['x-amz-grant-full-control'] = globals.s3_kms_grant
         else:
             headers = {
-                'Content-Type': 'application/octet-stream',
-                'x-amz-storage-class': storage_class
+                u'Content-Type': u'application/octet-stream',
+                u'x-amz-storage-class': storage_class
             }
 
         upload_start = time.time()
         self.upload(source_path.name, key, headers)
         upload_end = time.time()
         total_s = abs(upload_end - upload_start) or 1  # prevent a zero value!
-        rough_upload_speed = os.path.getsize(source_path.name) / total_s
-        log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" %
+        rough_upload_speed = old_div(os.path.getsize(source_path.name), total_s)
+        log.Debug(u"Uploaded %s/%s to %s Storage at roughly %f bytes/second" %
                   (self.straight_url, remote_filename, storage_class,
                    rough_upload_speed))
 
@@ -251,7 +272,7 @@
 
     def _list(self):
         if not self.bucket:
-            raise BackendException("No connection to backend")
+            raise BackendException(u"No connection to backend")
         return self.list_filenames_in_bucket()
 
     def list_filenames_in_bucket(self):
@@ -267,10 +288,10 @@
         filename_list = []
         for k in self.bucket.list(prefix=self.key_prefix):
             try:
-                filename = k.key.replace(self.key_prefix, '', 1)
+                filename = k.key.replace(self.key_prefix, u'', 1)
                 filename_list.append(filename)
                 self._listed_keys[k.key] = k
-                log.Debug("Listed %s/%s" % (self.straight_url, filename))
+                log.Debug(u"Listed %s/%s" % (self.straight_url, filename))
             except AttributeError:
                 pass
         return filename_list
@@ -281,8 +302,8 @@
     def _query(self, filename):
         key = self.bucket.lookup(self.key_prefix + filename)
         if key is None:
-            return {'size': -1}
-        return {'size': key.size}
+            return {u'size': -1}
+        return {u'size': key.size}
 
     def upload(self, filename, key, headers):
         key.set_contents_from_filename(filename, headers,
@@ -298,14 +319,14 @@
             self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
         key = self._listed_keys[key_name]
 
-        if key.storage_class == "GLACIER":
+        if key.storage_class == u"GLACIER":
             # We need to move the file out of glacier
             if not self.bucket.get_key(key.key).ongoing_restore:
-                log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
+                log.Info(u"File %s is in Glacier storage, restoring to S3" % remote_filename)
                 key.restore(days=1)  # Shouldn't need this again after 1 day
             if wait:
-                log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
+                log.Info(u"Waiting for file %s to restore from Glacier" % remote_filename)
                 while self.bucket.get_key(key.key).ongoing_restore:
                     time.sleep(60)
                     self.resetConnection()
-                log.Info("File %s was successfully restored from Glacier" % remote_filename)
+                log.Info(u"File %s was successfully restored from Glacier" % remote_filename)

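The _boto_single changes add the ONEZONE_IA storage class and SSE-KMS support on top of the existing RRS/IA/SSE options. Pulled out of _put() into a free-standing function, the header selection looks roughly as follows; the keyword arguments mirror the globals used above (s3_use_rrs, s3_use_ia, s3_use_onezone_ia, s3_use_sse, s3_use_sse_kms, s3_kms_key_id, s3_kms_grant), and the function itself plus the example key alias are illustrative only:

def s3_upload_headers(use_rrs=False, use_ia=False, use_onezone_ia=False,
                      use_sse=False, use_sse_kms=False,
                      kms_key_id=None, kms_grant=None):
    u"""Sketch of the storage-class and encryption headers chosen in _put()."""
    if use_rrs:
        storage_class = u'REDUCED_REDUNDANCY'
    elif use_ia:
        storage_class = u'STANDARD_IA'
    elif use_onezone_ia:
        storage_class = u'ONEZONE_IA'
    else:
        storage_class = u'STANDARD'

    headers = {u'Content-Type': u'application/octet-stream',
               u'x-amz-storage-class': storage_class}

    if use_sse:
        headers[u'x-amz-server-side-encryption'] = u'AES256'
    elif use_sse_kms:
        if kms_key_id is None:
            raise ValueError(u'SSE-KMS requested but no key id given (--s3-kms-key-id)')
        headers[u'x-amz-server-side-encryption'] = u'aws:kms'
        headers[u'x-amz-server-side-encryption-aws-kms-key-id'] = kms_key_id
        if kms_grant is not None:
            headers[u'x-amz-grant-full-control'] = kms_grant

    return headers


# e.g. a One Zone-IA upload encrypted with a KMS key alias:
print(s3_upload_headers(use_onezone_ia=True, use_sse_kms=True,
                        kms_key_id=u'alias/my-backup-key'))
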
=== modified file 'duplicity/backends/_cf_cloudfiles.py'
--- duplicity/backends/_cf_cloudfiles.py	2017-07-11 14:55:38 +0000
+++ duplicity/backends/_cf_cloudfiles.py	2019-02-22 19:07:43 +0000
@@ -18,6 +18,7 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from builtins import str
 import os
 
 import duplicity.backend
@@ -27,7 +28,7 @@
 
 
 class CloudFilesBackend(duplicity.backend.Backend):
-    """
+    u"""
     Backend for Rackspace's CloudFiles
     """
     def __init__(self, parsed_url):
@@ -35,41 +36,41 @@
             from cloudfiles import Connection
             from cloudfiles.errors import ResponseError
             from cloudfiles import consts
+            from cloudfiles.errors import NoSuchObject
         except ImportError as e:
-            raise BackendException("""\
+            raise BackendException(u"""\
 Cloudfiles backend requires the cloudfiles library available from Rackspace.
 Exception: %s""" % str(e))
 
         self.resp_exc = ResponseError
         conn_kwargs = {}
 
-        if 'CLOUDFILES_USERNAME' not in os.environ:
-            raise BackendException('CLOUDFILES_USERNAME environment variable'
-                                   'not set.')
-
-        if 'CLOUDFILES_APIKEY' not in os.environ:
-            raise BackendException('CLOUDFILES_APIKEY environment variable not set.')
-
-        conn_kwargs['username'] = os.environ['CLOUDFILES_USERNAME']
-        conn_kwargs['api_key'] = os.environ['CLOUDFILES_APIKEY']
-
-        if 'CLOUDFILES_AUTHURL' in os.environ:
-            conn_kwargs['authurl'] = os.environ['CLOUDFILES_AUTHURL']
+        if u'CLOUDFILES_USERNAME' not in os.environ:
+            raise BackendException(u'CLOUDFILES_USERNAME environment variable'
+                                   u'not set.')
+
+        if u'CLOUDFILES_APIKEY' not in os.environ:
+            raise BackendException(u'CLOUDFILES_APIKEY environment variable not set.')
+
+        conn_kwargs[u'username'] = os.environ[u'CLOUDFILES_USERNAME']
+        conn_kwargs[u'api_key'] = os.environ[u'CLOUDFILES_APIKEY']
+
+        if u'CLOUDFILES_AUTHURL' in os.environ:
+            conn_kwargs[u'authurl'] = os.environ[u'CLOUDFILES_AUTHURL']
         else:
-            conn_kwargs['authurl'] = consts.default_authurl
+            conn_kwargs[u'authurl'] = consts.default_authurl
 
-        container = parsed_url.path.lstrip('/')
+        container = parsed_url.path.lstrip(u'/')
 
         try:
             conn = Connection(**conn_kwargs)
         except Exception as e:
-            log.FatalError("Connection failed, please check your credentials: %s %s"
+            log.FatalError(u"Connection failed, please check your credentials: %s %s"
                            % (e.__class__.__name__, util.uexc(e)),
                            log.ErrorCode.connection_failed)
         self.container = conn.create_container(container)
 
     def _error_code(self, operation, e):
-        from cloudfiles.errors import NoSuchObject
         if isinstance(e, NoSuchObject):
             return log.ErrorCode.backend_not_found
         elif isinstance(e, self.resp_exc):
@@ -82,7 +83,7 @@
 
     def _get(self, remote_filename, local_path):
         sobject = self.container.create_object(remote_filename)
-        with open(local_path.name, 'wb') as f:
+        with open(local_path.name, u'wb') as f:
             for chunk in sobject.stream():
                 f.write(chunk)
 
@@ -101,4 +102,4 @@
 
     def _query(self, filename):
         sobject = self.container.get_object(filename)
-        return {'size': sobject.size}
+        return {u'size': sobject.size}

=== modified file 'duplicity/backends/_cf_pyrax.py'
--- duplicity/backends/_cf_pyrax.py	2017-09-16 12:24:27 +0000
+++ duplicity/backends/_cf_pyrax.py	2019-02-22 19:07:43 +0000
@@ -18,6 +18,7 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from builtins import str
 import os
 
 import duplicity.backend
@@ -27,7 +28,7 @@
 
 
 class PyraxBackend(duplicity.backend.Backend):
-    """
+    u"""
     Backend for Rackspace's CloudFiles using Pyrax
     """
     def __init__(self, parsed_url):
@@ -36,40 +37,41 @@
         try:
             import pyrax
         except ImportError as e:
-            raise BackendException("""\
+            raise BackendException(u"""\
 Pyrax backend requires the pyrax library available from Rackspace.
 Exception: %s""" % str(e))
 
         # Inform Pyrax that we're talking to Rackspace
         # per Jesus Monzon (gsusmonzon)
-        pyrax.set_setting("identity_type", "rackspace")
+        pyrax.set_setting(u"identity_type", u"rackspace")
 
         conn_kwargs = {}
 
-        if 'CLOUDFILES_USERNAME' not in os.environ:
-            raise BackendException('CLOUDFILES_USERNAME environment variable'
-                                   'not set.')
-
-        if 'CLOUDFILES_APIKEY' not in os.environ:
-            raise BackendException('CLOUDFILES_APIKEY environment variable not set.')
-
-        conn_kwargs['username'] = os.environ['CLOUDFILES_USERNAME']
-        conn_kwargs['api_key'] = os.environ['CLOUDFILES_APIKEY']
-
-        if 'CLOUDFILES_REGION' in os.environ:
-            conn_kwargs['region'] = os.environ['CLOUDFILES_REGION']
-
-        container = parsed_url.path.lstrip('/')
+        if u'CLOUDFILES_USERNAME' not in os.environ:
+            raise BackendException(u'CLOUDFILES_USERNAME environment variable'
+                                   u'not set.')
+
+        if u'CLOUDFILES_APIKEY' not in os.environ:
+            raise BackendException(u'CLOUDFILES_APIKEY environment variable not set.')
+
+        conn_kwargs[u'username'] = os.environ[u'CLOUDFILES_USERNAME']
+        conn_kwargs[u'api_key'] = os.environ[u'CLOUDFILES_APIKEY']
+
+        if u'CLOUDFILES_REGION' in os.environ:
+            conn_kwargs[u'region'] = os.environ[u'CLOUDFILES_REGION']
+
+        container = parsed_url.path.lstrip(u'/')
 
         try:
             pyrax.set_credentials(**conn_kwargs)
         except Exception as e:
-            log.FatalError("Connection failed, please check your credentials: %s %s"
+            log.FatalError(u"Connection failed, please check your credentials: %s %s"
                            % (e.__class__.__name__, util.uexc(e)),
                            log.ErrorCode.connection_failed)
 
         self.client_exc = pyrax.exceptions.ClientException
         self.nso_exc = pyrax.exceptions.NoSuchObject
+<<<<<<< TREE
 
         # query rackspace for the specified container name
         try:
@@ -89,6 +91,27 @@
                                "You may be using a read-only user that can view but not create containers.\n" +
                                "Please check your credentials and permissions.",
                                log.ErrorCode.backend_permission_denied)
+=======
+
+        # query rackspace for the specified container name
+        try:
+            self.container = pyrax.cloudfiles.get_container(container)
+        except pyrax.exceptions.Forbidden as e:
+            log.FatalError(u"%s : %s \n" % (e.__class__.__name__, util.uexc(e)) +
+                           u"Container may exist, but access was denied.\n" +
+                           u"If this container exists, please check its X-Container-Read/Write headers.\n" +
+                           u"Otherwise, please check your credentials and permissions.",
+                           log.ErrorCode.backend_permission_denied)
+        except pyrax.exceptions.NoSuchContainer as e:
+            try:
+                self.container = pyrax.cloudfiles.create_container(container)
+            except pyrax.exceptions.Forbidden as e:
+                log.FatalError(u"%s : %s \n" % (e.__class__.__name__, util.uexc(e)) +
+                               u"Container does not exist, but creation was denied.\n" +
+                               u"You may be using a read-only user that can view but not create containers.\n" +
+                               u"Please check your credentials and permissions.",
+                               log.ErrorCode.backend_permission_denied)
+>>>>>>> MERGE-SOURCE
 
     def _error_code(self, operation, e):
         if isinstance(e, self.nso_exc):
@@ -96,7 +119,7 @@
         elif isinstance(e, self.client_exc):
             if e.code == 404:
                 return log.ErrorCode.backend_not_found
-        elif hasattr(e, 'http_status'):
+        elif hasattr(e, u'http_status'):
             if e.http_status == 404:
                 return log.ErrorCode.backend_not_found
 
@@ -105,7 +128,7 @@
 
     def _get(self, remote_filename, local_path):
         sobject = self.container.get_object(remote_filename)
-        with open(local_path.name, 'wb') as f:
+        with open(local_path.name, u'wb') as f:
             f.write(sobject.get())
 
     def _list(self):
@@ -123,4 +146,4 @@
 
     def _query(self, filename):
         sobject = self.container.get_object(filename)
-        return {'size': sobject.total_bytes}
+        return {u'size': sobject.total_bytes}

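Compared with the old unconditional create_container() call, the merged branch first tries get_container() and only creates the container when it is missing, so that permission problems surface as clear errors. Stripped of duplicity's logging, the pattern is roughly the following; it assumes pyrax.set_credentials() has already been called, and the helper name is an invention of this sketch:

import pyrax

def get_or_create_container(name):
    u"""Fetch a CloudFiles container, creating it only if it does not exist."""
    try:
        return pyrax.cloudfiles.get_container(name)
    except pyrax.exceptions.Forbidden:
        # the container may exist, but its X-Container-Read/Write headers deny access
        raise
    except pyrax.exceptions.NoSuchContainer:
        # not there yet; a read-only user will get Forbidden here instead
        return pyrax.cloudfiles.create_container(name)
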
=== renamed file 'duplicity/backends/acdclibackend.py' => 'duplicity/backends/acdclibackend.py.THIS'
=== added file 'duplicity/backends/adbackend.py.OTHER'
--- duplicity/backends/adbackend.py.OTHER	1970-01-01 00:00:00 +0000
+++ duplicity/backends/adbackend.py.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,411 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+#
+# Copyright 2016 Stefan Breunig <stefan-duplicity@xxxxxxxxxxx>
+# Based on the backend onedrivebackend.py
+#
+# This file is part of duplicity.
+#
+# Duplicity is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; either version 2 of the License, or (at your
+# option) any later version.
+#
+# Duplicity is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with duplicity; if not, write to the Free Software Foundation,
+# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+from __future__ import print_function
+from __future__ import division
+from builtins import input
+from past.utils import old_div
+import os.path
+import json
+import sys
+import time
+import re
+from io import DEFAULT_BUFFER_SIZE
+
+import duplicity.backend
+from duplicity.errors import BackendException
+from duplicity import globals
+from duplicity import log
+
+
+class ADBackend(duplicity.backend.Backend):
+    u"""
+    Backend for Amazon Drive. It communicates directly with Amazon Drive using
+    their RESTful API and does not rely on externally setup software (like
+    acd_cli).
+    """
+
+    OAUTH_TOKEN_PATH = os.path.expanduser(u'~/.duplicity_ad_oauthtoken.json')
+
+    OAUTH_AUTHORIZE_URL = u'https://www.amazon.com/ap/oa'
+    OAUTH_TOKEN_URL = u'https://api.amazon.com/auth/o2/token'
+    # NOTE: Amazon requires https, which is why I am using my domain/setup
+    # instead of Duplicity's. Mail me at stefan-duplicity@xxxxxxxxxxx once it is
+    # available through https and I will whitelist the new URL.
+    OAUTH_REDIRECT_URL = u'https://breunig.xyz/duplicity/copy.html'
+    OAUTH_SCOPE = [u'clouddrive:read_other', u'clouddrive:write']
+
+    CLIENT_ID = u'amzn1.application-oa2-client.791c9c2d78444e85a32eb66f92eb6bcc'
+    CLIENT_SECRET = u'5b322c6a37b25f16d848a6a556eddcc30314fc46ae65c87068ff1bc4588d715b'
+
+    MULTIPART_BOUNDARY = u'DuplicityFormBoundaryd66364f7f8924f7e9d478e19cf4b871d114a1e00262542'
+
+    def __init__(self, parsed_url):
+        duplicity.backend.Backend.__init__(self, parsed_url)
+
+        self.metadata_url = u'https://drive.amazonaws.com/drive/v1/'
+        self.content_url = u'https://content-na.drive.amazonaws.com/cdproxy/'
+
+        self.names_to_ids = {}
+        self.backup_target_id = None
+        self.backup_target = parsed_url.path.lstrip(u'/')
+
+        if globals.volsize > (10 * 1024 * 1024 * 1024):
+            # https://forums.developer.amazon.com/questions/22713/file-size-limits.html
+            # https://forums.developer.amazon.com/questions/22038/support-for-chunked-transfer-encoding.html
+            log.FatalError(
+                u'Your --volsize is bigger than 10 GiB, which is the maximum '
+                u'file size on Amazon Drive that does not require work arounds.')
+
+        try:
+            global requests
+            global OAuth2Session
+            import requests
+            from requests_oauthlib import OAuth2Session
+        except ImportError:
+            raise BackendException(
+                u'Amazon Drive backend requires python-requests and '
+                u'python-requests-oauthlib to be installed.\n\n'
+                u'For Debian and derivates use:\n'
+                u'  apt-get install python-requests python-requests-oauthlib\n'
+                u'For Fedora and derivates use:\n'
+                u'  yum install python-requests python-requests-oauthlib')
+
+        self.initialize_oauth2_session()
+        self.resolve_backup_target()
+
+    def initialize_oauth2_session(self):
+        u"""Setup or refresh oauth2 session with Amazon Drive"""
+
+        def token_updater(token):
+            u"""Stores oauth2 token on disk"""
+            try:
+                with open(self.OAUTH_TOKEN_PATH, u'w') as f:
+                    json.dump(token, f)
+            except Exception as err:
+                log.Error(u'Could not save the OAuth2 token to %s. This means '
+                          u'you may need to do the OAuth2 authorization '
+                          u'process again soon. Original error: %s' % (
+                              self.OAUTH_TOKEN_PATH, err))
+
+        token = None
+        try:
+            with open(self.OAUTH_TOKEN_PATH) as f:
+                token = json.load(f)
+        except IOError as err:
+            log.Notice(u'Could not load OAuth2 token. '
+                       u'Trying to create a new one. (original error: %s)' % err)
+
+        self.http_client = OAuth2Session(
+            self.CLIENT_ID,
+            scope=self.OAUTH_SCOPE,
+            redirect_uri=self.OAUTH_REDIRECT_URL,
+            token=token,
+            auto_refresh_kwargs={
+                u'client_id': self.CLIENT_ID,
+                u'client_secret': self.CLIENT_SECRET,
+            },
+            auto_refresh_url=self.OAUTH_TOKEN_URL,
+            token_updater=token_updater)
+
+        if token is not None:
+            self.http_client.refresh_token(self.OAUTH_TOKEN_URL)
+
+        endpoints_response = self.http_client.get(self.metadata_url +
+                                                  u'account/endpoint')
+        if endpoints_response.status_code != requests.codes.ok:
+            token = None
+
+        if token is None:
+            if not sys.stdout.isatty() or not sys.stdin.isatty():
+                log.FatalError(u'The OAuth2 token could not be loaded from %s '
+                               u'and you are not running duplicity '
+                               u'interactively, so duplicity cannot possibly '
+                               u'access Amazon Drive.' % self.OAUTH_TOKEN_PATH)
+            authorization_url, _ = self.http_client.authorization_url(
+                self.OAUTH_AUTHORIZE_URL)
+
+            print(u'')
+            print(u'In order to allow duplicity to access Amazon Drive, please '
+                  u'open the following URL in a browser and copy the URL of the '
+                  u'page you see after authorization here:')
+            print(authorization_url)
+            print(u'')
+
+            redirected_to = (input(u'URL of the resulting page: ')
+                             .replace(u'http://', u'https://', 1)).strip()
+
+            token = self.http_client.fetch_token(
+                self.OAUTH_TOKEN_URL,
+                client_secret=self.CLIENT_SECRET,
+                authorization_response=redirected_to)
+
+            endpoints_response = self.http_client.get(self.metadata_url +
+                                                      u'account/endpoint')
+            endpoints_response.raise_for_status()
+            token_updater(token)
+
+        urls = endpoints_response.json()
+        if u'metadataUrl' not in urls or u'contentUrl' not in urls:
+            log.FatalError(u'Could not retrieve endpoint URLs for this account')
+        self.metadata_url = urls[u'metadataUrl']
+        self.content_url = urls[u'contentUrl']
+
+    def resolve_backup_target(self):
+        u"""Resolve node id for remote backup target folder"""
+
+        response = self.http_client.get(
+            self.metadata_url + u'nodes?filters=kind:FOLDER AND isRoot:true')
+        parent_node_id = response.json()[u'data'][0][u'id']
+
+        for component in [x for x in self.backup_target.split(u'/') if x]:
+            # There doesn't seem to be escaping support, so cut off filter
+            # after first unsupported character
+            query = re.search(u'^[A-Za-z0-9_-]*', component).group(0)
+            if component != query:
+                query = query + u'*'
+
+            matches = self.read_all_pages(
+                self.metadata_url + u'nodes?filters=kind:FOLDER AND name:%s '
+                                    u'AND parents:%s' % (query, parent_node_id))
+            candidates = [f for f in matches if f.get(u'name') == component]
+
+            if len(candidates) >= 2:
+                log.FatalError(u'There are multiple folders with the same name '
+                               u'below one parent.\nParentID: %s\nFolderName: '
+                               u'%s' % (parent_node_id, component))
+            elif len(candidates) == 1:
+                parent_node_id = candidates[0][u'id']
+            else:
+                log.Debug(u'Folder %s does not exist yet. Creating.' % component)
+                parent_node_id = self.mkdir(parent_node_id, component)
+
+        log.Debug(u"Backup target folder has id: %s" % parent_node_id)
+        self.backup_target_id = parent_node_id
+
+    def get_file_id(self, remote_filename):
+        u"""Find id of remote file in backup target folder"""
+
+        if remote_filename not in self.names_to_ids:
+            self._list()
+
+        return self.names_to_ids.get(remote_filename)
+
+    def mkdir(self, parent_node_id, folder_name):
+        u"""Create a new folder as a child of a parent node"""
+
+        data = {u'name': folder_name, u'parents': [parent_node_id], u'kind': u'FOLDER'}
+        response = self.http_client.post(
+            self.metadata_url + u'nodes',
+            data=json.dumps(data))
+        response.raise_for_status()
+        return response.json()[u'id']
+
+    def multipart_stream(self, metadata, source_path):
+        u"""Generator for multipart/form-data file upload from source file"""
+
+        boundary = self.MULTIPART_BOUNDARY
+
+        yield str.encode(u'--%s\r\nContent-Disposition: form-data; '
+                         u'name="metadata"\r\n\r\n' % boundary +
+                         u'%s\r\n' % json.dumps(metadata) +
+                         u'--%s\r\n' % boundary)
+        yield b'Content-Disposition: form-data; name="content"; filename="i_love_backups"\r\n'
+        yield b'Content-Type: application/octet-stream\r\n\r\n'
+
+        with source_path.open() as stream:
+            while True:
+                f = stream.read(DEFAULT_BUFFER_SIZE)
+                if f:
+                    yield f
+                else:
+                    break
+
+        yield str.encode(u'\r\n--%s--\r\n' % boundary +
+                         u'multipart/form-data; boundary=%s' % boundary)
+
+    def read_all_pages(self, url):
+        u"""Iterates over nodes API URL until all pages were read"""
+
+        result = []
+        next_token = u''
+        token_param = u'&startToken=' if u'?' in url else u'?startToken='
+
+        while True:
+            paginated_url = url + token_param + next_token
+            response = self.http_client.get(paginated_url)
+            if response.status_code != 200:
+                raise BackendException(u"Pagination failed with status=%s on "
+                                       u"URL=%s" % (response.status_code, url))
+
+            parsed = response.json()
+            if u'data' in parsed and len(parsed[u'data']) > 0:
+                result.extend(parsed[u'data'])
+            else:
+                break
+
+            # Do not make another HTTP request if everything is here already
+            if len(result) >= parsed[u'count']:
+                break
+
+            if u'nextToken' not in parsed:
+                break
+            next_token = parsed[u'nextToken']
+
+        return result
+
+    def raise_for_existing_file(self, remote_filename):
+        u"""Report error when file already existed in location and delete it"""
+
+        self._delete(remote_filename)
+        raise BackendException(u'Upload failed, because there was a file with '
+                               u'the same name as %s already present. The file was '
+                               u'deleted, and duplicity will retry the upload unless '
+                               u'the retry limit has been reached.' % remote_filename)
+
+    def _put(self, source_path, remote_filename):
+        u"""Upload a local file to Amazon Drive"""
+
+        quota = self.http_client.get(self.metadata_url + u'account/quota')
+        quota.raise_for_status()
+        available = quota.json()[u'available']
+
+        source_size = os.path.getsize(source_path.name)
+
+        if source_size > available:
+            raise BackendException(
+                u'Out of space: trying to store "%s" (%d bytes), but only '
+                u'%d bytes available on Amazon Drive.' % (
+                    source_path.name, source_size, available))
+
+        # Just check the cached list, to avoid _list for every new file being
+        # uploaded
+        if remote_filename in self.names_to_ids:
+            log.Debug(u'File %s seems to already exist on Amazon Drive. Deleting '
+                      u'before attempting to upload it again.' % remote_filename)
+            self._delete(remote_filename)
+
+        metadata = {u'name': remote_filename, u'kind': u'FILE',
+                    u'parents': [self.backup_target_id]}
+        headers = {u'Content-Type': u'multipart/form-data; boundary=%s' % self.MULTIPART_BOUNDARY}
+        data = self.multipart_stream(metadata, source_path)
+
+        response = self.http_client.post(
+            self.content_url + u'nodes?suppress=deduplication',
+            data=data,
+            headers=headers)
+
+        if response.status_code == 409:  # "409 : Duplicate file exists."
+            self.raise_for_existing_file(remote_filename)
+        elif response.status_code == 201:
+            log.Debug(u'%s uploaded successfully' % remote_filename)
+        elif response.status_code == 408 or response.status_code == 504:
+            log.Info(u'%s upload failed with timeout status code=%d. Speculatively '
+                     u'waiting for %d seconds to see if Amazon Drive finished the '
+                     u'upload anyway' % (remote_filename, response.status_code,
+                                         globals.timeout))
+            tries = old_div(globals.timeout, 15)
+            while tries >= 0:
+                tries -= 1
+                time.sleep(15)
+
+                remote_size = self._query(remote_filename)[u'size']
+                if source_size == remote_size:
+                    log.Debug(u'Upload turned out to be successful after all.')
+                    return
+                elif remote_size == -1:
+                    log.Debug(u'Uploaded file is not yet there, %d tries left.'
+                              % (tries + 1))
+                    continue
+                else:
+                    self.raise_for_existing_file(remote_filename)
+            raise BackendException(u'%s upload failed and file did not show up '
+                                   u'within time limit.' % remote_filename)
+        else:
+            log.Debug(u'%s upload returned an undesirable status code %s'
+                      % (remote_filename, response.status_code))
+            response.raise_for_status()
+
+        parsed = response.json()
+        if u'id' not in parsed:
+            raise BackendException(u'%s was uploaded, but returned JSON does not '
+                                   u'contain ID of new file. Retrying.\nJSON:\n\n%s'
+                                   % (remote_filename, parsed))
+
+        # XXX: The upload may be considered finished before the file shows up
+        # in the file listing. As such, the following is required to avoid race
+        # conditions when duplicity calls _query or _list.
+        self.names_to_ids[parsed[u'name']] = parsed[u'id']
+
+    def _get(self, remote_filename, local_path):
+        u"""Download file from Amazon Drive"""
+
+        with local_path.open(u'wb') as local_file:
+            file_id = self.get_file_id(remote_filename)
+            if file_id is None:
+                raise BackendException(
+                    u'File "%s" cannot be downloaded: it does not exist' %
+                    remote_filename)
+
+            response = self.http_client.get(
+                self.content_url + u'/nodes/' + file_id + u'/content', stream=True)
+            response.raise_for_status()
+            for chunk in response.iter_content(chunk_size=DEFAULT_BUFFER_SIZE):
+                if chunk:
+                    local_file.write(chunk)
+            local_file.flush()
+
+    def _query(self, remote_filename):
+        u"""Retrieve file size info from Amazon Drive"""
+
+        file_id = self.get_file_id(remote_filename)
+        if file_id is None:
+            return {u'size': -1}
+        response = self.http_client.get(self.metadata_url + u'nodes/' + file_id)
+        response.raise_for_status()
+
+        return {u'size': response.json()[u'contentProperties'][u'size']}
+
+    def _list(self):
+        u"""List files in Amazon Drive backup folder"""
+
+        files = self.read_all_pages(
+            self.metadata_url + u'nodes/' + self.backup_target_id +
+            u'/children?filters=kind:FILE')
+
+        self.names_to_ids = {f[u'name']: f[u'id'] for f in files}
+
+        return list(self.names_to_ids.keys())
+
+    def _delete(self, remote_filename):
+        u"""Delete file from Amazon Drive"""
+
+        file_id = self.get_file_id(remote_filename)
+        if file_id is None:
+            raise BackendException(
+                u'File "%s" cannot be deleted: it does not exist' % (
+                    remote_filename))
+        response = self.http_client.put(self.metadata_url + u'trash/' + file_id)
+        response.raise_for_status()
+        del self.names_to_ids[remote_filename]
+
+
+duplicity.backend.register_backend(u'ad', ADBackend)

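The retry logic in ADBackend._put above is worth spelling out for review: after a 408/504 the backend does not fail immediately, but polls the remote size every 15 seconds for up to globals.timeout seconds, succeeding if the file eventually matches the local size, continuing while it is still invisible, and failing if it shows up with the wrong size. A minimal standalone sketch of that polling loop (query_size is a hypothetical callable standing in for _query; this is not the duplicity API):

    import time

    POLL_INTERVAL = 15  # seconds between polls, matching the hunk above

    def wait_for_upload(query_size, expected_size, timeout):
        """Poll the remote side after a 408/504 until the file shows up.

        query_size() returns the remote size, or -1 while the file is not
        visible yet (the same convention as _query above)."""
        tries = timeout // POLL_INTERVAL
        while tries >= 0:
            tries -= 1
            time.sleep(POLL_INTERVAL)
            remote_size = query_size()
            if remote_size == expected_size:
                return True       # the upload finished server-side after all
            if remote_size == -1:
                continue          # not visible yet, keep waiting
            raise ValueError(u'remote file has unexpected size %d' % remote_size)
        return False              # caller should treat this as a failed upload

Returning False on timeout is a simplification of the sketch; the backend itself raises BackendException in that case.
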
=== modified file 'duplicity/backends/azurebackend.py'
--- duplicity/backends/azurebackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/azurebackend.py	2019-02-22 19:07:43 +0000
@@ -19,6 +19,7 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from builtins import str
 import os
 
 import duplicity.backend
@@ -27,7 +28,7 @@
 
 
 class AzureBackend(duplicity.backend.Backend):
-    """
+    u"""
     Backend for Azure Blob Storage Service
     """
     def __init__(self, parsed_url):
@@ -37,18 +38,26 @@
         try:
             import azure
             import azure.storage
-            if hasattr(azure.storage, 'BlobService'):
+            if hasattr(azure.storage, u'BlobService'):
                 # v0.11.1 and below
                 from azure.storage import BlobService
                 self.AzureMissingResourceError = azure.WindowsAzureMissingResourceError
                 self.AzureConflictError = azure.WindowsAzureConflictError
             else:
                 # v1.0.0 and above
+<<<<<<< TREE
                 from azure.storage.blob import BlobService
+=======
+                import azure.storage.blob
+                if hasattr(azure.storage.blob, u'BlobService'):
+                    from azure.storage.blob import BlobService
+                else:
+                    from azure.storage.blob.blockblobservice import BlockBlobService as BlobService
+>>>>>>> MERGE-SOURCE
                 self.AzureMissingResourceError = azure.common.AzureMissingResourceHttpError
                 self.AzureConflictError = azure.common.AzureConflictHttpError
         except ImportError as e:
-            raise BackendException("""\
+            raise BackendException(u"""\
 Azure backend requires Microsoft Azure Storage SDK for Python (https://pypi.python.org/pypi/azure-storage/).
 Exception: %s""" % str(e))
 
@@ -60,21 +69,93 @@
                                         account_key=os.environ['AZURE_ACCOUNT_KEY'])
 
         # TODO: validate container name
+<<<<<<< TREE
         self.container = parsed_url.path.lstrip('/')
+=======
+        self.container = parsed_url.path.lstrip(u'/')
+
+        if u'AZURE_ACCOUNT_NAME' not in os.environ:
+            raise BackendException(u'AZURE_ACCOUNT_NAME environment variable not set.')
+
+        if u'AZURE_ACCOUNT_KEY' in os.environ:
+            if u'AZURE_ENDPOINT_SUFFIX' in os.environ:
+                self.blob_service = BlobService(account_name=os.environ[u'AZURE_ACCOUNT_NAME'],
+                                                account_key=os.environ[u'AZURE_ACCOUNT_KEY'],
+                                                endpoint_suffix=os.environ[u'AZURE_ENDPOINT_SUFFIX'])
+            else:
+                self.blob_service = BlobService(account_name=os.environ[u'AZURE_ACCOUNT_NAME'],
+                                                account_key=os.environ[u'AZURE_ACCOUNT_KEY'])
+            self._create_container()
+        elif u'AZURE_SHARED_ACCESS_SIGNATURE' in os.environ:
+            if u'AZURE_ENDPOINT_SUFFIX' in os.environ:
+                self.blob_service = BlobService(account_name=os.environ[u'AZURE_ACCOUNT_NAME'],
+                                                sas_token=os.environ[u'AZURE_SHARED_ACCESS_SIGNATURE'],
+                                                endpoint_suffix=os.environ[u'AZURE_ENDPOINT_SUFFIX'])
+            else:
+                self.blob_service = BlobService(account_name=os.environ[u'AZURE_ACCOUNT_NAME'],
+                                                sas_token=os.environ[u'AZURE_SHARED_ACCESS_SIGNATURE'])
+        else:
+            raise BackendException(
+                u'Neither AZURE_ACCOUNT_KEY nor AZURE_SHARED_ACCESS_SIGNATURE environment variable is set.')
+
+        if globals.azure_max_single_put_size:
+            # check if we use azure-storage>=0.30.0
+            try:
+                _ = self.blob_service.MAX_SINGLE_PUT_SIZE
+                self.blob_service.MAX_SINGLE_PUT_SIZE = globals.azure_max_single_put_size
+            # fallback for azure-storage<0.30.0
+            except AttributeError:
+                self.blob_service._BLOB_MAX_DATA_SIZE = globals.azure_max_single_put_size
+
+        if globals.azure_max_block_size:
+            # check if we use azure-storage>=0.30.0
+            try:
+                _ = self.blob_service.MAX_BLOCK_SIZE
+                self.blob_service.MAX_BLOCK_SIZE = globals.azure_max_block_size
+            # fallback for azure-storage<0.30.0
+            except AttributeError:
+                self.blob_service._BLOB_MAX_CHUNK_DATA_SIZE = globals.azure_max_block_size
+                
+    def _create_container(self):
+>>>>>>> MERGE-SOURCE
         try:
             self.blob_service.create_container(self.container, fail_on_exist=True)
         except self.AzureConflictError:
             # Indicates that the resource could not be created because it already exists.
             pass
         except Exception as e:
-            log.FatalError("Could not create Azure container: %s"
-                           % unicode(e.message).split('\n', 1)[0],
+            log.FatalError(u"Could not create Azure container: %s"
+                           % str(e.message).split(u'\n', 1)[0],
                            log.ErrorCode.connection_failed)
 
     def _put(self, source_path, remote_filename):
+<<<<<<< TREE
+=======
+        kwargs = {}
+        if globals.azure_max_connections:
+            kwargs[u'max_connections'] = globals.azure_max_connections
+
+>>>>>>> MERGE-SOURCE
         # https://azure.microsoft.com/en-us/documentation/articles/storage-python-how-to-use-blob-storage/#upload-a-blob-into-a-container
+<<<<<<< TREE
         self.blob_service.put_block_blob_from_path(self.container, remote_filename, source_path.name)
 
+=======
+        try:
+            self.blob_service.create_blob_from_path(self.container, remote_filename, source_path.name, **kwargs)
+        except AttributeError:  # Old versions use a different method name
+            self.blob_service.put_block_blob_from_path(self.container, remote_filename, source_path.name, **kwargs)
+        
+        self._set_tier(remote_filename)
+            
+    def _set_tier(self, remote_filename):
+        if globals.azure_blob_tier is not None:
+            try:
+                self.blob_service.set_standard_blob_tier(self.container, remote_filename, globals.azure_blob_tier)
+            except AttributeError:  # might not be available in old API
+                pass
+            
+>>>>>>> MERGE-SOURCE
     def _get(self, remote_filename, local_path):
         # https://azure.microsoft.com/en-us/documentation/articles/storage-python-how-to-use-blob-storage/#download-blobs
         self.blob_service.get_blob_to_path(self.container, remote_filename, local_path.name)
@@ -97,10 +178,24 @@
 
     def _query(self, filename):
         prop = self.blob_service.get_blob_properties(self.container, filename)
+<<<<<<< TREE
         return {'size': int(prop['content-length'])}
+=======
+        try:
+            info = {u'size': int(prop.properties.content_length)}
+        except AttributeError:
+            # old versions directly returned the properties
+            info = {u'size': int(prop[u'content-length'])}
+        return info
+>>>>>>> MERGE-SOURCE
 
     def _error_code(self, operation, e):
         if isinstance(e, self.AzureMissingResourceError):
             return log.ErrorCode.backend_not_found
 
+<<<<<<< TREE
 duplicity.backend.register_backend('azure', AzureBackend)
+=======
+
+duplicity.backend.register_backend(u'azure', AzureBackend)
+>>>>>>> MERGE-SOURCE

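The MAX_SINGLE_PUT_SIZE / MAX_BLOCK_SIZE handling added above is a version probe: newer azure-storage releases (>=0.30.0) expose the limits as public attributes, while older ones only have the private _BLOB_MAX_* fields, so the code reads the new attribute and falls back on AttributeError. The same pattern as an isolated sketch, assuming a blob_service object shaped like the SDK's (the tuning values are illustrative, not duplicity defaults):

    def tune_blob_service(blob_service, max_single_put_size=None, max_block_size=None):
        """Apply upload size limits across azure-storage versions by probing
        for the new public attributes and falling back to the old private
        ones, as the hunk above does."""
        if max_single_put_size is not None:
            try:
                _ = blob_service.MAX_SINGLE_PUT_SIZE  # raises AttributeError on old SDKs
                blob_service.MAX_SINGLE_PUT_SIZE = max_single_put_size
            except AttributeError:
                blob_service._BLOB_MAX_DATA_SIZE = max_single_put_size
        if max_block_size is not None:
            try:
                _ = blob_service.MAX_BLOCK_SIZE
                blob_service.MAX_BLOCK_SIZE = max_block_size
            except AttributeError:
                blob_service._BLOB_MAX_CHUNK_DATA_SIZE = max_block_size
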
=== modified file 'duplicity/backends/b2backend.py'
--- duplicity/backends/b2backend.py	2018-08-26 20:01:40 +0000
+++ duplicity/backends/b2backend.py	2019-02-22 19:07:43 +0000
@@ -22,12 +22,20 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 # THE SOFTWARE.
 
+from future import standard_library
+standard_library.install_aliases()
+from builtins import object
 import os
 import hashlib
+<<<<<<< TREE
+=======
+from urllib.parse import quote_plus  # pylint: disable=import-error
+>>>>>>> MERGE-SOURCE
 
 import duplicity.backend
 from duplicity.errors import BackendException, FatalBackendException
 from duplicity import log
+<<<<<<< TREE
 from duplicity import progress
 
 
@@ -50,18 +58,39 @@
             super(B2ProgressListener, self).close()
     return B2ProgressListener()
 
+=======
+from duplicity import progress
+
+
+class B2ProgressListener:
+    def __enter__(self):
+        pass
+    
+    def set_total_bytes(self, total_byte_count):
+        self.total_byte_count = total_byte_count
+
+    def bytes_completed(self, byte_count):
+        progress.report_transfer(byte_count, self.total_byte_count)
+
+    def close(self):
+        pass
+
+    def __exit__(self, exc_type, exc_val, exc_tb):
+        pass
+>>>>>>> MERGE-SOURCE
 
 class B2Backend(duplicity.backend.Backend):
-    """
+    u"""
     Backend for BackBlaze's B2 storage service
     """
 
     def __init__(self, parsed_url):
-        """
+        u"""
         Authorize to B2 api and set up needed variables
         """
         duplicity.backend.Backend.__init__(self, parsed_url)
 
+<<<<<<< TREE
         # Import B2 API
         try:
             global b2
@@ -78,22 +107,49 @@
         self.parsed_url.hostname = 'B2'
 
         account_id = parsed_url.username
+=======
+        # Import B2 API
+        try:
+            global b2
+            import b2
+            import b2.api
+            import b2.account_info
+            import b2.download_dest
+            import b2.file_version
+        except ImportError:
+            raise BackendException(u'B2 backend requires B2 Python APIs (pip install b2)')
+
+        self.service = b2.api.B2Api(b2.account_info.InMemoryAccountInfo())
+        self.parsed_url.hostname = u'B2'
+
+        account_id = parsed_url.username
+>>>>>>> MERGE-SOURCE
         account_key = self.get_password()
 
         self.url_parts = [
-            x for x in parsed_url.path.replace("@", "/").split('/') if x != ''
+            x for x in parsed_url.path.replace(u"@", u"/").split(u'/') if x != u''
         ]
         if self.url_parts:
             self.username = self.url_parts.pop(0)
             bucket_name = self.url_parts.pop(0)
         else:
+<<<<<<< TREE
             raise BackendException("B2 requires a bucket name")
         self.path = "".join([url_part + "/" for url_part in self.url_parts])
         self.service.authorize_account('production', account_id, account_key)
 
         log.Log("B2 Backend (path= %s, bucket= %s, minimum_part_size= %s)" %
                 (self.path, bucket_name, self.service.account_info.get_minimum_part_size()), log.INFO)
+=======
+            raise BackendException(u"B2 requires a bucket name")
+        self.path = u"".join([url_part + u"/" for url_part in self.url_parts])
+        self.service.authorize_account(u'production', account_id, account_key)
+
+        log.Log(u"B2 Backend (path= %s, bucket= %s, minimum_part_size= %s)" %
+                (self.path, bucket_name, self.service.account_info.get_minimum_part_size()), log.INFO)
+>>>>>>> MERGE-SOURCE
         try:
+<<<<<<< TREE
             self.bucket = self.service.get_bucket_by_name(bucket_name)
             log.Log("Bucket found", log.INFO)
         except b2.exception.NonExistentBucket:
@@ -102,43 +158,73 @@
                 self.bucket = self.service.create_bucket(bucket_name, 'allPrivate')
             except:
                 raise FatalBackendException("Bucket cannot be created")
+=======
+            self.bucket = self.service.get_bucket_by_name(bucket_name)
+            log.Log(u"Bucket found", log.INFO)
+        except b2.exception.NonExistentBucket:
+            try:
+                log.Log(u"Bucket not found, creating one", log.INFO)
+                self.bucket = self.service.create_bucket(bucket_name, u'allPrivate')
+            except:
+                raise FatalBackendException(u"Bucket cannot be created")
+>>>>>>> MERGE-SOURCE
 
     def _get(self, remote_filename, local_path):
-        """
+        u"""
         Download remote_filename to local_path
         """
+<<<<<<< TREE
         log.Log("Get: %s -> %s" % (self.path + remote_filename, local_path.name), log.INFO)
         self.bucket.download_file_by_name(self.path + remote_filename,
                                           b2.download_dest.DownloadDestLocalFile(local_path.name))
+=======
+        log.Log(u"Get: %s -> %s" % (self.path + remote_filename, local_path.name), log.INFO)
+        self.bucket.download_file_by_name(quote_plus(self.path + remote_filename),
+                                          b2.download_dest.DownloadDestLocalFile(local_path.name))
+>>>>>>> MERGE-SOURCE
 
     def _put(self, source_path, remote_filename):
-        """
+        u"""
         Copy source_path to remote_filename
         """
+<<<<<<< TREE
         log.Log("Put: %s -> %s" % (source_path.name, self.path + remote_filename), log.INFO)
         self.bucket.upload_local_file(source_path.name, self.path + remote_filename,
                                       content_type='application/pgp-encrypted',
                                       progress_listener=progress_listener_factory())
+=======
+        log.Log(u"Put: %s -> %s" % (source_path.name, self.path + remote_filename), log.INFO)
+        self.bucket.upload_local_file(source_path.name, quote_plus(self.path + remote_filename),
+                                      content_type=u'application/pgp-encrypted',
+                                      progress_listener=B2ProgressListener())
+>>>>>>> MERGE-SOURCE
 
     def _list(self):
-        """
+        u"""
         List files on remote server
         """
         return [file_version_info.file_name[len(self.path):]
                 for (file_version_info, folder_name) in self.bucket.ls(self.path)]
 
     def _delete(self, filename):
-        """
+        u"""
         Delete filename from remote server
         """
+<<<<<<< TREE
         log.Log("Delete: %s" % self.path + filename, log.INFO)
         file_version_info = self.file_info(self.path + filename)
         self.bucket.delete_file_version(file_version_info.id_, file_version_info.file_name)
+=======
+        log.Log(u"Delete: %s" % self.path + filename, log.INFO)
+        file_version_info = self.file_info(quote_plus(self.path + filename))
+        self.bucket.delete_file_version(file_version_info.id_, file_version_info.file_name)
+>>>>>>> MERGE-SOURCE
 
     def _query(self, filename):
-        """
+        u"""
         Get size info of filename
         """
+<<<<<<< TREE
         log.Log("Query: %s" % self.path + filename, log.INFO)
         file_version_info = self.file_info(self.path + filename)
         return {'size': file_version_info.size
@@ -154,3 +240,20 @@
 
 
 duplicity.backend.register_backend("b2", B2Backend)
+=======
+        log.Log(u"Query: %s" % self.path + filename, log.INFO)
+        file_version_info = self.file_info(quote_plus(self.path + filename))
+        return {u'size': file_version_info.size
+                if file_version_info is not None and file_version_info.size is not None else -1}
+
+    def file_info(self, filename):
+        response = self.bucket.list_file_names(filename, 1)
+        for entry in response[u'files']:
+            file_version_info = b2.file_version.FileVersionInfoFactory.from_api_response(entry)
+            if file_version_info.file_name == filename:
+                return file_version_info
+        raise BackendException(u'File not found')
+
+
+duplicity.backend.register_backend(u"b2", B2Backend)
+>>>>>>> MERGE-SOURCE

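The MERGE-SOURCE side above replaces the factory-made listener with a plain B2ProgressListener class that the b2 SDK drives as a context manager: set_total_bytes() is called once, then bytes_completed() repeatedly while the upload runs. A self-contained stand-in showing the same protocol (the printing body is illustrative only; the real class forwards to duplicity's progress module):

    class PrintingProgressListener(object):
        """Minimal object implementing the listener protocol used above."""

        def __enter__(self):
            self.total_byte_count = 0
            return self

        def set_total_bytes(self, total_byte_count):
            self.total_byte_count = total_byte_count

        def bytes_completed(self, byte_count):
            if self.total_byte_count:
                print(u'uploaded %d%%' % (100 * byte_count // self.total_byte_count))

        def close(self):
            pass

        def __exit__(self, exc_type, exc_val, exc_tb):
            self.close()
            return False  # never swallow exceptions raised by the upload

One point worth flagging for review: the new class's __enter__ falls off the end and returns None, which is only harmless if the SDK keeps using its own reference to the listener rather than the value of the with statement.
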
=== modified file 'duplicity/backends/botobackend.py'
--- duplicity/backends/botobackend.py	2015-01-22 23:38:29 +0000
+++ duplicity/backends/botobackend.py	2019-02-22 19:07:43 +0000
@@ -28,7 +28,7 @@
 else:
     from ._boto_single import BotoBackend
 
-duplicity.backend.register_backend("gs", BotoBackend)
-duplicity.backend.register_backend("s3", BotoBackend)
-duplicity.backend.register_backend("s3+http", BotoBackend)
-duplicity.backend.uses_netloc.extend(['s3'])
+duplicity.backend.register_backend(u"gs", BotoBackend)
+duplicity.backend.register_backend(u"s3", BotoBackend)
+duplicity.backend.register_backend(u"s3+http", BotoBackend)
+duplicity.backend.uses_netloc.extend([u's3'])

=== modified file 'duplicity/backends/cfbackend.py'
--- duplicity/backends/cfbackend.py	2014-10-27 02:27:36 +0000
+++ duplicity/backends/cfbackend.py	2019-02-22 19:07:43 +0000
@@ -22,9 +22,9 @@
 from duplicity import globals
 
 if (globals.cf_backend and
-        globals.cf_backend.lower().strip() == 'pyrax'):
+        globals.cf_backend.lower().strip() == u'pyrax'):
     from ._cf_pyrax import PyraxBackend as CFBackend
 else:
     from ._cf_cloudfiles import CloudFilesBackend as CFBackend
 
-duplicity.backend.register_backend("cf+http", CFBackend)
+duplicity.backend.register_backend(u"cf+http", CFBackend)

=== modified file 'duplicity/backends/dpbxbackend.py'
--- duplicity/backends/dpbxbackend.py	2017-11-28 14:16:16 +0000
+++ duplicity/backends/dpbxbackend.py	2019-02-22 19:07:43 +0000
@@ -1,6 +1,4 @@
 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
-# pylint: skip-file
-# pylint: skip-file
 #
 # Copyright 2013 jno <jno@xxxxxxxxx>
 # Copyright 2016 Dmitry Nezhevenko <dion@xxxxxxxxxxx>
@@ -27,11 +25,27 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-import StringIO
+from __future__ import print_function
+from __future__ import division
+from future import standard_library
+standard_library.install_aliases()
+from builtins import input
+from builtins import str
+from past.utils import old_div
+import io
+import os
+import re
+import sys
+import time
+import traceback
+import urllib.request  # pylint: disable=import-error
+import urllib.parse  # pylint: disable=import-error
+import urllib.error  # pylint: disable=import-error
+
 from duplicity import log, globals
 from duplicity import progress
-import duplicity.backend
 from duplicity.errors import BackendException
+<<<<<<< TREE
 import os
 import sys
 import traceback
@@ -43,9 +57,11 @@
 from dropbox.files import UploadSessionCursor, CommitInfo, WriteMode, \
     GetMetadataError, DeleteError, UploadSessionLookupError, ListFolderError
 from dropbox.oauth import DropboxOAuth2FlowNoRedirect
+=======
+from duplicity.globals import num_retries
+>>>>>>> MERGE-SOURCE
 from requests.exceptions import ConnectionError
-import time
-from duplicity.globals import num_retries
+import duplicity.backend
 
 # This is the chunk size for upload using the Dpbx chunked API v2. It doesn't
 # make sense to make it much larger since the Dpbx SDK uses a connection pool
@@ -61,27 +77,27 @@
 
 
 def log_exception(e):
-    log.Error('Exception [%s]:' % (e,))
-    f = StringIO.StringIO()
+    log.Error(u'Exception [%s]:' % (e,))
+    f = io.StringIO()
     traceback.print_exc(file=f)
     f.seek(0)
     for s in f.readlines():
-        log.Error('| ' + s.rstrip())
+        log.Error(u'| ' + s.rstrip())
     f.close()
 
 
 def command(login_required=True):
-    """a decorator for handling authentication and exceptions"""
+    u"""a decorator for handling authentication and exceptions"""
     def decorate(f):
         def wrapper(self, *args):
             try:
                 return f(self, *args)
             except ApiError as e:
                 log_exception(e)
-                raise BackendException('dpbx api error "%s"' % (e,))
+                raise BackendException(u'dpbx api error "%s"' % (e,))
             except Exception as e:
                 log_exception(e)
-                log.Error('dpbx code error "%s"' % (e,), log.ErrorCode.backend_code_error)
+                log.Error(u'dpbx code error "%s"' % (e,), log.ErrorCode.backend_code_error)
                 raise
 
         wrapper.__doc__ = f.__doc__
@@ -90,11 +106,32 @@
 
 
 class DPBXBackend(duplicity.backend.Backend):
-    """Connect to remote store using Dr*pB*x service"""
+    u"""Connect to remote store using Dr*pB*x service"""
 
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
 
+        global Dropbox
+        global AuthError, BadInputError, ApiError
+        global UploadSessionCursor, CommitInfo
+        global WriteMode, GetMetadataError
+        global DeleteError, UploadSessionLookupError
+        global ListFolderError
+        global DropboxOAuth2FlowNoRedirect
+        try:
+            from dropbox import Dropbox
+            from dropbox.exceptions import AuthError, BadInputError, ApiError
+            from dropbox.files import (UploadSessionCursor, CommitInfo,
+                                       WriteMode, GetMetadataError,
+                                       DeleteError, UploadSessionLookupError,
+                                       ListFolderError)
+            from dropbox.oauth import DropboxOAuth2FlowNoRedirect
+        except ImportError as e:
+            raise BackendException(u"""\
+This backend requires the dropbox package version 6.9.0.
+To install it, use "sudo pip install dropbox==6.9.0".
+Exception: %s""" % str(e))
+
         self.api_account = None
         self.api_client = None
         self.auth_flow = None
@@ -104,49 +141,60 @@
     def user_authenticated(self):
         try:
             account = self.api_client.users_get_current_account()
-            log.Debug("User authenticated as ,%s" % account)
+            log.Debug(u"User authenticated as ,%s" % account)
             return True
         except:
-            log.Debug('User not authenticated')
+            log.Debug(u'User not authenticated')
             return False
 
     def load_access_token(self):
-        return os.environ.get('DPBX_ACCESS_TOKEN', None)
+        return os.environ.get(u'DPBX_ACCESS_TOKEN', None)
 
     def save_access_token(self, access_token):
-        raise BackendException('dpbx: Please set DPBX_ACCESS_TOKEN=\"%s\" environment variable' %
+        raise BackendException(u'dpbx: Please set DPBX_ACCESS_TOKEN=\"%s\" environment variable' %
                                access_token)
 
     def obtain_access_token(self):
-        log.Info("dpbx: trying to obtain access token")
-        for env_var in ['DPBX_APP_KEY', 'DPBX_APP_SECRET']:
+        log.Info(u"dpbx: trying to obtain access token")
+        for env_var in [u'DPBX_APP_KEY', u'DPBX_APP_SECRET']:
             if env_var not in os.environ:
-                raise BackendException('dpbx: %s environment variable not set' % env_var)
+                raise BackendException(u'dpbx: %s environment variable not set' % env_var)
 
-        app_key = os.environ['DPBX_APP_KEY']
-        app_secret = os.environ['DPBX_APP_SECRET']
+        app_key = os.environ[u'DPBX_APP_KEY']
+        app_secret = os.environ[u'DPBX_APP_SECRET']
 
         if not sys.stdout.isatty() or not sys.stdin.isatty():
-            log.FatalError('dpbx error: cannot interact, but need human attention',
+            log.FatalError(u'dpbx error: cannot interact, but need human attention',
                            log.ErrorCode.backend_command_error)
 
         auth_flow = DropboxOAuth2FlowNoRedirect(app_key, app_secret)
-        log.Debug('dpbx,auth_flow.start()')
+        log.Debug(u'dpbx,auth_flow.start()')
         authorize_url = auth_flow.start()
-        print
-        print '-' * 72
-        print "1. Go to: " + authorize_url
-        print "2. Click \"Allow\" (you might have to log in first)."
-        print "3. Copy the authorization code."
-        print '-' * 72
-        auth_code = raw_input("Enter the authorization code here: ").strip()
+        print()
+        print(u'-' * 72)
+        print(u"1. Go to: " + authorize_url)
+        print(u"2. Click \"Allow\" (you might have to log in first).")
+        print(u"3. Copy the authorization code.")
+        print(u'-' * 72)
+        auth_code = input(u"Enter the authorization code here: ").strip()
         try:
+<<<<<<< TREE
             log.Debug('dpbx,auth_flow.finish(%s)' % auth_code)
             authresult = auth_flow.finish(auth_code)
+=======
+            log.Debug(u'dpbx,auth_flow.finish(%s)' % auth_code)
+            authresult = auth_flow.finish(auth_code)
+>>>>>>> MERGE-SOURCE
         except Exception as e:
+<<<<<<< TREE
             raise BackendException('dpbx: Unable to obtain access token: %s' % e)
         log.Info("dpbx: Authentication successfull")
         self.save_access_token(authresult.access_token)
+=======
+            raise BackendException(u'dpbx: Unable to obtain access token: %s' % e)
+        log.Info(u"dpbx: Authentication successfull")
+        self.save_access_token(authresult.access_token)
+>>>>>>> MERGE-SOURCE
 
     def login(self):
         if self.load_access_token() is None:
@@ -155,21 +203,21 @@
         self.api_client = Dropbox(self.load_access_token())
         self.api_account = None
         try:
-            log.Debug('dpbx,users_get_current_account([token])')
+            log.Debug(u'dpbx,users_get_current_account([token])')
             self.api_account = self.api_client.users_get_current_account()
-            log.Debug("dpbx,%s" % self.api_account)
+            log.Debug(u"dpbx,%s" % self.api_account)
 
         except (BadInputError, AuthError) as e:
-            log.Debug('dpbx,exception: %s' % e)
-            log.Info("dpbx: Authentication failed. Trying to obtain new access token")
+            log.Debug(u'dpbx,exception: %s' % e)
+            log.Info(u"dpbx: Authentication failed. Trying to obtain new access token")
 
             self.obtain_access_token()
 
             # We're assuming obtain_access_token will throw exception.
             # So this line should not be reached
-            raise BackendException("dpbx: Please update DPBX_ACCESS_TOKEN and try again")
+            raise BackendException(u"dpbx: Please update DPBX_ACCESS_TOKEN and try again")
 
-        log.Info("dpbx: Successfully authenticated as %s" %
+        log.Info(u"dpbx: Successfully authenticated as %s" %
                  self.api_account.name.display_name)
 
     def _error_code(self, operation, e):
@@ -186,8 +234,8 @@
 
     @command()
     def _put(self, source_path, remote_filename):
-        remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/'))
-        remote_path = '/' + os.path.join(remote_dir, remote_filename).rstrip()
+        remote_dir = urllib.parse.unquote(self.parsed_url.path.lstrip(u'/'))
+        remote_path = u'/' + os.path.join(remote_dir, remote_filename).rstrip()
 
         file_size = os.path.getsize(source_path.name)
         progress.report_transfer(0, file_size)
@@ -200,10 +248,10 @@
 
         # A few sanity checks
         if res_metadata.path_display != remote_path:
-            raise BackendException('dpbx: result path mismatch: %s (expected: %s)' %
+            raise BackendException(u'dpbx: result path mismatch: %s (expected: %s)' %
                                    (res_metadata.path_display, remote_path))
         if res_metadata.size != file_size:
-            raise BackendException('dpbx: result size mismatch: %s (expected: %s)' %
+            raise BackendException(u'dpbx: result size mismatch: %s (expected: %s)' %
                                    (res_metadata.size, file_size))
 
     def put_file_small(self, source_path, remote_path):
@@ -211,16 +259,16 @@
             self.login()
 
         file_size = os.path.getsize(source_path.name)
-        f = source_path.open('rb')
+        f = source_path.open(u'rb')
         try:
-            log.Debug('dpbx,files_upload(%s, [%d bytes])' % (remote_path, file_size))
+            log.Debug(u'dpbx,files_upload(%s, [%d bytes])' % (remote_path, file_size))
 
             res_metadata = self.api_client.files_upload(f.read(), remote_path,
                                                         mode=WriteMode.overwrite,
                                                         autorename=False,
                                                         client_modified=None,
                                                         mute=True)
-            log.Debug('dpbx,files_upload(): %s' % res_metadata)
+            log.Debug(u'dpbx,files_upload(): %s' % res_metadata)
             progress.report_transfer(file_size, file_size)
             return res_metadata
         finally:
@@ -231,13 +279,13 @@
             self.login()
 
         file_size = os.path.getsize(source_path.name)
-        f = source_path.open('rb')
+        f = source_path.open(u'rb')
         try:
             buf = f.read(DPBX_UPLOAD_CHUNK_SIZE)
-            log.Debug('dpbx,files_upload_session_start([%d bytes]), total: %d' %
+            log.Debug(u'dpbx,files_upload_session_start([%d bytes]), total: %d' %
                       (len(buf), file_size))
             upload_sid = self.api_client.files_upload_session_start(buf)
-            log.Debug('dpbx,files_upload_session_start(): %s' % upload_sid)
+            log.Debug(u'dpbx,files_upload_session_start(): %s' % upload_sid)
             upload_cursor = UploadSessionCursor(upload_sid.session_id, f.tell())
             commit_info = CommitInfo(remote_path, mode=WriteMode.overwrite,
                                      autorename=False, client_modified=None,
@@ -273,21 +321,21 @@
 
                     if not is_eof:
                         assert len(buf) != 0
-                        log.Debug('dpbx,files_upload_sesssion_append([%d bytes], offset=%d)' %
+                        log.Debug(u'dpbx,files_upload_sesssion_append([%d bytes], offset=%d)' %
                                   (len(buf), upload_cursor.offset))
                         self.api_client.files_upload_session_append(buf,
                                                                     upload_cursor.session_id,
                                                                     upload_cursor.offset)
                     else:
-                        log.Debug('dpbx,files_upload_sesssion_finish([%d bytes], offset=%d)' %
+                        log.Debug(u'dpbx,files_upload_sesssion_finish([%d bytes], offset=%d)' %
                                   (len(buf), upload_cursor.offset))
                         res_metadata = self.api_client.files_upload_session_finish(buf,
                                                                                    upload_cursor,
                                                                                    commit_info)
 
                     upload_cursor.offset = f.tell()
-                    log.Debug('progress: %d of %d' % (upload_cursor.offset,
-                                                      file_size))
+                    log.Debug(u'progress: %d of %d' % (upload_cursor.offset,
+                                                       file_size))
                     progress.report_transfer(upload_cursor.offset, file_size)
                 except ApiError as e:
                     error = e.error
@@ -298,19 +346,19 @@
                         # expected offset from server and it's enough to just
                         # seek() and retry again
                         new_offset = error.get_incorrect_offset().correct_offset
-                        log.Debug('dpbx,files_upload_session_append: incorrect offset: %d (expected: %s)' %
+                        log.Debug(u'dpbx,files_upload_session_append: incorrect offset: %d (expected: %s)' %
                                   (upload_cursor.offset, new_offset))
                         if requested_offset is not None:
                             # chunk failed even after seek attempt. Something
                             # strange and no safe way to recover
-                            raise BackendException("dpbx: unable to chunk upload")
+                            raise BackendException(u"dpbx: unable to chunk upload")
                         else:
                             # will seek and retry
                             requested_offset = new_offset
                         continue
                     raise
                 except ConnectionError as e:
-                    log.Debug('dpbx,files_upload_session_append: %s' % e)
+                    log.Debug(u'dpbx,files_upload_session_append: %s' % e)
 
                     retry_number -= 1
 
@@ -323,16 +371,16 @@
                     # We don't know for sure, was partial upload successful or
                     # not. So it's better to retry smaller amount to avoid extra
                     # reupload
-                    log.Info('dpbx: sleeping a bit before chunk retry')
+                    log.Info(u'dpbx: sleeping a bit before chunk retry')
                     time.sleep(30)
-                    current_chunk_size = DPBX_UPLOAD_CHUNK_SIZE / 5
+                    current_chunk_size = old_div(DPBX_UPLOAD_CHUNK_SIZE, 5)
                     requested_offset = None
                     continue
 
             if f.tell() != file_size:
-                raise BackendException('dpbx: something wrong')
+                raise BackendException(u'dpbx: something wrong')
 
-            log.Debug('dpbx,files_upload_sesssion_finish(): %s' % res_metadata)
+            log.Debug(u'dpbx,files_upload_sesssion_finish(): %s' % res_metadata)
             progress.report_transfer(f.tell(), file_size)
 
             return res_metadata
@@ -345,18 +393,18 @@
         if not self.user_authenticated():
             self.login()
 
-        remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/'))
-        remote_path = '/' + os.path.join(remote_dir, remote_filename).rstrip()
+        remote_dir = urllib.parse.unquote(self.parsed_url.path.lstrip(u'/'))
+        remote_path = u'/' + os.path.join(remote_dir, remote_filename).rstrip()
 
-        log.Debug('dpbx,files_download(%s)' % remote_path)
+        log.Debug(u'dpbx,files_download(%s)' % remote_path)
         res_metadata, http_fd = self.api_client.files_download(remote_path)
-        log.Debug('dpbx,files_download(%s): %s, %s' % (remote_path, res_metadata,
-                                                       http_fd))
+        log.Debug(u'dpbx,files_download(%s): %s, %s' % (remote_path, res_metadata,
+                                                        http_fd))
         file_size = res_metadata.size
         to_fd = None
         progress.report_transfer(0, file_size)
         try:
-            to_fd = local_path.open('wb')
+            to_fd = local_path.open(u'wb')
             for c in http_fd.iter_content(DPBX_DOWNLOAD_BUF_SIZE):
                 to_fd.write(c)
                 progress.report_transfer(to_fd.tell(), file_size)
@@ -370,7 +418,7 @@
         # again. Since this check is free, it's better to have it here
         local_size = os.path.getsize(local_path.name)
         if local_size != file_size:
-            raise BackendException("dpbx: wrong file size: %d (expected: %d)" %
+            raise BackendException(u"dpbx: wrong file size: %d (expected: %d)" %
                                    (local_size, file_size))
 
         local_path.setdata()
@@ -380,10 +428,17 @@
         # Do a long listing to avoid connection reset
         if not self.user_authenticated():
             self.login()
+<<<<<<< TREE
         remote_dir = '/' + urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
 
         log.Debug('dpbx.files_list_folder(%s)' % remote_dir)
+=======
+        remote_dir = u'/' + urllib.parse.unquote(self.parsed_url.path.lstrip(u'/')).rstrip()
+
+        log.Debug(u'dpbx.files_list_folder(%s)' % remote_dir)
+>>>>>>> MERGE-SOURCE
         res = []
+<<<<<<< TREE
         try:
             resp = self.api_client.files_list_folder(remote_dir)
             log.Debug('dpbx.list(%s): %s' % (remote_dir, resp))
@@ -399,6 +454,23 @@
                 log.Debug('dpbx.list(%s): ignore missing folder (%s)' % (remote_dir, e))
             else:
                 raise
+=======
+        try:
+            resp = self.api_client.files_list_folder(remote_dir)
+            log.Debug(u'dpbx.list(%s): %s' % (remote_dir, resp))
+
+            while True:
+                res.extend([entry.name for entry in resp.entries])
+                if not resp.has_more:
+                    break
+                resp = self.api_client.files_list_folder_continue(resp.cursor)
+        except ApiError as e:
+            if (isinstance(e.error, ListFolderError) and e.error.is_path() and
+                    e.error.get_path().is_not_found()):
+                log.Debug(u'dpbx.list(%s): ignore missing folder (%s)' % (remote_dir, e))
+            else:
+                raise
+>>>>>>> MERGE-SOURCE
 
         # Warn users of old version dpbx about automatically renamed files
         self.check_renamed_files(res)
@@ -410,10 +482,10 @@
         if not self.user_authenticated():
             self.login()
 
-        remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/'))
-        remote_path = '/' + os.path.join(remote_dir, filename).rstrip()
+        remote_dir = urllib.parse.unquote(self.parsed_url.path.lstrip(u'/'))
+        remote_path = u'/' + os.path.join(remote_dir, filename).rstrip()
 
-        log.Debug('dpbx.files_delete(%s)' % remote_path)
+        log.Debug(u'dpbx.files_delete(%s)' % remote_path)
         self.api_client.files_delete(remote_path)
 
         # files_permanently_delete seems to be better for backup purpose
@@ -422,20 +494,20 @@
 
     @command()
     def _close(self):
-        """close backend session? no! just "flush" the data"""
-        log.Debug('dpbx.close():')
+        u"""close backend session? no! just "flush" the data"""
+        log.Debug(u'dpbx.close():')
 
     @command()
     def _query(self, filename):
         if not self.user_authenticated():
             self.login()
-        remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/'))
-        remote_path = '/' + os.path.join(remote_dir, filename).rstrip()
+        remote_dir = urllib.parse.unquote(self.parsed_url.path.lstrip(u'/'))
+        remote_path = u'/' + os.path.join(remote_dir, filename).rstrip()
 
-        log.Debug('dpbx.files_get_metadata(%s)' % remote_path)
+        log.Debug(u'dpbx.files_get_metadata(%s)' % remote_path)
         info = self.api_client.files_get_metadata(remote_path)
-        log.Debug('dpbx.files_get_metadata(%s): %s' % (remote_path, info))
-        return {'size': info.size}
+        log.Debug(u'dpbx.files_get_metadata(%s): %s' % (remote_path, info))
+        return {u'size': info.size}
 
     def check_renamed_files(self, file_list):
         if not self.user_authenticated():
@@ -443,22 +515,22 @@
         bad_list = [x for x in file_list if DPBX_AUTORENAMED_FILE_RE.search(x) is not None]
         if len(bad_list) == 0:
             return
-        log.Warn('-' * 72)
-        log.Warn('Warning! It looks like there are automatically renamed files on backend')
-        log.Warn('They were probably created when using older version of duplicity.')
-        log.Warn('')
-        log.Warn('Please check your backup consistency. Most likely you will need to choose')
-        log.Warn('largest file from duplicity-* (number).gpg and remove brackets from its name.')
-        log.Warn('')
-        log.Warn('These files are not managed by duplicity at all and will not be')
-        log.Warn('removed/rotated automatically.')
-        log.Warn('')
-        log.Warn('Affected files:')
+        log.Warn(u'-' * 72)
+        log.Warn(u'Warning! It looks like there are automatically renamed files on the backend.')
+        log.Warn(u'They were probably created by an older version of duplicity.')
+        log.Warn(u'')
+        log.Warn(u'Please check your backup consistency. Most likely you will need to choose')
+        log.Warn(u'the largest file from duplicity-* (number).gpg and remove the brackets from its name.')
+        log.Warn(u'')
+        log.Warn(u'These files are not managed by duplicity at all and will not be')
+        log.Warn(u'removed/rotated automatically.')
+        log.Warn(u'')
+        log.Warn(u'Affected files:')
         for x in bad_list:
-            log.Warn('\t%s' % x)
-        log.Warn('')
-        log.Warn('In any case it\'s better to create full backup.')
-        log.Warn('-' * 72)
-
-
-duplicity.backend.register_backend("dpbx", DPBXBackend)
+            log.Warn(u'\t%s' % x)
+        log.Warn(u'')
+        log.Warn(u'In any case it\'s better to create a full backup.')
+        log.Warn(u'-' * 72)
+
+
+duplicity.backend.register_backend(u"dpbx", DPBXBackend)

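The chunked-upload loop in put_file_chunked above has two recovery paths: on an "incorrect offset" ApiError it seeks to the offset the server reports and retries, and on a ConnectionError it sleeps, shrinks the chunk to a fifth (the old `/ 5` now via old_div), and retries until the retry budget runs out. The control flow, reduced to a standalone sketch (OffsetMismatch and append_chunk are hypothetical stand-ins for the Dropbox SDK error and files_upload_session_append, not the real API):

    class OffsetMismatch(Exception):
        """Stand-in for the SDK's 'incorrect offset' ApiError."""
        def __init__(self, correct_offset):
            super(OffsetMismatch, self).__init__(correct_offset)
            self.correct_offset = correct_offset

    def chunked_upload(f, file_size, append_chunk, chunk_size, max_retries=5):
        """Upload f in chunks; append_chunk(buf, offset) raises OffsetMismatch
        when the server expected a different offset and ConnectionError on
        network trouble."""
        offset = 0
        retries = max_retries
        while offset < file_size:
            f.seek(offset)
            buf = f.read(chunk_size)
            try:
                append_chunk(buf, offset)
            except OffsetMismatch as e:
                offset = e.correct_offset            # resync to what the server already has
                continue
            except ConnectionError:
                retries -= 1
                if retries <= 0:
                    raise
                chunk_size = max(chunk_size // 5, 1)  # retry with smaller chunks
                continue
            offset += len(buf)
            retries = max_retries                    # a successful chunk restores the budget
        return offset

The real loop additionally gives up if a second offset mismatch follows immediately after a seek; the sketch leaves that guard out for brevity.
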
=== modified file 'duplicity/backends/gdocsbackend.py'
--- duplicity/backends/gdocsbackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/gdocsbackend.py	2019-02-22 19:07:43 +0000
@@ -18,19 +18,26 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from __future__ import print_function
+from future import standard_library
+standard_library.install_aliases()
+from builtins import input
+from builtins import str
 import os.path
 import string
-import urllib
+import urllib.request  # pylint: disable=import-error
+import urllib.parse  # pylint: disable=import-error
+import urllib.error  # pylint: disable=import-error
 
 import duplicity.backend
 from duplicity.errors import BackendException
 
 
 class GDocsBackend(duplicity.backend.Backend):
-    """Connect to remote store using Google Google Documents List API"""
+    u"""Connect to remote store using Google Google Documents List API"""
 
-    ROOT_FOLDER_ID = 'folder%3Aroot'
-    BACKUP_DOCUMENT_TYPE = 'application/binary'
+    ROOT_FOLDER_ID = u'folder%3Aroot'
+    BACKUP_DOCUMENT_TYPE = u'application/binary'
 
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
@@ -44,36 +51,36 @@
             import gdata.docs.client
             import gdata.docs.data
         except ImportError as e:
-            raise BackendException("""\
+            raise BackendException(u"""\
 Google Docs backend requires Google Data APIs Python Client Library (see http://code.google.com/p/gdata-python-client/).
 Exception: %s""" % str(e))
 
         # Setup client instance.
-        self.client = gdata.docs.client.DocsClient(source='duplicity $version')
+        self.client = gdata.docs.client.DocsClient(source=u'duplicity $version')
         self.client.ssl = True
         self.client.http_client.debug = False
-        self._authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password())
+        self._authorize(parsed_url.username + u'@' + parsed_url.hostname, self.get_password())
 
         # Fetch destination folder entry (and crete hierarchy if required).
-        folder_names = string.split(parsed_url.path[1:], '/')
+        folder_names = parsed_url.path[1:].split(u'/')
         parent_folder = None
         parent_folder_id = GDocsBackend.ROOT_FOLDER_ID
         for folder_name in folder_names:
-            entries = self._fetch_entries(parent_folder_id, 'folder', folder_name)
+            entries = self._fetch_entries(parent_folder_id, u'folder', folder_name)
             if entries is not None:
                 if len(entries) == 1:
                     parent_folder = entries[0]
                 elif len(entries) == 0:
-                    folder = gdata.docs.data.Resource(type='folder', title=folder_name)
+                    folder = gdata.docs.data.Resource(type=u'folder', title=folder_name)
                     parent_folder = self.client.create_resource(folder, collection=parent_folder)
                 else:
                     parent_folder = None
                 if parent_folder:
                     parent_folder_id = parent_folder.resource_id.text
                 else:
-                    raise BackendException("Error while creating destination folder '%s'." % folder_name)
+                    raise BackendException(u"Error while creating destination folder '%s'." % folder_name)
             else:
-                raise BackendException("Error while fetching destination folder '%s'." % folder_name)
+                raise BackendException(u"Error while fetching destination folder '%s'." % folder_name)
         self.folder = parent_folder
 
     def _put(self, source_path, remote_filename):
@@ -92,13 +99,13 @@
         if uploader:
             # Chunked upload.
             entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename))
-            uri = self.folder.get_resumable_create_media_link().href + '?convert=false'
+            uri = self.folder.get_resumable_create_media_link().href + u'?convert=false'
             entry = uploader.UploadFile(uri, entry=entry)
             if not entry:
-                raise BackendException("Failed to upload file '%s' to remote folder '%s'"
+                raise BackendException(u"Failed to upload file '%s' to remote folder '%s'"
                                        % (source_path.get_filename(), self.folder.title.text))
         else:
-            raise BackendException("Failed to initialize upload of file '%s' to remote folder '%s'"
+            raise BackendException(u"Failed to initialize upload of file '%s' to remote folder '%s'"
                                    % (source_path.get_filename(), self.folder.title.text))
         assert not file.close()
 
@@ -110,7 +117,7 @@
             entry = entries[0]
             self.client.DownloadResource(entry, local_path.name)
         else:
-            raise BackendException("Failed to find file '%s' in remote folder '%s'"
+            raise BackendException(u"Failed to find file '%s' in remote folder '%s'"
                                    % (remote_filename, self.folder.title.text))
 
     def _list(self):
@@ -123,41 +130,41 @@
                                       GDocsBackend.BACKUP_DOCUMENT_TYPE,
                                       filename)
         for entry in entries:
-            self.client.delete(entry.get_edit_link().href + '?delete=true', force=True)
+            self.client.delete(entry.get_edit_link().href + u'?delete=true', force=True)
 
     def _authorize(self, email, password, captcha_token=None, captcha_response=None):
         try:
             self.client.client_login(email,
                                      password,
-                                     source='duplicity $version',
-                                     service='writely',
+                                     source=u'duplicity $version',
+                                     service=u'writely',
                                      captcha_token=captcha_token,
                                      captcha_response=captcha_response)
         except gdata.client.CaptchaChallenge as challenge:
-            print('A captcha challenge in required. Please visit ' + challenge.captcha_url)
+            print(u'A captcha challenge is required. Please visit ' + challenge.captcha_url)
             answer = None
             while not answer:
-                answer = raw_input('Answer to the challenge? ')
+                answer = input(u'Answer to the challenge? ')
             self._authorize(email, password, challenge.captcha_token, answer)
         except gdata.client.BadAuthentication:
             raise BackendException(
-                'Invalid user credentials given. Be aware that accounts '
-                'that use 2-step verification require creating an application specific '
-                'access code for using this Duplicity backend. Follow the instruction in '
-                'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '
-                'and create your application-specific password to run duplicity backups.')
+                u'Invalid user credentials given. Be aware that accounts '
+                u'that use 2-step verification require creating an application specific '
+                u'access code for using this Duplicity backend. Follow the instruction in '
+                u'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '
+                u'and create your application-specific password to run duplicity backups.')
 
     def _fetch_entries(self, folder_id, type, title=None):
         # Build URI.
-        uri = '/feeds/default/private/full/%s/contents' % folder_id
-        if type == 'folder':
-            uri += '/-/folder?showfolders=true'
+        uri = u'/feeds/default/private/full/%s/contents' % folder_id
+        if type == u'folder':
+            uri += u'/-/folder?showfolders=true'
         elif type == GDocsBackend.BACKUP_DOCUMENT_TYPE:
-            uri += '?showfolders=false'
+            uri += u'?showfolders=false'
         else:
-            uri += '?showfolders=true'
+            uri += u'?showfolders=true'
         if title:
-            uri += '&title=' + urllib.quote(title) + '&title-exact=true'
+            uri += u'&title=' + urllib.parse.quote(title) + u'&title-exact=true'
 
         # Fetch entries.
         entries = self.client.get_all_resources(uri=uri)
@@ -169,8 +176,8 @@
             for entry in entries:
                 resource_type = entry.get_resource_type()
                 if (not type) \
-                   or (type == 'folder' and resource_type == 'folder') \
-                   or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'):
+                   or (type == u'folder' and resource_type == u'folder') \
+                   or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != u'folder'):
 
                     if folder_id != GDocsBackend.ROOT_FOLDER_ID:
                         for link in entry.in_collections():
@@ -186,6 +193,13 @@
         # Done!
         return result
 
+<<<<<<< TREE
 """ gdata is an alternate way to access gdocs, currently 05/2015 lacking OAuth support """
 duplicity.backend.register_backend('gdata+gdocs', GDocsBackend)
 duplicity.backend.uses_netloc.extend(['gdata+gdocs'])
+=======
+
+u""" gdata is an alternate way to access gdocs, currently 05/2015 lacking OAuth support """
+duplicity.backend.register_backend(u'gdata+gdocs', GDocsBackend)
+duplicity.backend.uses_netloc.extend([u'gdata+gdocs'])
+>>>>>>> MERGE-SOURCE

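GDocsBackend.__init__ above resolves the destination folder by walking each path component, reusing an existing folder when exactly one match is found, creating it when none exists, and aborting when the name is ambiguous. The same walk as an isolated sketch (find_folders and create_folder are hypothetical callables, not the gdata client API):

    def ensure_folder_path(find_folders, create_folder, path):
        """Resolve (and create as needed) a nested folder path, one level at a time."""
        parent = None  # None stands in for the root collection
        for name in [p for p in path.split(u'/') if p]:
            matches = find_folders(parent, name)
            if len(matches) == 1:
                parent = matches[0]                  # reuse the existing folder
            elif not matches:
                parent = create_folder(parent, name)
            else:
                raise ValueError(u"folder name '%s' is ambiguous" % name)
        return parent
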
=== modified file 'duplicity/backends/giobackend.py'
--- duplicity/backends/giobackend.py	2017-08-15 15:52:11 +0000
+++ duplicity/backends/giobackend.py	2019-02-22 19:07:43 +0000
@@ -18,8 +18,6 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-# pylint: skip-file
-
 import os
 import subprocess
 import atexit
@@ -34,35 +32,35 @@
     # GIO requires a dbus session bus which can start the gvfs daemons
     # when required.  So we make sure that such a bus exists and that our
     # environment points to it.
-    if 'DBUS_SESSION_BUS_ADDRESS' not in os.environ:
-        output = subprocess.Popen(['dbus-launch'], stdout=subprocess.PIPE).communicate()[0]
-        lines = output.split('\n')
+    if u'DBUS_SESSION_BUS_ADDRESS' not in os.environ:
+        output = subprocess.Popen([u'dbus-launch'], stdout=subprocess.PIPE).communicate()[0]
+        lines = output.split(u'\n')
         for line in lines:
-            parts = line.split('=', 1)
+            parts = line.split(u'=', 1)
             if len(parts) == 2:
-                if parts[0] == 'DBUS_SESSION_BUS_PID':  # cleanup at end
+                if parts[0] == u'DBUS_SESSION_BUS_PID':  # cleanup at end
                     atexit.register(os.kill, int(parts[1]), signal.SIGTERM)
                 os.environ[parts[0]] = parts[1]
 
 
 class GIOBackend(duplicity.backend.Backend):
-    """Use this backend when saving to a GIO URL.
+    u"""Use this backend when saving to a GIO URL.
        This is a bit of a meta-backend, in that it can handle multiple schemas.
        URLs look like schema://user@server/path.
     """
     def __init__(self, parsed_url):
-        from gi.repository import Gio  # @UnresolvedImport
-        from gi.repository import GLib  # @UnresolvedImport
+        from gi.repository import Gio  # @UnresolvedImport  # pylint: disable=import-error
+        from gi.repository import GLib  # @UnresolvedImport  # pylint: disable=import-error
 
         class DupMountOperation(Gio.MountOperation):
-            """A simple MountOperation that grabs the password from the environment
+            u"""A simple MountOperation that grabs the password from the environment
                or the user.
             """
             def __init__(self, backend):
                 Gio.MountOperation.__init__(self)
                 self.backend = backend
-                self.connect('ask-password', self.ask_password_cb)
-                self.connect('ask-question', self.ask_question_cb)
+                self.connect(u'ask-password', self.ask_password_cb)
+                self.connect(u'ask-question', self.ask_question_cb)
 
             def ask_password_cb(self, *args, **kwargs):
                 self.set_password(self.backend.get_password())
@@ -100,14 +98,14 @@
                 raise
 
     def __done_with_mount(self, fileobj, result, loop):
-        from gi.repository import Gio  # @UnresolvedImport
-        from gi.repository import GLib  # @UnresolvedImport
+        from gi.repository import Gio  # @UnresolvedImport  # pylint: disable=import-error
+        from gi.repository import GLib  # @UnresolvedImport  # pylint: disable=import-error
         try:
             fileobj.mount_enclosing_volume_finish(result)
         except GLib.GError as e:
             # check for NOT_SUPPORTED because some schemas (e.g. file://) validly don't
             if e.code != Gio.IOErrorEnum.ALREADY_MOUNTED and e.code != Gio.IOErrorEnum.NOT_SUPPORTED:
-                log.FatalError(_("Connection failed, please check your password: %s")
+                log.FatalError(_(u"Connection failed, please check your password: %s")
                                % util.uexc(e), log.ErrorCode.connection_failed)
         loop.quit()
 
@@ -115,7 +113,7 @@
         pass
 
     def __copy_file(self, source, target):
-        from gi.repository import Gio  # @UnresolvedImport
+        from gi.repository import Gio  # @UnresolvedImport  # pylint: disable=import-error
         # Don't pass NOFOLLOW_SYMLINKS here. Some backends (e.g. google-drive:)
         # use symlinks internally for all files. In the normal course of
         # events, we never deal with symlinks anyway, just tarballs.
@@ -124,10 +122,10 @@
                     None, self.__copy_progress, None)
 
     def _error_code(self, operation, e):
-        from gi.repository import Gio  # @UnresolvedImport
-        from gi.repository import GLib  # @UnresolvedImport
+        from gi.repository import Gio  # @UnresolvedImport  # pylint: disable=import-error
+        from gi.repository import GLib  # @UnresolvedImport  # pylint: disable=import-error
         if isinstance(e, GLib.GError):
-            if e.code == Gio.IOErrorEnum.FAILED and operation == 'delete':
+            if e.code == Gio.IOErrorEnum.FAILED and operation == u'delete':
                 # Sometimes delete will return a generic failure on a file not
                 # found (notably the FTP does that)
                 return log.ErrorCode.backend_not_found
@@ -139,19 +137,24 @@
                 return log.ErrorCode.backend_no_space
 
     def _put(self, source_path, remote_filename):
-        from gi.repository import Gio  # @UnresolvedImport
+        from gi.repository import Gio  # @UnresolvedImport  # pylint: disable=import-error
         source_file = Gio.File.new_for_path(source_path.name)
         target_file = self.remote_file.get_child_for_display_name(remote_filename)
         self.__copy_file(source_file, target_file)
 
     def _get(self, filename, local_path):
+<<<<<<< TREE
         from gi.repository import Gio  # @UnresolvedImport
         source_file = self.remote_file.get_child_for_display_name(filename)
+=======
+        from gi.repository import Gio  # @UnresolvedImport  # pylint: disable=import-error
+        source_file = self.remote_file.get_child_for_display_name(filename)
+>>>>>>> MERGE-SOURCE
         target_file = Gio.File.new_for_path(local_path.name)
         self.__copy_file(source_file, target_file)
 
     def _list(self):
-        from gi.repository import Gio  # @UnresolvedImport
+        from gi.repository import Gio  # @UnresolvedImport  # pylint: disable=import-error
         files = []
         # We grab display name, rather than file name because some backends
         # (e.g. google-drive:) use filesystem-specific IDs as file names and
@@ -171,10 +174,22 @@
         target_file.delete(None)
 
     def _query(self, filename):
+<<<<<<< TREE
         from gi.repository import Gio  # @UnresolvedImport
         target_file = self.remote_file.get_child_for_display_name(filename)
+=======
+        from gi.repository import Gio  # @UnresolvedImport  # pylint: disable=import-error
+        target_file = self.remote_file.get_child_for_display_name(filename)
+>>>>>>> MERGE-SOURCE
         info = target_file.query_info(Gio.FILE_ATTRIBUTE_STANDARD_SIZE,
                                       Gio.FileQueryInfoFlags.NONE, None)
+<<<<<<< TREE
         return {'size': info.get_size()}
 
 duplicity.backend.register_backend_prefix('gio', GIOBackend)
+=======
+        return {u'size': info.get_size()}
+
+
+duplicity.backend.register_backend_prefix(u'gio', GIOBackend)
+>>>>>>> MERGE-SOURCE
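
The giobackend hunk keeps the dbus-launch handling shown at the top: each KEY=VALUE line printed by dbus-launch is exported into the environment, and the session bus PID is registered for cleanup at exit. A minimal, self-contained sketch of that parsing loop (the helper name is illustrative, not part of the diff):

    import atexit
    import os
    import signal

    def export_dbus_environment(output):
        u"""Export KEY=VALUE pairs from dbus-launch output into os.environ."""
        for line in output.splitlines():
            parts = line.split(u'=', 1)
            if len(parts) != 2:
                continue
            if parts[0] == u'DBUS_SESSION_BUS_PID':
                # terminate the freshly started session bus when duplicity exits
                atexit.register(os.kill, int(parts[1]), signal.SIGTERM)
            os.environ[parts[0]] = parts[1]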

=== modified file 'duplicity/backends/hsibackend.py'
--- duplicity/backends/hsibackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/hsibackend.py	2019-02-22 19:07:43 +0000
@@ -19,10 +19,12 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from builtins import range
 import os
 import duplicity.backend
+from duplicity import util
 
-hsi_command = "hsi"
+hsi_command = u"hsi"
 
 
 class HSIBackend(duplicity.backend.Backend):
@@ -31,31 +33,43 @@
         self.host_string = parsed_url.hostname
         self.remote_dir = parsed_url.path
         if self.remote_dir:
-            self.remote_prefix = self.remote_dir + "/"
+            self.remote_prefix = self.remote_dir + u"/"
         else:
-            self.remote_prefix = ""
+            self.remote_prefix = u""
 
     def _put(self, source_path, remote_filename):
-        commandline = '%s "put %s : %s%s"' % (hsi_command, source_path.name, self.remote_prefix, remote_filename)
+        if isinstance(remote_filename, b"".__class__):
+            remote_filename = util.fsdecode(remote_filename)
+        commandline = u'%s "put %s : %s%s"' % (hsi_command, source_path.uc_name, self.remote_prefix, remote_filename)
         self.subprocess_popen(commandline)
 
     def _get(self, remote_filename, local_path):
-        commandline = '%s "get %s : %s%s"' % (hsi_command, local_path.name, self.remote_prefix, remote_filename)
+        if isinstance(remote_filename, b"".__class__):
+            remote_filename = util.fsdecode(remote_filename)
+        commandline = u'%s "get %s : %s%s"' % (hsi_command, local_path.uc_name, self.remote_prefix, remote_filename)
         self.subprocess_popen(commandline)
 
     def _list(self):
         import sys
-        commandline = '%s "ls -l %s"' % (hsi_command, self.remote_dir)
+        commandline = u'%s "ls -l %s"' % (hsi_command, self.remote_dir)
         l = self.subprocess_popen(commandline)[2]
-        l = l.split(os.linesep)[3:]
+        l = l.split(os.linesep.encode())[3:]
         for i in range(0, len(l)):
             if l[i]:
                 l[i] = l[i].split()[-1]
         return [x for x in l if x]
 
     def _delete(self, filename):
-        commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, filename)
+        if isinstance(filename, b"".__class__):
+            filename = util.fsdecode(filename)
+        commandline = u'%s "rm %s%s"' % (hsi_command, self.remote_prefix, filename)
         self.subprocess_popen(commandline)
 
+<<<<<<< TREE
 duplicity.backend.register_backend("hsi", HSIBackend)
 duplicity.backend.uses_netloc.extend(['hsi'])
+=======
+
+duplicity.backend.register_backend(u"hsi", HSIBackend)
+duplicity.backend.uses_netloc.extend([u'hsi'])
+>>>>>>> MERGE-SOURCE
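
The hsibackend hunk adds the same bytes-versus-text guard used by several backends in this diff: a filename may arrive as bytes and is decoded before being interpolated into a unicode command line. A standalone sketch of the idea, using os.fsdecode in place of duplicity's util.fsdecode (Python 3 only, for illustration):

    import os

    def to_text(filename):
        u"""Return the filename as text, decoding bytes with the filesystem encoding."""
        if isinstance(filename, bytes):
            return os.fsdecode(filename)
        return filename

    # example filename is made up
    commandline = u'hsi "rm %s%s"' % (u"backups/", to_text(b"duplicity-full.vol1.difftar.gpg"))
    print(commandline)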

=== modified file 'duplicity/backends/hubicbackend.py'
--- duplicity/backends/hubicbackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/hubicbackend.py	2019-02-22 19:07:43 +0000
@@ -18,17 +18,19 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from builtins import str
 import os
 
-import duplicity.backend
 from duplicity import log
 from duplicity import util
 from duplicity.errors import BackendException
+import duplicity.backend
+
 from ._cf_pyrax import PyraxBackend
 
 
 class HubicBackend(PyraxBackend):
-    """
+    u"""
     Backend for Hubic using Pyrax
     """
     def __init__(self, parsed_url):
@@ -37,29 +39,34 @@
         try:
             import pyrax
         except ImportError as e:
-            raise BackendException("""\
+            raise BackendException(u"""\
 Hubic backend requires the pyrax library available from Rackspace.
 Exception: %s""" % str(e))
 
         # Inform Pyrax that we're talking to Hubic
-        pyrax.set_setting("identity_type", "duplicity.backends.pyrax_identity.hubic.HubicIdentity")
+        pyrax.set_setting(u"identity_type", u"duplicity.backends.pyrax_identity.hubic.HubicIdentity")
 
-        CREDENTIALS_FILE = os.path.expanduser("~/.hubic_credentials")
+        CREDENTIALS_FILE = os.path.expanduser(u"~/.hubic_credentials")
         if os.path.exists(CREDENTIALS_FILE):
             try:
                 pyrax.set_credential_file(CREDENTIALS_FILE)
             except Exception as e:
-                log.FatalError("Connection failed, please check your credentials: %s %s"
+                log.FatalError(u"Connection failed, please check your credentials: %s %s"
                                % (e.__class__.__name__, util.uexc(e)),
                                log.ErrorCode.connection_failed)
 
         else:
-            raise BackendException("No ~/.hubic_credentials file found.")
+            raise BackendException(u"No ~/.hubic_credentials file found.")
 
-        container = parsed_url.path.lstrip('/')
+        container = parsed_url.path.lstrip(u'/')
 
         self.client_exc = pyrax.exceptions.ClientException
         self.nso_exc = pyrax.exceptions.NoSuchObject
         self.container = pyrax.cloudfiles.create_container(container)
 
+<<<<<<< TREE
 duplicity.backend.register_backend("cf+hubic", HubicBackend)
+=======
+
+duplicity.backend.register_backend(u"cf+hubic", HubicBackend)
+>>>>>>> MERGE-SOURCE
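
Like the Hubic changes above, most backends in this diff import their third-party dependency lazily inside __init__ and turn an ImportError into a readable BackendException. A sketch of that pattern (assumes the duplicity package is importable; pyrax itself need not be installed to exercise the error path):

    from duplicity.errors import BackendException

    def require_pyrax():
        u"""Import pyrax on demand, raising a readable error if it is missing."""
        try:
            import pyrax
        except ImportError as e:
            raise BackendException(
                u"Hubic backend requires the pyrax library available from Rackspace. "
                u"Exception: %s" % str(e))
        return pyrax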

=== modified file 'duplicity/backends/imapbackend.py'
--- duplicity/backends/imapbackend.py	2017-07-11 14:55:38 +0000
+++ duplicity/backends/imapbackend.py	2019-02-22 19:07:43 +0000
@@ -20,15 +20,23 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+import sys
+from future import standard_library
+standard_library.install_aliases()
+from builtins import input
 import imaplib
 import re
 import os
 import time
 import socket
-import StringIO
-import rfc822
+import io
 import getpass
 import email
+from email.parser import Parser
+try:
+    from email.policy import default  # pylint: disable=import-error
+except:
+    pass
 
 import duplicity.backend
 from duplicity import globals
@@ -40,7 +48,7 @@
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
 
-        log.Debug("I'm %s (scheme %s) connecting to %s as %s" %
+        log.Debug(u"I'm %s (scheme %s) connecting to %s as %s" %
                   (self.__class__.__name__, parsed_url.scheme, parsed_url.hostname, parsed_url.username))
 
         #  Store url for reconnection on error
@@ -48,16 +56,16 @@
 
         #  Set the username
         if (parsed_url.username is None):
-            username = raw_input('Enter account userid: ')
+            username = input(u'Enter account userid: ')
         else:
             username = parsed_url.username
 
         #  Set the password
         if (not parsed_url.password):
-            if 'IMAP_PASSWORD' in os.environ:
-                password = os.environ.get('IMAP_PASSWORD')
+            if u'IMAP_PASSWORD' in os.environ:
+                password = os.environ.get(u'IMAP_PASSWORD')
             else:
-                password = getpass.getpass("Enter account password: ")
+                password = getpass.getpass(u"Enter account password: ")
         else:
             password = parsed_url.password
 
@@ -68,7 +76,7 @@
     def resetConnection(self):
         parsed_url = self.url
         try:
-            imap_server = os.environ['IMAP_SERVER']
+            imap_server = os.environ[u'IMAP_SERVER']
         except KeyError:
             imap_server = parsed_url.hostname
 
@@ -78,25 +86,25 @@
         except Exception:
             pass
 
-        if (parsed_url.scheme == "imap"):
+        if (parsed_url.scheme == u"imap"):
             cl = imaplib.IMAP4
             self.conn = cl(imap_server, 143)
-        elif (parsed_url.scheme == "imaps"):
+        elif (parsed_url.scheme == u"imaps"):
             cl = imaplib.IMAP4_SSL
             self.conn = cl(imap_server, 993)
 
-        log.Debug("Type of imap class: %s" % (cl.__name__))
+        log.Debug(u"Type of imap class: %s" % (cl.__name__))
         self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
 
         #  Login
         if (not(globals.imap_full_address)):
             self.conn.login(self.username, self.password)
             self.conn.select(globals.imap_mailbox)
-            log.Info("IMAP connected")
+            log.Info(u"IMAP connected")
         else:
-            self.conn.login(self.username + "@" + parsed_url.hostname, self.password)
+            self.conn.login(self.username + u"@" + parsed_url.hostname, self.password)
             self.conn.select(globals.imap_mailbox)
-            log.Info("IMAP connected")
+            log.Info(u"IMAP connected")
 
     def prepareBody(self, f, rname):
         mp = email.MIMEMultipart.MIMEMultipart()
@@ -104,10 +112,10 @@
         # I am going to use the remote_dir as the From address so that
         # multiple archives can be stored in an IMAP account and can be
         # accessed separately
-        mp["From"] = self.remote_dir
-        mp["Subject"] = rname
+        mp[u"From"] = self.remote_dir
+        mp[u"Subject"] = rname
 
-        a = email.MIMEBase.MIMEBase("application", "binary")
+        a = email.MIMEBase.MIMEBase(u"application", u"binary")
         a.set_payload(f.read())
 
         email.Encoders.encode_base64(a)
@@ -117,7 +125,7 @@
         return mp.as_string()
 
     def _put(self, source_path, remote_filename):
-        f = source_path.open("rb")
+        f = source_path.open(u"rb")
         allowedTimeout = globals.timeout
         if (allowedTimeout == 0):
             # Allow a total timeout of 1 day
@@ -133,7 +141,7 @@
                 break
             except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                 allowedTimeout -= 1
-                log.Info("Error saving '%s', retrying in 30s " % remote_filename)
+                log.Info(u"Error saving '%s', retrying in 30s " % remote_filename)
                 time.sleep(30)
                 while allowedTimeout > 0:
                     try:
@@ -141,10 +149,10 @@
                         break
                     except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                         allowedTimeout -= 1
-                        log.Info("Error reconnecting, retrying in 30s ")
+                        log.Info(u"Error reconnecting, retrying in 30s ")
                         time.sleep(30)
 
-        log.Info("IMAP mail with '%s' subject stored" % remote_filename)
+        log.Info(u"IMAP mail with '%s' subject stored" % remote_filename)
 
     def _get(self, remote_filename, local_path):
         allowedTimeout = globals.timeout
@@ -154,17 +162,17 @@
         while allowedTimeout > 0:
             try:
                 self.conn.select(globals.imap_mailbox)
-                (result, list) = self.conn.search(None, 'Subject', remote_filename)
-                if result != "OK":
+                (result, list) = self.conn.search(None, u'Subject', remote_filename)
+                if result != u"OK":
                     raise Exception(list[0])
 
                 # check if there is any result
-                if list[0] == '':
-                    raise Exception("no mail with subject %s")
-
-                (result, list) = self.conn.fetch(list[0], "(RFC822)")
-
-                if result != "OK":
+                if list[0] == u'':
+                    raise Exception(u"no mail with subject %s")
+
+                (result, list) = self.conn.fetch(list[0], u"(RFC822)")
+
+                if result != u"OK":
                     raise Exception(list[0])
                 rawbody = list[0][1]
 
@@ -178,7 +186,7 @@
                 break
             except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                 allowedTimeout -= 1
-                log.Info("Error loading '%s', retrying in 30s " % remote_filename)
+                log.Info(u"Error loading '%s', retrying in 30s " % remote_filename)
                 time.sleep(30)
                 while allowedTimeout > 0:
                     try:
@@ -186,71 +194,73 @@
                         break
                     except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                         allowedTimeout -= 1
-                        log.Info("Error reconnecting, retrying in 30s ")
+                        log.Info(u"Error reconnecting, retrying in 30s ")
                         time.sleep(30)
 
-        tfile = local_path.open("wb")
+        tfile = local_path.open(u"wb")
         tfile.write(body)
         local_path.setdata()
-        log.Info("IMAP mail with '%s' subject fetched" % remote_filename)
+        log.Info(u"IMAP mail with '%s' subject fetched" % remote_filename)
 
     def _list(self):
         ret = []
         (result, list) = self.conn.select(globals.imap_mailbox)
-        if result != "OK":
+        if result != u"OK":
             raise BackendException(list[0])
 
         # Going to find all the archives which have remote_dir in the From
         # address
 
         # Search returns an error if you haven't selected an IMAP folder.
-        (result, list) = self.conn.search(None, 'FROM', self.remote_dir)
-        if result != "OK":
+        (result, list) = self.conn.search(None, u'FROM', self.remote_dir)
+        if result != u"OK":
             raise Exception(list[0])
-        if list[0] == '':
+        if list[0] == u'':
             return ret
-        nums = list[0].strip().split(" ")
-        set = "%s:%s" % (nums[0], nums[-1])
-        (result, list) = self.conn.fetch(set, "(BODY[HEADER])")
-        if result != "OK":
+        nums = list[0].strip().split(u" ")
+        set = u"%s:%s" % (nums[0], nums[-1])
+        (result, list) = self.conn.fetch(set, u"(BODY[HEADER])")
+        if result != u"OK":
             raise Exception(list[0])
 
         for msg in list:
             if (len(msg) == 1):
                 continue
-            io = StringIO.StringIO(msg[1])  # pylint: disable=unsubscriptable-object
-            m = rfc822.Message(io)
-            subj = m.getheader("subject")
-            header_from = m.getheader("from")
+            if sys.version_info.major >= 3:
+                headers = Parser(policy=default).parsestr(msg[1])  # pylint: disable=unsubscriptable-object
+            else:
+                headers = Parser().parsestr(msg[1])  # pylint: disable=unsubscriptable-object
+            subj = headers[u"subject"]
+            header_from = headers[u"from"]
 
             # Catch messages with empty headers which cause an exception.
             if (not (header_from is None)):
-                if (re.compile("^" + self.remote_dir + "$").match(header_from)):
+                if (re.compile(u"^" + self.remote_dir + u"$").match(header_from)):
                     ret.append(subj)
-                    log.Info("IMAP LIST: %s %s" % (subj, header_from))
+                    log.Info(u"IMAP LIST: %s %s" % (subj, header_from))
         return ret
 
     def imapf(self, fun, *args):
         (ret, list) = fun(*args)
-        if ret != "OK":
+        if ret != u"OK":
             raise Exception(list[0])
         return list
 
     def delete_single_mail(self, i):
-        self.imapf(self.conn.store, i, "+FLAGS", '\\DELETED')
+        self.imapf(self.conn.store, i, u"+FLAGS", u'\\DELETED')
 
     def expunge(self):
         list = self.imapf(self.conn.expunge)
 
     def _delete_list(self, filename_list):
         for filename in filename_list:
-            list = self.imapf(self.conn.search, None, "(SUBJECT %s)" % filename)
+            list = self.imapf(self.conn.search, None, u"(SUBJECT %s)" % filename)
             list = list[0].split()
-            if len(list) > 0 and list[0] != "":
+            if len(list) > 0 and list[0] != u"":
                 self.delete_single_mail(list[0])
-                log.Notice("marked %s to be deleted" % filename)
+                log.Notice(u"marked %s to be deleted" % filename)
         self.expunge()
-        log.Notice("IMAP expunged %s files" % len(filename_list))
+        log.Notice(u"IMAP expunged %s files" % len(filename_list))
 
     def _close(self):
         self.conn.select(globals.imap_mailbox)
@@ -258,6 +268,6 @@
         self.conn.logout()
 
 
-duplicity.backend.register_backend("imap", ImapBackend)
-duplicity.backend.register_backend("imaps", ImapBackend)
-duplicity.backend.uses_netloc.extend(['imap', 'imaps'])
+duplicity.backend.register_backend(u"imap", ImapBackend)
+duplicity.backend.register_backend(u"imaps", ImapBackend)
+duplicity.backend.uses_netloc.extend([u'imap', u'imaps'])
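
The imapbackend port above replaces the removed rfc822 module with email.parser, selecting the modern header policy only on Python 3. A small self-contained sketch of that branch (the sample message is made up):

    import sys
    from email.parser import Parser

    raw = (u"From: archive-dir\r\n"
           u"Subject: duplicity-full.20190222T190743Z.manifest.gpg\r\n"
           u"\r\n")

    if sys.version_info.major >= 3:
        from email.policy import default
        headers = Parser(policy=default).parsestr(raw)
    else:
        headers = Parser().parsestr(raw)

    print(headers[u"subject"], headers[u"from"])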

=== added file 'duplicity/backends/jottacloudbackend.py.OTHER'
--- duplicity/backends/jottacloudbackend.py.OTHER	1970-01-01 00:00:00 +0000
+++ duplicity/backends/jottacloudbackend.py.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,157 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4; encoding:utf-8 -*-
+#
+# Copyright 2014 Havard Gulldahl
+#
+# in part based on dpbxbackend.py:
+# Copyright 2013 jno <jno@xxxxxxxxx>
+#
+# This file is part of duplicity.
+#
+# Duplicity is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; either version 2 of the License, or (at your
+# option) any later version.
+#
+# Duplicity is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with duplicity; if not, write to the Free Software Foundation,
+# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+# stdlib
+import posixpath
+import locale
+import logging
+
+# import duplicity stuff # version 0.6
+import duplicity.backend
+from duplicity import log
+from duplicity.errors import BackendException
+
+
+def get_jotta_device(jfs):
+    jottadev = None
+    for j in jfs.devices:  # find Jotta/Shared folder
+        if j.name == u'Jotta':
+            jottadev = j
+    return jottadev
+
+
+def get_root_dir(jfs):
+    jottadev = get_jotta_device(jfs)
+    root_dir = jottadev.mountPoints[u'Archive']
+    return root_dir
+
+
+def set_jottalib_logging_level(log_level):
+    logger = logging.getLogger(u'jottalib')
+    logger.setLevel(getattr(logging, log_level))
+
+
+def set_jottalib_log_handlers(handlers):
+    logger = logging.getLogger(u'jottalib')
+    for handler in handlers:
+        logger.addHandler(handler)
+
+
+def get_duplicity_log_level():
+    u""" Get the current duplicity log level as a stdlib-compatible logging level"""
+    duplicity_log_level = log.LevelName(log.getverbosity())
+
+    # notice is a duplicity-specific logging level not supported by stdlib
+    if duplicity_log_level == u'NOTICE':
+        duplicity_log_level = u'INFO'
+
+    return duplicity_log_level
+
+
+class JottaCloudBackend(duplicity.backend.Backend):
+    u"""Connect to remote store using JottaCloud API"""
+
+    def __init__(self, parsed_url):
+        duplicity.backend.Backend.__init__(self, parsed_url)
+
+        # Import JottaCloud libraries.
+        try:
+            from jottalib import JFS
+            from jottalib.JFS import JFSNotFoundError, JFSIncompleteFile
+        except ImportError:
+            raise BackendException(u'JottaCloud backend requires jottalib'
+                                   u' (see https://pypi.python.org/pypi/jottalib).')
+
+        # Set jottalib loggers to the same verbosity as duplicity
+        duplicity_log_level = get_duplicity_log_level()
+        set_jottalib_logging_level(duplicity_log_level)
+
+        # Ensure jottalib and duplicity log to the same handlers
+        set_jottalib_log_handlers(log._logger.handlers)
+
+        # Will fetch jottacloud auth from environment or .netrc
+        self.client = JFS.JFS()
+
+        self.folder = self.get_or_create_directory(parsed_url.path.lstrip(u'/'))
+        log.Debug(u"Jottacloud folder for duplicity: %r" % self.folder.path)
+
+    def get_or_create_directory(self, directory_name):
+        root_directory = get_root_dir(self.client)
+        full_path = posixpath.join(root_directory.path, directory_name)
+        try:
+            return self.client.getObject(full_path)
+        except JFSNotFoundError:
+            return root_directory.mkdir(directory_name)
+
+    def _put(self, source_path, remote_filename):
+        # - Upload one file
+        # - Retried if an exception is thrown
+        resp = self.folder.up(source_path.open(), remote_filename)
+        log.Debug(u'jottacloud.put(%s,%s): %s' % (source_path.name, remote_filename, resp))
+
+    def _get(self, remote_filename, local_path):
+        # - Get one file
+        # - Retried if an exception is thrown
+        remote_file = self.client.getObject(posixpath.join(self.folder.path, remote_filename))
+        log.Debug(u'jottacloud.get(%s,%s): %s' % (remote_filename, local_path.name, remote_file))
+        with open(local_path.name, u'wb') as to_file:
+            for chunk in remote_file.stream():
+                to_file.write(chunk)
+
+    def _list(self):
+        # - List all files in the backend
+        # - Return a list of filenames
+        # - Retried if an exception is thrown
+        return list([f.name for f in self.folder.files()
+                     if not f.is_deleted() and f.state != u'INCOMPLETE'])
+
+    def _delete(self, filename):
+        # - Delete one file
+        # - Retried if an exception is thrown
+        remote_path = posixpath.join(self.folder.path, filename)
+        remote_file = self.client.getObject(remote_path)
+        log.Debug(u'jottacloud.delete deleting: %s (%s)' % (remote_file, type(remote_file)))
+        remote_file.delete()
+
+    def _query(self, filename):
+        u"""Get size of filename"""
+        #  - Query metadata of one file
+        #  - Return a dict with a 'size' key, and a file size value (-1 for not found)
+        #  - Retried if an exception is thrown
+        log.Info(u'Querying size of %s' % filename)
+        remote_path = posixpath.join(self.folder.path, filename)
+        try:
+            remote_file = self.client.getObject(remote_path)
+        except JFSNotFoundError:
+            return {u'size': -1}
+        return {
+            u'size': remote_file.size,
+        }
+
+    def _close(self):
+        # - If your backend needs to clean up after itself, do that here.
+        pass
+
+
+duplicity.backend.register_backend(u"jottacloud", JottaCloudBackend)
+u""" jottacloud is a Norwegian backup company """

=== modified file 'duplicity/backends/lftpbackend.py'
--- duplicity/backends/lftpbackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/lftpbackend.py	2019-02-22 19:07:43 +0000
@@ -24,10 +24,14 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from future import standard_library
+standard_library.install_aliases()
 import os
 import os.path
 import re
-import urllib
+import urllib.request  # pylint: disable=import-error
+import urllib.parse  # pylint: disable=import-error
+import urllib.error  # pylint: disable=import-error
 try:
     from shlex import quote as cmd_quote
 except ImportError:
@@ -37,64 +41,65 @@
 from duplicity import globals
 from duplicity import log
 from duplicity import tempdir
+from duplicity import util
 
 
 class LFTPBackend(duplicity.backend.Backend):
-    """Connect to remote store using File Transfer Protocol"""
+    u"""Connect to remote store using File Transfer Protocol"""
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
 
         # we expect an output
         try:
-            p = os.popen("lftp --version")
+            p = os.popen(u"lftp --version")
             fout = p.read()
             ret = p.close()
         except Exception:
             pass
         # there is no output if lftp not found
         if not fout:
-            log.FatalError("LFTP not found:  Please install LFTP.",
+            log.FatalError(u"LFTP not found:  Please install LFTP.",
                            log.ErrorCode.ftps_lftp_missing)
 
         # version is the second word of the second part of the first line
-        version = fout.split('\n')[0].split(' | ')[1].split()[1]
-        log.Notice("LFTP version is %s" % version)
+        version = fout.split(u'\n')[0].split(u' | ')[1].split()[1]
+        log.Notice(u"LFTP version is %s" % version)
 
         self.parsed_url = parsed_url
 
-        self.scheme = duplicity.backend.strip_prefix(parsed_url.scheme, 'lftp').lower()
-        self.scheme = re.sub('^webdav', 'http', self.scheme)
-        self.url_string = self.scheme + '://' + parsed_url.hostname
+        self.scheme = duplicity.backend.strip_prefix(parsed_url.scheme, u'lftp').lower()
+        self.scheme = re.sub(u'^webdav', u'http', self.scheme)
+        self.url_string = self.scheme + u'://' + parsed_url.hostname
         if parsed_url.port:
-            self.url_string += ":%s" % parsed_url.port
+            self.url_string += u":%s" % parsed_url.port
 
-        self.remote_path = re.sub('^/', '', parsed_url.path)
+        self.remote_path = re.sub(u'^/', u'', parsed_url.path)
 
         # Fix up an empty remote path
         if len(self.remote_path) == 0:
-            self.remote_path = '/'
+            self.remote_path = u'/'
 
         # Use an explicit directory name.
-        if self.remote_path[-1] != '/':
-            self.remote_path += '/'
+        if self.remote_path[-1] != u'/':
+            self.remote_path += u'/'
 
-        self.authflag = ''
+        self.authflag = u''
         if self.parsed_url.username:
             self.username = self.parsed_url.username
             self.password = self.get_password()
-            self.authflag = "-u '%s,%s'" % (self.username, self.password)
+            self.authflag = u"-u '%s,%s'" % (self.username, self.password)
 
-        if globals.ftp_connection == 'regular':
-            self.conn_opt = 'off'
+        if globals.ftp_connection == u'regular':
+            self.conn_opt = u'off'
         else:
-            self.conn_opt = 'on'
+            self.conn_opt = u'on'
 
         # check for cacert file if https
         self.cacert_file = globals.ssl_cacert_file
-        if self.scheme == 'https' and not globals.ssl_no_check_certificate:
-            cacert_candidates = ["~/.duplicity/cacert.pem",
-                                 "~/duplicity_cacert.pem",
-                                 "/etc/duplicity/cacert.pem"]
+        if self.scheme == u'https' and not globals.ssl_no_check_certificate:
+            cacert_candidates = [u"~/.duplicity/cacert.pem",
+                                 u"~/duplicity_cacert.pem",
+                                 u"/etc/duplicity/cacert.pem"]
             # look for a default cacert file
             if not self.cacert_file:
                 for path in cacert_candidates:
@@ -104,97 +109,103 @@
                         break
 
         # save config into a reusable temp file
-        self.tempfile, self.tempname = tempdir.default().mkstemp()
-        os.write(self.tempfile, "set ssl:verify-certificate " +
-                 ("false" if globals.ssl_no_check_certificate else "true") + "\n")
+        self.tempfd, self.tempname = tempdir.default().mkstemp()
+        self.tempfile = os.fdopen(self.tempfd, u"w")
+        self.tempfile.write(u"set ssl:verify-certificate " +
+                            (u"false" if globals.ssl_no_check_certificate else u"true") + u"\n")
         if self.cacert_file:
-            os.write(self.tempfile, "set ssl:ca-file " + cmd_quote(self.cacert_file) + "\n")
+            self.tempfile.write(u"set ssl:ca-file " + cmd_quote(self.cacert_file) + u"\n")
         if globals.ssl_cacert_path:
-            os.write(self.tempfile, "set ssl:ca-path " + cmd_quote(globals.ssl_cacert_path) + "\n")
-        if self.parsed_url.scheme == 'ftps':
-            os.write(self.tempfile, "set ftp:ssl-allow true\n")
-            os.write(self.tempfile, "set ftp:ssl-protect-data true\n")
-            os.write(self.tempfile, "set ftp:ssl-protect-list true\n")
-        elif self.parsed_url.scheme == 'ftpes':
-            os.write(self.tempfile, "set ftp:ssl-force on\n")
-            os.write(self.tempfile, "set ftp:ssl-protect-data on\n")
-            os.write(self.tempfile, "set ftp:ssl-protect-list on\n")
+            self.tempfile.write(u"set ssl:ca-path " + cmd_quote(globals.ssl_cacert_path) + u"\n")
+        if self.parsed_url.scheme == u'ftps':
+            self.tempfile.write(u"set ftp:ssl-allow true\n")
+            self.tempfile.write(u"set ftp:ssl-protect-data true\n")
+            self.tempfile.write(u"set ftp:ssl-protect-list true\n")
+        elif self.parsed_url.scheme == u'ftpes':
+            self.tempfile.write(u"set ftp:ssl-force on\n")
+            self.tempfile.write(u"set ftp:ssl-protect-data on\n")
+            self.tempfile.write(u"set ftp:ssl-protect-list on\n")
         else:
-            os.write(self.tempfile, "set ftp:ssl-allow false\n")
-        os.write(self.tempfile, "set http:use-propfind true\n")
-        os.write(self.tempfile, "set net:timeout %s\n" % globals.timeout)
-        os.write(self.tempfile, "set net:max-retries %s\n" % globals.num_retries)
-        os.write(self.tempfile, "set ftp:passive-mode %s\n" % self.conn_opt)
+            self.tempfile.write(u"set ftp:ssl-allow false\n")
+        self.tempfile.write(u"set http:use-propfind true\n")
+        self.tempfile.write(u"set net:timeout %s\n" % globals.timeout)
+        self.tempfile.write(u"set net:max-retries %s\n" % globals.num_retries)
+        self.tempfile.write(u"set ftp:passive-mode %s\n" % self.conn_opt)
         if log.getverbosity() >= log.DEBUG:
-            os.write(self.tempfile, "debug\n")
-        if self.parsed_url.scheme == 'ftpes':
-            os.write(self.tempfile, "open %s %s\n" % (self.authflag, self.url_string.replace('ftpes', 'ftp')))
+            self.tempfile.write(u"debug\n")
+        if self.parsed_url.scheme == u'ftpes':
+            self.tempfile.write(u"open %s %s\n" % (self.authflag, self.url_string.replace(u'ftpes', u'ftp')))
         else:
-            os.write(self.tempfile, "open %s %s\n" % (self.authflag, self.url_string))
-        os.close(self.tempfile)
+            self.tempfile.write(u"open %s %s\n" % (self.authflag, self.url_string))
+        self.tempfile.close()
         # print settings in debug mode
         if log.getverbosity() >= log.DEBUG:
-            f = open(self.tempname, 'r')
-            log.Debug("SETTINGS: \n"
-                      "%s" % f.read())
+            f = open(self.tempname, u'r')
+            log.Debug(u"SETTINGS: \n"
+                      u"%s" % f.read())
 
     def _put(self, source_path, remote_filename):
-        commandline = "lftp -c \"source %s; mkdir -p %s; put %s -o %s\"" % (
+        if isinstance(remote_filename, b"".__class__):
+            remote_filename = util.fsdecode(remote_filename)
+        commandline = u"lftp -c \"source %s; mkdir -p %s; put %s -o %s\"" % (
             self.tempname,
             cmd_quote(self.remote_path),
-            cmd_quote(source_path.name),
-            cmd_quote(self.remote_path) + remote_filename
+            cmd_quote(source_path.uc_name),
+            cmd_quote(self.remote_path) + util.fsdecode(remote_filename)
         )
-        log.Debug("CMD: %s" % commandline)
+        log.Debug(u"CMD: %s" % commandline)
         s, l, e = self.subprocess_popen(commandline)
-        log.Debug("STATUS: %s" % s)
-        log.Debug("STDERR:\n"
-                  "%s" % (e))
-        log.Debug("STDOUT:\n"
-                  "%s" % (l))
+        log.Debug(u"STATUS: %s" % s)
+        log.Debug(u"STDERR:\n"
+                  u"%s" % (e))
+        log.Debug(u"STDOUT:\n"
+                  u"%s" % (l))
 
     def _get(self, remote_filename, local_path):
-        commandline = "lftp -c \"source %s; get %s -o %s\"" % (
+        if isinstance(remote_filename, b"".__class__):
+            remote_filename = util.fsdecode(remote_filename)
+        commandline = u"lftp -c \"source %s; get %s -o %s\"" % (
             cmd_quote(self.tempname),
             cmd_quote(self.remote_path) + remote_filename,
-            cmd_quote(local_path.name)
+            cmd_quote(local_path.uc_name)
         )
-        log.Debug("CMD: %s" % commandline)
+        log.Debug(u"CMD: %s" % commandline)
         _, l, e = self.subprocess_popen(commandline)
-        log.Debug("STDERR:\n"
-                  "%s" % (e))
-        log.Debug("STDOUT:\n"
-                  "%s" % (l))
+        log.Debug(u"STDERR:\n"
+                  u"%s" % (e))
+        log.Debug(u"STDOUT:\n"
+                  u"%s" % (l))
 
     def _list(self):
         # Do a long listing to avoid connection reset
         # remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
-        remote_dir = urllib.unquote(self.parsed_url.path)
+        remote_dir = urllib.parse.unquote(self.parsed_url.path)
         # print remote_dir
         quoted_path = cmd_quote(self.remote_path)
         # failing to cd into the folder might be because it was not created already
-        commandline = "lftp -c \"source %s; ( cd %s && ls ) || ( mkdir -p %s && cd %s && ls )\"" % (
+        commandline = u"lftp -c \"source %s; ( cd %s && ls ) || ( mkdir -p %s && cd %s && ls )\"" % (
             cmd_quote(self.tempname),
             quoted_path, quoted_path, quoted_path
         )
-        log.Debug("CMD: %s" % commandline)
+        log.Debug(u"CMD: %s" % commandline)
         _, l, e = self.subprocess_popen(commandline)
-        log.Debug("STDERR:\n"
-                  "%s" % (e))
-        log.Debug("STDOUT:\n"
-                  "%s" % (l))
+        log.Debug(u"STDERR:\n"
+                  u"%s" % (e))
+        log.Debug(u"STDOUT:\n"
+                  u"%s" % (l))
 
         # Look for our files as the last element of a long list line
-        return [x.split()[-1] for x in l.split('\n') if x]
+        return [x.split()[-1] for x in l.split(b'\n') if x]
 
     def _delete(self, filename):
-        commandline = "lftp -c \"source %s; cd %s; rm %s\"" % (
+        commandline = u"lftp -c \"source %s; cd %s; rm %s\"" % (
             cmd_quote(self.tempname),
             cmd_quote(self.remote_path),
-            cmd_quote(filename)
+            cmd_quote(util.fsdecode(filename))
         )
-        log.Debug("CMD: %s" % commandline)
+        log.Debug(u"CMD: %s" % commandline)
         _, l, e = self.subprocess_popen(commandline)
+<<<<<<< TREE
         log.Debug("STDERR:\n"
                   "%s" % (e))
         log.Debug("STDOUT:\n"
@@ -221,4 +232,33 @@
                                       'lftp+sftp',
                                       'lftp+webdav', 'lftp+webdavs',
                                       'lftp+http', 'lftp+https']
+=======
+        log.Debug(u"STDERR:\n"
+                  u"%s" % (e))
+        log.Debug(u"STDOUT:\n"
+                  u"%s" % (l))
+
+
+duplicity.backend.register_backend(u"ftp", LFTPBackend)
+duplicity.backend.register_backend(u"ftps", LFTPBackend)
+duplicity.backend.register_backend(u"fish", LFTPBackend)
+duplicity.backend.register_backend(u"ftpes", LFTPBackend)
+
+duplicity.backend.register_backend(u"lftp+ftp", LFTPBackend)
+duplicity.backend.register_backend(u"lftp+ftps", LFTPBackend)
+duplicity.backend.register_backend(u"lftp+fish", LFTPBackend)
+duplicity.backend.register_backend(u"lftp+ftpes", LFTPBackend)
+duplicity.backend.register_backend(u"lftp+sftp", LFTPBackend)
+duplicity.backend.register_backend(u"lftp+webdav", LFTPBackend)
+duplicity.backend.register_backend(u"lftp+webdavs", LFTPBackend)
+duplicity.backend.register_backend(u"lftp+http", LFTPBackend)
+duplicity.backend.register_backend(u"lftp+https", LFTPBackend)
+
+duplicity.backend.uses_netloc.extend([u'ftp', u'ftps', u'fish', u'ftpes',
+                                      u'lftp+ftp', u'lftp+ftps',
+                                      u'lftp+fish', u'lftp+ftpes',
+                                      u'lftp+sftp',
+                                      u'lftp+webdav', u'lftp+webdavs',
+                                      u'lftp+http', u'lftp+https']
+>>>>>>> MERGE-SOURCE
                                      )
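
The lftpbackend hunk swaps os.write() on the raw descriptor for a text-mode file object, which keeps the rc-file writing working on Python 3 where os.write() requires bytes. A standalone sketch of the mkstemp/fdopen pattern (the settings values are examples only):

    import os
    import tempfile

    fd, name = tempfile.mkstemp()
    rc = os.fdopen(fd, u"w")
    rc.write(u"set ssl:verify-certificate true\n")
    rc.write(u"set net:timeout %s\n" % 30)
    rc.write(u"set ftp:passive-mode on\n")
    rc.close()

    with open(name) as f:
        print(f.read())
    os.unlink(name)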

=== modified file 'duplicity/backends/localbackend.py'
--- duplicity/backends/localbackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/localbackend.py	2019-02-22 19:07:43 +0000
@@ -28,7 +28,7 @@
 
 
 class LocalBackend(duplicity.backend.Backend):
-    """Use this backend when saving to local disk
+    u"""Use this backend when saving to local disk
 
     Urls look like file://testfiles/output.  Relative to root can be
     gotten with extra slash (file:///usr/local).
@@ -37,8 +37,8 @@
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
         # The URL form "file:MyFile" is not a valid duplicity target.
-        if not parsed_url.path.startswith('//'):
-            raise BackendException("Bad file:// path syntax.")
+        if not parsed_url.path.startswith(u'//'):
+            raise BackendException(u"Bad file:// path syntax.")
         self.remote_pathdir = path.Path(parsed_url.path[2:])
         try:
             os.makedirs(self.remote_pathdir.base)
@@ -55,11 +55,11 @@
 
     def _put(self, source_path, remote_filename):
         target_path = self.remote_pathdir.append(remote_filename)
-        target_path.writefileobj(source_path.open("rb"))
+        target_path.writefileobj(source_path.open(u"rb"))
 
     def _get(self, filename, local_path):
         source_path = self.remote_pathdir.append(filename)
-        local_path.writefileobj(source_path.open("rb"))
+        local_path.writefileobj(source_path.open(u"rb"))
 
     def _list(self):
         return self.remote_pathdir.listdir()
@@ -71,6 +71,13 @@
         target_file = self.remote_pathdir.append(filename)
         target_file.setdata()
         size = target_file.getsize() if target_file.exists() else -1
+<<<<<<< TREE
         return {'size': size}
 
 duplicity.backend.register_backend("file", LocalBackend)
+=======
+        return {u'size': size}
+
+
+duplicity.backend.register_backend(u"file", LocalBackend)
+>>>>>>> MERGE-SOURCE

=== modified file 'duplicity/backends/mediafirebackend.py'
--- duplicity/backends/mediafirebackend.py	2017-07-11 14:55:38 +0000
+++ duplicity/backends/mediafirebackend.py	2019-02-22 19:07:43 +0000
@@ -18,19 +18,20 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-"""MediaFire Duplicity Backend"""
+u"""MediaFire Duplicity Backend"""
 
+from builtins import str
 import os
 
 import duplicity.backend
 
 from duplicity.errors import BackendException
 
-DUPLICITY_APP_ID = '45593'
+DUPLICITY_APP_ID = u'45593'
 
 
 class MediafireBackend(duplicity.backend.Backend):
-    """Use this backend when saving to MediaFire
+    u"""Use this backend when saving to MediaFire
 
     URLs look like mf:/root/folder.
     """
@@ -38,7 +39,7 @@
         try:
             import mediafire.client
         except ImportError as e:
-            raise BackendException("""\
+            raise BackendException(u"""\
 Mediafire backend requires the mediafire library.
 Exception: %s""" % str(e))
 
@@ -58,7 +59,7 @@
                           password=mediafire_password)
 
         # //username:password@host/path/to/folder -> path/to/folder
-        uri = 'mf:///' + parsed_url.path.split('/', 3)[3]
+        uri = u'mf:///' + parsed_url.path.split(u'/', 3)[3]
 
         # Create folder if it does not exist and make sure it is private
         # See MediaFire Account Settings /Security and Privacy / Share Link
@@ -66,17 +67,17 @@
         try:
             folder = self.client.get_resource_by_uri(uri)
             if not isinstance(folder, self._folder_res):
-                raise BackendException("target_url already exists "
-                                       "and is not a folder")
+                raise BackendException(u"target_url already exists "
+                                       u"and is not a folder")
         except mediafire.client.ResourceNotFoundError:
             # force folder to be private
             folder = self.client.create_folder(uri, recursive=True)
-            self.client.update_folder_metadata(uri, privacy='private')
+            self.client.update_folder_metadata(uri, privacy=u'private')
 
         self.folder = folder
 
     def _put(self, source_path, remote_filename=None):
-        """Upload file"""
+        u"""Upload file"""
         # Use source file name if remote one is not defined
         if remote_filename is None:
             remote_filename = os.path.basename(source_path.name)
@@ -84,56 +85,56 @@
         uri = self._build_uri(remote_filename)
 
         with self.client.upload_session():
-            self.client.upload_file(source_path.open('rb'), uri)
+            self.client.upload_file(source_path.open(u'rb'), uri)
 
     def _get(self, filename, local_path):
-        """Download file"""
+        u"""Download file"""
         uri = self._build_uri(filename)
         try:
-            self.client.download_file(uri, local_path.open('wb'))
+            self.client.download_file(uri, local_path.open(u'wb'))
         except self._downloaderror_exc as ex:
             raise BackendException(ex)
 
     def _list(self):
-        """List files in backup directory"""
+        u"""List files in backup directory"""
         uri = self._build_uri()
         filenames = []
         for item in self.client.get_folder_contents_iter(uri):
             if not isinstance(item, self._file_res):
                 continue
 
-            filenames.append(item['filename'].encode('utf-8'))
+            filenames.append(item[u'filename'].encode(u'utf-8'))
 
         return filenames
 
     def _delete(self, filename):
-        """Delete single file"""
+        u"""Delete single file"""
         uri = self._build_uri(filename)
         self.client.delete_file(uri)
 
     def _delete_list(self, filename_list):
-        """Delete list of files"""
+        u"""Delete list of files"""
         for filename in filename_list:
             self._delete(filename)
 
     def _query(self, filename):
-        """Stat the remote file"""
+        u"""Stat the remote file"""
         uri = self._build_uri(filename)
 
         try:
             resource = self.client.get_resource_by_uri(uri)
-            size = int(resource['size'])
+            size = int(resource[u'size'])
         except self._notfound_exc:
             size = -1
 
-        return {'size': size}
+        return {u'size': size}
 
     def _build_uri(self, filename=None):
-        """Build relative URI"""
+        u"""Build relative URI"""
         return (
-            'mf:' + self.folder["folderkey"] +
-            ('/' + filename if filename else '')
+            u'mf:' + self.folder[u"folderkey"] +
+            (u'/' + filename if filename else u'')
         )
 
 
-duplicity.backend.register_backend("mf", MediafireBackend)
+duplicity.backend.register_backend(u"mf", MediafireBackend)

=== modified file 'duplicity/backends/megabackend.py'
--- duplicity/backends/megabackend.py	2017-08-29 15:44:59 +0000
+++ duplicity/backends/megabackend.py	2019-02-22 19:07:43 +0000
@@ -19,6 +19,7 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from __future__ import print_function
 import duplicity.backend
 from duplicity import log
 from duplicity.errors import BackendException
@@ -28,11 +29,12 @@
 
 
 class MegaBackend(duplicity.backend.Backend):
-    """Connect to remote store using Mega.co.nz API"""
+    u"""Connect to remote store using Mega.co.nz API"""
 
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
 
+<<<<<<< TREE
         # ensure all the necessary megatools binaries exist
         self._check_binary_exists('megals')
         self._check_binary_exists('megamkdir')
@@ -60,7 +62,37 @@
     def _check_binary_exists(self, cmd):
         'checks that a specified command exists in the current path'
 
+=======
+        # ensure all the necessary megatools binaries exist
+        self._check_binary_exists(u'megals')
+        self._check_binary_exists(u'megamkdir')
+        self._check_binary_exists(u'megaget')
+        self._check_binary_exists(u'megaput')
+        self._check_binary_exists(u'megarm')
+
+        # store some basic info
+        self._hostname = parsed_url.hostname
+
+        if parsed_url.password is None:
+            self._megarc = os.getenv(u'HOME') + u'/.megarc'
+        else:
+            self._megarc = False
+            self._username = parsed_url.username
+            self._password = self.get_password()
+
+        # remote folder (Can we assume /Root prefix?)
+        self._root = u'/Root'
+        self._folder = self._root + u'/' + parsed_url.path[1:]
+
+        # make sure the remote folder exists (the whole path)
+        self._makedir_recursive(parsed_url.path[1:].split(u'/'))
+
+    def _check_binary_exists(self, cmd):
+        u'checks that a specified command exists in the current path'
+
+>>>>>>> MERGE-SOURCE
         try:
+<<<<<<< TREE
             # ignore the output, we only need the return code
             subprocess.check_output(['which', cmd])
         except Exception as e:
@@ -89,10 +121,45 @@
                 self._make_dir(p)
             except:
                 pass
+=======
+            # ignore the output, we only need the return code
+            subprocess.check_output([u'which', cmd])
+        except Exception as e:
+            raise BackendException(u"command '%s' not found, make sure megatools are installed" % (cmd,))
+
+    def _makedir(self, path):
+        u'creates a remote directory'
+
+        if self._megarc:
+            cmd = [u'megamkdir', u'--config', self._megarc, path]
+        else:
+            cmd = [u'megamkdir', u'-u', self._username, u'-p', self._password, path]
+
+        self.subprocess_popen(cmd)
+
+    def _makedir_recursive(self, path):
+        u'creates a remote directory (recursively the whole path), ignores errors'
+
+        print(u"mkdir: %s" % (u'/'.join(path),))
+
+        p = self._root
+
+        for folder in path:
+            p = p + u'/' + folder
+            try:
+                self._makedir(p)
+            except:
+                pass
+>>>>>>> MERGE-SOURCE
 
     def _put(self, source_path, remote_filename):
+<<<<<<< TREE
         'uploads file to Mega (deletes it first, to ensure it does not exist)'
 
+=======
+        u'uploads file to Mega (deletes it first, to ensure it does not exist)'
+
+>>>>>>> MERGE-SOURCE
         try:
             self.delete(remote_filename)
         except Exception:
@@ -101,16 +168,29 @@
         self.upload(local_file=source_path.get_canonical(), remote_file=remote_filename)
 
     def _get(self, remote_filename, local_path):
+<<<<<<< TREE
         'downloads file from Mega'
 
         self.download(remote_file=remote_filename, local_file=local_path.name)
+=======
+        u'downloads file from Mega'
+
+        self.download(remote_file=remote_filename, local_file=local_path.name)
+>>>>>>> MERGE-SOURCE
 
     def _list(self):
+<<<<<<< TREE
         'list files in the backup folder'
 
         return self.folder_contents(files_only=True)
+=======
+        u'list files in the backup folder'
+
+        return self.folder_contents(files_only=True)
+>>>>>>> MERGE-SOURCE
 
     def _delete(self, filename):
+<<<<<<< TREE
         'deletes remote '
 
         self.delete(remote_file=filename)
@@ -177,3 +257,71 @@
 
 duplicity.backend.register_backend('mega', MegaBackend)
 duplicity.backend.uses_netloc.extend(['mega'])
+=======
+        u'deletes remote file'
+
+        self.delete(remote_file=filename)
+
+    def folder_contents(self, files_only=False):
+        u'lists contents of a folder, optionally ignoring subdirectories'
+
+        print(u"megals: %s" % (self._folder,))
+
+        if self._megarc:
+            cmd = [u'megals', u'--config', self._megarc, self._folder]
+        else:
+            cmd = [u'megals', u'-u', self._username, u'-p', self._password, self._folder]
+
+        files = subprocess.check_output(cmd)
+        files = files.strip().split(u'\n')
+
+        # remove the folder name, including the path separator
+        files = [f[len(self._folder) + 1:] for f in files]
+
+        # optionally ignore entries containing path separator (i.e. not files)
+        if files_only:
+            files = [f for f in files if u'/' not in f]
+
+        return files
+
+    def download(self, remote_file, local_file):
+
+        print(u"megaget: %s" % (remote_file,))
+
+        if self._megarc:
+            cmd = [u'megaget', u'--config', self._megarc, u'--no-progress',
+                   u'--path', local_file, self._folder + u'/' + remote_file]
+        else:
+            cmd = [u'megaget', u'-u', self._username, u'-p', self._password, u'--no-progress',
+                   u'--path', local_file, self._folder + u'/' + remote_file]
+
+        self.subprocess_popen(cmd)
+
+    def upload(self, local_file, remote_file):
+
+        print(u"megaput: %s" % (remote_file,))
+
+        if self._megarc:
+            cmd = [u'megaput', u'--config', self._megarc, u'--no-progress',
+                   u'--path', self._folder + u'/' + remote_file, local_file]
+        else:
+            cmd = [u'megaput', u'-u', self._username, u'-p', self._password, u'--no-progress',
+                   u'--path', self._folder + u'/' + remote_file, local_file]
+
+        self.subprocess_popen(cmd)
+
+    def delete(self, remote_file):
+
+        print(u"megarm: %s" % (remote_file,))
+
+        if self._megarc:
+            cmd = [u'megarm', u'--config', self._megarc, self._folder + u'/' + remote_file]
+        else:
+            cmd = [u'megarm', u'-u', self._username, u'-p', self._password, self._folder + u'/' + remote_file]
+
+        self.subprocess_popen(cmd)
+
+
+duplicity.backend.register_backend(u'mega', MegaBackend)
+duplicity.backend.uses_netloc.extend([u'mega'])
+>>>>>>> MERGE-SOURCE
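
The rewritten megabackend builds every megatools invocation the same way: use the ~/.megarc config file when the URL carries no password, otherwise pass -u/-p explicitly. A sketch of that selection (command list only, nothing is executed; the helper name is illustrative):

    import os

    def megals_command(folder, username=None, password=None):
        u"""Return the megals argument list for the given remote folder."""
        if password is None:
            megarc = os.getenv(u'HOME') + u'/.megarc'
            return [u'megals', u'--config', megarc, folder]
        return [u'megals', u'-u', username, u'-p', password, folder]

    print(megals_command(u'/Root/backups'))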

=== modified file 'duplicity/backends/multibackend.py'
--- duplicity/backends/multibackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/multibackend.py	2019-02-22 19:07:43 +0000
@@ -23,11 +23,14 @@
 
 #
 
+from future import standard_library
+standard_library.install_aliases()
 import os
 import os.path
 import string
-import urllib
-import urlparse
+import urllib.request  # pylint: disable=import-error
+import urllib.parse  # pylint: disable=import-error
+import urllib.error  # pylint: disable=import-error
 import json
 
 import duplicity.backend
@@ -36,7 +39,7 @@
 
 
 class MultiBackend(duplicity.backend.Backend):
-    """Store files across multiple remote stores. URL is a path to a local file
+    u"""Store files across multiple remote stores. URL is a path to a local file
     containing URLs/other config defining the remote store"""
 
     # the stores we are managing
@@ -44,26 +47,26 @@
 
     # Set of known query paramaters
     __knownQueryParameters = frozenset([
-        'mode',
-        'onfail',
+        u'mode',
+        u'onfail',
     ])
 
     # the mode of operation to follow
     # can be one of 'stripe' or 'mirror' currently
-    __mode = 'stripe'
+    __mode = u'stripe'
     __mode_allowedSet = frozenset([
-        'mirror',
-        'stripe',
+        u'mirror',
+        u'stripe',
     ])
 
     # the write error handling logic
     # can be one of the following:
     # * continue - default, on failure continues to next source
     # * abort - stop all further operations
-    __onfail_mode = 'continue'
+    __onfail_mode = u'continue'
     __onfail_mode_allowedSet = frozenset([
-        'abort',
-        'continue',
+        u'abort',
+        u'continue',
     ])
 
     # when we write in stripe mode, we "stripe" via a simple round-robin across
@@ -76,30 +79,30 @@
     @staticmethod
     def get_query_params(parsed_url):
         # Reparse so the query string is available
-        reparsed_url = urlparse.urlparse(parsed_url.geturl())
+        reparsed_url = urllib.parse.urlparse(parsed_url.geturl())
         if len(reparsed_url.query) == 0:
             return dict()
         try:
-            queryMultiDict = urlparse.parse_qs(reparsed_url.query, strict_parsing=True)
+            queryMultiDict = urllib.parse.parse_qs(reparsed_url.query, strict_parsing=True)
         except ValueError as e:
-            log.Log(_("MultiBackend: Could not parse query string %s: %s ")
+            log.Log(_(u"MultiBackend: Could not parse query string %s: %s ")
                     % (reparsed_url.query, e),
                     log.ERROR)
-            raise BackendException('Could not parse query string')
+            raise BackendException(u'Could not parse query string')
         queryDict = dict()
         # Convert the multi-dict to a single dictionary
         # while checking to make sure that no unrecognized values are found
-        for name, valueList in queryMultiDict.items():
+        for name, valueList in list(queryMultiDict.items()):
             if len(valueList) != 1:
-                log.Log(_("MultiBackend: Invalid query string %s: more than one value for %s")
+                log.Log(_(u"MultiBackend: Invalid query string %s: more than one value for %s")
                         % (reparsed_url.query, name),
                         log.ERROR)
-                raise BackendException('Invalid query string')
+                raise BackendException(u'Invalid query string')
             if name not in MultiBackend.__knownQueryParameters:
-                log.Log(_("MultiBackend: Invalid query string %s: unknown parameter %s")
+                log.Log(_(u"MultiBackend: Invalid query string %s: unknown parameter %s")
                         % (reparsed_url.query, name),
                         log.ERROR)
-                raise BackendException('Invalid query string')
+                raise BackendException(u'Invalid query string')
 
             queryDict[name] = valueList[0]
         return queryDict
@@ -139,61 +142,91 @@
 
         queryParams = MultiBackend.get_query_params(parsed_url)
 
-        if 'mode' in queryParams:
-            self.__mode = queryParams['mode']
+        if u'mode' in queryParams:
+            self.__mode = queryParams[u'mode']
 
-        if 'onfail' in queryParams:
-            self.__onfail_mode = queryParams['onfail']
+        if u'onfail' in queryParams:
+            self.__onfail_mode = queryParams[u'onfail']
 
         if self.__mode not in MultiBackend.__mode_allowedSet:
-            log.Log(_("MultiBackend: illegal value for %s: %s")
-                    % ('mode', self.__mode), log.ERROR)
-            raise BackendException("MultiBackend: invalid mode value")
+            log.Log(_(u"MultiBackend: illegal value for %s: %s")
+                    % (u'mode', self.__mode), log.ERROR)
+            raise BackendException(u"MultiBackend: invalid mode value")
 
         if self.__onfail_mode not in MultiBackend.__onfail_mode_allowedSet:
-            log.Log(_("MultiBackend: illegal value for %s: %s")
-                    % ('onfail', self.__onfail_mode), log.ERROR)
-            raise BackendException("MultiBackend: invalid onfail value")
+            log.Log(_(u"MultiBackend: illegal value for %s: %s")
+                    % (u'onfail', self.__onfail_mode), log.ERROR)
+            raise BackendException(u"MultiBackend: invalid onfail value")
 
         try:
             with open(parsed_url.path) as f:
                 configs = json.load(f)
         except IOError as e:
-            log.Log(_("MultiBackend: Url %s")
+            log.Log(_(u"MultiBackend: Url %s")
                     % (parsed_url.geturl()),
                     log.ERROR)
 
-            log.Log(_("MultiBackend: Could not load config file %s: %s ")
+            log.Log(_(u"MultiBackend: Could not load config file %s: %s ")
                     % (parsed_url.path, e),
                     log.ERROR)
-            raise BackendException('Could not load config file')
+            raise BackendException(u'Could not load config file')
 
         for config in configs:
-            url = config['url']
+            url = config[u'url']
             # Fix advised in bug #1471795
-            url = url.encode('utf-8')
-            log.Log(_("MultiBackend: use store %s")
+            url = url.encode(u'utf-8')
+            log.Log(_(u"MultiBackend: use store %s")
                     % (url),
                     log.INFO)
-            if 'env' in config:
-                for env in config['env']:
-                    log.Log(_("MultiBackend: set env %s = %s")
-                            % (env['name'], env['value']),
+            if u'env' in config:
+                for env in config[u'env']:
+                    log.Log(_(u"MultiBackend: set env %s = %s")
+                            % (env[u'name'], env[u'value']),
                             log.INFO)
-                    os.environ[env['name']] = env['value']
+                    os.environ[env[u'name']] = env[u'value']
 
             store = duplicity.backend.get_backend(url)
             self.__stores.append(store)
+
+            # Prefix affinity
+            if u'prefixes' in config:
+                if self.__mode == u'stripe':
+                    raise BackendException(u"Multibackend: stripe mode not supported with prefix affinity.")
+                for prefix in config[u'prefixes']:
+                    log.Log(_(u"Multibackend: register affinity for prefix %s")
+                            % prefix, log.INFO)
+                    if prefix in self.__affinities:
+                        self.__affinities[prefix].append(store)
+                    else:
+                        self.__affinities[prefix] = [store]
+
             # store_list = store.list()
             # log.Log(_("MultiBackend: at init, store %s has %s files")
             #         % (url, len(store_list)),
             #         log.INFO)
 
+    def _eligible_stores(self, filename):
+        if self.__affinities:
+            matching_prefixes = [k for k in list(self.__affinities.keys()) if filename.startswith(k)]
+            matching_stores = {store for prefix in matching_prefixes for store in self.__affinities[prefix]}
+            if matching_stores:
+                # Distinct stores with matching prefix
+                return list(matching_stores)
+
+        # No affinity rule or no matching store for that prefix
+        return self.__stores
+
     def _put(self, source_path, remote_filename):
         # Store an indication of whether any of these passed
         passed = False
         # Mirror mode always starts at zero
-        if self.__mode == 'mirror':
+        if self.__mode == u'mirror':
             self.__write_cursor = 0
 
         first = self.__write_cursor
@@ -203,7 +236,7 @@
                 next = self.__write_cursor + 1
                 if (next > len(self.__stores) - 1):
                     next = 0
-                log.Log(_("MultiBackend: _put: write to store #%s (%s)")
+                log.Log(_(u"MultiBackend: _put: write to store #%s (%s)")
                         % (self.__write_cursor, store.backend.parsed_url.url_string),
                         log.DEBUG)
                 store.put(source_path, remote_filename)
@@ -213,27 +246,27 @@
                 if next == 0:
                     break
                 # If in stripe mode, don't continue to the next
-                if self.__mode == 'stripe':
+                if self.__mode == u'stripe':
                     break
             except Exception as e:
-                log.Log(_("MultiBackend: failed to write to store #%s (%s), try #%s, Exception: %s")
+                log.Log(_(u"MultiBackend: failed to write to store #%s (%s), try #%s, Exception: %s")
                         % (self.__write_cursor, store.backend.parsed_url.url_string, next, e),
                         log.INFO)
                 self.__write_cursor = next
 
                 # If we consider write failure as abort, abort
-                if self.__onfail_mode == 'abort':
-                    log.Log(_("MultiBackend: failed to write %s. Aborting process.")
+                if self.__onfail_mode == u'abort':
+                    log.Log(_(u"MultiBackend: failed to write %s. Aborting process.")
                             % (source_path),
                             log.ERROR)
-                    raise BackendException("failed to write")
+                    raise BackendException(u"failed to write")
 
                 # If we've looped around, and none of them passed, fail
                 if (self.__write_cursor == first) and not passed:
-                    log.Log(_("MultiBackend: failed to write %s. Tried all backing stores and none succeeded")
+                    log.Log(_(u"MultiBackend: failed to write %s. Tried all backing stores and none succeeded")
                             % (source_path),
                             log.ERROR)
-                    raise BackendException("failed to write")
+                    raise BackendException(u"failed to write")
 
     def _get(self, remote_filename, local_path):
         # since the backend operations will be retried, we can't
@@ -247,25 +280,25 @@
             if remote_filename in list:
                 s.get(remote_filename, local_path)
                 return
-            log.Log(_("MultiBackend: failed to get %s to %s from %s")
+            log.Log(_(u"MultiBackend: failed to get %s to %s from %s")
                     % (remote_filename, local_path, s.backend.parsed_url.url_string),
                     log.INFO)
-        log.Log(_("MultiBackend: failed to get %s. Tried all backing stores and none succeeded")
+        log.Log(_(u"MultiBackend: failed to get %s. Tried all backing stores and none succeeded")
                 % (remote_filename),
                 log.ERROR)
-        raise BackendException("failed to get")
+        raise BackendException(u"failed to get")
 
     def _list(self):
         lists = []
         for s in self.__stores:
             l = s.list()
-            log.Log(_("MultiBackend: list from %s: %s")
+            log.Log(_(u"MultiBackend: list from %s: %s")
                     % (s.backend.parsed_url.url_string, l),
                     log.DEBUG)
             lists.append(s.list())
         # combine the lists into a single flat list w/o duplicates via set:
         result = list({item for sublist in lists for item in sublist})
-        log.Log(_("MultiBackend: combined list: %s")
+        log.Log(_(u"MultiBackend: combined list: %s")
                 % (result),
                 log.DEBUG)
         return result
@@ -285,15 +318,20 @@
                 s._do_delete(filename)
                 passed = True
                 # In stripe mode, only one item will have the file
-                if self.__mode == 'stripe':
+                if self.__mode == u'stripe':
                     return
-            log.Log(_("MultiBackend: failed to delete %s from %s")
+            log.Log(_(u"MultiBackend: failed to delete %s from %s")
                     % (filename, s.backend.parsed_url.url_string),
                     log.INFO)
         if not passed:
-            log.Log(_("MultiBackend: failed to delete %s. Tried all backing stores and none succeeded")
+            log.Log(_(u"MultiBackend: failed to delete %s. Tried all backing stores and none succeeded")
                     % (filename),
                     log.ERROR)
 #           raise BackendException("failed to delete")
 
-duplicity.backend.register_backend('multi', MultiBackend)
+
+duplicity.backend.register_backend(u'multi', MultiBackend)

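Reviewer aside: a minimal sketch of the JSON config and URL form this backend parses above (the u'url', u'env' and u'prefixes' keys plus the mode/onfail query parameters). The paths, prefixes and environment variable below are hypothetical examples, not values taken from the patch:

    # sketch: write a multibackend config file with prefix affinity
    import json

    config = [
        {u'url': u'file:///mnt/store-a',
         u'env': [{u'name': u'EXAMPLE_VAR', u'value': u'1'}],
         u'prefixes': [u'duplicity-full', u'duplicity-inc']},
        {u'url': u'file:///mnt/store-b',
         u'prefixes': [u'duplicity-new-signatures']},
    ]
    with open(u'/tmp/multi.json', u'w') as f:
        json.dump(config, f)

    # prefix affinity only makes sense in mirror mode (stripe raises above), e.g.:
    #   duplicity /src "multi:///tmp/multi.json?mode=mirror&onfail=abort"
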
=== modified file 'duplicity/backends/ncftpbackend.py'
--- duplicity/backends/ncftpbackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/ncftpbackend.py	2019-02-22 19:07:43 +0000
@@ -19,8 +19,12 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from future import standard_library
+standard_library.install_aliases()
 import os.path
-import urllib
+import urllib.request  # pylint: disable=import-error
+import urllib.parse  # pylint: disable=import-error
+import urllib.error  # pylint: disable=import-error
 import re
 
 import duplicity.backend
@@ -30,91 +34,99 @@
 
 
 class NCFTPBackend(duplicity.backend.Backend):
-    """Connect to remote store using File Transfer Protocol"""
+    u"""Connect to remote store using File Transfer Protocol"""
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
 
         # we expect an error return, so go low-level and ignore it
         try:
-            p = os.popen("ncftpls -v")
+            p = os.popen(u"ncftpls -v")
             fout = p.read()
             ret = p.close()
         except Exception:
             pass
         # the expected error is 8 in the high-byte and some output
         if ret != 0x0800 or not fout:
-            log.FatalError("NcFTP not found:  Please install NcFTP version 3.1.9 or later",
+            log.FatalError(u"NcFTP not found:  Please install NcFTP version 3.1.9 or later",
                            log.ErrorCode.ftp_ncftp_missing)
 
         # version is the second word of the first line
-        version = fout.split('\n')[0].split()[1]
-        if version < "3.1.9":
-            log.FatalError("NcFTP too old:  Duplicity requires NcFTP version 3.1.9,"
-                           "3.2.1 or later.  Version 3.2.0 will not work properly.",
+        version = fout.split(u'\n')[0].split()[1]
+        if version < u"3.1.9":
+            log.FatalError(u"NcFTP too old:  Duplicity requires NcFTP version 3.1.9,"
+                           u" 3.2.1 or later.  Version 3.2.0 will not work properly.",
                            log.ErrorCode.ftp_ncftp_too_old)
-        elif version == "3.2.0":
-            log.Warn("NcFTP (ncftpput) version 3.2.0 may fail with duplicity.\n"
-                     "see: http://www.ncftpd.com/ncftp/doc/changelog.html\n"
-                     "If you have trouble, please upgrade to 3.2.1 or later",
+        elif version == u"3.2.0":
+            log.Warn(u"NcFTP (ncftpput) version 3.2.0 may fail with duplicity.\n"
+                     u"see: http://www.ncftpd.com/ncftp/doc/changelog.html\n"
+                     u"If you have trouble, please upgrade to 3.2.1 or later",
                      log.WarningCode.ftp_ncftp_v320)
-        log.Notice("NcFTP version is %s" % version)
+        log.Notice(u"NcFTP version is %s" % version)
 
         self.parsed_url = parsed_url
 
         self.url_string = duplicity.backend.strip_auth_from_url(self.parsed_url)
 
         # strip ncftp+ prefix
-        self.url_string = duplicity.backend.strip_prefix(self.url_string, 'ncftp')
+        self.url_string = duplicity.backend.strip_prefix(self.url_string, u'ncftp')
 
         # This squelches the "file not found" result from ncftpls when
         # the ftp backend looks for a collection that does not exist.
         # version 3.2.2 has error code 5, 1280 is some legacy value
-        self.popen_breaks['ncftpls'] = [5, 1280]
+        self.popen_breaks[u'ncftpls'] = [5, 1280]
 
         # Use an explicit directory name.
-        if self.url_string[-1] != '/':
-            self.url_string += '/'
+        if self.url_string[-1] != u'/':
+            self.url_string += u'/'
 
         self.password = self.get_password()
 
-        if globals.ftp_connection == 'regular':
-            self.conn_opt = '-E'
+        if globals.ftp_connection == u'regular':
+            self.conn_opt = u'-E'
         else:
-            self.conn_opt = '-F'
+            self.conn_opt = u'-F'
 
         self.tempfile, self.tempname = tempdir.default().mkstemp()
-        os.write(self.tempfile, "host %s\n" % self.parsed_url.hostname)
-        os.write(self.tempfile, "user %s\n" % self.parsed_url.username)
-        os.write(self.tempfile, "pass %s\n" % self.password)
+        os.write(self.tempfile, u"host %s\n" % self.parsed_url.hostname)
+        os.write(self.tempfile, u"user %s\n" % self.parsed_url.username)
+        os.write(self.tempfile, u"pass %s\n" % self.password)
         os.close(self.tempfile)
-        self.flags = "-f %s %s -t %s -o useCLNT=0,useHELP_SITE=0 " % \
+        self.flags = u"-f %s %s -t %s -o useCLNT=0,useHELP_SITE=0 " % \
             (self.tempname, self.conn_opt, globals.timeout)
         if parsed_url.port is not None and parsed_url.port != 21:
-            self.flags += " -P '%s'" % (parsed_url.port)
+            self.flags += u" -P '%s'" % (parsed_url.port)
 
     def _put(self, source_path, remote_filename):
-        remote_path = os.path.join(urllib.unquote(re.sub('^/', '', self.parsed_url.path)), remote_filename).rstrip()
-        commandline = "ncftpput %s -m -V -C '%s' '%s'" % \
+        remote_path = os.path.join(urllib.parse.unquote(re.sub(u'^/', u'', self.parsed_url.path)),
+                                   remote_filename).rstrip()
+        commandline = u"ncftpput %s -m -V -C '%s' '%s'" % \
             (self.flags, source_path.name, remote_path)
         self.subprocess_popen(commandline)
 
     def _get(self, remote_filename, local_path):
-        remote_path = os.path.join(urllib.unquote(re.sub('^/', '', self.parsed_url.path)), remote_filename).rstrip()
-        commandline = "ncftpget %s -V -C '%s' '%s' '%s'" % \
-            (self.flags, self.parsed_url.hostname, remote_path.lstrip('/'), local_path.name)
+        remote_path = os.path.join(urllib.parse.unquote(re.sub(u'^/', u'', self.parsed_url.path)),
+                                   remote_filename).rstrip()
+        commandline = u"ncftpget %s -V -C '%s' '%s' '%s'" % \
+            (self.flags, self.parsed_url.hostname, remote_path.lstrip(u'/'), local_path.name)
         self.subprocess_popen(commandline)
 
     def _list(self):
         # Do a long listing to avoid connection reset
-        commandline = "ncftpls %s -l '%s'" % (self.flags, self.url_string)
+        commandline = u"ncftpls %s -l '%s'" % (self.flags, self.url_string)
         _, l, _ = self.subprocess_popen(commandline)
         # Look for our files as the last element of a long list line
-        return [x.split()[-1] for x in l.split('\n') if x and not x.startswith("total ")]
+        return [x.split()[-1] for x in l.split(u'\n') if x and not x.startswith(u"total ")]
 
     def _delete(self, filename):
-        commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \
+        commandline = u"ncftpls %s -l -X 'DELE %s' '%s'" % \
             (self.flags, filename, self.url_string)
         self.subprocess_popen(commandline)
 
-duplicity.backend.register_backend("ncftp+ftp", NCFTPBackend)
-duplicity.backend.uses_netloc.extend(['ncftp+ftp'])
+
+duplicity.backend.register_backend(u"ncftp+ftp", NCFTPBackend)
+duplicity.backend.uses_netloc.extend([u'ncftp+ftp'])

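As context for the ret != 0x0800 check kept above: os.popen(...).close() returns the raw wait status, so the child's exit code ends up in the high byte. A small illustrative sketch, not part of the patch:

    # an ncftpls exit code of 8 appears as 0x0800 in the wait status
    exit_code = 8
    wait_status = exit_code << 8
    assert wait_status == 0x0800
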
=== modified file 'duplicity/backends/onedrivebackend.py'
--- duplicity/backends/onedrivebackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/onedrivebackend.py	2019-02-22 19:07:43 +0000
@@ -21,6 +21,9 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from __future__ import print_function
+from builtins import input
+from builtins import str
 import time
 import json
 import os
@@ -38,24 +41,24 @@
 
 
 class OneDriveBackend(duplicity.backend.Backend):
-    """Uses Microsoft OneDrive (formerly SkyDrive) for backups."""
+    u"""Uses Microsoft OneDrive (formerly SkyDrive) for backups."""
 
     OAUTH_TOKEN_PATH = os.path.expanduser(
-        '~/.duplicity_onedrive_oauthtoken.json')
+        u'~/.duplicity_onedrive_oauthtoken.json')
 
-    API_URI = 'https://apis.live.net/v5.0/'
+    API_URI = u'https://apis.live.net/v5.0/'
     MAXIMUM_FRAGMENT_SIZE = 60 * 1024 * 1024
-    BITS_1_5_UPLOAD_PROTOCOL = '{7df0354d-249b-430f-820d-3d2a9bef4931}'
-    CLIENT_ID = '000000004C12E85D'
-    CLIENT_SECRET = 'k1oR0CbtbvTG9nK1PEDeVW2dzvAaiN4d'
-    OAUTH_TOKEN_URI = 'https://login.live.com/oauth20_token.srf'
-    OAUTH_AUTHORIZE_URI = 'https://login.live.com/oauth20_authorize.srf'
-    OAUTH_REDIRECT_URI = 'https://login.live.com/oauth20_desktop.srf'
+    BITS_1_5_UPLOAD_PROTOCOL = u'{7df0354d-249b-430f-820d-3d2a9bef4931}'
+    CLIENT_ID = u'000000004C12E85D'
+    CLIENT_SECRET = u'k1oR0CbtbvTG9nK1PEDeVW2dzvAaiN4d'
+    OAUTH_TOKEN_URI = u'https://login.live.com/oauth20_token.srf'
+    OAUTH_AUTHORIZE_URI = u'https://login.live.com/oauth20_authorize.srf'
+    OAUTH_REDIRECT_URI = u'https://login.live.com/oauth20_desktop.srf'
     # wl.skydrive is for reading files,
     # wl.skydrive_update is for creating/writing files,
     # wl.offline_access is necessary for duplicity to access onedrive without
     # the user being logged in right now.
-    OAUTH_SCOPE = ['wl.skydrive', 'wl.skydrive_update', 'wl.offline_access']
+    OAUTH_SCOPE = [u'wl.skydrive', u'wl.skydrive_update', u'wl.offline_access']
 
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
@@ -72,34 +75,34 @@
             from requests_oauthlib import OAuth2Session
         except ImportError as e:
             raise BackendException((
-                'OneDrive backend requires python-requests and '
-                'python-requests-oauthlib to be installed. Please install '
-                'them and try again.\n' + str(e)))
+                u'OneDrive backend requires python-requests and '
+                u'python-requests-oauthlib to be installed. Please install '
+                u'them and try again.\n' + str(e)))
 
         self.names_to_ids = None
         self.user_id = None
-        self.directory = parsed_url.path.lstrip('/')
-        if self.directory == "":
+        self.directory = parsed_url.path.lstrip(u'/')
+        if self.directory == u"":
             raise BackendException((
-                'You did not specify a path. '
-                'Please specify a path, e.g. onedrive://duplicity_backups'))
+                u'You did not specify a path. '
+                u'Please specify a path, e.g. onedrive://duplicity_backups'))
         if globals.volsize > (10 * 1024 * 1024 * 1024):
             raise BackendException((
-                'Your --volsize is bigger than 10 GiB, which is the maximum '
-                'file size on OneDrive.'))
+                u'Your --volsize is bigger than 10 GiB, which is the maximum '
+                u'file size on OneDrive.'))
         self.initialize_oauth2_session()
         self.resolve_directory()
 
     def initialize_oauth2_session(self):
         def token_updater(token):
             try:
-                with open(self.OAUTH_TOKEN_PATH, 'w') as f:
+                with open(self.OAUTH_TOKEN_PATH, u'w') as f:
                     json.dump(token, f)
             except Exception as e:
-                log.Error(('Could not save the OAuth2 token to %s. '
-                           'This means you may need to do the OAuth2 '
-                           'authorization process again soon. '
-                           'Original error: %s' % (
+                log.Error((u'Could not save the OAuth2 token to %s. '
+                           u'This means you may need to do the OAuth2 '
+                           u'authorization process again soon. '
+                           u'Original error: %s' % (
                                self.OAUTH_TOKEN_PATH, e)))
 
         token = None
@@ -107,8 +110,8 @@
             with open(self.OAUTH_TOKEN_PATH) as f:
                 token = json.load(f)
         except IOError as e:
-            log.Error(('Could not load OAuth2 token. '
-                       'Trying to create a new one. (original error: %s)' % e))
+            log.Error((u'Could not load OAuth2 token. '
+                       u'Trying to create a new one. (original error: %s)' % e))
 
         self.http_client = OAuth2Session(
             self.CLIENT_ID,
@@ -116,8 +119,8 @@
             redirect_uri=self.OAUTH_REDIRECT_URI,
             token=token,
             auto_refresh_kwargs={
-                'client_id': self.CLIENT_ID,
-                'client_secret': self.CLIENT_SECRET,
+                u'client_id': self.CLIENT_ID,
+                u'client_secret': self.CLIENT_SECRET,
             },
             auto_refresh_url=self.OAUTH_TOKEN_URI,
             token_updater=token_updater)
@@ -130,83 +133,83 @@
         # refreshed successfully, which will happen under the covers). In case
         # this request fails, the provided token was too old (i.e. expired),
         # and we need to get a new token.
-        user_info_response = self.http_client.get(self.API_URI + 'me')
+        user_info_response = self.http_client.get(self.API_URI + u'me')
         if user_info_response.status_code != requests.codes.ok:
             token = None
 
         if token is None:
             if not sys.stdout.isatty() or not sys.stdin.isatty():
-                log.FatalError(('The OAuth2 token could not be loaded from %s '
-                                'and you are not running duplicity '
-                                'interactively, so duplicity cannot possibly '
-                                'access OneDrive.' % self.OAUTH_TOKEN_PATH))
+                log.FatalError((u'The OAuth2 token could not be loaded from %s '
+                                u'and you are not running duplicity '
+                                u'interactively, so duplicity cannot possibly '
+                                u'access OneDrive.' % self.OAUTH_TOKEN_PATH))
             authorization_url, state = self.http_client.authorization_url(
-                self.OAUTH_AUTHORIZE_URI, display='touch')
-
-            print()
-            print('In order to authorize duplicity to access your OneDrive, '
-                  'please open %s in a browser and copy the URL of the blank '
-                  'page the dialog leads to.' % authorization_url)
-            print()
-
-            redirected_to = raw_input('URL of the blank page: ').strip()
+                self.OAUTH_AUTHORIZE_URI, display=u'touch')
+
+            print()
+            print(u'In order to authorize duplicity to access your OneDrive, '
+                  u'please open %s in a browser and copy the URL of the blank '
+                  u'page the dialog leads to.' % authorization_url)
+            print()
+
+            redirected_to = input(u'URL of the blank page: ').strip()
 
             token = self.http_client.fetch_token(
                 self.OAUTH_TOKEN_URI,
                 client_secret=self.CLIENT_SECRET,
                 authorization_response=redirected_to)
 
-            user_info_response = self.http_client.get(self.API_URI + 'me')
+            user_info_response = self.http_client.get(self.API_URI + u'me')
             user_info_response.raise_for_status()
 
             try:
-                with open(self.OAUTH_TOKEN_PATH, 'w') as f:
+                with open(self.OAUTH_TOKEN_PATH, u'w') as f:
                     json.dump(token, f)
             except Exception as e:
-                log.Error(('Could not save the OAuth2 token to %s. '
-                           'This means you need to do the OAuth2 authorization '
-                           'process on every start of duplicity. '
-                           'Original error: %s' % (
+                log.Error((u'Could not save the OAuth2 token to %s. '
+                           u'This means you need to do the OAuth2 authorization '
+                           u'process on every start of duplicity. '
+                           u'Original error: %s' % (
                                self.OAUTH_TOKEN_PATH, e)))
 
-        if 'id' not in user_info_response.json():
-            log.Error('user info response lacks the "id" field.')
+        if u'id' not in user_info_response.json():
+            log.Error(u'user info response lacks the "id" field.')
 
-        self.user_id = user_info_response.json()['id']
+        self.user_id = user_info_response.json()[u'id']
 
     def resolve_directory(self):
-        """Ensures self.directory_id contains the folder id for the path.
+        u"""Ensures self.directory_id contains the folder id for the path.
 
         There is no API call to resolve a logical path (e.g.
         /backups/duplicity/notebook/), so we recursively list directories
         until we get the object id of the configured directory, creating
         directories as necessary.
         """
-        object_id = 'me/skydrive'
-        for component in [x for x in self.directory.split('/') if x]:
+        object_id = u'me/skydrive'
+        for component in [x for x in self.directory.split(u'/') if x]:
             tried_mkdir = False
             while True:
                 files = self.get_files(object_id)
-                names_to_ids = {x['name']: x['id'] for x in files}
+                names_to_ids = {x[u'name']: x[u'id'] for x in files}
                 if component not in names_to_ids:
                     if not tried_mkdir:
                         self.mkdir(object_id, component)
                         tried_mkdir = True
                         continue
                     raise BackendException((
-                        'Could not resolve/create directory "%s" on '
-                        'OneDrive: %s not in %s (files of folder %s)' % (
+                        u'Could not resolve/create directory "%s" on '
+                        u'OneDrive: %s not in %s (files of folder %s)' % (
                             self.directory, component,
-                            names_to_ids.keys(), object_id)))
+                            list(names_to_ids.keys()), object_id)))
                 break
             object_id = names_to_ids[component]
         self.directory_id = object_id
-        log.Debug('OneDrive id for the configured directory "%s" is "%s"' % (
+        log.Debug(u'OneDrive id for the configured directory "%s" is "%s"' % (
             self.directory, self.directory_id))
 
     def mkdir(self, object_id, folder_name):
-        data = {'name': folder_name, 'description': 'Created by duplicity'}
-        headers = {'Content-Type': 'application/json'}
+        data = {u'name': folder_name, u'description': u'Created by duplicity'}
+        headers = {u'Content-Type': u'application/json'}
         response = self.http_client.post(
             self.API_URI + object_id,
             data=json.dumps(data),
@@ -214,35 +217,35 @@
         response.raise_for_status()
 
     def get_files(self, path):
-        response = self.http_client.get(self.API_URI + path + '/files')
+        response = self.http_client.get(self.API_URI + path + u'/files')
         response.raise_for_status()
-        if 'data' not in response.json():
+        if u'data' not in response.json():
             raise BackendException((
-                'Malformed JSON: expected "data" member in %s' % (
+                u'Malformed JSON: expected "data" member in %s' % (
                     response.json())))
-        return response.json()['data']
+        return response.json()[u'data']
 
     def _list(self):
         files = self.get_files(self.directory_id)
-        self.names_to_ids = {x['name']: x['id'] for x in files}
-        return [x['name'] for x in files]
+        self.names_to_ids = {x[u'name']: x[u'id'] for x in files}
+        return [x[u'name'] for x in files]
 
     def get_file_id(self, remote_filename):
-        """Returns the file id from cache, updating the cache if necessary."""
+        u"""Returns the file id from cache, updating the cache if necessary."""
         if (self.names_to_ids is None or
                 remote_filename not in self.names_to_ids):
             self._list()
         return self.names_to_ids.get(remote_filename)
 
     def _get(self, remote_filename, local_path):
-        with local_path.open('wb') as f:
+        with local_path.open(u'wb') as f:
             file_id = self.get_file_id(remote_filename)
             if file_id is None:
                 raise BackendException((
-                    'File "%s" cannot be downloaded: it does not exist' % (
+                    u'File "%s" cannot be downloaded: it does not exist' % (
                         remote_filename)))
             response = self.http_client.get(
-                self.API_URI + file_id + '/content', stream=True)
+                self.API_URI + file_id + u'/content', stream=True)
             response.raise_for_status()
             for chunk in response.iter_content(chunk_size=4096):
                 if chunk:
@@ -254,42 +257,42 @@
         # attempting to upload the file.
         source_size = os.path.getsize(source_path.name)
         start = time.time()
-        response = self.http_client.get(self.API_URI + 'me/skydrive/quota')
+        response = self.http_client.get(self.API_URI + u'me/skydrive/quota')
         response.raise_for_status()
-        if ('available' in response.json() and
-                source_size > response.json()['available']):
+        if (u'available' in response.json() and
+                source_size > response.json()[u'available']):
             raise BackendException((
-                'Out of space: trying to store "%s" (%d bytes), but only '
-                '%d bytes available on OneDrive.' % (
+                u'Out of space: trying to store "%s" (%d bytes), but only '
+                u'%d bytes available on OneDrive.' % (
                     source_path.name, source_size,
-                    response.json()['available'])))
-        log.Debug("Checked quota in %fs" % (time.time() - start))
+                    response.json()[u'available'])))
+        log.Debug(u"Checked quota in %fs" % (time.time() - start))
 
         with source_path.open() as source_file:
             start = time.time()
             # Create a BITS session, so that we can upload large files.
-            short_directory_id = self.directory_id.split('.')[-1]
-            url = 'https://cid-%s.users.storage.live.com/items/%s/%s' % (
+            short_directory_id = self.directory_id.split(u'.')[-1]
+            url = u'https://cid-%s.users.storage.live.com/items/%s/%s' % (
                 self.user_id, short_directory_id, remote_filename)
             headers = {
-                'X-Http-Method-Override': 'BITS_POST',
-                'BITS-Packet-Type': 'Create-Session',
-                'BITS-Supported-Protocols': self.BITS_1_5_UPLOAD_PROTOCOL,
+                u'X-Http-Method-Override': u'BITS_POST',
+                u'BITS-Packet-Type': u'Create-Session',
+                u'BITS-Supported-Protocols': self.BITS_1_5_UPLOAD_PROTOCOL,
             }
 
             response = self.http_client.post(
                 url,
                 headers=headers)
             response.raise_for_status()
-            if ('bits-packet-type' not in response.headers or
-                    response.headers['bits-packet-type'].lower() != 'ack'):
+            if (u'bits-packet-type' not in response.headers or
+                    response.headers[u'bits-packet-type'].lower() != u'ack'):
                 raise BackendException((
-                    'File "%s" cannot be uploaded: '
-                    'Could not create BITS session: '
-                    'Server response did not include BITS-Packet-Type: ACK' % (
+                    u'File "%s" cannot be uploaded: '
+                    u'Could not create BITS session: '
+                    u'Server response did not include BITS-Packet-Type: ACK' % (
                         remote_filename)))
-            bits_session_id = response.headers['bits-session-id']
-            log.Debug('BITS session id is "%s"' % bits_session_id)
+            bits_session_id = response.headers[u'bits-session-id']
+            log.Debug(u'BITS session id is "%s"' % bits_session_id)
 
             # Send fragments (with a maximum size of 60 MB each).
             offset = 0
@@ -298,10 +301,10 @@
                 if len(chunk) == 0:
                     break
                 headers = {
-                    'X-Http-Method-Override': 'BITS_POST',
-                    'BITS-Packet-Type': 'Fragment',
-                    'BITS-Session-Id': bits_session_id,
-                    'Content-Range': 'bytes %d-%d/%d' % (offset, offset + len(chunk) - 1, source_size),
+                    u'X-Http-Method-Override': u'BITS_POST',
+                    u'BITS-Packet-Type': u'Fragment',
+                    u'BITS-Session-Id': bits_session_id,
+                    u'Content-Range': u'bytes %d-%d/%d' % (offset, offset + len(chunk) - 1, source_size),
                 }
                 response = self.http_client.post(
                     url,
@@ -312,20 +315,20 @@
 
             # Close the BITS session to commit the file.
             headers = {
-                'X-Http-Method-Override': 'BITS_POST',
-                'BITS-Packet-Type': 'Close-Session',
-                'BITS-Session-Id': bits_session_id,
+                u'X-Http-Method-Override': u'BITS_POST',
+                u'BITS-Packet-Type': u'Close-Session',
+                u'BITS-Session-Id': bits_session_id,
             }
             response = self.http_client.post(url, headers=headers)
             response.raise_for_status()
 
-            log.Debug("PUT file in %fs" % (time.time() - start))
+            log.Debug(u"PUT file in %fs" % (time.time() - start))
 
     def _delete(self, remote_filename):
         file_id = self.get_file_id(remote_filename)
         if file_id is None:
             raise BackendException((
-                'File "%s" cannot be deleted: it does not exist' % (
+                u'File "%s" cannot be deleted: it does not exist' % (
                     remote_filename)))
         response = self.http_client.delete(self.API_URI + file_id)
         response.raise_for_status()
@@ -333,16 +336,21 @@
     def _query(self, remote_filename):
         file_id = self.get_file_id(remote_filename)
         if file_id is None:
-            return {'size': -1}
+            return {u'size': -1}
         response = self.http_client.get(self.API_URI + file_id)
         response.raise_for_status()
-        if 'size' not in response.json():
+        if u'size' not in response.json():
             raise BackendException((
-                'Malformed JSON: expected "size" member in %s' % (
+                u'Malformed JSON: expected "size" member in %s' % (
                     response.json())))
-        return {'size': response.json()['size']}
+        return {u'size': response.json()[u'size']}
 
     def _retry_cleanup(self):
         self.initialize_oauth2_session()
 
-duplicity.backend.register_backend('onedrive', OneDriveBackend)
+
+duplicity.backend.register_backend(u'onedrive', OneDriveBackend)

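For the BITS fragment loop above, a short sketch of the Content-Range headers it would emit, assuming a hypothetical 150 MiB volume and the 60 MiB fragment cap:

    MAXIMUM_FRAGMENT_SIZE = 60 * 1024 * 1024
    source_size = 150 * 1024 * 1024  # hypothetical volume size
    offset = 0
    while offset < source_size:
        length = min(MAXIMUM_FRAGMENT_SIZE, source_size - offset)
        print(u'bytes %d-%d/%d' % (offset, offset + length - 1, source_size))
        offset += length
    # bytes 0-62914559/157286400
    # bytes 62914560-125829119/157286400
    # bytes 125829120-157286399/157286400
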
=== modified file 'duplicity/backends/par2backend.py'
--- duplicity/backends/par2backend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/par2backend.py	2019-02-22 19:07:43 +0000
@@ -16,7 +16,8 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-from future_builtins import filter
+from builtins import filter
 
 import os
 import re
@@ -24,16 +25,24 @@
 from duplicity.errors import BackendException
 from duplicity import log
 from duplicity import globals
+from duplicity import util
 
 
 class Par2Backend(backend.Backend):
-    """This backend wrap around other backends and create Par2 recovery files
+    u"""This backend wrap around other backends and create Par2 recovery files
     before the file and the Par2 files are transfered with the wrapped backend.
 
     If a received file is corrupt it will try to repair it on the fly.
     """
     def __init__(self, parsed_url):
         backend.Backend.__init__(self, parsed_url)
+
+        global pexpect
+        try:
+            import pexpect
+        except ImportError:
+            raise
+
         self.parsed_url = parsed_url
         try:
             self.redundancy = globals.par2_redundancy
@@ -41,15 +50,15 @@
             self.redundancy = 10
 
         try:
-            self.common_options = globals.par2_options + " -q -q"
+            self.common_options = globals.par2_options + u" -q -q"
         except AttributeError:
-            self.common_options = "-q -q"
+            self.common_options = u"-q -q"
 
         self.wrapped_backend = backend.get_backend_object(parsed_url.url_string)
 
-        for attr in ['_get', '_put', '_list', '_delete', '_delete_list',
-                     '_query', '_query_list', '_retry_cleanup', '_error_code',
-                     '_move', '_close']:
+        for attr in [u'_get', u'_put', u'_list', u'_delete', u'_delete_list',
+                     u'_query', u'_query_list', u'_retry_cleanup', u'_error_code',
+                     u'_move', u'_close']:
             if hasattr(self.wrapped_backend, attr):
                 setattr(self, attr, getattr(self, attr[1:]))
 
@@ -58,26 +67,25 @@
         self._delete_list = self.delete_list
 
     def transfer(self, method, source_path, remote_filename):
-        """create Par2 files and transfer the given file and the Par2 files
+        u"""create Par2 files and transfer the given file and the Par2 files
         with the wrapped backend.
 
         Par2 must run on the real filename or it would restore the
         temp-filename later on. So first of all create a tempdir and symlink
         the soure_path with remote_filename into this.
         """
-        import pexpect
-
         par2temp = source_path.get_temp_in_same_dir()
         par2temp.mkdir()
         source_symlink = par2temp.append(remote_filename)
         source_target = source_path.get_canonical()
         if not os.path.isabs(source_target):
-            source_target = os.path.join(os.getcwd(), source_target)
+            source_target = os.path.join(util.fsencode(os.getcwd()), source_target)
         os.symlink(source_target, source_symlink.get_canonical())
         source_symlink.setdata()
 
-        log.Info("Create Par2 recovery files")
-        par2create = 'par2 c -r%d -n1 %s %s' % (self.redundancy, self.common_options, source_symlink.get_canonical())
+        log.Info(u"Create Par2 recovery files")
+        par2create = u'par2 c -r%d -n1 %s %s' % (self.redundancy, self.common_options,
+                                                 util.fsdecode(source_symlink.get_canonical()))
         out, returncode = pexpect.run(par2create, None, True)
 
         source_symlink.delete()
@@ -99,14 +107,14 @@
         self.transfer(self.wrapped_backend._move, local, remote)
 
     def get(self, remote_filename, local_path):
-        """transfer remote_filename and the related .par2 file into
+        u"""transfer remote_filename and the related .par2 file into
         a temp-dir. remote_filename will be renamed into local_path before
         finishing.
 
         If "par2 verify" detect an error transfer the Par2-volumes into the
         temp-dir and try to repair.
         """
-        import pexpect
+
         par2temp = local_path.get_temp_in_same_dir()
         par2temp.mkdir()
         local_path_temp = par2temp.append(remote_filename)
@@ -114,32 +122,32 @@
         self.wrapped_backend._get(remote_filename, local_path_temp)
 
         try:
-            par2file = par2temp.append(remote_filename + '.par2')
+            par2file = par2temp.append(remote_filename + b'.par2')
             self.wrapped_backend._get(par2file.get_filename(), par2file)
 
-            par2verify = 'par2 v %s %s %s' % (self.common_options,
-                                              par2file.get_canonical(),
-                                              local_path_temp.get_canonical())
+            par2verify = u'par2 v %s %s %s' % (self.common_options,
+                                               par2file.get_canonical(),
+                                               local_path_temp.get_canonical())
             out, returncode = pexpect.run(par2verify, None, True)
 
             if returncode:
-                log.Warn("File is corrupt. Try to repair %s" % remote_filename)
-                par2volumes = filter(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename).match,
-                                     self.wrapped_backend._list())
+                log.Warn(u"File is corrupt. Try to repair %s" % remote_filename)
+                par2volumes = list(filter(re.compile(b'%s\\.vol[\\d+]*\\.par2' % remote_filename).match,
+                                          self.wrapped_backend._list()))
 
                 for filename in par2volumes:
                     file = par2temp.append(filename)
                     self.wrapped_backend._get(filename, file)
 
-                par2repair = 'par2 r %s %s %s' % (self.common_options,
-                                                  par2file.get_canonical(),
-                                                  local_path_temp.get_canonical())
+                par2repair = u'par2 r %s %s %s' % (self.common_options,
+                                                   par2file.get_canonical(),
+                                                   local_path_temp.get_canonical())
                 out, returncode = pexpect.run(par2repair, None, True)
 
                 if returncode:
-                    log.Error("Failed to repair %s" % remote_filename)
+                    log.Error(u"Failed to repair %s" % remote_filename)
                 else:
-                    log.Warn("Repair successful %s" % remote_filename)
+                    log.Warn(u"Repair successful %s" % remote_filename)
         except BackendException:
             # par2 file not available
             pass
@@ -148,44 +156,44 @@
             par2temp.deltree()
 
     def delete(self, filename):
-        """delete given filename and its .par2 files
+        u"""delete given filename and its .par2 files
         """
         self.wrapped_backend._delete(filename)
 
         remote_list = self.unfiltered_list()
 
-        c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
+        c = re.compile(b'%s(?:\\.vol[\\d+]*)?\\.par2' % filename)
         for remote_filename in remote_list:
             if c.match(remote_filename):
                 self.wrapped_backend._delete(remote_filename)
 
     def delete_list(self, filename_list):
-        """delete given filename_list and all .par2 files that belong to them
+        u"""delete given filename_list and all .par2 files that belong to them
         """
         remote_list = self.unfiltered_list()
 
         for filename in filename_list[:]:
-            c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
+            c = re.compile(b'%s(?:\\.vol[\\d+]*)?\\.par2' % filename)
             for remote_filename in remote_list:
                 if c.match(remote_filename):
                     # insert here to make sure par2 files will be removed first
                     filename_list.insert(0, remote_filename)
 
-        if hasattr(self.wrapped_backend, '_delete_list'):
+        if hasattr(self.wrapped_backend, u'_delete_list'):
             return self.wrapped_backend._delete_list(filename_list)
         else:
             for filename in filename_list:
                 self.wrapped_backend._delete(filename)
 
     def list(self):
-        """
+        u"""
         Return list of filenames (byte strings) present in backend
 
         Files ending with ".par2" will be excluded from the list.
         """
         remote_list = self.wrapped_backend._list()
 
-        c = re.compile(r'(?!.*\.par2$)')
+        c = re.compile(b'(?!.*\\.par2$)')
         filtered_list = []
         for filename in remote_list:
             if c.match(filename):
@@ -210,4 +218,9 @@
     def close(self):
         self.wrapped_backend._close()
 
-backend.register_backend_prefix('par2', Par2Backend)
+
+backend.register_backend_prefix(u'par2', Par2Backend)

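To illustrate the par2 command line the transfer() path above builds (the volume name is hypothetical and the flags mirror the default self.common_options):

    redundancy = 10
    common_options = u"-q -q"
    volume = u"duplicity-full.20190222T000000Z.vol1.difftar.gpg"  # hypothetical name
    par2create = u'par2 c -r%d -n1 %s %s' % (redundancy, common_options, volume)
    print(par2create)
    # par2 c -r10 -n1 -q -q duplicity-full.20190222T000000Z.vol1.difftar.gpg
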
=== added file 'duplicity/backends/pcabackend.py.OTHER'
--- duplicity/backends/pcabackend.py.OTHER	1970-01-01 00:00:00 +0000
+++ duplicity/backends/pcabackend.py.OTHER	2019-02-22 19:07:43 +0000
@@ -0,0 +1,210 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+#
+# Copyright 2013 Matthieu Huin <mhu@xxxxxxxxxxxx>
+# Copyright 2017 Xavier Lucas <xavier.lucas@xxxxxxxxxxxx>
+#
+# This file is part of duplicity.
+#
+# Duplicity is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; either version 2 of the License, or (at your
+# option) any later version.
+#
+# Duplicity is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with duplicity; if not, write to the Free Software Foundation,
+# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+from builtins import str
+import os
+
+import duplicity.backend
+from duplicity import log
+from duplicity import util
+from duplicity.errors import BackendException
+import time
+
+
+class PCABackend(duplicity.backend.Backend):
+    u"""
+    Backend for OVH PCA
+    """
+    def __init__(self, parsed_url):
+        duplicity.backend.Backend.__init__(self, parsed_url)
+
+        try:
+            from swiftclient import Connection
+            from swiftclient import ClientException
+        except ImportError as e:
+            raise BackendException(u"""\
+PCA backend requires the python-swiftclient library.
+Exception: %s""" % str(e))
+
+        self.resp_exc = ClientException
+        self.conn_cls = Connection
+        conn_kwargs = {}
+
+        # if the user has already authenticated
+        if u'PCA_PREAUTHURL' in os.environ and u'PCA_PREAUTHTOKEN' in os.environ:
+            conn_kwargs[u'preauthurl'] = os.environ[u'PCA_PREAUTHURL']
+            conn_kwargs[u'preauthtoken'] = os.environ[u'PCA_PREAUTHTOKEN']
+
+        else:
+            if u'PCA_USERNAME' not in os.environ:
+                raise BackendException(u'PCA_USERNAME environment variable '
+                                       u'not set.')
+
+            if u'PCA_PASSWORD' not in os.environ:
+                raise BackendException(u'PCA_PASSWORD environment variable '
+                                       u'not set.')
+
+            if u'PCA_AUTHURL' not in os.environ:
+                raise BackendException(u'PCA_AUTHURL environment variable '
+                                       u'not set.')
+
+            conn_kwargs[u'user'] = os.environ[u'PCA_USERNAME']
+            conn_kwargs[u'key'] = os.environ[u'PCA_PASSWORD']
+            conn_kwargs[u'authurl'] = os.environ[u'PCA_AUTHURL']
+
+        os_options = {}
+
+        if u'PCA_AUTHVERSION' in os.environ:
+            conn_kwargs[u'auth_version'] = os.environ[u'PCA_AUTHVERSION']
+            if os.environ[u'PCA_AUTHVERSION'] == u'3':
+                if u'PCA_USER_DOMAIN_NAME' in os.environ:
+                    os_options.update({u'user_domain_name': os.environ[u'PCA_USER_DOMAIN_NAME']})
+                if u'PCA_USER_DOMAIN_ID' in os.environ:
+                    os_options.update({u'user_domain_id': os.environ[u'PCA_USER_DOMAIN_ID']})
+                if u'PCA_PROJECT_DOMAIN_NAME' in os.environ:
+                    os_options.update({u'project_domain_name': os.environ[u'PCA_PROJECT_DOMAIN_NAME']})
+                if u'PCA_PROJECT_DOMAIN_ID' in os.environ:
+                    os_options.update({u'project_domain_id': os.environ[u'PCA_PROJECT_DOMAIN_ID']})
+                if u'PCA_TENANTNAME' in os.environ:
+                    os_options.update({u'tenant_name': os.environ[u'PCA_TENANTNAME']})
+                if u'PCA_ENDPOINT_TYPE' in os.environ:
+                    os_options.update({u'endpoint_type': os.environ[u'PCA_ENDPOINT_TYPE']})
+                if u'PCA_USERID' in os.environ:
+                    os_options.update({u'user_id': os.environ[u'PCA_USERID']})
+                if u'PCA_TENANTID' in os.environ:
+                    os_options.update({u'tenant_id': os.environ[u'PCA_TENANTID']})
+                if u'PCA_REGIONNAME' in os.environ:
+                    os_options.update({u'region_name': os.environ[u'PCA_REGIONNAME']})
+
+        else:
+            conn_kwargs[u'auth_version'] = u'2'
+        if u'PCA_TENANTNAME' in os.environ:
+            conn_kwargs[u'tenant_name'] = os.environ[u'PCA_TENANTNAME']
+        if u'PCA_REGIONNAME' in os.environ:
+            os_options.update({u'region_name': os.environ[u'PCA_REGIONNAME']})
+
+        conn_kwargs[u'os_options'] = os_options
+        conn_kwargs[u'retries'] = 0
+
+        self.conn_kwargs = conn_kwargs
+
+        # This folds the null prefix and all null parts, which means that:
+        #  //MyContainer/ and //MyContainer are equivalent.
+        #  //MyContainer//My/Prefix/ and //MyContainer/My/Prefix are equivalent.
+        url_parts = [x for x in parsed_url.path.split(u'/') if x != u'']
+
+        self.container = url_parts.pop(0)
+        if url_parts:
+            self.prefix = u'%s/' % u'/'.join(url_parts)
+        else:
+            self.prefix = u''
+
+        policy = u'PCA'
+        policy_header = u'X-Storage-Policy'
+
+        container_metadata = None
+        try:
+            self.conn = Connection(**self.conn_kwargs)
+            container_metadata = self.conn.head_container(self.container)
+        except ClientException:
+            pass
+        except Exception as e:
+            log.FatalError(u"Connection failed: %s %s"
+                           % (e.__class__.__name__, str(e)),
+                           log.ErrorCode.connection_failed)
+
+        if container_metadata is None:
+            log.Info(u"Creating container %s" % self.container)
+            try:
+                headers = dict([[policy_header, policy]])
+                self.conn.put_container(self.container, headers=headers)
+            except Exception as e:
+                log.FatalError(u"Container creation failed: %s %s"
+                               % (e.__class__.__name__, str(e)),
+                               log.ErrorCode.connection_failed)
+        elif policy and container_metadata[policy_header.lower()] != policy:
+            log.FatalError(u"Container '%s' exists but its storage policy is '%s' not '%s'."
+                           % (self.container, container_metadata[policy_header.lower()], policy))
+
+    def _error_code(self, operation, e):
+        if isinstance(e, self.resp_exc):
+            if e.http_status == 404:
+                return log.ErrorCode.backend_not_found
+
+    def _put(self, source_path, remote_filename):
+        self.conn.put_object(self.container, self.prefix + remote_filename,
+                             open(source_path.name, u'rb'))
+
+    def _get(self, remote_filename, local_path):
+        body = self.preprocess_download(remote_filename, 60)
+        if body:
+            with open(local_path.name, u'wb') as f:
+                for chunk in body:
+                    f.write(chunk)
+
+    def _list(self):
+        headers, objs = self.conn.get_container(self.container, full_listing=True, path=self.prefix)
+        # removes prefix from return values. should check for the prefix ?
+        return [o[u'name'][len(self.prefix):] for o in objs]
+
+    def _delete(self, filename):
+        self.conn.delete_object(self.container, self.prefix + filename)
+
+    def _query(self, filename):
+        sobject = self.conn.head_object(self.container, self.prefix + filename)
+        return {u'size': int(sobject[u'content-length'])}
+
+    def preprocess_download(self, remote_filename, retry_period, wait=True):
+        body = self.unseal(remote_filename)
+        try:
+            if wait:
+                while not body:
+                    time.sleep(retry_period)
+                    self.conn = self.conn_cls(**self.conn_kwargs)
+                    body = self.unseal(remote_filename)
+                    self.conn.close()
+        except Exception as e:
+            log.FatalError(u"Connection failed: %s %s" % (e.__class__.__name__, str(e)),
+                           log.ErrorCode.connection_failed)
+        return body
+
+    def unseal(self, remote_filename):
+        try:
+            _, body = self.conn.get_object(self.container, self.prefix + remote_filename,
+                                           resp_chunk_size=1024)
+            log.Info(u"File %s was successfully unsealed." % remote_filename)
+            return body
+        except self.resp_exc as e:
+            # The object is sealed but being released.
+            if e.http_status == 429:
+                # The retry-after header contains the remaining duration before
+                # the unsealing operation completes.
+                duration = int(e.http_response_headers[u'Retry-After'])
+                m, s = divmod(duration, 60)
+                h, m = divmod(m, 60)
+                eta = u"%dh%02dm%02ds" % (h, m, s)
+                log.Info(u"File %s is being unsealed, operation ETA is %s." %
+                         (remote_filename, eta))
+            else:
+                raise
+
+
+duplicity.backend.register_backend(u"pca", PCABackend)

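A short sketch of the environment the PCA backend reads above and of the Retry-After to ETA conversion in unseal(); the credential values and auth URL are placeholders, not real endpoints:

    import os

    os.environ[u'PCA_USERNAME'] = u'user'      # placeholder
    os.environ[u'PCA_PASSWORD'] = u'secret'    # placeholder
    os.environ[u'PCA_AUTHURL'] = u'https://auth.example.net/v2.0'  # placeholder
    # target URL form:  pca://my_container/my/prefix

    # a Retry-After of 7265 seconds formats as 2h01m05s
    duration = 7265
    m, s = divmod(duration, 60)
    h, m = divmod(m, 60)
    assert u"%dh%02dm%02ds" % (h, m, s) == u"2h01m05s"
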
=== modified file 'duplicity/backends/pydrivebackend.py'
--- duplicity/backends/pydrivebackend.py	2017-11-01 12:41:49 +0000
+++ duplicity/backends/pydrivebackend.py	2019-02-22 19:07:43 +0000
@@ -16,6 +16,8 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from builtins import next
+from builtins import str
 import string
 import os
 
@@ -25,7 +27,7 @@
 
 
 class PyDriveBackend(duplicity.backend.Backend):
-    """Connect to remote store using PyDrive API"""
+    u"""Connect to remote store using PyDrive API"""
 
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
@@ -35,9 +37,9 @@
             from apiclient.discovery import build
             from pydrive.auth import GoogleAuth
             from pydrive.drive import GoogleDrive
-            from pydrive.files import FileNotUploadedError
+            from pydrive.files import ApiRequestError, FileNotUploadedError
         except ImportError as e:
-            raise BackendException("""\
+            raise BackendException(u"""\
 PyDrive backend requires PyDrive installation.  Please read the manpage for setup details.
 Exception: %s""" % str(e))
 
@@ -50,79 +52,78 @@
             from oauth2client import crypt
             self.oldClient = False
 
-        if 'GOOGLE_DRIVE_ACCOUNT_KEY' in os.environ:
-            account_key = os.environ['GOOGLE_DRIVE_ACCOUNT_KEY']
+        if u'GOOGLE_DRIVE_ACCOUNT_KEY' in os.environ:
+            account_key = os.environ[u'GOOGLE_DRIVE_ACCOUNT_KEY']
             if self.oldClient:
                 credentials = SignedJwtAssertionCredentials(parsed_url.username +
-                                                            '@' + parsed_url.hostname,
+                                                            u'@' + parsed_url.hostname,
                                                             account_key,
-                                                            scopes='https://www.googleapis.com/auth/drive')
+                                                            scopes=u'https://www.googleapis.com/auth/drive')
             else:
                 signer = crypt.Signer.from_string(account_key)
-                credentials = ServiceAccountCredentials(parsed_url.username + '@' + parsed_url.hostname, signer,
-                                                        scopes='https://www.googleapis.com/auth/drive')
+                credentials = ServiceAccountCredentials(parsed_url.username + u'@' + parsed_url.hostname, signer,
+                                                        scopes=u'https://www.googleapis.com/auth/drive')
             credentials.authorize(httplib2.Http())
             gauth = GoogleAuth()
             gauth.credentials = credentials
-        elif 'GOOGLE_DRIVE_SETTINGS' in os.environ:
-            gauth = GoogleAuth(settings_file=os.environ['GOOGLE_DRIVE_SETTINGS'])
+        elif u'GOOGLE_DRIVE_SETTINGS' in os.environ:
+            gauth = GoogleAuth(settings_file=os.environ[u'GOOGLE_DRIVE_SETTINGS'])
             gauth.CommandLineAuth()
-        elif ('GOOGLE_SECRETS_FILE' in os.environ and 'GOOGLE_CREDENTIALS_FILE' in os.environ):
+        elif (u'GOOGLE_SECRETS_FILE' in os.environ and u'GOOGLE_CREDENTIALS_FILE' in os.environ):
             gauth = GoogleAuth()
-            gauth.LoadClientConfigFile(os.environ['GOOGLE_SECRETS_FILE'])
-            gauth.LoadCredentialsFile(os.environ['GOOGLE_CREDENTIALS_FILE'])
+            gauth.LoadClientConfigFile(os.environ[u'GOOGLE_SECRETS_FILE'])
+            gauth.LoadCredentialsFile(os.environ[u'GOOGLE_CREDENTIALS_FILE'])
             if gauth.credentials is None:
                 gauth.CommandLineAuth()
             elif gauth.access_token_expired:
                 gauth.Refresh()
             else:
                 gauth.Authorize()
-            gauth.SaveCredentialsFile(os.environ['GOOGLE_CREDENTIALS_FILE'])
+            gauth.SaveCredentialsFile(os.environ[u'GOOGLE_CREDENTIALS_FILE'])
         else:
             raise BackendException(
-                'GOOGLE_DRIVE_ACCOUNT_KEY or GOOGLE_DRIVE_SETTINGS environment '
-                'variable not set. Please read the manpage to fix.')
+                u'GOOGLE_DRIVE_ACCOUNT_KEY or GOOGLE_DRIVE_SETTINGS environment '
+                u'variable not set. Please read the manpage to fix.')
         self.drive = GoogleDrive(gauth)
 
         # Dirty way to find root folder id
-        file_list = self.drive.ListFile({'q': "'Root' in parents and trashed=false"}).GetList()
+        file_list = self.drive.ListFile({u'q': u"'Root' in parents and trashed=false"}).GetList()
         if file_list:
-            parent_folder_id = file_list[0]['parents'][0]['id']
+            parent_folder_id = file_list[0][u'parents'][0][u'id']
         else:
-            file_in_root = self.drive.CreateFile({'title': 'i_am_in_root'})
+            file_in_root = self.drive.CreateFile({u'title': u'i_am_in_root'})
             file_in_root.Upload()
-            parent_folder_id = file_in_root['parents'][0]['id']
+            parent_folder_id = file_in_root[u'parents'][0][u'id']
 
         # Fetch destination folder entry and create hierarchy if required.
-        folder_names = string.split(parsed_url.path, '/')
+        folder_names = string.split(parsed_url.path, u'/')
         for folder_name in folder_names:
             if not folder_name:
                 continue
-            file_list = self.drive.ListFile({'q': "'" + parent_folder_id +
-                                                  "' in parents and trashed=false"}).GetList()
-            folder = next((item for item in file_list if item['title'] == folder_name and
-                           item['mimeType'] == 'application/vnd.google-apps.folder'), None)
+            file_list = self.drive.ListFile({u'q': u"'" + parent_folder_id +
+                                             u"' in parents and trashed=false"}).GetList()
+            folder = next((item for item in file_list if item[u'title'] == folder_name and
+                           item[u'mimeType'] == u'application/vnd.google-apps.folder'), None)
             if folder is None:
-                folder = self.drive.CreateFile({'title': folder_name,
-                                                'mimeType': "application/vnd.google-apps.folder",
-                                                'parents': [{'id': parent_folder_id}]})
+                folder = self.drive.CreateFile({u'title': folder_name,
+                                                u'mimeType': u"application/vnd.google-apps.folder",
+                                                u'parents': [{u'id': parent_folder_id}]})
                 folder.Upload()
-            parent_folder_id = folder['id']
+            parent_folder_id = folder[u'id']
         self.folder = parent_folder_id
         self.id_cache = {}
 
     def file_by_name(self, filename):
-        from pydrive.files import ApiRequestError
         if filename in self.id_cache:
             # It might since have been locally moved, renamed or deleted, so we
             # need to validate the entry.
             file_id = self.id_cache[filename]
-            drive_file = self.drive.CreateFile({'id': file_id})
+            drive_file = self.drive.CreateFile({u'id': file_id})
             try:
-                if drive_file['title'] == filename and not drive_file['labels']['trashed']:
-                    for parent in drive_file['parents']:
-                        if parent['id'] == self.folder:
-                            log.Info("PyDrive backend: found file '%s' with id %s in ID cache" %
+                if drive_file[u'title'] == filename and not drive_file[u'labels'][u'trashed']:
+                    for parent in drive_file[u'parents']:
+                        if parent[u'id'] == self.folder:
+                            log.Info(u"PyDrive backend: found file '%s' with id %s in ID cache" %
                                      (filename, file_id))
                             return drive_file
             except ApiRequestError as error:
@@ -130,48 +131,48 @@
                 if error.args[0].resp.status != 404:
                     raise
             # If we get here, the cache entry is invalid
-            log.Info("PyDrive backend: invalidating '%s' (previously ID %s) from ID cache" %
+            log.Info(u"PyDrive backend: invalidating '%s' (previously ID %s) from ID cache" %
                      (filename, file_id))
             del self.id_cache[filename]
 
         # Not found in the cache, so use directory listing. This is less
         # reliable because there is no strong consistency.
-        q = "title='%s' and '%s' in parents and trashed=false" % (filename, self.folder)
-        fields = 'items(title,id,fileSize,downloadUrl,exportLinks),nextPageToken'
-        flist = self.drive.ListFile({'q': q, 'fields': fields}).GetList()
+        q = u"title='%s' and '%s' in parents and trashed=false" % (filename, self.folder)
+        fields = u'items(title,id,fileSize,downloadUrl,exportLinks),nextPageToken'
+        flist = self.drive.ListFile({u'q': q, u'fields': fields}).GetList()
         if len(flist) > 1:
-            log.FatalError(_("PyDrive backend: multiple files called '%s'.") % (filename,))
+            log.FatalError(_(u"PyDrive backend: multiple files called '%s'.") % (filename,))
         elif flist:
-            file_id = flist[0]['id']
-            self.id_cache[filename] = flist[0]['id']
-            log.Info("PyDrive backend: found file '%s' with id %s on server, "
-                     "adding to cache" % (filename, file_id))
+            file_id = flist[0][u'id']
+            self.id_cache[filename] = flist[0][u'id']
+            log.Info(u"PyDrive backend: found file '%s' with id %s on server, "
+                     u"adding to cache" % (filename, file_id))
             return flist[0]
-        log.Info("PyDrive backend: file '%s' not found in cache or on server" %
+        log.Info(u"PyDrive backend: file '%s' not found in cache or on server" %
                  (filename,))
         return None
 
     def id_by_name(self, filename):
         drive_file = self.file_by_name(filename)
         if drive_file is None:
-            return ''
+            return u''
         else:
-            return drive_file['id']
+            return drive_file[u'id']
 
     def _put(self, source_path, remote_filename):
         drive_file = self.file_by_name(remote_filename)
         if drive_file is None:
             # No existing file, make a new one
-            drive_file = self.drive.CreateFile({'title': remote_filename,
-                                                'parents': [{"kind": "drive#fileLink",
-                                                             "id": self.folder}]})
-            log.Info("PyDrive backend: creating new file '%s'" % (remote_filename,))
+            drive_file = self.drive.CreateFile({u'title': remote_filename,
+                                                u'parents': [{u"kind": u"drive#fileLink",
+                                                             u"id": self.folder}]})
+            log.Info(u"PyDrive backend: creating new file '%s'" % (remote_filename,))
         else:
-            log.Info("PyDrive backend: replacing existing file '%s' with id '%s'" % (
-                remote_filename, drive_file['id']))
+            log.Info(u"PyDrive backend: replacing existing file '%s' with id '%s'" % (
+                remote_filename, drive_file[u'id']))
         drive_file.SetContentFile(source_path.name)
         drive_file.Upload()
-        self.id_cache[remote_filename] = drive_file['id']
+        self.id_cache[remote_filename] = drive_file[u'id']
 
     def _get(self, remote_filename, local_path):
         drive_file = self.file_by_name(remote_filename)
@@ -179,45 +180,53 @@
 
     def _list(self):
         drive_files = self.drive.ListFile({
-            'q': "'" + self.folder + "' in parents and trashed=false",
-            'fields': 'items(title,id),nextPageToken'}).GetList()
-        filenames = set(item['title'] for item in drive_files)
+            u'q': u"'" + self.folder + u"' in parents and trashed=false",
+            u'fields': u'items(title,id),nextPageToken'}).GetList()
+        filenames = set(item[u'title'] for item in drive_files)
         # Check the cache as well. A file might have just been uploaded but
         # not yet appear in the listing.
         # Note: do not use iterkeys() here, because file_by_name will modify
         # the cache if it finds invalid entries.
-        for filename in self.id_cache.keys():
+        for filename in list(self.id_cache.keys()):
             if (filename not in filenames) and (self.file_by_name(filename) is not None):
                 filenames.add(filename)
         return list(filenames)
 
     def _delete(self, filename):
         file_id = self.id_by_name(filename)
-        if file_id != '':
+        if file_id != u'':
             self.drive.auth.service.files().delete(fileId=file_id).execute()
         else:
-            log.Warn("File '%s' does not exist while trying to delete it" % (filename,))
+            log.Warn(u"File '%s' does not exist while trying to delete it" % (filename,))
 
     def _query(self, filename):
         drive_file = self.file_by_name(filename)
         if drive_file is None:
             size = -1
         else:
-            size = int(drive_file['fileSize'])
-        return {'size': size}
+            size = int(drive_file[u'fileSize'])
+        return {u'size': size}
 
     def _error_code(self, operation, error):
-        from pydrive.files import ApiRequestError, FileNotUploadedError
         if isinstance(error, FileNotUploadedError):
             return log.ErrorCode.backend_not_found
         elif isinstance(error, ApiRequestError):
             return log.ErrorCode.backend_permission_denied
         return log.ErrorCode.backend_error
 
-duplicity.backend.register_backend('pydrive', PyDriveBackend)
-""" pydrive is an alternate way to access gdocs """
-duplicity.backend.register_backend('pydrive+gdocs', PyDriveBackend)
-""" register pydrive as the default way to access gdocs """
-duplicity.backend.register_backend('gdocs', PyDriveBackend)
-
-duplicity.backend.uses_netloc.extend(['pydrive', 'pydrive+gdocs', 'gdocs'])
+
+duplicity.backend.register_backend(u'pydrive', PyDriveBackend)
+u""" pydrive is an alternate way to access gdocs """
+duplicity.backend.register_backend(u'pydrive+gdocs', PyDriveBackend)
+u""" register pydrive as the default way to access gdocs """
+duplicity.backend.register_backend(u'gdocs', PyDriveBackend)
+
+duplicity.backend.uses_netloc.extend([u'pydrive', u'pydrive+gdocs', u'gdocs'])
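
One detail of the pydrive changes worth calling out: _list() now iterates over list(self.id_cache.keys()) rather than the dictionary's keys directly. Under Python 3, dict.keys() is a live view, and file_by_name() may delete stale entries from the cache while the loop is running. A tiny illustration (names are made up):

    cache = {'a.tar.gpg': 'id1', 'b.tar.gpg': 'id2'}

    for name in list(cache.keys()):   # snapshot of the keys, safe to mutate below
        if name.startswith('b'):
            del cache[name]           # iterating cache.keys() directly would raise RuntimeError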

=== modified file 'duplicity/backends/pyrax_identity/hubic.py'
--- duplicity/backends/pyrax_identity/hubic.py	2016-06-29 22:40:59 +0000
+++ duplicity/backends/pyrax_identity/hubic.py	2019-02-22 19:07:43 +0000
@@ -3,21 +3,23 @@
 # Copyright (c) 2014 Gu1
 # Licensed under the MIT license
 
+from __future__ import print_function
+from future import standard_library
+standard_library.install_aliases()
+from builtins import str
+import configparser
 import os
-import pyrax
-import pyrax.exceptions as exc
-import requests
 import re
-import urlparse
-import ConfigParser
 import time
-from pyrax.base_identity import BaseIdentity, Service
+import urllib.parse  # pylint: disable=import-error
+
 from requests.compat import quote, quote_plus
-
-
-OAUTH_ENDPOINT = "https://api.hubic.com/oauth/"
-API_ENDPOINT = "https://api.hubic.com/1.0/"
-TOKENS_FILE = os.path.expanduser("~/.hubic_tokens")
+import requests
+
+
+OAUTH_ENDPOINT = u"https://api.hubic.com/oauth/"
+API_ENDPOINT = u"https://api.hubic.com/1.0/"
+TOKENS_FILE = os.path.expanduser(u"~/.hubic_tokens")
 
 
 class BearerTokenAuth(requests.auth.AuthBase):
@@ -25,19 +27,28 @@
         self.token = token
 
     def __call__(self, req):
-        req.headers['Authorization'] = 'Bearer ' + self.token
+        req.headers[u'Authorization'] = u'Bearer ' + self.token
         return req
 
 
 class HubicIdentity(BaseIdentity):
+    def __init__(self):
+        try:
+            from pyrax.base_identity import BaseIdentity, Service
+            import pyrax
+            import pyrax.exceptions as exc
+        except ImportError as e:
+            raise BackendException(u"""\
+Hubic backend requires the pyrax library available from Rackspace.
+Exception: %s""" % str(e))
 
     def _get_auth_endpoint(self):
-        return ""
+        return u""
 
     def set_credentials(self, email, password, client_id,
                         client_secret, redirect_uri,
                         authenticate=False):
-        """Sets the username and password directly."""
+        u"""Sets the username and password directly."""
         self._email = email
         self._password = password
         self._client_id = client_id
@@ -48,90 +59,90 @@
             self.authenticate()
 
     def _read_credential_file(self, cfg):
-        """
+        u"""
         Parses the credential file with Rackspace-specific labels.
         """
-        self._email = cfg.get("hubic", "email")
-        self._password = cfg.get("hubic", "password")
-        self._client_id = cfg.get("hubic", "client_id")
+        self._email = cfg.get(u"hubic", u"email")
+        self._password = cfg.get(u"hubic", u"password")
+        self._client_id = cfg.get(u"hubic", u"client_id")
         self.tenant_id = self._client_id
-        self._client_secret = cfg.get("hubic", "client_secret")
-        self._redirect_uri = cfg.get("hubic", "redirect_uri")
+        self._client_secret = cfg.get(u"hubic", u"client_secret")
+        self._redirect_uri = cfg.get(u"hubic", u"redirect_uri")
 
     def _parse_error(self, resp):
-        if 'location' not in resp.headers:
+        if u'location' not in resp.headers:
             return None
-        query = urlparse.urlsplit(resp.headers['location']).query
-        qs = dict(urlparse.parse_qsl(query))
-        return {'error': qs['error'], 'error_description': qs['error_description']}
+        query = urllib.parse.urlsplit(resp.headers[u'location']).query
+        qs = dict(urllib.parse.parse_qsl(query))
+        return {u'error': qs[u'error'], u'error_description': qs[u'error_description']}
 
     def _get_access_token(self, code):
         r = requests.post(
-            OAUTH_ENDPOINT + 'token/',
+            OAUTH_ENDPOINT + u'token/',
             data={
-                'code': code,
-                'redirect_uri': self._redirect_uri,
-                'grant_type': 'authorization_code',
+                u'code': code,
+                u'redirect_uri': self._redirect_uri,
+                u'grant_type': u'authorization_code',
             },
             auth=(self._client_id, self._client_secret)
         )
         if r.status_code != 200:
             try:
                 err = r.json()
-                err['code'] = r.status_code
+                err[u'code'] = r.status_code
             except:
                 err = {}
 
-            raise exc.AuthenticationFailed("Unable to get oauth access token, "
-                                           "wrong client_id or client_secret ? (%s)" %
+            raise exc.AuthenticationFailed(u"Unable to get oauth access token, "
+                                           u"wrong client_id or client_secret ? (%s)" %
                                            str(err))
 
         oauth_token = r.json()
 
-        config = ConfigParser.ConfigParser()
+        config = configparser.ConfigParser()
         config.read(TOKENS_FILE)
 
-        if not config.has_section("hubic"):
-            config.add_section("hubic")
+        if not config.has_section(u"hubic"):
+            config.add_section(u"hubic")
 
-        if oauth_token['access_token'] is not None:
-            config.set("hubic", "access_token", oauth_token['access_token'])
-            with open(TOKENS_FILE, 'wb') as configfile:
+        if oauth_token[u'access_token'] is not None:
+            config.set(u"hubic", u"access_token", oauth_token[u'access_token'])
+            with open(TOKENS_FILE, u'wb') as configfile:
                 config.write(configfile)
         else:
             raise exc.AuthenticationFailed(
-                "Unable to get oauth access token, wrong client_id or client_secret ? (%s)" %
+                u"Unable to get oauth access token, wrong client_id or client_secret ? (%s)" %
                 str(err))
 
-        if oauth_token['refresh_token'] is not None:
-            config.set("hubic", "refresh_token", oauth_token['refresh_token'])
-            with open(TOKENS_FILE, 'wb') as configfile:
+        if oauth_token[u'refresh_token'] is not None:
+            config.set(u"hubic", u"refresh_token", oauth_token[u'refresh_token'])
+            with open(TOKENS_FILE, u'wb') as configfile:
                 config.write(configfile)
         else:
-            raise exc.AuthenticationFailed("Unable to get the refresh token.")
+            raise exc.AuthenticationFailed(u"Unable to get the refresh token.")
 
         # removing username and password from .hubic_tokens
-        if config.has_option("hubic", "email"):
-            config.remove_option("hubic", "email")
-            with open(TOKENS_FILE, 'wb') as configfile:
-                config.write(configfile)
-            print "username has been removed from the .hubic_tokens file sent to the CE."
-        if config.has_option("hubic", "password"):
-            config.remove_option("hubic", "password")
-            with open(TOKENS_FILE, 'wb') as configfile:
-                config.write(configfile)
-            print "password has been removed from the .hubic_tokens file sent to the CE."
+        if config.has_option(u"hubic", u"email"):
+            config.remove_option(u"hubic", u"email")
+            with open(TOKENS_FILE, u'wb') as configfile:
+                config.write(configfile)
+            print(u"username has been removed from the .hubic_tokens file sent to the CE.")
+        if config.has_option(u"hubic", u"password"):
+            config.remove_option(u"hubic", u"password")
+            with open(TOKENS_FILE, u'wb') as configfile:
+                config.write(configfile)
+            print(u"password has been removed from the .hubic_tokens file sent to the CE.")
 
         return oauth_token
 
     def _refresh_access_token(self):
 
-        config = ConfigParser.ConfigParser()
+        config = configparser.ConfigParser()
         config.read(TOKENS_FILE)
-        refresh_token = config.get("hubic", "refresh_token")
+        refresh_token = config.get(u"hubic", u"refresh_token")
 
         if refresh_token is None:
-            raise exc.AuthenticationFailed("refresh_token is null. Not acquiered before ?")
+            raise exc.AuthenticationFailed(u"refresh_token is null. Not acquired before?")
 
         success = False
         max_retries = 20
@@ -141,16 +152,16 @@
 
         while retries < max_retries and not success:
             r = requests.post(
-                OAUTH_ENDPOINT + 'token/',
+                OAUTH_ENDPOINT + u'token/',
                 data={
-                    'refresh_token': refresh_token,
-                    'grant_type': 'refresh_token',
+                    u'refresh_token': refresh_token,
+                    u'grant_type': u'refresh_token',
                 },
                 auth=(self._client_id, self._client_secret)
             )
             if r.status_code != 200:
                 if r.status_code == 509:
-                    print "status_code 509: attempt #", retries, " failed"
+                    print(u"status_code 509: attempt #", retries, u" failed")
                     retries += 1
                     time.sleep(sleep_time)
                     sleep_time = sleep_time * 2
@@ -159,38 +170,38 @@
                 else:
                     try:
                         err = r.json()
-                        err['code'] = r.status_code
+                        err[u'code'] = r.status_code
                     except:
                         err = {}
 
                     raise exc.AuthenticationFailed(
-                        "Unable to get oauth access token, wrong client_id or client_secret ? (%s)" %
+                        u"Unable to get oauth access token, wrong client_id or client_secret ? (%s)" %
                         str(err))
             else:
                 success = True
 
         if not success:
             raise exc.AuthenticationFailed(
-                "All the attempts failed to get the refresh token: "
-                "status_code = 509: Bandwidth Limit Exceeded")
+                u"All the attempts failed to get the refresh token: "
+                u"status_code = 509: Bandwidth Limit Exceeded")
 
         oauth_token = r.json()
 
-        if oauth_token['access_token'] is not None:
+        if oauth_token[u'access_token'] is not None:
             return oauth_token
         else:
-            raise exc.AuthenticationFailed("Unable to get oauth access token from json")
+            raise exc.AuthenticationFailed(u"Unable to get oauth access token from json")
 
     def authenticate(self):
-        config = ConfigParser.ConfigParser()
+        config = configparser.ConfigParser()
         config.read(TOKENS_FILE)
 
-        if config.has_option("hubic", "refresh_token"):
+        if config.has_option(u"hubic", u"refresh_token"):
             oauth_token = self._refresh_access_token()
         else:
             r = requests.get(
-                OAUTH_ENDPOINT + 'auth/?client_id={0}&redirect_uri={1}'
-                '&scope=credentials.r,account.r&response_type=code&state={2}'.format(
+                OAUTH_ENDPOINT + u'auth/?client_id={0}&redirect_uri={1}'
+                u'&scope=credentials.r,account.r&response_type=code&state={2}'.format(
                     quote(self._client_id),
                     quote_plus(self._redirect_uri),
                     pyrax.utils.random_ascii()  # csrf ? wut ?..
@@ -198,8 +209,8 @@
                 allow_redirects=False
             )
             if r.status_code != 200:
-                raise exc.AuthenticationFailed("Incorrect/unauthorized "
-                                               "client_id (%s)" % str(self._parse_error(r)))
+                raise exc.AuthenticationFailed(u"Incorrect/unauthorized "
+                                               u"client_id (%s)" % str(self._parse_error(r)))
 
             try:
                 from lxml import html as lxml_html
@@ -207,7 +218,7 @@
                 lxml_html = None
 
             if lxml_html:
-                oauth = lxml_html.document_fromstring(r.content).xpath('//input[@name="oauth"]')
+                oauth = lxml_html.document_fromstring(r.content).xpath(u'//input[@name="oauth"]')
                 oauth = oauth[0].value if oauth else None
             else:
                 oauth = re.search(
@@ -216,52 +227,52 @@
                 oauth = oauth.group(1) if oauth else None
 
             if not oauth:
-                raise exc.AuthenticationFailed("Unable to get oauth_id from authorization page")
+                raise exc.AuthenticationFailed(u"Unable to get oauth_id from authorization page")
 
             if self._email is None or self._password is None:
-                raise exc.AuthenticationFailed("Cannot retrieve email and/or password. "
-                                               "Please run expresslane-hubic-setup.sh")
+                raise exc.AuthenticationFailed(u"Cannot retrieve email and/or password. "
+                                               u"Please run expresslane-hubic-setup.sh")
 
             r = requests.post(
-                OAUTH_ENDPOINT + 'auth/',
+                OAUTH_ENDPOINT + u'auth/',
                 data={
-                    'action': 'accepted',
-                    'oauth': oauth,
-                    'login': self._email,
-                    'user_pwd': self._password,
-                    'account': 'r',
-                    'credentials': 'r',
+                    u'action': u'accepted',
+                    u'oauth': oauth,
+                    u'login': self._email,
+                    u'user_pwd': self._password,
+                    u'account': u'r',
+                    u'credentials': u'r',
 
                 },
                 allow_redirects=False
             )
 
             try:
-                query = urlparse.urlsplit(r.headers['location']).query
-                code = dict(urlparse.parse_qsl(query))['code']
+                query = urllib.parse.urlsplit(r.headers[u'location']).query
+                code = dict(urllib.parse.parse_qsl(query))[u'code']
             except:
-                raise exc.AuthenticationFailed("Unable to authorize client_id, "
-                                               "invalid login/password ?")
+                raise exc.AuthenticationFailed(u"Unable to authorize client_id, "
+                                               u"invalid login/password ?")
 
             oauth_token = self._get_access_token(code)
 
-        if oauth_token['token_type'].lower() != 'bearer':
-            raise exc.AuthenticationFailed("Unsupported access token type")
+        if oauth_token[u'token_type'].lower() != u'bearer':
+            raise exc.AuthenticationFailed(u"Unsupported access token type")
 
         r = requests.get(
-            API_ENDPOINT + 'account/credentials',
-            auth=BearerTokenAuth(oauth_token['access_token']),
+            API_ENDPOINT + u'account/credentials',
+            auth=BearerTokenAuth(oauth_token[u'access_token']),
         )
 
         swift_token = r.json()
         self.authenticated = True
-        self.token = swift_token['token']
-        self.expires = swift_token['expires']
-        self.services['object_store'] = Service(self, {
-            'name': 'HubiC',
-            'type': 'cloudfiles',
-            'endpoints': [
-                {'public_url': swift_token['endpoint']}
+        self.token = swift_token[u'token']
+        self.expires = swift_token[u'expires']
+        self.services[u'object_store'] = Service(self, {
+            u'name': u'HubiC',
+            u'type': u'cloudfiles',
+            u'endpoints': [
+                {u'public_url': swift_token[u'endpoint']}
             ]
         })
         self.username = self.password = None
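
The token refresh in _refresh_access_token() above retries only on HTTP 509 (hubiC's bandwidth-limit response), doubling the sleep between attempts and giving up after max_retries. The shape of that loop, reduced to a standalone sketch (do_refresh and the starting sleep are illustrative, not the backend's actual values):

    import time

    def refresh_with_backoff(do_refresh, max_retries=20, sleep_time=30):
        for attempt in range(max_retries):
            resp = do_refresh()                # stands in for the requests.post() call
            if resp.status_code == 200:
                return resp.json()
            if resp.status_code != 509:        # anything other than 509 is fatal
                raise RuntimeError('token refresh failed: %s' % resp.status_code)
            time.sleep(sleep_time)
            sleep_time *= 2                    # exponential backoff
        raise RuntimeError('bandwidth limit (509) on every attempt')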

=== modified file 'duplicity/backends/rsyncbackend.py'
--- duplicity/backends/rsyncbackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/rsyncbackend.py	2019-02-22 19:07:43 +0000
@@ -19,6 +19,8 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from __future__ import print_function
+from builtins import map
 import os
 import re
 import tempfile
@@ -29,7 +31,7 @@
 
 
 class RsyncBackend(duplicity.backend.Backend):
-    """Connect to remote store using rsync
+    u"""Connect to remote store using rsync
 
     rsync backend contributed by Sebastian Wilhelmi <seppi@xxxxxxxx>
     rsyncd auth, alternate port support
@@ -37,9 +39,9 @@
 
     """
     def __init__(self, parsed_url):
-        """rsyncBackend initializer"""
+        u"""rsyncBackend initializer"""
         duplicity.backend.Backend.__init__(self, parsed_url)
-        """
+        u"""
         rsyncd module url: rsync://[user:password@]host[:port]::[/]modname/path
                       Note: 3.0.7 is picky about syntax use either 'rsync://' or '::'
                       cmd: rsync [--port=port] host::modname/path
@@ -49,83 +51,83 @@
                           cmd: rsync -e 'ssh [-p=port]' [user@]host:[/]path
         """
         host = parsed_url.hostname
-        port = ""
+        port = u""
         # RSYNC_RSH from calling shell might conflict with our settings
-        if 'RSYNC_RSH' in os.environ:
-            del os.environ['RSYNC_RSH']
+        if u'RSYNC_RSH' in os.environ:
+            del os.environ[u'RSYNC_RSH']
         if self.over_rsyncd():
             # its a module path
             (path, port) = self.get_rsync_path()
-            self.url_string = "%s::%s" % (host, path.lstrip('/:'))
+            self.url_string = u"%s::%s" % (host, path.lstrip(u'/:'))
             if port:
-                port = " --port=%s" % port
+                port = u" --port=%s" % port
         else:
-            host_string = host + ":" if host else ""
-            if parsed_url.path.startswith("//"):
+            host_string = host + u":" if host else u""
+            if parsed_url.path.startswith(u"//"):
                 # its an absolute path
-                self.url_string = "%s/%s" % (host_string, parsed_url.path.lstrip('/'))
+                self.url_string = u"%s/%s" % (host_string, parsed_url.path.lstrip(u'/'))
             else:
                 # its a relative path
-                self.url_string = "%s%s" % (host_string, parsed_url.path.lstrip('/'))
+                self.url_string = u"%s%s" % (host_string, parsed_url.path.lstrip(u'/'))
             if parsed_url.port:
-                port = "-p %s" % parsed_url.port
+                port = u"-p %s" % parsed_url.port
         # add trailing slash if missing
-        if self.url_string[-1] != '/':
-            self.url_string += '/'
+        if self.url_string[-1] != u'/':
+            self.url_string += u'/'
         # user?
         if parsed_url.username:
             if self.over_rsyncd():
-                os.environ['USER'] = parsed_url.username
+                os.environ[u'USER'] = parsed_url.username
             else:
-                self.url_string = parsed_url.username + "@" + self.url_string
+                self.url_string = parsed_url.username + u"@" + self.url_string
         # password?, don't ask if none was given
         self.use_getpass = False
         password = self.get_password()
         if password:
-            os.environ['RSYNC_PASSWORD'] = password
+            os.environ[u'RSYNC_PASSWORD'] = password
         if self.over_rsyncd():
             portOption = port
         else:
-            portOption = "-e 'ssh %s -oBatchMode=yes %s'" % (port, globals.ssh_options)
+            portOption = u"-e 'ssh %s -oBatchMode=yes %s'" % (port, globals.ssh_options)
         rsyncOptions = globals.rsync_options
         # build cmd
-        self.cmd = "rsync %s %s" % (portOption, rsyncOptions)
+        self.cmd = u"rsync %s %s" % (portOption, rsyncOptions)
 
     def over_rsyncd(self):
         url = self.parsed_url.url_string
-        if re.search('::[^:]*$', url):
+        if re.search(u'::[^:]*$', url):
             return True
         else:
             return False
 
     def get_rsync_path(self):
         url = self.parsed_url.url_string
-        m = re.search("(:\d+|)?::([^:]*)$", url)
+        m = re.search(r"(:\d+|)?::([^:]*)$", url)
         if m:
-            return m.group(2), m.group(1).lstrip(':')
-        raise InvalidBackendURL("Could not determine rsync path: %s"
-                                "" % self.munge_password(url))
+            return m.group(2), m.group(1).lstrip(u':')
+        raise InvalidBackendURL(u"Could not determine rsync path: %s"
+                                u"" % self.munge_password(url))
 
     def _put(self, source_path, remote_filename):
         remote_path = os.path.join(self.url_string, remote_filename)
-        commandline = "%s %s %s" % (self.cmd, source_path.name, remote_path)
+        commandline = u"%s %s %s" % (self.cmd, source_path.name, remote_path)
         self.subprocess_popen(commandline)
 
     def _get(self, remote_filename, local_path):
         remote_path = os.path.join(self.url_string, remote_filename)
-        commandline = "%s %s %s" % (self.cmd, remote_path, local_path.name)
+        commandline = u"%s %s %s" % (self.cmd, remote_path, local_path.name)
         self.subprocess_popen(commandline)
 
     def _list(self):
         def split(str):
             line = str.split()
-            if len(line) > 4 and line[4] != '.':
+            if len(line) > 4 and line[4] != u'.':
                 return line[4]
             else:
                 return None
-        commandline = "%s %s" % (self.cmd, self.url_string)
+        commandline = u"%s %s" % (self.cmd, self.url_string)
         result, stdout, stderr = self.subprocess_popen(commandline)
-        return [x for x in map(split, stdout.split('\n')) if x]
+        return [x for x in map(split, stdout.split(u'\n')) if x]
 
     def _delete_list(self, filename_list):
         delete_list = filename_list
@@ -142,16 +144,22 @@
         for file in dont_delete_list:
             path = os.path.join(dir, file)
             to_delete.append(path)
-            f = open(path, 'w')
-            print >> exclude, file
+            f = open(path, u'w')
+            print(file, file=exclude)
             f.close()
         exclude.close()
-        commandline = ("%s --recursive --delete --exclude-from=%s %s/ %s" %
+        commandline = (u"%s --recursive --delete --exclude-from=%s %s/ %s" %
                        (self.cmd, exclude_name, dir, self.url_string))
         self.subprocess_popen(commandline)
         for file in to_delete:
             util.ignore_missing(os.unlink, file)
         os.rmdir(dir)
 
-duplicity.backend.register_backend("rsync", RsyncBackend)
-duplicity.backend.uses_netloc.extend(['rsync'])
+
+duplicity.backend.register_backend(u"rsync", RsyncBackend)
+duplicity.backend.uses_netloc.extend([u'rsync'])
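
For reference, these are the URL shapes the rsync backend distinguishes (what over_rsyncd() and get_rsync_path() above test for):

    rsync://[user@]host[:port]::module/path    ->  rsyncd module, talks to the daemon
    rsync://[user@]host//absolute/path         ->  rsync over ssh, absolute path
    rsync://[user@]host/relative/path          ->  rsync over ssh, relative path

A quick, purely illustrative check of the module-path regex against sample URLs:

    from __future__ import print_function
    import re

    for url in ('rsync://u@host:1234::mod/dir', 'rsync://u@host//var/backups'):
        m = re.search(r"(:\d+|)?::([^:]*)$", url)
        if m:
            print(url, '-> rsyncd module', m.group(2), 'port', m.group(1).lstrip(':'))
        else:
            print(url, '-> plain rsync over ssh')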

=== modified file 'duplicity/backends/ssh_paramiko_backend.py'
--- duplicity/backends/ssh_paramiko_backend.py	2018-12-17 17:09:33 +0000
+++ duplicity/backends/ssh_paramiko_backend.py	2019-02-22 19:07:43 +0000
@@ -23,6 +23,11 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from __future__ import division
+from builtins import oct
+from builtins import input
+from builtins import zip
+from past.utils import old_div
 import re
 import string
 import os
@@ -44,7 +49,7 @@
 
 
 class SSHParamikoBackend(duplicity.backend.Backend):
-    """This backend accesses files using the sftp or scp protocols.
+    u"""This backend accesses files using the sftp or scp protocols.
     It does not need any local client programs, but an ssh server and the sftp
     program must be installed on the remote side (or with scp, the programs
     scp, ls, mkdir, rm and a POSIX-compliant shell).
@@ -74,19 +79,22 @@
             # remove first leading '/'
             self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
         else:
-            self.remote_dir = '.'
+            self.remote_dir = u'.'
 
         # lazily import paramiko when we need it
         # debian squeeze's paramiko is a bit old, so we silence randompool
         # depreciation warning note also: passphrased private keys work with
         # squeeze's paramiko only if done with DES, not AES
         import warnings
-        warnings.simplefilter("ignore")
-        import paramiko
+        warnings.simplefilter(u"ignore")
+        try:
+            import paramiko
+        except ImportError:
+            raise
         warnings.resetwarnings()
 
         class AgreedAddPolicy (paramiko.AutoAddPolicy):
-            """
+            u"""
             Policy for showing a yes/no prompt and adding the hostname and new
             host key to the known host file accordingly.
 
@@ -95,28 +103,33 @@
             """
             def missing_host_key(self, client, hostname, key):
                 fp = hexlify(key.get_fingerprint())
-                fingerprint = ':'.join(a + b for a, b in list(zip(fp[::2], fp[1::2])))
-                question = """The authenticity of host '%s' can't be established.
+                fingerprint = u':'.join(a + b for a, b in list(zip(fp[::2], fp[1::2])))
+                question = u"""The authenticity of host '%s' can't be established.
 %s key fingerprint is %s.
 Are you sure you want to continue connecting (yes/no)? """ % (hostname,
                                                               key.get_name().upper(),
                                                               fingerprint)
                 while True:
                     sys.stdout.write(question)
-                    choice = raw_input().lower()
-                    if choice in ['yes', 'y']:
+                    choice = input().lower()
+                    if choice in [u'yes', u'y']:
                         paramiko.AutoAddPolicy.missing_host_key(self, client,
                                                                 hostname, key)
                         return
-                    elif choice in ['no', 'n']:
+                    elif choice in [u'no', u'n']:
                         raise AuthenticityException(hostname)
                     else:
-                        question = "Please type 'yes' or 'no': "
+                        question = u"Please type 'yes' or 'no': "
 
         class AuthenticityException (paramiko.SSHException):
             def __init__(self, hostname):
                 paramiko.SSHException.__init__(self,
-                                               'Host key verification for server %s failed.' %
+                                               u'Host key verification for server %s failed.' %
                                                hostname)
 
         self.client = paramiko.SSHClient()
@@ -124,16 +137,16 @@
 
         # paramiko uses logging with the normal python severity levels,
         # but duplicity uses both custom levels and inverted logic...*sigh*
-        self.client.set_log_channel("sshbackend")
-        ours = paramiko.util.get_logger("sshbackend")
+        self.client.set_log_channel(u"sshbackend")
+        ours = paramiko.util.get_logger(u"sshbackend")
         dest = logging.StreamHandler(sys.stderr)
-        dest.setFormatter(logging.Formatter('ssh: %(message)s'))
+        dest.setFormatter(logging.Formatter(u'ssh: %(message)s'))
         ours.addHandler(dest)
 
         # ..and the duplicity levels are neither linear,
         # nor are the names compatible with python logging,
         # eg. 'NOTICE'...WAAAAAH!
-        plevel = logging.getLogger("duplicity").getEffectiveLevel()
+        plevel = logging.getLogger(u"duplicity").getEffectiveLevel()
         if plevel <= 1:
             wanted = logging.DEBUG
         elif plevel <= 5:
@@ -149,60 +162,60 @@
         # load known_hosts files
         # paramiko is very picky wrt format and bails out on any problem...
         try:
-            if os.path.isfile("/etc/ssh/ssh_known_hosts"):
-                self.client.load_system_host_keys("/etc/ssh/ssh_known_hosts")
+            if os.path.isfile(u"/etc/ssh/ssh_known_hosts"):
+                self.client.load_system_host_keys(u"/etc/ssh/ssh_known_hosts")
         except Exception as e:
-            raise BackendException("could not load /etc/ssh/ssh_known_hosts, "
-                                   "maybe corrupt?")
+            raise BackendException(u"could not load /etc/ssh/ssh_known_hosts, "
+                                   u"maybe corrupt?")
         try:
             # use load_host_keys() to signal it's writable to paramiko
             # load if file exists or add filename to create it if needed
-            file = os.path.expanduser('~/.ssh/known_hosts')
+            file = os.path.expanduser(u'~/.ssh/known_hosts')
             if os.path.isfile(file):
                 self.client.load_host_keys(file)
             else:
                 self.client._host_keys_filename = file
         except Exception as e:
-            raise BackendException("could not load ~/.ssh/known_hosts, "
-                                   "maybe corrupt?")
+            raise BackendException(u"could not load ~/.ssh/known_hosts, "
+                                   u"maybe corrupt?")
 
-        """ the next block reorganizes all host parameters into a
+        u""" the next block reorganizes all host parameters into a
         dictionary like SSHConfig does. this dictionary 'self.config'
         becomes the authorative source for these values from here on.
         rationale is that it is easiest to deal wrt overwriting multiple
         values from ssh_config file. (ede 03/2012)
         """
-        self.config = {'hostname': parsed_url.hostname}
+        self.config = {u'hostname': parsed_url.hostname}
         # get system host config entries
-        self.config.update(self.gethostconfig('/etc/ssh/ssh_config',
+        self.config.update(self.gethostconfig(u'/etc/ssh/ssh_config',
                                               parsed_url.hostname))
         # update with user's config file
-        self.config.update(self.gethostconfig('~/.ssh/config',
+        self.config.update(self.gethostconfig(u'~/.ssh/config',
                                               parsed_url.hostname))
         # update with url values
         # username from url
         if parsed_url.username:
-            self.config.update({'user': parsed_url.username})
+            self.config.update({u'user': parsed_url.username})
         # username from input
-        if 'user' not in self.config:
-            self.config.update({'user': getpass.getuser()})
+        if u'user' not in self.config:
+            self.config.update({u'user': getpass.getuser()})
         # port from url
         if parsed_url.port:
-            self.config.update({'port': parsed_url.port})
+            self.config.update({u'port': parsed_url.port})
         # ensure there is deafult 22 or an int value
-        if 'port' in self.config:
-            self.config.update({'port': int(self.config['port'])})
+        if u'port' in self.config:
+            self.config.update({u'port': int(self.config[u'port'])})
         else:
-            self.config.update({'port': 22})
+            self.config.update({u'port': 22})
         # parse ssh options for alternative ssh private key, identity file
-        m = re.search("^(?:.+\s+)?(?:-oIdentityFile=|-i\s+)(([\"'])([^\\2]+)\\2|[\S]+).*",
+        m = re.search(r"^(?:.+\s+)?(?:-oIdentityFile=|-i\s+)(([\"'])([^\\2]+)\\2|[\S]+).*",
                       globals.ssh_options)
         if (m is not None):
             keyfilename = m.group(3) if m.group(3) else m.group(1)
-            self.config['identityfile'] = keyfilename
+            self.config[u'identityfile'] = keyfilename
         # ensure ~ is expanded and identity exists in dictionary
-        if 'identityfile' in self.config:
-            if not isinstance(self.config['identityfile'], list):
+        if u'identityfile' in self.config:
+            if not isinstance(self.config[u'identityfile'], list):
                 # Paramiko 1.9.0 and earlier do not support multiple
                 # identity files when parsing config files and always
                 # return a string; later versions always return a list,
@@ -211,61 +224,61 @@
                 # All recent versions seem to support *using* multiple
                 # identity files, though, so to make things easier, we
                 # simply always use a list.
-                self.config['identityfile'] = [self.config['identityfile']]
+                self.config[u'identityfile'] = [self.config[u'identityfile']]
 
-            self.config['identityfile'] = [
-                os.path.expanduser(i) for i in self.config['identityfile']]
+            self.config[u'identityfile'] = [
+                os.path.expanduser(i) for i in self.config[u'identityfile']]
         else:
-            self.config['identityfile'] = None
+            self.config[u'identityfile'] = None
 
         # get password, enable prompt if askpass is set
         self.use_getpass = globals.ssh_askpass
         # set url values for beautiful login prompt
-        parsed_url.username = self.config['user']
-        parsed_url.hostname = self.config['hostname']
+        parsed_url.username = self.config[u'user']
+        parsed_url.hostname = self.config[u'hostname']
         password = self.get_password()
 
         try:
-            self.client.connect(hostname=self.config['hostname'],
-                                port=self.config['port'],
-                                username=self.config['user'],
+            self.client.connect(hostname=self.config[u'hostname'],
+                                port=self.config[u'port'],
+                                username=self.config[u'user'],
                                 password=password,
                                 allow_agent=True,
                                 look_for_keys=True,
-                                key_filename=self.config['identityfile'])
+                                key_filename=self.config[u'identityfile'])
         except Exception as e:
-            raise BackendException("ssh connection to %s@%s:%d failed: %s" % (
-                self.config['user'],
-                self.config['hostname'],
-                self.config['port'], e))
-        self.client.get_transport().set_keepalive((int)(globals.timeout / 2))
+            raise BackendException(u"ssh connection to %s@%s:%d failed: %s" % (
+                self.config[u'user'],
+                self.config[u'hostname'],
+                self.config[u'port'], e))
+        self.client.get_transport().set_keepalive((int)(old_div(globals.timeout, 2)))
 
         self.scheme = duplicity.backend.strip_prefix(parsed_url.scheme,
-                                                     'paramiko')
-        self.use_scp = (self.scheme == 'scp')
+                                                     u'paramiko')
+        self.use_scp = (self.scheme == u'scp')
 
         # scp or sftp?
         if (self.use_scp):
             # sanity-check the directory name
-            if (re.search("'", self.remote_dir)):
-                raise BackendException("cannot handle directory names with single quotes with scp")
+            if (re.search(u"'", self.remote_dir)):
+                raise BackendException(u"cannot handle directory names with single quotes with scp")
 
             # make directory if needed
-            self.runremote("mkdir -p '%s'" % (self.remote_dir,), False, "scp mkdir ")
+            self.runremote(u"mkdir -p '%s'" % (self.remote_dir,), False, u"scp mkdir ")
         else:
             try:
                 self.sftp = self.client.open_sftp()
             except Exception as e:
-                raise BackendException("sftp negotiation failed: %s" % e)
+                raise BackendException(u"sftp negotiation failed: %s" % e)
 
             # move to the appropriate directory, possibly after creating it and its parents
             dirs = self.remote_dir.split(os.sep)
             if len(dirs) > 0:
                 if not dirs[0]:
                     dirs = dirs[1:]
-                    dirs[0] = '/' + dirs[0]
+                    dirs[0] = u'/' + dirs[0]
                 for d in dirs:
-                    if (d == ''):
+                    if (d == u''):
                         continue
                     try:
                         attrs = self.sftp.stat(d)
@@ -274,44 +287,44 @@
                             try:
                                 self.sftp.mkdir(d)
                             except Exception as e:
-                                raise BackendException("sftp mkdir %s failed: %s" %
-                                                       (self.sftp.normalize(".") + "/" + d, e))
+                                raise BackendException(u"sftp mkdir %s failed: %s" %
+                                                       (self.sftp.normalize(u".") + u"/" + d, e))
                         else:
-                            raise BackendException("sftp stat %s failed: %s" %
-                                                   (self.sftp.normalize(".") + "/" + d, e))
+                            raise BackendException(u"sftp stat %s failed: %s" %
+                                                   (self.sftp.normalize(u".") + u"/" + d, e))
                     try:
                         self.sftp.chdir(d)
                     except Exception as e:
-                        raise BackendException("sftp chdir to %s failed: %s" %
-                                               (self.sftp.normalize(".") + "/" + d, e))
+                        raise BackendException(u"sftp chdir to %s failed: %s" %
+                                               (self.sftp.normalize(u".") + u"/" + d, e))
 
     def _put(self, source_path, remote_filename):
         if self.use_scp:
-            f = file(source_path.name, 'rb')
+            f = file(source_path.name, u'rb')
             try:
                 chan = self.client.get_transport().open_session()
                 chan.settimeout(globals.timeout)
                 # scp in sink mode uses the arg as base directory
-                chan.exec_command("scp -t '%s'" % self.remote_dir)
+                chan.exec_command(u"scp -t '%s'" % self.remote_dir)
             except Exception as e:
-                raise BackendException("scp execution failed: %s" % e)
+                raise BackendException(u"scp execution failed: %s" % e)
             # scp protocol: one 0x0 after startup, one after the Create meta,
             # one after saving if there's a problem: 0x1 or 0x02 and some error
             # text
             response = chan.recv(1)
-            if (response != "\0"):
-                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            if (response != u"\0"):
+                raise BackendException(u"scp remote error: %s" % chan.recv(-1))
             fstat = os.stat(source_path.name)
-            chan.send('C%s %d %s\n' % (oct(fstat.st_mode)[-4:], fstat.st_size,
-                                       remote_filename))
+            chan.send(u'C%s %d %s\n' % (oct(fstat.st_mode)[-4:], fstat.st_size,
+                                        remote_filename))
             response = chan.recv(1)
-            if (response != "\0"):
-                raise BackendException("scp remote error: %s" % chan.recv(-1))
-            chan.sendall(f.read() + '\0')
+            if (response != u"\0"):
+                raise BackendException(u"scp remote error: %s" % chan.recv(-1))
+            chan.sendall(f.read() + u'\0')
             f.close()
             response = chan.recv(1)
-            if (response != "\0"):
-                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            if (response != u"\0"):
+                raise BackendException(u"scp remote error: %s" % chan.recv(-1))
             chan.close()
         else:
             self.sftp.put(source_path.name, remote_filename)
@@ -321,23 +334,23 @@
             try:
                 chan = self.client.get_transport().open_session()
                 chan.settimeout(globals.timeout)
-                chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,
-                                                      remote_filename))
+                chan.exec_command(u"scp -f '%s/%s'" % (self.remote_dir,
+                                                       remote_filename))
             except Exception as e:
-                raise BackendException("scp execution failed: %s" % e)
+                raise BackendException(u"scp execution failed: %s" % e)
 
-            chan.send('\0')  # overall ready indicator
+            chan.send(u'\0')  # overall ready indicator
             msg = chan.recv(-1)
             m = re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$", msg)
             if (m is None or m.group(3) != remote_filename):
-                raise BackendException("scp get %s failed: incorrect response '%s'" %
+                raise BackendException(u"scp get %s failed: incorrect response '%s'" %
                                        (remote_filename, msg))
             chan.recv(1)  # dispose of the newline trailing the C message
 
             size = int(m.group(2))
             togo = size
-            f = file(local_path.name, 'wb')
-            chan.send('\0')  # ready for data
+            f = file(local_path.name, u'wb')
+            chan.send(u'\0')  # ready for data
             try:
                 while togo > 0:
                     if togo > read_blocksize:
@@ -348,14 +361,14 @@
                     f.write(buff)
                     togo -= len(buff)
             except Exception as e:
-                raise BackendException("scp get %s failed: %s" % (remote_filename, e))
+                raise BackendException(u"scp get %s failed: %s" % (remote_filename, e))
 
             msg = chan.recv(1)  # check the final status
-            if msg != '\0':
-                raise BackendException("scp get %s failed: %s" % (remote_filename,
-                                                                  chan.recv(-1)))
+            if msg != u'\0':
+                raise BackendException(u"scp get %s failed: %s" % (remote_filename,
+                                                                   chan.recv(-1)))
             f.close()
-            chan.send('\0')  # send final done indicator
+            chan.send(u'\0')  # send final done indicator
             chan.close()
         else:
             self.sftp.get(remote_filename, local_path.name)
@@ -364,8 +377,8 @@
         # In scp mode unavoidable quoting issues will make this fail if the
         # directory name contains single quotes.
         if self.use_scp:
-            output = self.runremote("ls -1 '%s'" % self.remote_dir, False,
-                                    "scp dir listing ")
+            output = self.runremote(u"ls -1 '%s'" % self.remote_dir, False,
+                                    u"scp dir listing ")
             return output.splitlines()
         else:
             return self.sftp.listdir()
@@ -374,13 +387,13 @@
         # In scp mode unavoidable quoting issues will cause failures if
         # filenames containing single quotes are encountered.
         if self.use_scp:
-            self.runremote("rm '%s/%s'" % (self.remote_dir, filename), False,
-                           "scp rm ")
+            self.runremote(u"rm '%s/%s'" % (self.remote_dir, filename), False,
+                           u"scp rm ")
         else:
             self.sftp.remove(filename)
 
-    def runremote(self, cmd, ignoreexitcode=False, errorprefix=""):
-        """small convenience function that opens a shell channel, runs remote
+    def runremote(self, cmd, ignoreexitcode=False, errorprefix=u""):
+        u"""small convenience function that opens a shell channel, runs remote
         command and returns stdout of command. throws an exception if exit
         code!=0 and not ignored"""
         try:
@@ -389,12 +402,10 @@
             return output
         except Exception as e:
             if not ignoreexitcode:
-                raise BackendException("%sfailed: %s \n %s" % (
+                raise BackendException(u"%sfailed: %s \n %s" % (
                     errorprefix, ch_err.read(-1), e))
 
     def gethostconfig(self, file, host):
-        import paramiko
-
         file = os.path.expanduser(file)
         if not os.path.isfile(file):
             return {}
@@ -403,13 +414,13 @@
         try:
             sshconfig.parse(open(file))
         except Exception as e:
-            raise BackendException("could not load '%s', maybe corrupt?" % (file))
+            raise BackendException(u"could not load '%s', maybe corrupt?" % (file))
 
         return sshconfig.lookup(host)
 
 
-duplicity.backend.register_backend("sftp", SSHParamikoBackend)
-duplicity.backend.register_backend("scp", SSHParamikoBackend)
-duplicity.backend.register_backend("paramiko+sftp", SSHParamikoBackend)
-duplicity.backend.register_backend("paramiko+scp", SSHParamikoBackend)
-duplicity.backend.uses_netloc.extend(['sftp', 'scp', 'paramiko+sftp', 'paramiko+scp'])
+duplicity.backend.register_backend(u"sftp", SSHParamikoBackend)
+duplicity.backend.register_backend(u"scp", SSHParamikoBackend)
+duplicity.backend.register_backend(u"paramiko+sftp", SSHParamikoBackend)
+duplicity.backend.register_backend(u"paramiko+scp", SSHParamikoBackend)
+duplicity.backend.uses_netloc.extend([u'sftp', u'scp', u'paramiko+sftp', u'paramiko+scp'])

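For reference, a minimal standalone sketch of the scp "sink" handshake that the paramiko backend above drives over an exec channel: each step is acknowledged with a single NUL byte, and each file is announced with a "C<mode> <size> <name>" header. The host, target directory and file name below are placeholders, not values from the patch.

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(u"backup.example.com", username=u"backup")

    chan = client.get_transport().open_session()
    chan.exec_command(u"scp -t 'backups'")      # remote scp acts as the sink

    def expect_ok(chan):
        # the remote scp answers every step with a NUL byte on success
        if chan.recv(1) != b"\0":
            raise IOError(u"scp remote error: %s" % chan.recv(1024))

    expect_ok(chan)
    with open(u"duplicity-full.vol1.difftar.gpg", u"rb") as f:
        data = f.read()
    header = u"C0644 %d %s\n" % (len(data), u"duplicity-full.vol1.difftar.gpg")
    chan.sendall(header.encode(u"ascii"))
    expect_ok(chan)
    chan.sendall(data + b"\0")                  # file body plus trailing NUL
    expect_ok(chan)
    chan.close()
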
=== modified file 'duplicity/backends/ssh_pexpect_backend.py'
--- duplicity/backends/ssh_pexpect_backend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/ssh_pexpect_backend.py	2019-02-22 19:07:43 +0000
@@ -24,7 +24,12 @@
 # have the same syntax.  Also these strings will be executed by the
 # shell, so shouldn't have strange characters in them.
 
-from future_builtins import map
+from __future__ import division
+from future import standard_library
+standard_library.install_aliases()
+from builtins import map
+from past.utils import old_div
+from future.builtins import map
 
 import re
 import string
@@ -37,28 +42,33 @@
 
 
 class SSHPExpectBackend(duplicity.backend.Backend):
-    """This backend copies files using scp.  List not supported.  Filenames
+    u"""This backend copies files using scp.  List not supported.  Filenames
        should not need any quoting or this will break."""
     def __init__(self, parsed_url):
-        """scpBackend initializer"""
+        u"""scpBackend initializer"""
         duplicity.backend.Backend.__init__(self, parsed_url)
 
+        # imported here so that a missing pexpect only breaks this backend;
+        # "global" makes the module name visible to the run_* methods below.
+        global pexpect
+        import pexpect
+
         self.retry_delay = 10
 
-        self.scp_command = "scp"
+        self.scp_command = u"scp"
         if globals.scp_command:
             self.scp_command = globals.scp_command
 
-        self.sftp_command = "sftp"
+        self.sftp_command = u"sftp"
         if globals.sftp_command:
             self.sftp_command = globals.sftp_command
 
-        self.scheme = duplicity.backend.strip_prefix(parsed_url.scheme, 'pexpect')
-        self.use_scp = (self.scheme == 'scp')
+        self.scheme = duplicity.backend.strip_prefix(parsed_url.scheme, u'pexpect')
+        self.use_scp = (self.scheme == u'scp')
 
         # host string of form [user@]hostname
         if parsed_url.username:
-            self.host_string = parsed_url.username + "@" + parsed_url.hostname
+            self.host_string = parsed_url.username + u"@" + parsed_url.hostname
         else:
             self.host_string = parsed_url.hostname
         # make sure remote_dir is always valid
@@ -66,156 +76,154 @@
             # remove leading '/'
             self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
         else:
-            self.remote_dir = '.'
-        self.remote_prefix = self.remote_dir + '/'
+            self.remote_dir = u'.'
+        self.remote_prefix = self.remote_dir + u'/'
         # maybe use different ssh port
         if parsed_url.port:
-            globals.ssh_options = globals.ssh_options + " -oPort=%s" % parsed_url.port
+            globals.ssh_options = globals.ssh_options + u" -oPort=%s" % parsed_url.port
         # set some defaults if user has not specified already.
-        if "ServerAliveInterval" not in globals.ssh_options:
-            globals.ssh_options += " -oServerAliveInterval=%d" % ((int)(globals.timeout / 2))
-        if "ServerAliveCountMax" not in globals.ssh_options:
-            globals.ssh_options += " -oServerAliveCountMax=2"
+        if u"ServerAliveInterval" not in globals.ssh_options:
+            globals.ssh_options += u" -oServerAliveInterval=%d" % ((int)(old_div(globals.timeout, 2)))
+        if u"ServerAliveCountMax" not in globals.ssh_options:
+            globals.ssh_options += u" -oServerAliveCountMax=2"
 
         # set up password
         self.use_getpass = globals.ssh_askpass
         self.password = self.get_password()
 
     def run_scp_command(self, commandline):
-        """ Run an scp command, responding to password prompts """
-        import pexpect
-        log.Info("Running '%s'" % commandline)
+        u""" Run an scp command, responding to password prompts """
+        log.Info(u"Running '%s'" % commandline)
         child = pexpect.spawn(commandline, timeout=None)
         if globals.ssh_askpass:
-            state = "authorizing"
+            state = u"authorizing"
         else:
-            state = "copying"
+            state = u"copying"
         while 1:
-            if state == "authorizing":
+            if state == u"authorizing":
                 match = child.expect([pexpect.EOF,
-                                      "(?i)timeout, server not responding",
-                                      "(?i)pass(word|phrase .*):",
-                                      "(?i)permission denied",
-                                      "authenticity"])
-                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                                      u"(?i)timeout, server not responding",
+                                      u"(?i)pass(word|phrase .*):",
+                                      u"(?i)permission denied",
+                                      u"authenticity"])
+                log.Debug(u"State = %s, Before = '%s'" % (state, child.before.strip()))
                 if match == 0:
-                    log.Warn("Failed to authenticate")
+                    log.Warn(u"Failed to authenticate")
                     break
                 elif match == 1:
-                    log.Warn("Timeout waiting to authenticate")
+                    log.Warn(u"Timeout waiting to authenticate")
                     break
                 elif match == 2:
                     child.sendline(self.password)
-                    state = "copying"
+                    state = u"copying"
                 elif match == 3:
-                    log.Warn("Invalid SSH password")
+                    log.Warn(u"Invalid SSH password")
                     break
                 elif match == 4:
-                    log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                    log.Warn(u"Remote host authentication failed (missing known_hosts entry?)")
                     break
-            elif state == "copying":
+            elif state == u"copying":
                 match = child.expect([pexpect.EOF,
-                                      "(?i)timeout, server not responding",
-                                      "stalled",
-                                      "authenticity",
-                                      "ETA"])
-                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                                      u"(?i)timeout, server not responding",
+                                      u"stalled",
+                                      u"authenticity",
+                                      u"ETA"])
+                log.Debug(u"State = %s, Before = '%s'" % (state, child.before.strip()))
                 if match == 0:
                     break
                 elif match == 1:
-                    log.Warn("Timeout waiting for response")
+                    log.Warn(u"Timeout waiting for response")
                     break
                 elif match == 2:
-                    state = "stalled"
+                    state = u"stalled"
                 elif match == 3:
-                    log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                    log.Warn(u"Remote host authentication failed (missing known_hosts entry?)")
                     break
-            elif state == "stalled":
+            elif state == u"stalled":
                 match = child.expect([pexpect.EOF,
-                                      "(?i)timeout, server not responding",
-                                      "ETA"])
-                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                                      u"(?i)timeout, server not responding",
+                                      u"ETA"])
+                log.Debug(u"State = %s, Before = '%s'" % (state, child.before.strip()))
                 if match == 0:
                     break
                 elif match == 1:
-                    log.Warn("Stalled for too long, aborted copy")
+                    log.Warn(u"Stalled for too long, aborted copy")
                     break
                 elif match == 2:
-                    state = "copying"
+                    state = u"copying"
         child.close(force=True)
         if child.exitstatus != 0:
-            raise BackendException("Error running '%s'" % commandline)
+            raise BackendException(u"Error running '%s'" % commandline)
 
     def run_sftp_command(self, commandline, commands):
-        """ Run an sftp command, responding to password prompts, passing commands from list """
-        import pexpect
+        u""" Run an sftp command, responding to password prompts, passing commands from list """
         maxread = 2000  # expected read buffer size
         responses = [pexpect.EOF,
-                     "(?i)timeout, server not responding",
-                     "sftp>",
-                     "(?i)pass(word|phrase .*):",
-                     "(?i)permission denied",
-                     "authenticity",
-                     "(?i)no such file or directory",
-                     "Couldn't delete file: No such file or directory",
-                     "Couldn't delete file",
-                     "open(.*): Failure"]
+                     u"(?i)timeout, server not responding",
+                     u"sftp>",
+                     u"(?i)pass(word|phrase .*):",
+                     u"(?i)permission denied",
+                     u"authenticity",
+                     u"(?i)no such file or directory",
+                     u"Couldn't delete file: No such file or directory",
+                     u"Couldn't delete file",
+                     u"open(.*): Failure"]
         max_response_len = max([len(p) for p in responses[1:]])
-        log.Info("Running '%s'" % (commandline))
+        log.Info(u"Running '%s'" % (commandline))
         child = pexpect.spawn(commandline, timeout=None, maxread=maxread)
         cmdloc = 0
         passprompt = 0
         while 1:
-            msg = ""
+            msg = u""
             match = child.expect(responses,
                                  searchwindowsize=maxread + max_response_len)
-            log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
+            log.Debug(u"State = sftp, Before = '%s'" % (child.before.strip()))
             if match == 0:
                 break
             elif match == 1:
-                msg = "Timeout waiting for response"
+                msg = u"Timeout waiting for response"
                 break
             if match == 2:
                 if cmdloc < len(commands):
                     command = commands[cmdloc]
-                    log.Info("sftp command: '%s'" % (command,))
+                    log.Info(u"sftp command: '%s'" % (command,))
                     child.sendline(command)
                     cmdloc += 1
                 else:
-                    command = 'quit'
+                    command = u'quit'
                     child.sendline(command)
                     res = child.before
             elif match == 3:
                 passprompt += 1
                 child.sendline(self.password)
                 if (passprompt > 1):
-                    raise BackendException("Invalid SSH password.")
+                    raise BackendException(u"Invalid SSH password.")
             elif match == 4:
-                if not child.before.strip().startswith("mkdir"):
-                    msg = "Permission denied"
+                if not child.before.strip().startswith(u"mkdir"):
+                    msg = u"Permission denied"
                     break
             elif match == 5:
-                msg = "Host key authenticity could not be verified (missing known_hosts entry?)"
+                msg = u"Host key authenticity could not be verified (missing known_hosts entry?)"
                 break
             elif match == 6:
-                if not child.before.strip().startswith("rm"):
-                    msg = "Remote file or directory does not exist in command='%s'" % (commandline,)
+                if not child.before.strip().startswith(u"rm"):
+                    msg = u"Remote file or directory does not exist in command='%s'" % (commandline,)
                     break
             elif match == 7:
-                if not child.before.strip().startswith("Removing"):
-                    msg = "Could not delete file in command='%s'" % (commandline,)
+                if not child.before.strip().startswith(u"Removing"):
+                    msg = u"Could not delete file in command='%s'" % (commandline,)
                     break
             elif match == 8:
-                msg = "Could not delete file in command='%s'" % (commandline,)
+                msg = u"Could not delete file in command='%s'" % (commandline,)
                 break
             elif match == 9:
-                msg = "Could not open file in command='%s'" % (commandline,)
+                msg = u"Could not open file in command='%s'" % (commandline,)
                 break
         child.close(force=True)
         if child.exitstatus == 0:
             return res
         else:
-            raise BackendException("Error running '%s': %s" % (commandline, msg))
+            raise BackendException(u"Error running '%s': %s" % (commandline, msg))
 
     def _put(self, source_path, remote_filename):
         if self.use_scp:
@@ -224,17 +232,17 @@
             self.put_sftp(source_path, remote_filename)
 
     def put_sftp(self, source_path, remote_filename):
-        commands = ["put \"%s\" \"%s.%s.part\"" %
+        commands = [u"put \"%s\" \"%s.%s.part\"" %
                     (source_path.name, self.remote_prefix, remote_filename),
-                    "rename \"%s.%s.part\" \"%s%s\"" %
+                    u"rename \"%s.%s.part\" \"%s%s\"" %
                     (self.remote_prefix, remote_filename, self.remote_prefix, remote_filename)]
-        commandline = ("%s %s %s" % (self.sftp_command,
-                                     globals.ssh_options,
-                                     self.host_string))
+        commandline = (u"%s %s %s" % (self.sftp_command,
+                                      globals.ssh_options,
+                                      self.host_string))
         self.run_sftp_command(commandline, commands)
 
     def put_scp(self, source_path, remote_filename):
-        commandline = "%s %s %s %s:%s%s" % \
+        commandline = u"%s %s %s %s:%s%s" % \
             (self.scp_command, globals.ssh_options, source_path.name, self.host_string,
              self.remote_prefix, remote_filename)
         self.run_scp_command(commandline)
@@ -246,15 +254,15 @@
             self.get_sftp(remote_filename, local_path)
 
     def get_sftp(self, remote_filename, local_path):
-        commands = ["get \"%s%s\" \"%s\"" %
+        commands = [u"get \"%s%s\" \"%s\"" %
                     (self.remote_prefix, remote_filename, local_path.name)]
-        commandline = ("%s %s %s" % (self.sftp_command,
-                                     globals.ssh_options,
-                                     self.host_string))
+        commandline = (u"%s %s %s" % (self.sftp_command,
+                                      globals.ssh_options,
+                                      self.host_string))
         self.run_sftp_command(commandline, commands)
 
     def get_scp(self, remote_filename, local_path):
-        commandline = "%s %s %s:%s%s %s" % \
+        commandline = u"%s %s %s:%s%s %s" % \
             (self.scp_command, globals.ssh_options, self.host_string, self.remote_prefix,
              remote_filename, local_path.name)
         self.run_scp_command(commandline)
@@ -267,26 +275,33 @@
         if len(dirs) > 0:
             if not dirs[0]:
                 dirs = dirs[1:]
-                dirs[0] = '/' + dirs[0]
+                dirs[0] = u'/' + dirs[0]
         mkdir_commands = []
         for d in dirs:
-            mkdir_commands += ["mkdir \"%s\"" % (d)] + ["cd \"%s\"" % (d)]
-
-        commands = mkdir_commands + ["ls -1"]
-        commandline = ("%s %s %s" % (self.sftp_command,
-                                     globals.ssh_options,
-                                     self.host_string))
-
-        l = self.run_sftp_command(commandline, commands).split('\n')[1:]
+            mkdir_commands += [u"mkdir \"%s\"" % (d)] + [u"cd \"%s\"" % (d)]
+
+        commands = mkdir_commands + [u"ls -1"]
+        commandline = (u"%s %s %s" % (self.sftp_command,
+                                      globals.ssh_options,
+                                      self.host_string))
+
+        l = self.run_sftp_command(commandline, commands).split(u'\n')[1:]
 
         return [x for x in map(string.strip, l) if x]
 
     def _delete(self, filename):
-        commands = ["cd \"%s\"" % (self.remote_dir,)]
-        commands.append("rm \"%s\"" % filename)
-        commandline = ("%s %s %s" % (self.sftp_command, globals.ssh_options, self.host_string))
+        commands = [u"cd \"%s\"" % (self.remote_dir,)]
+        commands.append(u"rm \"%s\"" % filename)
+        commandline = (u"%s %s %s" % (self.sftp_command, globals.ssh_options, self.host_string))
         self.run_sftp_command(commandline, commands)
 
-duplicity.backend.register_backend("pexpect+sftp", SSHPExpectBackend)
-duplicity.backend.register_backend("pexpect+scp", SSHPExpectBackend)
-duplicity.backend.uses_netloc.extend(['pexpect+sftp', 'pexpect+scp'])
+
+duplicity.backend.register_backend(u"pexpect+sftp", SSHPExpectBackend)
+duplicity.backend.register_backend(u"pexpect+scp", SSHPExpectBackend)
+duplicity.backend.uses_netloc.extend([u'pexpect+sftp', u'pexpect+scp'])

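The state machine in run_scp_command() above boils down to the usual pexpect expect/sendline loop. A reduced sketch of that pattern follows, with an assumed command line and password; pexpect is spawned in text mode here so the unicode patterns match on both Python 2 and 3.

    import pexpect

    def run_with_password(commandline, password):
        # text-mode spawn so unicode expect patterns work on py2 and py3
        child = pexpect.spawn(commandline, timeout=None, encoding=u"utf-8")
        while True:
            match = child.expect([pexpect.EOF, u"(?i)pass(word|phrase .*):"])
            if match == 0:        # command finished (or died) - stop reading
                break
            elif match == 1:      # interactive prompt - answer it
                child.sendline(password)
        child.close(force=True)
        if child.exitstatus != 0:
            raise RuntimeError(u"Error running '%s'" % commandline)

    run_with_password(u"scp testfile user@host.example.com:backups/", u"secret")
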
=== modified file 'duplicity/backends/swiftbackend.py'
--- duplicity/backends/swiftbackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/swiftbackend.py	2019-02-22 19:07:43 +0000
@@ -18,6 +18,7 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from builtins import str
 import os
 
 import duplicity.backend
@@ -27,7 +28,7 @@
 
 
 class SwiftBackend(duplicity.backend.Backend):
-    """
+    u"""
     Backend for Swift
     """
     def __init__(self, parsed_url):
@@ -37,7 +38,7 @@
             from swiftclient import Connection
             from swiftclient import ClientException
         except ImportError as e:
-            raise BackendException("""\
+            raise BackendException(u"""\
 Swift backend requires the python-swiftclient library.
 Exception: %s""" % str(e))
 
@@ -45,71 +46,77 @@
         conn_kwargs = {}
 
         # if the user has already authenticated
-        if 'SWIFT_PREAUTHURL' in os.environ and 'SWIFT_PREAUTHTOKEN' in os.environ:
-            conn_kwargs['preauthurl'] = os.environ['SWIFT_PREAUTHURL']
-            conn_kwargs['preauthtoken'] = os.environ['SWIFT_PREAUTHTOKEN']
+        if u'SWIFT_PREAUTHURL' in os.environ and u'SWIFT_PREAUTHTOKEN' in os.environ:
+            conn_kwargs[u'preauthurl'] = os.environ[u'SWIFT_PREAUTHURL']
+            conn_kwargs[u'preauthtoken'] = os.environ[u'SWIFT_PREAUTHTOKEN']
 
         else:
-            if 'SWIFT_USERNAME' not in os.environ:
-                raise BackendException('SWIFT_USERNAME environment variable '
-                                       'not set.')
-
-            if 'SWIFT_PASSWORD' not in os.environ:
-                raise BackendException('SWIFT_PASSWORD environment variable '
-                                       'not set.')
-
-            if 'SWIFT_AUTHURL' not in os.environ:
-                raise BackendException('SWIFT_AUTHURL environment variable '
-                                       'not set.')
-
-            conn_kwargs['user'] = os.environ['SWIFT_USERNAME']
-            conn_kwargs['key'] = os.environ['SWIFT_PASSWORD']
-            conn_kwargs['authurl'] = os.environ['SWIFT_AUTHURL']
+            if u'SWIFT_USERNAME' not in os.environ:
+                raise BackendException(u'SWIFT_USERNAME environment variable '
+                                       u'not set.')
+
+            if u'SWIFT_PASSWORD' not in os.environ:
+                raise BackendException(u'SWIFT_PASSWORD environment variable '
+                                       u'not set.')
+
+            if u'SWIFT_AUTHURL' not in os.environ:
+                raise BackendException(u'SWIFT_AUTHURL environment variable '
+                                       u'not set.')
+
+            conn_kwargs[u'user'] = os.environ[u'SWIFT_USERNAME']
+            conn_kwargs[u'key'] = os.environ[u'SWIFT_PASSWORD']
+            conn_kwargs[u'authurl'] = os.environ[u'SWIFT_AUTHURL']
 
         os_options = {}
 
-        if 'SWIFT_AUTHVERSION' in os.environ:
-            conn_kwargs['auth_version'] = os.environ['SWIFT_AUTHVERSION']
-            if os.environ['SWIFT_AUTHVERSION'] == '3':
-                if 'SWIFT_USER_DOMAIN_NAME' in os.environ:
-                    os_options.update({'user_domain_name': os.environ['SWIFT_USER_DOMAIN_NAME']})
-                if 'SWIFT_USER_DOMAIN_ID' in os.environ:
-                    os_options.update({'user_domain_id': os.environ['SWIFT_USER_DOMAIN_ID']})
-                if 'SWIFT_PROJECT_DOMAIN_NAME' in os.environ:
-                    os_options.update({'project_domain_name': os.environ['SWIFT_PROJECT_DOMAIN_NAME']})
-                if 'SWIFT_PROJECT_DOMAIN_ID' in os.environ:
-                    os_options.update({'project_domain_id': os.environ['SWIFT_PROJECT_DOMAIN_ID']})
-                if 'SWIFT_TENANTNAME' in os.environ:
-                    os_options.update({'tenant_name': os.environ['SWIFT_TENANTNAME']})
-                if 'SWIFT_ENDPOINT_TYPE' in os.environ:
-                    os_options.update({'endpoint_type': os.environ['SWIFT_ENDPOINT_TYPE']})
-                if 'SWIFT_USERID' in os.environ:
-                    os_options.update({'user_id': os.environ['SWIFT_USERID']})
-                if 'SWIFT_TENANTID' in os.environ:
-                    os_options.update({'tenant_id': os.environ['SWIFT_TENANTID']})
-                if 'SWIFT_REGIONNAME' in os.environ:
-                    os_options.update({'region_name': os.environ['SWIFT_REGIONNAME']})
+        if u'SWIFT_AUTHVERSION' in os.environ:
+            conn_kwargs[u'auth_version'] = os.environ[u'SWIFT_AUTHVERSION']
+            if os.environ[u'SWIFT_AUTHVERSION'] == u'3':
+                if u'SWIFT_USER_DOMAIN_NAME' in os.environ:
+                    os_options.update({u'user_domain_name': os.environ[u'SWIFT_USER_DOMAIN_NAME']})
+                if u'SWIFT_USER_DOMAIN_ID' in os.environ:
+                    os_options.update({u'user_domain_id': os.environ[u'SWIFT_USER_DOMAIN_ID']})
+                if u'SWIFT_PROJECT_DOMAIN_NAME' in os.environ:
+                    os_options.update({u'project_domain_name': os.environ[u'SWIFT_PROJECT_DOMAIN_NAME']})
+                if u'SWIFT_PROJECT_DOMAIN_ID' in os.environ:
+                    os_options.update({u'project_domain_id': os.environ[u'SWIFT_PROJECT_DOMAIN_ID']})
+                if u'SWIFT_TENANTNAME' in os.environ:
+                    os_options.update({u'tenant_name': os.environ[u'SWIFT_TENANTNAME']})
+                if u'SWIFT_ENDPOINT_TYPE' in os.environ:
+                    os_options.update({u'endpoint_type': os.environ[u'SWIFT_ENDPOINT_TYPE']})
+                if u'SWIFT_USERID' in os.environ:
+                    os_options.update({u'user_id': os.environ[u'SWIFT_USERID']})
+                if u'SWIFT_TENANTID' in os.environ:
+                    os_options.update({u'tenant_id': os.environ[u'SWIFT_TENANTID']})
+                if u'SWIFT_REGIONNAME' in os.environ:
+                    os_options.update({u'region_name': os.environ[u'SWIFT_REGIONNAME']})
 
         else:
-            conn_kwargs['auth_version'] = '1'
-        if 'SWIFT_TENANTNAME' in os.environ:
-            conn_kwargs['tenant_name'] = os.environ['SWIFT_TENANTNAME']
-        if 'SWIFT_REGIONNAME' in os.environ:
-            os_options.update({'region_name': os.environ['SWIFT_REGIONNAME']})
+            conn_kwargs[u'auth_version'] = u'1'
+        if u'SWIFT_TENANTNAME' in os.environ:
+            conn_kwargs[u'tenant_name'] = os.environ[u'SWIFT_TENANTNAME']
+        if u'SWIFT_REGIONNAME' in os.environ:
+            os_options.update({u'region_name': os.environ[u'SWIFT_REGIONNAME']})
 
-        conn_kwargs['os_options'] = os_options
+        conn_kwargs[u'os_options'] = os_options
 
         # This folds the null prefix and all null parts, which means that:
         #  //MyContainer/ and //MyContainer are equivalent.
         #  //MyContainer//My/Prefix/ and //MyContainer/My/Prefix are equivalent.
-        url_parts = [x for x in parsed_url.path.split('/') if x != '']
+        url_parts = [x for x in parsed_url.path.split(u'/') if x != u'']
 
         self.container = url_parts.pop(0)
         if url_parts:
-            self.prefix = '%s/' % '/'.join(url_parts)
+            self.prefix = u'%s/' % u'/'.join(url_parts)
         else:
-            self.prefix = ''
-
+            self.prefix = u''
+
+        policy = globals.swift_storage_policy
+        policy_header = u'X-Storage-Policy'
+
         container_metadata = None
         try:
             self.conn = Connection(**conn_kwargs)
@@ -117,18 +124,24 @@
         except ClientException:
             pass
         except Exception as e:
-            log.FatalError("Connection failed: %s %s"
+            log.FatalError(u"Connection failed: %s %s"
                            % (e.__class__.__name__, str(e)),
                            log.ErrorCode.connection_failed)
 
         if container_metadata is None:
-            log.Info("Creating container %s" % self.container)
+            log.Info(u"Creating container %s" % self.container)
             try:
                 self.conn.put_container(self.container)
             except Exception as e:
-                log.FatalError("Container creation failed: %s %s"
+                log.FatalError(u"Container creation failed: %s %s"
                                % (e.__class__.__name__, str(e)),
                                log.ErrorCode.connection_failed)
+        elif policy and container_metadata[policy_header.lower()] != policy:
+            log.FatalError(u"Container '%s' exists but its storage policy is '%s' not '%s'."
+                           % (self.container, container_metadata[policy_header.lower()], policy))
 
     def _error_code(self, operation, e):
         if isinstance(e, self.resp_exc):
@@ -141,21 +154,21 @@
 
     def _get(self, remote_filename, local_path):
         headers, body = self.conn.get_object(self.container, self.prefix + remote_filename, resp_chunk_size=1024)
-        with open(local_path.name, 'wb') as f:
+        with open(local_path.name, u'wb') as f:
             for chunk in body:
                 f.write(chunk)
 
     def _list(self):
         headers, objs = self.conn.get_container(self.container, full_listing=True, path=self.prefix)
         # removes prefix from return values. should check for the prefix ?
-        return [o['name'][len(self.prefix):] for o in objs]
+        return [o[u'name'][len(self.prefix):] for o in objs]
 
     def _delete(self, filename):
         self.conn.delete_object(self.container, self.prefix + filename)
 
     def _query(self, filename):
         sobject = self.conn.head_object(self.container, self.prefix + filename)
-        return {'size': int(sobject['content-length'])}
-
-
-duplicity.backend.register_backend("swift", SwiftBackend)
+        return {u'size': int(sobject[u'content-length'])}
+
+
+duplicity.backend.register_backend(u"swift", SwiftBackend)

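The environment-variable handling above maps more or less directly onto python-swiftclient's Connection keywords. A sketch of the equivalent direct call, with invented placeholder credentials standing in for the SWIFT_* variables:

    from swiftclient import Connection

    # placeholder credentials - duplicity reads these from SWIFT_* variables
    conn = Connection(authurl=u"https://auth.example.com/v3",
                      user=u"demo",
                      key=u"secret",
                      auth_version=u"3",
                      os_options={u"user_domain_name": u"Default",
                                  u"project_domain_name": u"Default",
                                  u"region_name": u"GRA"})

    conn.put_container(u"duplicity")                      # idempotent
    conn.put_object(u"duplicity", u"hello.txt", b"hello")
    headers, objects = conn.get_container(u"duplicity", full_listing=True)
    print([o[u"name"] for o in objects])
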
=== modified file 'duplicity/backends/sxbackend.py'
--- duplicity/backends/sxbackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/sxbackend.py	2019-02-22 19:07:43 +0000
@@ -23,30 +23,35 @@
 
 
 class SXBackend(duplicity.backend.Backend):
-    """Connect to remote store using Skylable Protocol"""
+    u"""Connect to remote store using Skylable Protocol"""
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
         self.url_string = parsed_url.url_string
 
     def _put(self, source_path, remote_filename):
         remote_path = os.path.join(self.url_string, remote_filename)
-        commandline = "sxcp {0} {1}".format(source_path.name, remote_path)
+        commandline = u"sxcp {0} {1}".format(source_path.name, remote_path)
         self.subprocess_popen(commandline)
 
     def _get(self, remote_filename, local_path):
         remote_path = os.path.join(self.url_string, remote_filename)
-        commandline = "sxcp {0} {1}".format(remote_path, local_path.name)
+        commandline = u"sxcp {0} {1}".format(remote_path, local_path.name)
         self.subprocess_popen(commandline)
 
     def _list(self):
         # Do a long listing to avoid connection reset
-        commandline = "sxls {0}".format(self.url_string)
+        commandline = u"sxls {0}".format(self.url_string)
         _, l, _ = self.subprocess_popen(commandline)
         # Look for our files as the last element of a long list line
-        return [x[x.rindex('/') + 1:].split()[-1] for x in l.split('\n') if x and not x.startswith("total ")]
+        return [x[x.rindex(u'/') + 1:].split()[-1] for x in l.split(u'\n') if x and not x.startswith(u"total ")]
 
     def _delete(self, filename):
-        commandline = "sxrm {0}/{1}".format(self.url_string, filename)
+        commandline = u"sxrm {0}/{1}".format(self.url_string, filename)
         self.subprocess_popen(commandline)
 
-duplicity.backend.register_backend("sx", SXBackend)
+
+duplicity.backend.register_backend(u"sx", SXBackend)

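The one-liner in _list() above is easier to review against a concrete listing. The sample output below is invented (the exact sxls formatting may differ); the rule it illustrates is: skip the "total ..." summary line, then keep the last whitespace-separated field after the final '/'.

    listing = (u"total 2\n"
               u"rw-r--r-- 1024 2019-02-22 sx://cluster/vol/duplicity-full.manifest.gpg\n"
               u"rw-r--r-- 52428800 2019-02-22 sx://cluster/vol/duplicity-full.vol1.difftar.gpg\n")

    names = [x[x.rindex(u'/') + 1:].split()[-1]
             for x in listing.split(u'\n')
             if x and not x.startswith(u"total ")]

    print(names)
    # ['duplicity-full.manifest.gpg', 'duplicity-full.vol1.difftar.gpg']
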
=== modified file 'duplicity/backends/tahoebackend.py'
--- duplicity/backends/tahoebackend.py	2017-08-06 21:10:28 +0000
+++ duplicity/backends/tahoebackend.py	2019-02-22 19:07:43 +0000
@@ -20,56 +20,66 @@
 
 import duplicity.backend
 from duplicity import log
+from duplicity import util
 from duplicity.errors import BackendException
 
 
 class TAHOEBackend(duplicity.backend.Backend):
-    """
+    u"""
     Backend for the Tahoe file system
     """
 
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
 
-        url = parsed_url.path.strip('/').split('/')
+        url = parsed_url.path.strip(u'/').split(u'/')
 
         self.alias = url[0]
 
         if len(url) > 1:
-            self.directory = "/".join(url[1:])
+            self.directory = u"/".join(url[1:])
         else:
-            self.directory = ""
+            self.directory = u""
 
-        log.Debug("tahoe: %s -> %s:%s" % (url, self.alias, self.directory))
+        log.Debug(u"tahoe: %s -> %s:%s" % (url, self.alias, self.directory))
 
     def get_remote_path(self, filename=None):
         if filename is None:
-            if self.directory != "":
-                return "%s:%s" % (self.alias, self.directory)
+            if self.directory != u"":
+                return u"%s:%s" % (self.alias, self.directory)
             else:
-                return "%s:" % self.alias
+                return u"%s:" % self.alias
 
-        if self.directory != "":
-            return "%s:%s/%s" % (self.alias, self.directory, filename)
+        if isinstance(filename, b"".__class__):
+            filename = util.fsdecode(filename)
+        if self.directory != u"":
+            return u"%s:%s/%s" % (self.alias, self.directory, filename)
         else:
-            return "%s:%s" % (self.alias, filename)
+            return u"%s:%s" % (self.alias, filename)
 
     def run(self, *args):
-        cmd = " ".join(args)
+        cmd = u" ".join(args)
         _, output, _ = self.subprocess_popen(cmd)
         return output
 
     def _put(self, source_path, remote_filename):
-        self.run("tahoe", "cp", source_path.name, self.get_remote_path(remote_filename))
+        self.run(u"tahoe", u"cp", source_path.uc_name, self.get_remote_path(remote_filename))
 
     def _get(self, remote_filename, local_path):
-        self.run("tahoe", "cp", self.get_remote_path(remote_filename), local_path.name)
+        self.run(u"tahoe", u"cp", self.get_remote_path(remote_filename), local_path.uc_name)
 
     def _list(self):
-        output = self.run("tahoe", "ls", self.get_remote_path())
-        return output.split('\n') if output else []
+        output = self.run(u"tahoe", u"ls", self.get_remote_path())
+        return output.split(b'\n') if output else []
 
     def _delete(self, filename):
-        self.run("tahoe", "rm", self.get_remote_path(filename))
-
-duplicity.backend.register_backend("tahoe", TAHOEBackend)
+        self.run(u"tahoe", u"rm", self.get_remote_path(filename))
+
+
+duplicity.backend.register_backend(u"tahoe", TAHOEBackend)

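The isinstance(filename, b"".__class__) test above is the py2/py3-portable way of asking "is this bytes?". A small sketch of the decode-then-format step, using os.fsdecode as a stand-in for duplicity.util.fsdecode (Python 3 only in this form; the alias, directory and file names are placeholders):

    import os

    def get_remote_path(alias, directory, filename):
        if isinstance(filename, bytes):
            filename = os.fsdecode(filename)   # bytes from the tar layer -> text
        if directory:
            return u"%s:%s/%s" % (alias, directory, filename)
        return u"%s:%s" % (alias, filename)

    print(get_remote_path(u"backups", u"duplicity", b"duplicity-full.manifest.gpg"))
    # backups:duplicity/duplicity-full.manifest.gpg
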
=== modified file 'duplicity/backends/webdavbackend.py'
--- duplicity/backends/webdavbackend.py	2018-05-01 15:17:47 +0000
+++ duplicity/backends/webdavbackend.py	2019-02-22 19:07:43 +0000
@@ -21,13 +21,18 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from future import standard_library
+standard_library.install_aliases()
+from builtins import str
+from builtins import range
 import base64
-import httplib
+import http.client
 import os
 import re
-import urllib
-import urllib2
-import urlparse
+import shutil
+import urllib.request  # pylint: disable=import-error
+import urllib.parse  # pylint: disable=import-error
+import urllib.error  # pylint: disable=import-error
 import xml.dom.minidom
 
 import duplicity.backend
@@ -37,35 +42,35 @@
 from duplicity.errors import BackendException, FatalBackendException
 
 
-class CustomMethodRequest(urllib2.Request):
-    """
+class CustomMethodRequest(urllib.request.Request):
+    u"""
     This request subclass allows explicit specification of
     the HTTP request method. Basic urllib2.Request class
     chooses GET or POST depending on self.has_data()
     """
     def __init__(self, method, *args, **kwargs):
         self.method = method
-        urllib2.Request.__init__(self, *args, **kwargs)
+        urllib.request.Request.__init__(self, *args, **kwargs)
 
     def get_method(self):
         return self.method
 
 
-class VerifiedHTTPSConnection(httplib.HTTPSConnection):
+class VerifiedHTTPSConnection(http.client.HTTPSConnection):
         def __init__(self, *args, **kwargs):
             try:
                 global socket, ssl
                 import socket
                 import ssl
             except ImportError:
-                raise FatalBackendException(_("Missing socket or ssl python modules."))
+                raise FatalBackendException(_(u"Missing socket or ssl python modules."))
 
-            httplib.HTTPSConnection.__init__(self, *args, **kwargs)
+            http.client.HTTPSConnection.__init__(self, *args, **kwargs)
 
             self.cacert_file = globals.ssl_cacert_file
-            self.cacert_candidates = ["~/.duplicity/cacert.pem",
-                                      "~/duplicity_cacert.pem",
-                                      "/etc/duplicity/cacert.pem"]
+            self.cacert_candidates = [u"~/.duplicity/cacert.pem",
+                                      u"~/duplicity_cacert.pem",
+                                      u"/etc/duplicity/cacert.pem"]
             # if no cacert file was given search default locations
             if not self.cacert_file:
                 for path in self.cacert_candidates:
@@ -76,7 +81,7 @@
 
             # check if file is accessible (libssl errors are not very detailed)
             if self.cacert_file and not os.access(self.cacert_file, os.R_OK):
-                raise FatalBackendException(_("Cacert database file '%s' is not readable.") %
+                raise FatalBackendException(_(u"Cacert database file '%s' is not readable.") %
                                             self.cacert_file)
 
         def connect(self):
@@ -88,7 +93,7 @@
                 self.tunnel()
 
             # python 2.7.9+ supports default system certs now
-            if "create_default_context" in dir(ssl):
+            if u"create_default_context" in dir(ssl):
                 context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                                      cafile=self.cacert_file,
                                                      capath=globals.ssl_cacert_path)
@@ -97,17 +102,17 @@
             else:
                 if globals.ssl_cacert_path:
                     raise FatalBackendException(
-                        _("Option '--ssl-cacert-path' is not supported "
-                          "with python 2.7.8 and below."))
+                        _(u"Option '--ssl-cacert-path' is not supported "
+                          u"with python 2.7.8 and below."))
 
                 if not self.cacert_file:
-                    raise FatalBackendException(_("""\
+                    raise FatalBackendException(_(u"""\
 For certificate verification with python 2.7.8 or earlier a cacert database
 file is needed in one of these locations: %s
 Hints:
   Consult the man page, chapter 'SSL Certificate Verification'.
   Consider using the options --ssl-cacert-file, --ssl-no-check-certificate .""") %
-                                                ", ".join(self.cacert_candidates))
+                                                u", ".join(self.cacert_candidates))
 
                 # wrap the socket in ssl using verification
                 self.sock = ssl.wrap_socket(sock,
@@ -117,28 +122,28 @@
 
         def request(self, *args, **kwargs):  # pylint: disable=method-hidden
             try:
-                return httplib.HTTPSConnection.request(self, *args, **kwargs)
+                return http.client.HTTPSConnection.request(self, *args, **kwargs)
             except ssl.SSLError as e:
                 # encapsulate ssl errors
-                raise BackendException("SSL failed: %s" % util.uexc(e),
+                raise BackendException(u"SSL failed: %s" % util.uexc(e),
                                        log.ErrorCode.backend_error)
 
 
 class WebDAVBackend(duplicity.backend.Backend):
-    """Backend for accessing a WebDAV repository.
+    u"""Backend for accessing a WebDAV repository.
 
     webdav backend contributed in 2006 by Jesper Zedlitz <jesper@xxxxxxxxxx>
     """
 
-    """
+    u"""
     Request just the names.
     """
-    listbody = '<?xml version="1.0"?><D:propfind xmlns:D="DAV:"><D:prop><D:resourcetype/></D:prop></D:propfind>'
+    listbody = u'<?xml version="1.0"?><D:propfind xmlns:D="DAV:"><D:prop><D:resourcetype/></D:prop></D:propfind>'
 
-    """Connect to remote store using WebDAV Protocol"""
+    u"""Connect to remote store using WebDAV Protocol"""
     def __init__(self, parsed_url):
         duplicity.backend.Backend.__init__(self, parsed_url)
-        self.headers = {'Connection': 'keep-alive'}
+        self.headers = {u'Connection': u'keep-alive'}
         self.parsed_url = parsed_url
         self.digest_challenge = None
         self.digest_auth_handler = None
@@ -147,22 +152,22 @@
         self.password = self.get_password()
         self.directory = self.sanitize_path(parsed_url.path)
 
-        log.Info(_("Using WebDAV protocol %s") % (globals.webdav_proto,))
-        log.Info(_("Using WebDAV host %s port %s") % (parsed_url.hostname,
-                                                      parsed_url.port))
-        log.Info(_("Using WebDAV directory %s") % (self.directory,))
+        log.Info(_(u"Using WebDAV protocol %s") % (globals.webdav_proto,))
+        log.Info(_(u"Using WebDAV host %s port %s") % (parsed_url.hostname,
+                                                       parsed_url.port))
+        log.Info(_(u"Using WebDAV directory %s") % (self.directory,))
 
         self.conn = None
 
     def sanitize_path(self, path):
         if path:
-            foldpath = re.compile('/+')
-            return foldpath.sub('/', path + '/')
+            foldpath = re.compile(u'/+')
+            return foldpath.sub(u'/', path + u'/')
         else:
-            return '/'
+            return u'/'
 
     def getText(self, nodelist):
-        rc = ""
+        rc = u""
         for node in nodelist:
             if node.nodeType == node.TEXT_NODE:
                 rc = rc + node.data
@@ -172,7 +177,7 @@
         self.connect(forced=True)
 
     def connect(self, forced=False):
-        """
+        u"""
         Connect or re-connect to the server, updates self.conn
         # reconnect on errors as a precaution, there are errors e.g.
         # "[Errno 32] Broken pipe" or SSl errors that render the connection unusable
@@ -181,88 +186,96 @@
                 and self.conn.host == self.parsed_url.hostname:
             return
 
-        log.Info(_("WebDAV create connection on '%s'") % (self.parsed_url.hostname))
+        log.Info(_(u"WebDAV create connection on '%s'") % (self.parsed_url.hostname))
         self._close()
         # http schemes needed for redirect urls from servers
-        if self.parsed_url.scheme in ['webdav', 'http']:
-            self.conn = httplib.HTTPConnection(self.parsed_url.hostname, self.parsed_url.port)
-        elif self.parsed_url.scheme in ['webdavs', 'https']:
+        if self.parsed_url.scheme in [u'webdav', u'http']:
+            self.conn = http.client.HTTPConnection(self.parsed_url.hostname, self.parsed_url.port)
+        elif self.parsed_url.scheme in [u'webdavs', u'https']:
             if globals.ssl_no_check_certificate:
-                self.conn = httplib.HTTPSConnection(self.parsed_url.hostname, self.parsed_url.port)
+                self.conn = http.client.HTTPSConnection(self.parsed_url.hostname, self.parsed_url.port)
             else:
                 self.conn = VerifiedHTTPSConnection(self.parsed_url.hostname, self.parsed_url.port)
         else:
-            raise FatalBackendException(_("WebDAV Unknown URI scheme: %s") % (self.parsed_url.scheme))
+            raise FatalBackendException(_(u"WebDAV Unknown URI scheme: %s") % (self.parsed_url.scheme))
 
     def _close(self):
         if self.conn:
             self.conn.close()
 
     def request(self, method, path, data=None, redirected=0):
-        """
+        u"""
         Wraps the connection.request method to retry once if authentication is
         required
         """
         self._close()  # or we get previous request's data or exception
         self.connect()
 
-        quoted_path = urllib.quote(path, "/:~")
+        quoted_path = urllib.parse.quote(path, u"/:~")
 
         if self.digest_challenge is not None:
-            self.headers['Authorization'] = self.get_digest_authorization(path)
+            self.headers[u'Authorization'] = self.get_digest_authorization(path)
 
-        log.Info(_("WebDAV %s %s request with headers: %s ") % (method, quoted_path, self.headers))
-        log.Info(_("WebDAV data length: %s ") % len(str(data)))
+        log.Info(_(u"WebDAV %s %s request with headers: %s ") % (method, quoted_path, self.headers))
+        log.Info(_(u"WebDAV data length: %s ") % len(str(data)))
         self.conn.request(method, quoted_path, data, self.headers)
         response = self.conn.getresponse()
-        log.Info(_("WebDAV response status %s with reason '%s'.") % (response.status, response.reason))
+        log.Info(_(u"WebDAV response status %s with reason '%s'.") % (response.status, response.reason))
         # resolve redirects and reset url on listing requests (they usually come before everything else)
-        if response.status in [301, 302] and method == 'PROPFIND':
-            redirect_url = response.getheader('location', None)
+        if response.status in [301, 302] and method == u'PROPFIND':
+            redirect_url = response.getheader(u'location', None)
             response.close()
             if redirect_url:
-                log.Notice(_("WebDAV redirect to: %s ") % urllib.unquote(redirect_url))
+                log.Notice(_(u"WebDAV redirect to: %s ") % urllib.parse.unquote(redirect_url))
                 if redirected > 10:
-                    raise FatalBackendException(_("WebDAV redirected 10 times. Giving up."))
+                    raise FatalBackendException(_(u"WebDAV redirected 10 times. Giving up."))
                 self.parsed_url = duplicity.backend.ParsedUrl(redirect_url)
                 self.directory = self.sanitize_path(self.parsed_url.path)
                 return self.request(method, self.directory, data, redirected + 1)
             else:
-                raise FatalBackendException(_("WebDAV missing location header in redirect response."))
+                raise FatalBackendException(_(u"WebDAV missing location header in redirect response."))
         elif response.status == 401:
             response.read()
             response.close()
-            self.headers['Authorization'] = self.get_authorization(response, quoted_path)
-            log.Info(_("WebDAV retry request with authentification headers."))
-            log.Info(_("WebDAV %s %s request2 with headers: %s ") % (method, quoted_path, self.headers))
-            log.Info(_("WebDAV data length: %s ") % len(str(data)))
+            self.headers[u'Authorization'] = self.get_authorization(response, quoted_path)
+            log.Info(_(u"WebDAV retry request with authentification headers."))
+            log.Info(_(u"WebDAV %s %s request2 with headers: %s ") % (method, quoted_path, self.headers))
+            log.Info(_(u"WebDAV data length: %s ") % len(str(data)))
             self.conn.request(method, quoted_path, data, self.headers)
             response = self.conn.getresponse()
-            log.Info(_("WebDAV response2 status %s with reason '%s'.") % (response.status, response.reason))
+            log.Info(_(u"WebDAV response2 status %s with reason '%s'.") % (response.status, response.reason))
 
         return response
 
     def get_authorization(self, response, path):
-        """
+        u"""
         Fetches the auth header based on the requested method (basic or digest)
         """
         try:
-            auth_hdr = response.getheader('www-authenticate', '')
-            token, challenge = auth_hdr.split(' ', 1)
+            auth_hdr = response.getheader(u'www-authenticate', u'')
+            token, challenge = auth_hdr.split(u' ', 1)
         except ValueError:
             return None
-        if token.split(',')[0].lower() == 'negotiate':
+        if token.split(u',')[0].lower() == u'negotiate':
             try:
                 return self.get_kerberos_authorization()
             except ImportError:
-                log.Warn(_("python-kerberos needed to use kerberos \
+                log.Warn(_(u"python-kerberos needed to use kerberos \
                           authorization, falling back to basic auth."))
                 return self.get_basic_authorization()
             except Exception as e:
-                log.Warn(_("Kerberos authorization failed: %s.\
+                log.Warn(_(u"Kerberos authorization failed: %s.\
                           Falling back to basic auth.") % e)
                 return self.get_basic_authorization()
-        elif token.lower() == 'basic':
+        elif token.lower() == u'basic':
             return self.get_basic_authorization()
         else:
             self.digest_challenge = self.parse_digest_challenge(challenge)
@@ -272,45 +285,56 @@
-        return urllib2.parse_keqv_list(urllib2.parse_http_list(challenge_string))
+        return urllib.request.parse_keqv_list(urllib.request.parse_http_list(challenge_string))
 
     def get_kerberos_authorization(self):
         import kerberos  # pylint: disable=import-error
-        _, ctx = kerberos.authGSSClientInit("HTTP@%s" % self.conn.host)
-        kerberos.authGSSClientStep(ctx, "")
+        _, ctx = kerberos.authGSSClientInit(u"HTTP@%s" % self.conn.host)
+        kerberos.authGSSClientStep(ctx, u"")
         tgt = kerberos.authGSSClientResponse(ctx)
-        return 'Negotiate %s' % tgt
+        return u'Negotiate %s' % tgt
 
     def get_basic_authorization(self):
-        """
+        u"""
         Returns the basic auth header
         """
-        auth_string = '%s:%s' % (self.username, self.password)
-        return 'Basic %s' % "".join(base64.encodestring(auth_string).split())
+        auth_string = u'%s:%s' % (self.username, self.password)
+        return u'Basic %s' % base64.encodestring(auth_string).strip()
 
     def get_digest_authorization(self, path):
-        """
+        u"""
         Returns the digest auth header
         """
         u = self.parsed_url
         if self.digest_auth_handler is None:
-            pw_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
+            pw_manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
             pw_manager.add_password(None, self.conn.host, self.username, self.password)
-            self.digest_auth_handler = urllib2.HTTPDigestAuthHandler(pw_manager)
+            self.digest_auth_handler = urllib.request.HTTPDigestAuthHandler(pw_manager)
 
         # building a dummy request that gets never sent,
         # needed for call to auth_handler.get_authorization
-        scheme = u.scheme == 'webdavs' and 'https' or 'http'
-        hostname = u.port and "%s:%s" % (u.hostname, u.port) or u.hostname
-        dummy_url = "%s://%s%s" % (scheme, hostname, path)
+        scheme = u.scheme == u'webdavs' and u'https' or u'http'
+        hostname = u.port and u"%s:%s" % (u.hostname, u.port) or u.hostname
+        dummy_url = u"%s://%s%s" % (scheme, hostname, path)
         dummy_req = CustomMethodRequest(self.conn._method, dummy_url)
         auth_string = self.digest_auth_handler.get_authorization(dummy_req,
                                                                  self.digest_challenge)
-        return 'Digest %s' % auth_string
+        return u'Digest %s' % auth_string
 
     def _list(self):
         response = None
         try:
-            self.headers['Depth'] = "1"
-            response = self.request("PROPFIND", self.directory, self.listbody)
-            del self.headers['Depth']
+            self.headers[u'Depth'] = u"1"
+            response = self.request(u"PROPFIND", self.directory, self.listbody)
+            del self.headers[u'Depth']
             # if the target collection does not exist, create it.
             if response.status == 404:
                 response.close()  # otherwise next request fails with ResponseNotReady
@@ -324,12 +348,12 @@
                 status = response.status
                 reason = response.reason
                 response.close()
-                raise BackendException("Bad status code %s reason %s." % (status, reason))
+                raise BackendException(u"Bad status code %s reason %s." % (status, reason))
 
-            log.Debug("%s" % (document,))
+            log.Debug(u"%s" % (document,))
             dom = xml.dom.minidom.parseString(document)
             result = []
-            for href in dom.getElementsByTagName('d:href') + dom.getElementsByTagName('D:href'):
+            for href in dom.getElementsByTagName(u'd:href') + dom.getElementsByTagName(u'D:href'):
                 filename = self.taste_href(href)
                 if filename:
                     result.append(filename)
@@ -341,41 +365,41 @@
                 response.close()
 
     def makedir(self):
-        """Make (nested) directories on the server."""
-        dirs = self.directory.split("/")
+        u"""Make (nested) directories on the server."""
+        dirs = self.directory.split(u"/")
         # url causes directory to start with /, but it might be given
         # with or without trailing / (which is required)
-        if dirs[-1] == '':
+        if dirs[-1] == u'':
             dirs = dirs[0:-1]
         for i in range(1, len(dirs)):
-            d = "/".join(dirs[0:i + 1]) + "/"
-
-            self.headers['Depth'] = "1"
-            response = self.request("PROPFIND", d)
-            del self.headers['Depth']
-
-            log.Info("Checking existence dir %s: %d" % (d, response.status))
+            d = u"/".join(dirs[0:i + 1]) + u"/"
+
+            self.headers[u'Depth'] = u"1"
+            response = self.request(u"PROPFIND", d)
+            del self.headers[u'Depth']
+
+            log.Info(u"Checking existence dir %s: %d" % (d, response.status))
 
             if response.status == 404:
-                log.Info(_("Creating missing directory %s") % d)
+                log.Info(_(u"Creating missing directory %s") % d)
 
-                res = self.request("MKCOL", d)
+                res = self.request(u"MKCOL", d)
                 if res.status != 201:
-                    raise BackendException(_("WebDAV MKCOL %s failed: %s %s") %
+                    raise BackendException(_(u"WebDAV MKCOL %s failed: %s %s") %
                                            (d, res.status, res.reason))
 
     def taste_href(self, href):
-        """
+        u"""
         Internal helper to taste the given href node and, if
         it is a duplicity file, collect it as a result file.
 
         @return: A matching filename, or None if the href did not match.
         """
         raw_filename = self.getText(href.childNodes).strip()
-        parsed_url = urlparse.urlparse(urllib.unquote(raw_filename))
+        parsed_url = urllib.parse.urlparse(urllib.parse.unquote(raw_filename))
         filename = parsed_url.path
-        log.Debug(_("WebDAV path decoding and translation: "
-                  "%s -> %s") % (raw_filename, filename))
+        log.Debug(_(u"WebDAV path decoding and translation: "
+                  u"%s -> %s") % (raw_filename, filename))
 
         # at least one WebDAV server returns files in the form
         # of full URL:s. this may or may not be
@@ -385,18 +409,18 @@
         # what the WebDAV protocol mandages.
         if parsed_url.hostname is not None \
            and not (parsed_url.hostname == self.parsed_url.hostname):
-            m = "Received filename was in the form of a "\
-                "full url, but the hostname (%s) did "\
-                "not match that of the webdav backend "\
-                "url (%s) - aborting as a conservative "\
-                "safety measure. If this happens to you, "\
-                "please report the problem"\
-                "" % (parsed_url.hostname,
-                      self.parsed_url.hostname)
+            m = u"Received filename was in the form of a "\
+                u"full url, but the hostname (%s) did "\
+                u"not match that of the webdav backend "\
+                u"url (%s) - aborting as a conservative "\
+                u"safety measure. If this happens to you, "\
+                u"please report the problem"\
+                u"" % (parsed_url.hostname,
+                       self.parsed_url.hostname)
             raise BackendException(m)
 
         if filename.startswith(self.directory):
-            filename = filename.replace(self.directory, '', 1)
+            filename = filename.replace(self.directory, u'', 1)
             return filename
         else:
             return None
@@ -405,11 +429,11 @@
         url = self.directory + remote_filename
         response = None
         try:
-            target_file = local_path.open("wb")
-            response = self.request("GET", url)
+            target_file = local_path.open(u"wb")
+            response = self.request(u"GET", url)
             if response.status == 200:
                 # data=response.read()
-                target_file.write(response.read())
+                shutil.copyfileobj(response, target_file)
                 # import hashlib
                 # log.Info("WebDAV GOT %s bytes with md5=%s" %
                 # (len(data),hashlib.md5(data).hexdigest()) )
@@ -419,7 +443,7 @@
                 status = response.status
                 reason = response.reason
                 response.close()
-                raise BackendException(_("WebDAV GET Bad status code %s reason %s.") %
+                raise BackendException(_(u"WebDAV GET Bad status code %s reason %s.") %
                                        (status, reason))
         except Exception as e:
             raise e
@@ -431,8 +455,8 @@
         url = self.directory + remote_filename
         response = None
         try:
-            source_file = source_path.open("rb")
-            response = self.request("PUT", url, source_file.read())
+            source_file = source_path.open(u"rb")
+            response = self.request(u"PUT", url, source_file)
             # 200 is returned if a file is overwritten during restarting
             if response.status in [200, 201, 204]:
                 response.read()
@@ -441,7 +465,7 @@
                 status = response.status
                 reason = response.reason
                 response.close()
-                raise BackendException(_("WebDAV PUT Bad status code %s reason %s.") %
+                raise BackendException(_(u"WebDAV PUT Bad status code %s reason %s.") %
                                        (status, reason))
         except Exception as e:
             raise e
@@ -453,7 +477,7 @@
         url = self.directory + filename
         response = None
         try:
-            response = self.request("DELETE", url)
+            response = self.request(u"DELETE", url)
             if response.status in [200, 204]:
                 response.read()
                 response.close()
@@ -461,7 +485,7 @@
                 status = response.status
                 reason = response.reason
                 response.close()
-                raise BackendException(_("WebDAV DEL Bad status code %s reason %s.") %
+                raise BackendException(_(u"WebDAV DEL Bad status code %s reason %s.") %
                                        (status, reason))
         except Exception as e:
             raise e
@@ -469,8 +493,17 @@
             if response:
                 response.close()
 
+<<<<<<< TREE
 duplicity.backend.register_backend("http", WebDAVBackend)
 duplicity.backend.register_backend("https", WebDAVBackend)
 duplicity.backend.register_backend("webdav", WebDAVBackend)
 duplicity.backend.register_backend("webdavs", WebDAVBackend)
 duplicity.backend.uses_netloc.extend(['http', 'https', 'webdav', 'webdavs'])
+=======
+
+duplicity.backend.register_backend(u"http", WebDAVBackend)
+duplicity.backend.register_backend(u"https", WebDAVBackend)
+duplicity.backend.register_backend(u"webdav", WebDAVBackend)
+duplicity.backend.register_backend(u"webdavs", WebDAVBackend)
+duplicity.backend.uses_netloc.extend([u'http', u'https', u'webdav', u'webdavs'])
+>>>>>>> MERGE-SOURCE
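
The webdav backend hunks above also switch the transfer paths from buffering whole volumes in memory to streaming: the download copies the HTTP response straight into the local file with shutil.copyfileobj(), and the upload hands the open source file to request() so the body is streamed instead of being read() into a string. The same hunks move from the Python 2 urlparse/urllib.unquote names to urllib.parse for the Python 3 port. A minimal standalone sketch of the streaming pattern, assuming an http.client-style connection created elsewhere (function and variable names here are illustrative, not taken from the backend):

    import http.client
    import shutil

    def stream_download(conn, url, local_filename):
        # Copy the response body to disk in chunks instead of holding
        # the whole volume in memory with response.read().
        conn.request("GET", url)
        response = conn.getresponse()
        try:
            if response.status != 200:
                raise IOError("GET %s failed: %s %s"
                              % (url, response.status, response.reason))
            with open(local_filename, "wb") as target_file:
                shutil.copyfileobj(response, target_file)
        finally:
            response.close()

    def stream_upload(conn, url, local_filename):
        # http.client accepts a file object as the request body and
        # streams it, so the source file is never read() in full.
        with open(local_filename, "rb") as source_file:
            conn.request("PUT", url, body=source_file)
        response = conn.getresponse()
        try:
            if response.status not in (200, 201, 204):
                raise IOError("PUT %s failed: %s %s"
                              % (url, response.status, response.reason))
            response.read()
        finally:
            response.close()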

=== modified file 'duplicity/cached_ops.py'
--- duplicity/cached_ops.py	2014-04-17 20:50:57 +0000
+++ duplicity/cached_ops.py	2019-02-22 19:07:43 +0000
@@ -18,14 +18,15 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-"""Cache-wrapped functions for grp and pwd lookups."""
+u"""Cache-wrapped functions for grp and pwd lookups."""
 
+from builtins import object
 import grp
 import pwd
 
 
 class CachedCall(object):
-    """Decorator for caching the results of function calls."""
+    u"""Decorator for caching the results of function calls."""
 
     def __init__(self, f):
         self.cache = {}
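
cached_ops.py wraps the grp and pwd lookups in the CachedCall decorator so that repeated uid/gid translations during a backup hit an in-memory dictionary rather than the system databases; the hunk above only ports the module header to Python 3 (u-prefixed docstring and `from builtins import object`). A rough sketch of that memoizing-decorator pattern, shown against grp only and simplified to hashable arguments (the real module's error handling may differ):

    from builtins import object

    import grp


    class CachedCall(object):
        u"""Decorator that caches the results of function calls."""

        def __init__(self, f):
            self.cache = {}
            self.f = f

        def __call__(self, *args):
            # Return a previously computed result when the same
            # arguments are seen again; otherwise call through.
            if args in self.cache:
                return self.cache[args]
            result = self.f(*args)
            self.cache[args] = result
            return result


    @CachedCall
    def getgrgid(gid):
        u"""Cached wrapper around grp.getgrgid()."""
        return grp.getgrgid(gid)

Whether failed lookups (KeyError) are also cached is a detail of the real module that this hunk does not show.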

=== modified file 'duplicity/collections.py'
--- duplicity/collections.py	2018-02-01 21:17:08 +0000
+++ duplicity/collections.py	2019-02-22 19:07:43 +0000
@@ -19,9 +19,16 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-"""Classes and functions on collections of backup volumes"""
+u"""Classes and functions on collections of backup volumes"""
 
-from future_builtins import filter, map
+from past.builtins import cmp
+from builtins import filter
+from builtins import str
+from builtins import zip
+from builtins import map
+from builtins import range
+from builtins import object
+from future.builtins import filter, map
 
 import types
 import gettext
@@ -37,17 +44,26 @@
 from duplicity import util
 from duplicity.gpg import GPGError
 
+<<<<<<< TREE
+=======
+# For type testing against both int and long types that works in python 2/3
+if sys.version_info < (3,):
+    integer_types = (int, long)  # 'long' only exists on Python 2
+else:
+    integer_types = (int,)
+
+>>>>>>> MERGE-SOURCE
 
 class CollectionsError(Exception):
     pass
 
 
-class BackupSet:
-    """
+class BackupSet(object):
+    u"""
     Backup set - the backup information produced by one session
     """
     def __init__(self, backend, action):
-        """
+        u"""
         Initialize new backup set, only backend is required at first
         """
         self.backend = backend
@@ -63,13 +79,13 @@
         self.action = action
 
     def is_complete(self):
-        """
+        u"""
         Assume complete if found manifest file
         """
         return self.remote_manifest_name
 
     def add_filename(self, filename):
-        """
+        u"""
         Add a filename to given set.  Return true if it fits.
 
         The filename will match the given set if it has the right
@@ -80,7 +96,7 @@
         @type filename: string
         """
         pr = file_naming.parse(filename)
-        if not pr or not (pr.type == "full" or pr.type == "inc"):
+        if not pr or not (pr.type == u"full" or pr.type == u"inc"):
             return False
 
         if not self.info_set:
@@ -108,7 +124,7 @@
         return True
 
     def set_info(self, pr):
-        """
+        u"""
         Set BackupSet information from ParseResults object
 
         @param pr: parse results
@@ -125,15 +141,20 @@
         self.info_set = True
 
     def set_manifest(self, remote_filename):
-        """
+        u"""
         Add local and remote manifest filenames to backup set
         """
         assert not self.remote_manifest_name, (self.remote_manifest_name,
                                                remote_filename)
         self.remote_manifest_name = remote_filename
 
+<<<<<<< TREE
         if self.action not in ["collection-status"]:
             local_filename_list = globals.archive_dir.listdir()
+=======
+        if self.action not in [u"collection-status", u"replicate"]:
+            local_filename_list = globals.archive_dir_path.listdir()
+>>>>>>> MERGE-SOURCE
         else:
             local_filename_list = []
         for local_filename in local_filename_list:
@@ -147,7 +168,7 @@
                 break
 
     def delete(self):
-        """
+        u"""
         Remove all files in set, both local and remote
         """
         rfn = self.get_filenames()
@@ -155,10 +176,15 @@
         try:
             self.backend.delete(rfn)
         except Exception:
-            log.Debug(_("BackupSet.delete: missing %s") % [util.ufn(f) for f in rfn])
+            log.Debug(_(u"BackupSet.delete: missing %s") % [util.fsdecode(f) for f in rfn])
             pass
+<<<<<<< TREE
         if self.action not in ["collection-status"]:
             local_filename_list = globals.archive_dir.listdir()
+=======
+        if self.action not in [u"collection-status", u"replicate"]:
+            local_filename_list = globals.archive_dir_path.listdir()
+>>>>>>> MERGE-SOURCE
         else:
             local_filename_list = []
         for lfn in local_filename_list:
@@ -169,80 +195,95 @@
                 try:
                     globals.archive_dir.append(lfn).delete()
                 except Exception:
-                    log.Debug(_("BackupSet.delete: missing %s") % [util.ufn(f) for f in lfn])
+                    log.Debug(_(u"BackupSet.delete: missing %s") % [util.fsdecode(f) for f in lfn])
                     pass
         util.release_lockfile()
 
     def __unicode__(self):
-        """
+        u"""
         For now just list files in set
         """
         filelist = []
         if self.remote_manifest_name:
             filelist.append(self.remote_manifest_name)
-        filelist.extend(self.volume_name_dict.values())
-        return u"[%s]" % u", ".join(map(util.ufn, filelist))
+        filelist.extend(list(self.volume_name_dict.values()))
+        return u"[%s]" % u", ".join(map(util.fsdecode, filelist))
 
     def get_timestr(self):
-        """
+        u"""
         Return time string suitable for log statements
         """
         return dup_time.timetopretty(self.time or self.end_time)
 
-    def check_manifests(self):
-        """
+    def check_manifests(self, check_remote=True):
+        u"""
         Make sure remote manifest is equal to local one
         """
         if not self.remote_manifest_name and not self.local_manifest_path:
-            log.FatalError(_("Fatal Error: No manifests found for most recent backup"),
+            log.FatalError(_(u"Fatal Error: No manifests found for most recent backup"),
                            log.ErrorCode.no_manifests)
-        assert self.remote_manifest_name, "if only one, should be remote"
+        assert self.remote_manifest_name, u"if only one, should be remote"
 
-        remote_manifest = self.get_remote_manifest()
+        remote_manifest = self.get_remote_manifest() if check_remote else None
         if self.local_manifest_path:
             local_manifest = self.get_local_manifest()
         if remote_manifest and self.local_manifest_path and local_manifest:
             if remote_manifest != local_manifest:
-                log.FatalError(_("Fatal Error: Remote manifest does not match "
-                                 "local one.  Either the remote backup set or "
-                                 "the local archive directory has been corrupted."),
+                log.FatalError(_(u"Fatal Error: Remote manifest does not match "
+                                 u"local one.  Either the remote backup set or "
+                                 u"the local archive directory has been corrupted."),
                                log.ErrorCode.mismatched_manifests)
         if not remote_manifest:
             if self.local_manifest_path:
                 remote_manifest = local_manifest
             else:
-                log.FatalError(_("Fatal Error: Neither remote nor local "
-                                 "manifest is readable."),
+                log.FatalError(_(u"Fatal Error: Neither remote nor local "
+                                 u"manifest is readable."),