
duplicity-team team mailing list archive

[Merge] lp:~ed.so/duplicity/webdav200fix-0.6 into lp:duplicity

 

edso has proposed merging lp:~ed.so/duplicity/webdav200fix-0.6 into lp:duplicity.

Requested reviews:
  duplicity-team (duplicity-team)

For more details, see:
https://code.launchpad.net/~ed.so/duplicity/webdav200fix-0.6/+merge/223155
-- 
The attached diff has been truncated due to its size.
https://code.launchpad.net/~ed.so/duplicity/webdav200fix-0.6/+merge/223155
Your team duplicity-team is requested to review the proposed merge of lp:~ed.so/duplicity/webdav200fix-0.6 into lp:duplicity.
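
Reviewer note: since the attached diff is truncated, here is a short illustration of the fix this branch is named after -- bug #1312328, where the WebDAV backend rejected a 200 OK response to DELETE. Per the CHANGELOG entry in the diff below, the fix is simply to accept both 200 and 204 as success. This is a minimal Python 2 sketch of that behaviour with assumed names (conn, BackendException, the example URL); it is not the actual webdavbackend.py code.

    # Illustrative sketch only -- mirrors the CHANGELOG entry for bug #1312328,
    # not the patched webdavbackend.py itself. The connection object, URL and
    # exception class below are assumed names for illustration.
    import httplib

    class BackendException(Exception):
        pass

    def delete_remote_file(conn, url):
        """Issue a WebDAV DELETE and treat both 200 and 204 as success."""
        conn.request("DELETE", url)
        response = conn.getresponse()
        response.read()  # drain the body so the connection can be reused
        if response.status not in (200, 204):
            raise BackendException("DELETE of %s failed: %d %s"
                                   % (url, response.status, response.reason))

    # example use (hypothetical host and path):
    #   conn = httplib.HTTPConnection("webdav.example.com")
    #   delete_remote_file(conn, "/backups/duplicity-full.vol1.difftar.gpg")
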
=== modified file 'CHANGELOG'
--- CHANGELOG	2014-05-11 11:50:12 +0000
+++ CHANGELOG	2014-06-14 13:58:30 +0000
@@ -1,3 +1,4 @@
+<<<<<<< TREE
 New in v0.7.00 (2014/05/??)
 ---------------------------
 * Merged in lp:~mterry/duplicity/require-2.6
@@ -88,6 +89,9 @@
 
 
 New in v0.6.24 (2014/05/09)
+=======
+New in v0.6.24 (2014/05/09)
+>>>>>>> MERGE-SOURCE
 ---------------------------
 Enhancements:
 * Applied two patches from mailing list message at:
@@ -126,39 +130,25 @@
   - Fixes https://bugs.launchpad.net/duplicity/+bug/426282
 * Merged in lp:~fredrik-loch/duplicity/duplicity-S3-SSE
   - Adds support for server side encryption as requested in Bug #996660
+* Merged in lp:~mterry/duplicity/modern-testing
+  - Enable/use more modern testing tools like nosetests and tox as well as more
+    common setup.py hooks like test and sdist.
+  - Specifically:
+    * move setup.py to toplevel where most tools and users expect it
+    * Move and adjust test files to work when running "nosetests" in toplevel
+      directory. Specifically, do a lot more of the test setup in
+      tests/__init__.py rather than the run-tests scripts
+    * Add small tox.ini file for using tox, which is a wrapper for using
+      virtualenv. Only enable for py26 and py27 right now, since modern
+      setuptools dropped support for <2.6 (and tox 1.7 recently dropped <2.6)
+    * Add setup.py hooks for test and sdist which are both standard targets
+      (sdist just outsources to dist/makedist right now)
 * Merged in lp:~mterry/duplicity/drop-u1
   - Ubuntu One is closing shop. So no need to support a u1 backend anymore.
 * Merged in lp:~mterry/duplicity/fix-drop-u1
   - Looks like when the drop-u1 branch got merged, its conflict got resolved
     badly. Here is the right version of backend.py to use (and also drops
     u1backend.py from POTFILES).
-* Merged in lp:~mterry/duplicity/drop-pexpect
-  - Drop our local copy of pexpect in favor of a system version.
-  - It's only used by the pexpect ssh backend (and if you're opting into that,
-    you probably can expect that you will need pexpect) and the tests.
-  - I've done a quick smoketest (backed up and restored using
-    --ssh-backend=pexpect) and it seemed to work fine with a modern version
-    of pexpect.
-* Merged in lp:~mterry/duplicity/2.6isms
-  - Here's a whole stack of minor syntax modernizations that will become
-    necessary in python3. They all work in python2.6.
-  - I've added a new test to keep us honest and prevent backsliding on these
-    modernizations. It runs 2to3 and will fail the test if 2to3 finds anything
-    that needs fixing (with a specific set of exceptions carved out).
-  - This branch has most of the easy 2to3 fixes, the ones with obvious and
-    safe syntax changes.
-  - We could just let 2to3 do them for us, but ideally we use 2to3 as little
-    as possible, since it doesn't always know how to solve a given problem.
-    I will propose a branch later that actually does use 2to3 to generate
-    python3 versions of duplicity if they are requested. But this is a first
-    step to clean up the code base.
-* Merged in lp:~mterry/duplicity/drop-static
-  - Drop static.py.
-  - This is some of the oldest code in duplicity! A bzr blame says it is
-    unmodified (except for whitespace / comment changes) since revision 1.
-  - But it's not needed anymore. Not really even since we updated to python2.4,
-    which introduced the @staticmethod decorator. So this branch drops it and
-    its test file.
 * Merged in lp:~mterry/duplicity/encode-for-print
   - Encode translated strings before passing them to 'print'.
   - The print command can only apparently handle bytes. So when we pass it
@@ -197,45 +187,6 @@
       in it is actually there.
 * Fixed bug #1312328 WebDAV backend can't understand 200 OK response to DELETE
   - Allow both 200 and 204 as valid response to delete
-* Merged in lp:~mterry/duplicity/py3-map-filter
-  - In py3, map and filter return iterable objects, not lists. So in each case
-    we use them, I've either imported the future version or switched to a list
-    comprehension if we really wanted a list.
-* Merged in lp:~mterry/duplicity/backend-unification
-  - Reorganize and simplify backend code.  Specifically:
-    - Formalize the expected API between backends and duplicity.  See the new
-      file duplicity/backends/README for the instructions I've given authors.
-    - Add some tests for our backend wrapper class as well as some tests for
-      individual backends.  For several backends that have some commands do all
-      the heavy lifting (hsi, tahoe, ftp), I've added fake little mock commands
-      so that we can test them locally.  This doesn't truly test our integration
-      with those commands, but at least lets us test the backend glue code.
-    - Removed a lot of duplicate and unused code which backends were using (or
-      not using).  This branch drops 700 lines of code (~20%)
-      in duplicity/backends!
-    - Simplified expectations of backends.  Our wrapper code now does all the
-      retrying, and all the exception handling.  Backends can 'fire and forget'
-      trusting our wrappers to give the user a reasonable error message.
-      Obviously, backends can also add more details and make nicer error
-      messages.  But they don't *have* to.
-    - Separate out the backend classes from our wrapper class.  Now there is no
-      possibility of namespace collision.  All our API methods use one
-      underscore.  Anything else (zero or two underscores) are for the backend
-      class's use.
-    - Added the concept of a 'backend prefix' which is used by par2 and gio
-      backends to provide generic support for "schema+" in urls -- like par2+
-      or gio+.  I've since marked the '--gio' flag as deprecated, in favor of
-      'gio+'.  Now you can even nest such backends like
-      par2+gio+file://blah/blah.
-    - The switch to control which cloudfiles backend had a typo.  I fixed this,
-      but I'm not sure I should have?  If we haven't had complaints, maybe we
-      can just drop the old backend.
-    - I manually tested all the backends we have (except hsi and tahoe -- but
-      those are simple wrappers around commands and I did test those via mocks
-      per above).  I also added a bunch more manual backend tests to
-      ./testing/manual/backendtest.py, which can now be run like the above to
-      test all the files you have configured in config.py or you can pass it a
-      URL which it will use for testing (useful for backend authors).
 * Merged in lp:~mterry/duplicity/encode-exceptions
   - Because exceptions often contain file paths, they have the same problem
     with Python 2.x's implicit decoding using the 'ascii' encoding that we've
@@ -243,9 +194,14 @@
     util.ufn() method to convert an exception to a unicode string and used it
     around the place.
   - Bugs fixed: 1289288, 1311176, 1313966
+<<<<<<< TREE
 * Applied expat fix from edso.  See answer #12 in
   https://answers.launchpad.net/duplicity/+question/248020
 
+=======
+* Applied expat fix from edso.  See answer #12 in
+  https://answers.launchpad.net/duplicity/+question/248020
+>>>>>>> MERGE-SOURCE
 
 
 New in v0.6.23 (2014/01/24)

=== modified file 'Changelog.GNU'
--- Changelog.GNU	2014-05-11 11:50:12 +0000
+++ Changelog.GNU	2014-06-14 13:58:30 +0000
@@ -1,3 +1,4 @@
+<<<<<<< TREE
 2014-05-11  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
 
     * Merged in lp:~mterry/duplicity/py2.6.0
@@ -20,6 +21,15 @@
     * Applied expat fix from edso.  See answer #12 in
       https://answers.launchpad.net/duplicity/+question/248020
 
+=======
+2014-05-09  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+    * Prep for 0.6.24.
+
+2014-05-07  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+    * Applied expat fix from edso.  See answer #12 in
+      https://answers.launchpad.net/duplicity/+question/248020
+
+>>>>>>> MERGE-SOURCE
 2014-04-30  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
 
     * Merged in lp:~mterry/duplicity/encode-exceptions
@@ -30,6 +40,7 @@
         around the place.
       - Bugs fixed: 1289288, 1311176, 1313966
 
+<<<<<<< TREE
 2014-04-29  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
 
     * Merged in lp:~mterry/duplicity/backend-unification
@@ -75,14 +86,20 @@
         we use them, I've either imported the future version or switched to a list
         comprehension if we really wanted a list.
 
+=======
+>>>>>>> MERGE-SOURCE
 2014-04-25  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
 
     * Fixed bug #1312328 WebDAV backend can't understand 200 OK response to DELETE
       - Allow both 200 and 204 as valid response to delete
 
 2014-04-20  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+<<<<<<< TREE
 
     * Merged in lp:~mterry/duplicity/more-test-reorg
+=======
+    * Merged in lp:~mterry/duplicity/more-test-reorg
+>>>>>>> MERGE-SOURCE
       - Here's another test reorganization / modernization branch. It does the
         following things:
         - Drop duplicity/misc.py. It is confusing to have both misc.py and util.py,
@@ -112,6 +129,7 @@
           in it is actually there.
 
 2014-04-19  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
+<<<<<<< TREE
 
     * Merged in lp:~mterry/duplicity/2.6isms
       - Here's a whole stack of minor syntax modernizations that will become
@@ -133,6 +151,8 @@
       - But it's not needed anymore. Not really even since we updated to python2.4,
         which introduced the @staticmethod decorator. So this branch drops it and
         its test file.
+=======
+>>>>>>> MERGE-SOURCE
     * Merged in lp:~mterry/duplicity/encode-for-print
       - Encode translated strings before passing them to 'print'.
       - The print command can only apparently handle bytes. So when we pass it
@@ -166,24 +186,12 @@
           setuptools dropped support for <2.6 (and tox 1.7 recently dropped <2.6)
         * Add setup.py hooks for test and sdist which are both standard targets
           (sdist just outsources to dist/makedist right now)
-    * Merged in lp:~mterry/duplicity/require-2.6
-      - Require at least Python 2.6.
-      - Our code base already requires 2.6, because 2.6-isms have crept in. Usually
-        because we or a contributor didn't think to test with 2.4. And frankly,
-        I'm not even sure how to test with 2.4 on a modern system.
     * Merged in lp:~mterry/duplicity/drop-u1
       - Ubuntu One is closing shop. So no need to support a u1 backend anymore.
     * Merged in lp:~mterry/duplicity/fix-drop-u1
       - Looks like when the drop-u1 branch got merged, its conflict got resolved
         badly. Here is the right version of backend.py to use (and also drops
         u1backend.py from POTFILES).
-    * Merged in lp:~mterry/duplicity/drop-pexpect
-      - Drop our local copy of pexpect in favor of a system version.
-      - It's only used by the pexpect ssh backend (and if you're opting into that,
-        you probably can expect that you will need pexpect) and the tests.
-      - I've done a quick smoketest (backed up and restored using
-        --ssh-backend=pexpect) and it seemed to work fine with a modern version
-        of pexpect.
 
 2014-03-10  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
 

=== modified file 'README'
--- README	2014-04-16 20:45:09 +0000
+++ README	2014-06-14 13:58:30 +0000
@@ -19,7 +19,7 @@
 
 REQUIREMENTS:
 
- * Python v2.6 or later
+ * Python v2.4 or later
  * librsync v0.9.6 or later
  * GnuPG v1.x for encryption
  * python-lockfile for concurrency locking
@@ -28,6 +28,7 @@
  * for ftp over SSL -- lftp version 3.7.15 or later
  * Boto 2.0 or later for single-processing S3 or GCS access (default)
  * Boto 2.1.1 or later for multi-processing S3 access
+ * Python v2.6 or later for multi-processing S3 access
  * Boto 2.7.0 or later for Glacier S3 access
 
 If you install from the source package, you will also need:

=== modified file 'bin/duplicity'
--- bin/duplicity	2014-05-11 11:50:12 +0000
+++ bin/duplicity	2014-06-14 13:58:30 +0000
@@ -289,6 +289,8 @@
 
     def validate_block(orig_size, dest_filename):
         info = backend.query_info([dest_filename])[dest_filename]
+        if 'size' not in info:
+            return # backend didn't know how to query size
         size = info['size']
         if size is None:
             return # error querying file
@@ -1041,7 +1043,7 @@
         log.Notice(_("Deleting local %s (not authoritative at backend).") % util.ufn(del_name))
         try:
             util.ignore_missing(os.unlink, del_name)
-        except Exception as e:
+        except Exception, e:
             log.Warn(_("Unable to delete %s: %s") % (util.ufn(del_name), util.uexc(e)))
 
     def copy_to_local(fn):
@@ -1504,18 +1506,18 @@
     # sys.exit() function.  Python handles this by
     # raising the SystemExit exception.  Cleanup code
     # goes here, if needed.
-    except SystemExit as e:
+    except SystemExit, e:
         # No traceback, just get out
         util.release_lockfile()
         sys.exit(e)
 
-    except KeyboardInterrupt as e:
+    except KeyboardInterrupt, e:
         # No traceback, just get out
         log.Info(_("INT intercepted...exiting."))
         util.release_lockfile()
         sys.exit(4)
 
-    except gpg.GPGError as e:
+    except gpg.GPGError, e:
         # For gpg errors, don't show an ugly stack trace by
         # default. But do with sufficient verbosity.
         util.release_lockfile()
@@ -1525,7 +1527,7 @@
                        log.ErrorCode.gpg_failed,
                        e.__class__.__name__)
 
-    except duplicity.errors.UserError as e:
+    except duplicity.errors.UserError, e:
         util.release_lockfile()
         # For user errors, don't show an ugly stack trace by
         # default. But do with sufficient verbosity.
@@ -1535,7 +1537,7 @@
                        log.ErrorCode.user_error,
                        e.__class__.__name__)
 
-    except duplicity.errors.BackendException as e:
+    except duplicity.errors.BackendException, e:
         util.release_lockfile()
         # For backend errors, don't show an ugly stack trace by
         # default. But do with sufficient verbosity.
@@ -1545,7 +1547,7 @@
                        log.ErrorCode.user_error,
                        e.__class__.__name__)
 
-    except Exception as e:
+    except Exception, e:
         util.release_lockfile()
         if "Forced assertion for testing" in str(e):
             log.FatalError(u"%s: %s" % (e.__class__.__name__, util.uexc(e)),

=== modified file 'bin/duplicity.1'
--- bin/duplicity.1	2014-04-17 19:34:23 +0000
+++ bin/duplicity.1	2014-06-14 13:58:30 +0000
@@ -51,7 +51,7 @@
 .SH REQUIREMENTS
 Duplicity requires a POSIX-like operating system with a
 .B python
-interpreter version 2.6+ installed.
+interpreter version 2.4+ installed.
 It is best used under GNU/Linux.
 
 Some backends also require additional components (probably available as packages for your specific platform):
@@ -120,9 +120,6 @@
 .B ssh pexpect backend
 .B sftp/scp client binaries
 OpenSSH - http://www.openssh.com/
-.br
-.B Python pexpect module
-- http://pexpect.sourceforge.net/pexpect.html
 .TP
 .BR "swift backend (OpenStack Object Storage)"
 .B Python swiftclient module

=== modified file 'bin/rdiffdir'
--- bin/rdiffdir	2014-05-11 11:50:12 +0000
+++ bin/rdiffdir	2014-06-14 13:58:30 +0000
@@ -64,7 +64,7 @@
                                        "include-filelist-stdin", "include-globbing-filelist",
                                        "include-regexp=", "max-blocksize", "null-separator",
                                        "verbosity=", "write-sig-to="])
-    except getopt.error as e:
+    except getopt.error, e:
         command_line_error("Bad command line option: %s" % (str(e),))
 
     for opt, arg in optlist:

=== modified file 'dist/duplicity.spec.template'
--- dist/duplicity.spec.template	2014-04-16 20:45:09 +0000
+++ dist/duplicity.spec.template	2014-06-14 13:58:30 +0000
@@ -10,8 +10,8 @@
 License: GPL
 Group: Applications/Archiving
 BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
-requires: librsync >= 0.9.6, %{PYTHON_NAME} >= 2.6, gnupg >= 1.0.6
-BuildPrereq: %{PYTHON_NAME}-devel >= 2.6, librsync-devel >= 0.9.6
+requires: librsync >= 0.9.6, %{PYTHON_NAME} >= 2.4, gnupg >= 1.0.6
+BuildPrereq: %{PYTHON_NAME}-devel >= 2.4, librsync-devel >= 0.9.6
 
 %description
 Duplicity incrementally backs up files and directory by encrypting

=== modified file 'duplicity/__init__.py'
--- duplicity/__init__.py	2014-04-16 20:45:09 +0000
+++ duplicity/__init__.py	2014-06-14 13:58:30 +0000
@@ -19,5 +19,12 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+import __builtin__
 import gettext
-gettext.install('duplicity', unicode=True, names=['ngettext'])
+
+t = gettext.translation('duplicity', fallback=True)
+t.install(unicode=True)
+
+# Once we can depend on python >=2.5, we can just use names='ngettext' above.
+# But for now, do the install manually.
+__builtin__.__dict__['ngettext'] = t.ungettext

=== modified file 'duplicity/_librsyncmodule.c'
--- duplicity/_librsyncmodule.c	2014-04-16 20:45:09 +0000
+++ duplicity/_librsyncmodule.c	2014-06-14 13:58:30 +0000
@@ -26,6 +26,15 @@
 #include <librsync.h>
 #define RS_JOB_BLOCKSIZE 65536
 
+/* Support Python 2.4 and 2.5 */
+#ifndef PyVarObject_HEAD_INIT
+    #define PyVarObject_HEAD_INIT(type, size) \
+        PyObject_HEAD_INIT(type) size,
+#endif
+#ifndef Py_TYPE
+    #define Py_TYPE(ob) (((PyObject*)(ob))->ob_type)
+#endif
+
 static PyObject *librsyncError;
 
 /* Sets python error string from result */

=== modified file 'duplicity/backend.py'
--- duplicity/backend.py	2014-05-11 05:43:32 +0000
+++ duplicity/backend.py	2014-06-14 13:58:30 +0000
@@ -24,7 +24,6 @@
 intended to be used by the backends themselves.
 """
 
-import errno
 import os
 import sys
 import socket
@@ -32,22 +31,20 @@
 import re
 import getpass
 import gettext
-import types
 import urllib
-import urlparse
+import urlparse_2_5 as urlparser
 
 from duplicity import dup_temp
+from duplicity import dup_threading
 from duplicity import file_naming
 from duplicity import globals
 from duplicity import log
-from duplicity import path
 from duplicity import progress
 from duplicity import util
 
 from duplicity.util import exception_traceback
 
-from duplicity.errors import BackendException
-from duplicity.errors import FatalBackendException
+from duplicity.errors import BackendException, FatalBackendError
 from duplicity.errors import TemporaryLoadException
 from duplicity.errors import ConflictingScheme
 from duplicity.errors import InvalidBackendURL
@@ -59,29 +56,8 @@
 # todo: this should really NOT be done here
 socket.setdefaulttimeout(globals.timeout)
 
+_forced_backend = None
 _backends = {}
-_backend_prefixes = {}
-
-# These URL schemes have a backend with a notion of an RFC "network location".
-# The 'file' and 's3+http' schemes should not be in this list.
-# 'http' and 'https' are not actually used for duplicity backend urls, but are needed
-# in order to properly support urls returned from some webdav servers. adding them here
-# is a hack. we should instead not stomp on the url parsing module to begin with.
-#
-# This looks similar to urlparse's 'uses_netloc' list, but urlparse doesn't use
-# that list for parsing, only creating urls.  And doesn't include our custom
-# schemes anyway.  So we keep our own here for our own use.
-uses_netloc = ['ftp',
-               'ftps',
-               'hsi',
-               's3',
-               'scp', 'ssh', 'sftp',
-               'webdav', 'webdavs',
-               'gdocs',
-               'http', 'https',
-               'imap', 'imaps',
-               'mega']
-
 
 def import_backends():
     """
@@ -100,6 +76,8 @@
         if fn.endswith("backend.py"):
             fn = fn[:-3]
             imp = "duplicity.backends.%s" % (fn,)
+            # ignore gio as it is explicitly loaded in commandline.parse_cmdline_options()
+            if fn == "giobackend": continue
             try:
                 __import__(imp)
                 res = "Succeeded"
@@ -112,6 +90,14 @@
             continue
 
 
+def force_backend(backend):
+    """
+    Forces the use of a particular backend, regardless of schema
+    """
+    global _forced_backend
+    _forced_backend = backend
+
+
 def register_backend(scheme, backend_factory):
     """
     Register a given backend factory responsible for URL:s with the
@@ -138,32 +124,6 @@
     _backends[scheme] = backend_factory
 
 
-def register_backend_prefix(scheme, backend_factory):
-    """
-    Register a given backend factory responsible for URL:s with the
-    given scheme prefix.
-
-    The backend must be a callable which, when called with a URL as
-    the single parameter, returns an object implementing the backend
-    protocol (i.e., a subclass of Backend).
-
-    Typically the callable will be the Backend subclass itself.
-
-    This function is not thread-safe and is intended to be called
-    during module importation or start-up.
-    """
-    global _backend_prefixes
-
-    assert callable(backend_factory), "backend factory must be callable"
-
-    if scheme in _backend_prefixes:
-        raise ConflictingScheme("the prefix %s already has a backend "
-                                "associated with it"
-                                "" % (scheme,))
-
-    _backend_prefixes[scheme] = backend_factory
-
-
 def is_backend_url(url_string):
     """
     @return Whether the given string looks like a backend URL.
@@ -177,9 +137,9 @@
         return False
 
 
-def get_backend_object(url_string):
+def get_backend(url_string):
     """
-    Find the right backend class instance for the given URL, or return None
+    Instantiate a backend suitable for the given URL, or return None
     if the given string looks like a local path rather than a URL.
 
     Raise InvalidBackendURL if the URL is not a valid URL.
@@ -187,45 +147,63 @@
     if not is_backend_url(url_string):
         return None
 
-    global _backends, _backend_prefixes
-
     pu = ParsedUrl(url_string)
+
+    # Implicit local path
     assert pu.scheme, "should be a backend url according to is_backend_url"
 
-    factory = None
-
-    for prefix in _backend_prefixes:
-        if url_string.startswith(prefix + '+'):
-            factory = _backend_prefixes[prefix]
-            pu = ParsedUrl(url_string.lstrip(prefix + '+'))
-            break
-
-    if factory is None:
-        if not pu.scheme in _backends:
-            raise UnsupportedBackendScheme(url_string)
-        else:
-            factory = _backends[pu.scheme]
-
-    try:
-        return factory(pu)
-    except ImportError:
-        raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1]))
-
-
-def get_backend(url_string):
-    """
-    Instantiate a backend suitable for the given URL, or return None
-    if the given string looks like a local path rather than a URL.
-
-    Raise InvalidBackendURL if the URL is not a valid URL.
-    """
-    if globals.use_gio:
-        url_string = 'gio+' + url_string
-    obj = get_backend_object(url_string)
-    if obj:
-        obj = BackendWrapper(obj)
-    return obj
-
+    global _backends, _forced_backend
+
+    if _forced_backend:
+        return _forced_backend(pu)
+    elif not pu.scheme in _backends:
+        raise UnsupportedBackendScheme(url_string)
+    else:
+        try:
+            return _backends[pu.scheme](pu)
+        except ImportError:
+            raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1]))
+
+_urlparser_initialized = False
+_urlparser_initialized_lock = dup_threading.threading_module().Lock()
+
+def _ensure_urlparser_initialized():
+    """
+    Ensure that the appropriate clobbering of variables in the
+    urlparser module has been done. In the future, the need for this
+    clobbering to begin with should preferably be eliminated.
+    """
+    def init():
+        global _urlparser_initialized
+
+        if not _urlparser_initialized:
+            # These URL schemes have a backend with a notion of an RFC "network location".
+            # The 'file' and 's3+http' schemes should not be in this list.
+            # 'http' and 'https' are not actually used for duplicity backend urls, but are needed
+            # in order to properly support urls returned from some webdav servers. adding them here
+            # is a hack. we should instead not stomp on the url parsing module to begin with.
+            #
+            # todo: eliminate the need for backend specific hacking here completely.
+            urlparser.uses_netloc = ['ftp',
+                                     'ftps',
+                                     'hsi',
+                                     'rsync',
+                                     's3',
+                                     'u1',
+                                     'scp', 'ssh', 'sftp',
+                                     'webdav', 'webdavs',
+                                     'gdocs',
+                                     'http', 'https',
+                                     'imap', 'imaps',
+                                     'mega']
+
+            # Do not transform or otherwise parse the URL path component.
+            urlparser.uses_query = []
+            urlparser.uses_fragm = []
+
+            _urlparser_initialized = True
+
+    dup_threading.with_lock(_urlparser_initialized_lock, init)
 
 class ParsedUrl:
     """
@@ -239,6 +217,7 @@
     Raise InvalidBackendURL on invalid URL's
     """
     def __init__(self, url_string):
+        _ensure_urlparser_initialized()
         self.url_string = url_string
 
         # Python < 2.6.5 still examine urlparse.uses_netlock when parsing urls,
@@ -251,7 +230,7 @@
         # problems here, so they will be caught early.
 
         try:
-            pu = urlparse.urlparse(url_string)
+            pu = urlparser.urlparse(url_string)
         except Exception:
             raise InvalidBackendURL("Syntax error in: %s" % url_string)
 
@@ -302,6 +281,7 @@
             if not ( self.scheme in ['rsync'] and re.search('::[^:]*$', self.url_string)):
                 raise InvalidBackendURL("Syntax error (port) in: %s A%s B%s C%s" % (url_string, (self.scheme in ['rsync']), re.search('::[^:]+$', self.netloc), self.netloc ) )
 
+<<<<<<< TREE
         # Our URL system uses two slashes more than urlparse's does when using
         # non-netloc URLs.  And we want to make sure that if urlparse assuming
         # a netloc where we don't want one, that we correct it.
@@ -313,18 +293,20 @@
             elif not self.path.startswith('//') and self.path.startswith('/'):
                 self.path = '//' + self.path
 
+=======
+>>>>>>> MERGE-SOURCE
         # This happens for implicit local paths.
         if not self.scheme:
             return
 
         # Our backends do not handle implicit hosts.
-        if self.scheme in uses_netloc and not self.hostname:
+        if self.scheme in urlparser.uses_netloc and not self.hostname:
             raise InvalidBackendURL("Missing hostname in a backend URL which "
                                     "requires an explicit hostname: %s"
                                     "" % (url_string))
 
         # Our backends do not handle implicit relative paths.
-        if self.scheme not in uses_netloc and not self.path.startswith('//'):
+        if self.scheme not in urlparser.uses_netloc and not self.path.startswith('//'):
             raise InvalidBackendURL("missing // - relative paths not supported "
                                     "for scheme %s: %s"
                                     "" % (self.scheme, url_string))
@@ -342,74 +324,165 @@
     # Replace the full network location with the stripped copy.
     return parsed_url.geturl().replace(parsed_url.netloc, straight_netloc, 1)
 
-def _get_code_from_exception(backend, operation, e):
-    if isinstance(e, BackendException) and e.code != log.ErrorCode.backend_error:
-        return e.code
-    elif hasattr(backend, '_error_code'):
-        return backend._error_code(operation, e) or log.ErrorCode.backend_error
-    elif hasattr(e, 'errno'):
-        # A few backends return such errors (local, paramiko, etc)
-        if e.errno == errno.EACCES:
-            return log.ErrorCode.backend_permission_denied
-        elif e.errno == errno.ENOENT:
-            return log.ErrorCode.backend_not_found
-        elif e.errno == errno.ENOSPC:
-            return log.ErrorCode.backend_no_space
-    return log.ErrorCode.backend_error
-
-def retry(operation, fatal=True):
-    # Decorators with arguments introduce a new level of indirection.  So we
-    # have to return a decorator function (which itself returns a function!)
-    def outer_retry(fn):
-        def inner_retry(self, *args):
-            for n in range(1, globals.num_retries + 1):
+
+# Decorator for backend operation functions to simplify writing one that
+# retries.  Make sure to add a keyword argument 'raise_errors' to your function
+# and if it is true, raise an exception on an error.  If false, fatal-log it.
+def retry(fn):
+    def iterate(*args):
+        for n in range(1, globals.num_retries):
+            try:
+                kwargs = {"raise_errors" : True}
+                return fn(*args, **kwargs)
+            except Exception, e:
+                log.Warn(_("Attempt %s failed: %s: %s")
+                         % (n, e.__class__.__name__, util.uexc(e)))
+                log.Debug(_("Backtrace of previous error: %s")
+                          % exception_traceback())
+                if isinstance(e, TemporaryLoadException):
+                    time.sleep(30) # wait longer before trying again
+                else:
+                    time.sleep(10) # wait a bit before trying again
+        # Now try one last time, but fatal-log instead of raising errors
+        kwargs = {"raise_errors" : False}
+        return fn(*args, **kwargs)
+    return iterate
+
+# same as above, a bit dumber and always dies fatally if last trial fails
+# hence no need for the raise_errors var ;), we really catch everything here
+# as we don't know what the underlying code comes up with and we really *do*
+# want to retry globals.num_retries times under all circumstances
+def retry_fatal(fn):
+    def _retry_fatal(self, *args):
+        try:
+            n = 0
+            for n in range(1, globals.num_retries):
                 try:
+                    self.retry_count = n
                     return fn(self, *args)
-                except FatalBackendException as e:
+                except FatalBackendError, e:
                     # die on fatal errors
                     raise e
-                except Exception as e:
+                except Exception, e:
                     # retry on anything else
+                    log.Warn(_("Attempt %s failed. %s: %s")
+                             % (n, e.__class__.__name__, util.uexc(e)))
                     log.Debug(_("Backtrace of previous error: %s")
                               % exception_traceback())
-                    at_end = n == globals.num_retries
-                    code = _get_code_from_exception(self.backend, operation, e)
-                    if code == log.ErrorCode.backend_not_found:
-                        # If we tried to do something, but the file just isn't there,
-                        # no need to retry.
-                        at_end = True
-                    if at_end and fatal:
-                        def make_filename(f):
-                            if isinstance(f, path.ROPath):
-                                return util.escape(f.name)
-                            else:
-                                return util.escape(f)
-                        extra = ' '.join([operation] + [make_filename(x) for x in args if x])
-                        log.FatalError(_("Giving up after %s attempts. %s: %s")
-                                       % (n, e.__class__.__name__,
-                                          util.uexc(e)), code=code, extra=extra)
-                    else:
-                        log.Warn(_("Attempt %s failed. %s: %s")
-                                 % (n, e.__class__.__name__, util.uexc(e)))
-                    if not at_end:
-                        if isinstance(e, TemporaryLoadException):
-                            time.sleep(90) # wait longer before trying again
-                        else:
-                            time.sleep(30) # wait a bit before trying again
-                        if hasattr(self.backend, '_retry_cleanup'):
-                            self.backend._retry_cleanup()
-
-        return inner_retry
-    return outer_retry
-
+                    time.sleep(10) # wait a bit before trying again
+        # final trial, die on exception
+            self.retry_count = n+1
+            return fn(self, *args)
+        except Exception, e:
+            log.Debug(_("Backtrace of previous error: %s")
+                        % exception_traceback())
+            log.FatalError(_("Giving up after %s attempts. %s: %s")
+                         % (self.retry_count, e.__class__.__name__, util.uexc(e)),
+                          log.ErrorCode.backend_error)
+        self.retry_count = 0
+
+    return _retry_fatal
 
 class Backend(object):
     """
-    See README in backends directory for information on how to write a backend.
+    Represents a generic duplicity backend, capable of storing and
+    retrieving files.
+
+    Concrete sub-classes are expected to implement:
+
+      - put
+      - get
+      - list
+      - delete
+      - close (if needed)
+
+    Optional:
+
+      - move
     """
+    
     def __init__(self, parsed_url):
         self.parsed_url = parsed_url
 
+    def put(self, source_path, remote_filename = None):
+        """
+        Transfer source_path (Path object) to remote_filename (string)
+
+        If remote_filename is None, get the filename from the last
+        path component of pathname.
+        """
+        raise NotImplementedError()
+
+    def move(self, source_path, remote_filename = None):
+        """
+        Move source_path (Path object) to remote_filename (string)
+
+        Same as put(), but unlinks source_path in the process.  This allows the
+        local backend to do this more efficiently using rename.
+        """
+        self.put(source_path, remote_filename)
+        source_path.delete()
+
+    def get(self, remote_filename, local_path):
+        """Retrieve remote_filename and place in local_path"""
+        raise NotImplementedError()
+
+    def list(self):
+        """
+        Return list of filenames (byte strings) present in backend
+        """
+        def tobytes(filename):
+            "Convert a (maybe unicode) filename to bytes"
+            if isinstance(filename, unicode):
+                # There shouldn't be any encoding errors for files we care
+                # about, since duplicity filenames are ascii.  But user files
+                # may be in the same directory.  So just replace characters.
+                return filename.encode(sys.getfilesystemencoding(), 'replace')
+            else:
+                return filename
+
+        if hasattr(self, '_list'):
+            # Make sure that duplicity internals only ever see byte strings
+            # for filenames, no matter what the backend thinks it is talking.
+            return map(tobytes, self._list())
+        else:
+            raise NotImplementedError()
+
+    def delete(self, filename_list):
+        """
+        Delete each filename in filename_list, in order if possible.
+        """
+        raise NotImplementedError()
+
+    # Should never cause FatalError.
+    # Returns a dictionary of dictionaries.  The outer dictionary maps
+    # filenames to metadata dictionaries.  Supported metadata are:
+    #
+    # 'size': if >= 0, size of file
+    #         if -1, file is not found
+    #         if None, error querying file
+    #
+    # Returned dictionary is guaranteed to contain a metadata dictionary for
+    # each filename, but not all metadata are guaranteed to be present.
+    def query_info(self, filename_list, raise_errors=True):
+        """
+        Return metadata about each filename in filename_list
+        """
+        info = {}
+        if hasattr(self, '_query_list_info'):
+            info = self._query_list_info(filename_list)
+        elif hasattr(self, '_query_file_info'):
+            for filename in filename_list:
+                info[filename] = self._query_file_info(filename)
+
+        # Fill out any missing entries (may happen if backend has no support
+        # or its query_list support is lazy)
+        for filename in filename_list:
+            if filename not in info:
+                info[filename] = {}
+
+        return info
+
     """ use getpass by default, inherited backends may overwrite this behaviour """
     use_getpass = True
 
@@ -448,7 +521,27 @@
         else:
             return commandline
 
-    def __subprocess_popen(self, commandline):
+    """
+    DEPRECATED:
+    run_command(_persist) - legacy wrappers for subprocess_popen(_persist)
+    """
+    def run_command(self, commandline):
+        return self.subprocess_popen(commandline)
+    def run_command_persist(self, commandline):
+        return self.subprocess_popen_persist(commandline)
+
+    """
+    DEPRECATED:
+    popen(_persist) - legacy wrappers for subprocess_popen(_persist)
+    """
+    def popen(self, commandline):
+        result, stdout, stderr = self.subprocess_popen(commandline)
+        return stdout
+    def popen_persist(self, commandline):
+        result, stdout, stderr = self.subprocess_popen_persist(commandline)
+        return stdout
+
+    def _subprocess_popen(self, commandline):
         """
         For internal use.
         Execute the given command line, interpreted as a shell command.
@@ -460,10 +553,6 @@
 
         return p.returncode, stdout, stderr
 
-    """ a dictionary for breaking exceptions, syntax is
-        { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """
-    popen_breaks = {}
-
     def subprocess_popen(self, commandline):
         """
         Execute the given command line with error check.
@@ -473,179 +562,54 @@
         """
         private = self.munge_password(commandline)
         log.Info(_("Reading results of '%s'") % private)
-        result, stdout, stderr = self.__subprocess_popen(commandline)
+        result, stdout, stderr = self._subprocess_popen(commandline)
         if result != 0:
+            raise BackendException("Error running '%s'" % private)
+        return result, stdout, stderr
+
+    """ a dictionary for persist breaking exceptions, syntax is
+        { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """
+    popen_persist_breaks = {}
+
+    def subprocess_popen_persist(self, commandline):
+        """
+        Execute the given command line with error check.
+        Retries globals.num_retries times with 30s delay.
+        Returns int Exitcode, string StdOut, string StdErr
+
+        Raise a BackendException on failure.
+        """
+        private = self.munge_password(commandline)
+
+        for n in range(1, globals.num_retries+1):
+            # sleep before retry
+            if n > 1:
+                time.sleep(30)
+            log.Info(_("Reading results of '%s'") % private)
+            result, stdout, stderr = self._subprocess_popen(commandline)
+            if result == 0:
+                return result, stdout, stderr
+
             try:
                 m = re.search("^\s*([\S]+)", commandline)
                 cmd = m.group(1)
-                ignores = self.popen_breaks[ cmd ]
+                ignores = self.popen_persist_breaks[ cmd ]
                 ignores.index(result)
                 """ ignore a predefined set of error codes """
                 return 0, '', ''
             except (KeyError, ValueError):
-                raise BackendException("Error running '%s': returned %d, with output:\n%s" %
-                                       (private, result, stdout + '\n' + stderr))
-        return result, stdout, stderr
-
-
-class BackendWrapper(object):
-    """
-    Represents a generic duplicity backend, capable of storing and
-    retrieving files.
-    """
-    
-    def __init__(self, backend):
-        self.backend = backend
-
-    def __do_put(self, source_path, remote_filename):
-        if hasattr(self.backend, '_put'):
-            log.Info(_("Writing %s") % util.ufn(remote_filename))
-            self.backend._put(source_path, remote_filename)
-        else:
-            raise NotImplementedError()
-
-    @retry('put', fatal=True)
-    def put(self, source_path, remote_filename=None):
-        """
-        Transfer source_path (Path object) to remote_filename (string)
-
-        If remote_filename is None, get the filename from the last
-        path component of pathname.
-        """
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-        self.__do_put(source_path, remote_filename)
-
-    @retry('move', fatal=True)
-    def move(self, source_path, remote_filename=None):
-        """
-        Move source_path (Path object) to remote_filename (string)
-
-        Same as put(), but unlinks source_path in the process.  This allows the
-        local backend to do this more efficiently using rename.
-        """
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-        if hasattr(self.backend, '_move'):
-            if self.backend._move(source_path, remote_filename) is not False:
-                source_path.setdata()
-                return
-        self.__do_put(source_path, remote_filename)
-        source_path.delete()
-
-    @retry('get', fatal=True)
-    def get(self, remote_filename, local_path):
-        """Retrieve remote_filename and place in local_path"""
-        if hasattr(self.backend, '_get'):
-            self.backend._get(remote_filename, local_path)
-            if not local_path.exists():
-                raise BackendException(_("File %s not found locally after get "
-                                         "from backend") % util.ufn(local_path.name))
-            local_path.setdata()
-        else:
-            raise NotImplementedError()
-
-    @retry('list', fatal=True)
-    def list(self):
-        """
-        Return list of filenames (byte strings) present in backend
-        """
-        def tobytes(filename):
-            "Convert a (maybe unicode) filename to bytes"
-            if isinstance(filename, unicode):
-                # There shouldn't be any encoding errors for files we care
-                # about, since duplicity filenames are ascii.  But user files
-                # may be in the same directory.  So just replace characters.
-                return filename.encode(sys.getfilesystemencoding(), 'replace')
-            else:
-                return filename
-
-        if hasattr(self.backend, '_list'):
-            # Make sure that duplicity internals only ever see byte strings
-            # for filenames, no matter what the backend thinks it is talking.
-            return [tobytes(x) for x in self.backend._list()]
-        else:
-            raise NotImplementedError()
-
-    def delete(self, filename_list):
-        """
-        Delete each filename in filename_list, in order if possible.
-        """
-        assert type(filename_list) is not types.StringType
-        if hasattr(self.backend, '_delete_list'):
-            self._do_delete_list(filename_list)
-        elif hasattr(self.backend, '_delete'):
-            for filename in filename_list:
-                self._do_delete(filename)
-        else:
-            raise NotImplementedError()
-
-    @retry('delete', fatal=False)
-    def _do_delete_list(self, filename_list):
-        self.backend._delete_list(filename_list)
-
-    @retry('delete', fatal=False)
-    def _do_delete(self, filename):
-        self.backend._delete(filename)
-
-    # Should never cause FatalError.
-    # Returns a dictionary of dictionaries.  The outer dictionary maps
-    # filenames to metadata dictionaries.  Supported metadata are:
-    #
-    # 'size': if >= 0, size of file
-    #         if -1, file is not found
-    #         if None, error querying file
-    #
-    # Returned dictionary is guaranteed to contain a metadata dictionary for
-    # each filename, and all metadata are guaranteed to be present.
-    def query_info(self, filename_list):
-        """
-        Return metadata about each filename in filename_list
-        """
-        info = {}
-        if hasattr(self.backend, '_query_list'):
-            info = self._do_query_list(filename_list)
-            if info is None:
-                info = {}
-        elif hasattr(self.backend, '_query'):
-            for filename in filename_list:
-                info[filename] = self._do_query(filename)
-
-        # Fill out any missing entries (may happen if backend has no support
-        # or its query_list support is lazy)
-        for filename in filename_list:
-            if filename not in info or info[filename] is None:
-                info[filename] = {}
-            for metadata in ['size']:
-                info[filename].setdefault(metadata, None)
-
-        return info
-
-    @retry('query', fatal=False)
-    def _do_query_list(self, filename_list):
-        info = self.backend._query_list(filename_list)
-        if info is None:
-            info = {}
-        return info
-
-    @retry('query', fatal=False)
-    def _do_query(self, filename):
-        try:
-            return self.backend._query(filename)
-        except Exception as e:
-            code = _get_code_from_exception(self.backend, 'query', e)
-            if code == log.ErrorCode.backend_not_found:
-                return {'size': -1}
-            else:
-                raise e
-
-    def close(self):
-        """
-        Close the backend, releasing any resources held and
-        invalidating any file objects obtained from the backend.
-        """
-        if hasattr(self.backend, '_close'):
-            self.backend._close()
+                pass
+
+            log.Warn(ngettext("Running '%s' failed with code %d (attempt #%d)",
+                              "Running '%s' failed with code %d (attempt #%d)", n) %
+                               (private, result, n))
+            if stdout or stderr:
+                    log.Warn(_("Error is:\n%s") % stderr + (stderr and stdout and "\n") + stdout)
+
+        log.Warn(ngettext("Giving up trying to execute '%s' after %d attempt",
+                          "Giving up trying to execute '%s' after %d attempts",
+                          globals.num_retries) % (private, globals.num_retries))
+        raise BackendException("Error running '%s'" % private)
 
     def get_fileobj_read(self, filename, parseresults = None):
         """
@@ -662,6 +626,37 @@
         tdp.setdata()
         return tdp.filtered_open_with_delete("rb")
 
+    def get_fileobj_write(self, filename,
+                          parseresults = None,
+                          sizelist = None):
+        """
+        Return fileobj opened for writing, which will cause the file
+        to be written to the backend on close().
+
+        The file will be encoded as specified in parseresults (or as
+        read from the filename), and stored in a temp file until it
+        can be copied over and deleted.
+
+        If sizelist is not None, it should be set to an empty list.
+        The number of bytes will be inserted into the list.
+        """
+        if not parseresults:
+            parseresults = file_naming.parse(filename)
+            assert parseresults, u"Filename %s not correctly parsed" % util.ufn(filename)
+        tdp = dup_temp.new_tempduppath(parseresults)
+
+        def close_file_hook():
+            """This is called when returned fileobj is closed"""
+            self.put(tdp, filename)
+            if sizelist is not None:
+                tdp.setdata()
+                sizelist.append(tdp.getsize())
+            tdp.delete()
+
+        fh = dup_temp.FileobjHooked(tdp.filtered_open("wb"))
+        fh.addhook(close_file_hook)
+        return fh
+
     def get_data(self, filename, parseresults = None):
         """
         Retrieve a file from backend, process it, return contents.
@@ -670,3 +665,18 @@
         buf = fin.read()
         assert not fin.close()
         return buf
+
+    def put_data(self, buffer, filename, parseresults = None):
+        """
+        Put buffer into filename on backend after processing.
+        """
+        fout = self.get_fileobj_write(filename, parseresults)
+        fout.write(buffer)
+        assert not fout.close()
+
+    def close(self):
+        """
+        Close the backend, releasing any resources held and
+        invalidating any file objects obtained from the backend.
+        """
+        pass

=== renamed file 'duplicity/backends/README' => 'duplicity/backends/README.THIS'
=== modified file 'duplicity/backends/_boto_multi.py'
--- duplicity/backends/_boto_multi.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/_boto_multi.py	2014-06-14 13:58:30 +0000
@@ -33,8 +33,8 @@
 from duplicity.filechunkio import FileChunkIO
 from duplicity import progress
 
-from ._boto_single import BotoBackend as BotoSingleBackend
-from ._boto_single import get_connection
+from _boto_single import BotoBackend as BotoSingleBackend
+from _boto_single import get_connection
 
 BOTO_MIN_VERSION = "2.1.1"
 
@@ -63,7 +63,7 @@
             try:
                 args = self.queue.get(True, 1)
                 progress.report_transfer(args[0], args[1])
-            except Queue.Empty as e:
+            except Queue.Empty, e:
                 pass
 
 
@@ -98,8 +98,8 @@
 
         self._pool = multiprocessing.Pool(processes=number_of_procs)
 
-    def _close(self):
-        BotoSingleBackend._close(self)
+    def close(self):
+        BotoSingleBackend.close(self)
         log.Debug("Closing pool")
         self._pool.terminate()
         self._pool.join()
@@ -210,7 +210,7 @@
             conn = None
             bucket = None
             del conn
-        except Exception as e:
+        except Exception, e:
             traceback.print_exc()
             if num_retries:
                 log.Debug("%s: Upload of chunk %d failed. Retrying %d more times..." % (

=== modified file 'duplicity/backends/_boto_single.py'
--- duplicity/backends/_boto_single.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/_boto_single.py	2014-06-14 13:58:30 +0000
@@ -25,7 +25,9 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import FatalBackendException, BackendException
+from duplicity.errors import * #@UnusedWildImport
+from duplicity.util import exception_traceback
+from duplicity.backend import retry
 from duplicity import progress
 
 BOTO_MIN_VERSION = "2.1.1"
@@ -135,7 +137,7 @@
         # This folds the null prefix and all null parts, which means that:
         #  //MyBucket/ and //MyBucket are equivalent.
         #  //MyBucket//My///My/Prefix/ and //MyBucket/My/Prefix are equivalent.
-        self.url_parts = [x for x in parsed_url.path.split('/') if x != '']
+        self.url_parts = filter(lambda x: x != '', parsed_url.path.split('/'))
 
         if self.url_parts:
             self.bucket_name = self.url_parts.pop(0)
@@ -161,7 +163,7 @@
         self.resetConnection()
         self._listed_keys = {}
 
-    def _close(self):
+    def close(self):
         del self._listed_keys
         self._listed_keys = {}
         self.bucket = None
@@ -183,69 +185,137 @@
         self.conn = get_connection(self.scheme, self.parsed_url, self.storage_uri)
         self.bucket = self.conn.lookup(self.bucket_name)
 
-    def _retry_cleanup(self):
-        self.resetConnection()
-
-    def _put(self, source_path, remote_filename):
+    def put(self, source_path, remote_filename=None):
         from boto.s3.connection import Location
         if globals.s3_european_buckets:
             if not globals.s3_use_new_style:
-                raise FatalBackendException("European bucket creation was requested, but not new-style "
-                                            "bucket addressing (--s3-use-new-style)",
-                                            code=log.ErrorCode.s3_bucket_not_style)
-
-        if self.bucket is None:
+                log.FatalError("European bucket creation was requested, but not new-style "
+                               "bucket addressing (--s3-use-new-style)",
+                               log.ErrorCode.s3_bucket_not_style)
+        #Network glitch may prevent first few attempts of creating/looking up a bucket
+        for n in range(1, globals.num_retries+1):
+            if self.bucket:
+                break
+            if n > 1:
+                time.sleep(30)
+                self.resetConnection()
             try:
-                self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)
-            except Exception as e:
-                if "NoSuchBucket" in str(e):
-                    if globals.s3_european_buckets:
-                        self.bucket = self.conn.create_bucket(self.bucket_name,
-                                                              location=Location.EU)
+                try:
+                    self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)
+                except Exception, e:
+                    if "NoSuchBucket" in str(e):
+                        if globals.s3_european_buckets:
+                            self.bucket = self.conn.create_bucket(self.bucket_name,
+                                                                  location=Location.EU)
+                        else:
+                            self.bucket = self.conn.create_bucket(self.bucket_name)
                     else:
-                        self.bucket = self.conn.create_bucket(self.bucket_name)
+                        raise e
+            except Exception, e:
+                log.Warn("Failed to create bucket (attempt #%d) '%s' failed (reason: %s: %s)"
+                         "" % (n, self.bucket_name,
+                               e.__class__.__name__,
+                               str(e)))
+
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+        key = self.bucket.new_key(self.key_prefix + remote_filename)
+
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
+                time.sleep(10)
+
+            if globals.s3_use_rrs:
+                storage_class = 'REDUCED_REDUNDANCY'
+            else:
+                storage_class = 'STANDARD'
+            log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
+            try:
+                if globals.s3_use_sse:
+                    headers = {
+                    'Content-Type': 'application/octet-stream',
+                    'x-amz-storage-class': storage_class,
+                    'x-amz-server-side-encryption': 'AES256'
+                }
                 else:
-                    raise
-
-        key = self.bucket.new_key(self.key_prefix + remote_filename)
-
-        if globals.s3_use_rrs:
-            storage_class = 'REDUCED_REDUNDANCY'
-        else:
-            storage_class = 'STANDARD'
-        log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
-        if globals.s3_use_sse:
-            headers = {
-            'Content-Type': 'application/octet-stream',
-            'x-amz-storage-class': storage_class,
-            'x-amz-server-side-encryption': 'AES256'
-        }
-        else:
-            headers = {
-            'Content-Type': 'application/octet-stream',
-            'x-amz-storage-class': storage_class
-        }
-        
-        upload_start = time.time()
-        self.upload(source_path.name, key, headers)
-        upload_end = time.time()
-        total_s = abs(upload_end-upload_start) or 1  # prevent a zero value!
-        rough_upload_speed = os.path.getsize(source_path.name)/total_s
-        log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
-
-    def _get(self, remote_filename, local_path):
+                    headers = {
+                    'Content-Type': 'application/octet-stream',
+                    'x-amz-storage-class': storage_class
+                }
+                
+                upload_start = time.time()
+                self.upload(source_path.name, key, headers)
+                upload_end = time.time()
+                total_s = abs(upload_end-upload_start) or 1  # prevent a zero value!
+                rough_upload_speed = os.path.getsize(source_path.name)/total_s
+                self.resetConnection()
+                log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
+                return
+            except Exception, e:
+                log.Warn("Upload '%s/%s' failed (attempt #%d, reason: %s: %s)"
+                         "" % (self.straight_url,
+                               remote_filename,
+                               n,
+                               e.__class__.__name__,
+                               str(e)))
+                log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
+                self.resetConnection()
+        log.Warn("Giving up trying to upload %s/%s after %d attempts" %
+                 (self.straight_url, remote_filename, globals.num_retries))
+        raise BackendException("Error uploading %s/%s" % (self.straight_url, remote_filename))
+
+    def get(self, remote_filename, local_path):
         key_name = self.key_prefix + remote_filename
         self.pre_process_download(remote_filename, wait=True)
         key = self._listed_keys[key_name]
-        self.resetConnection()
-        key.get_contents_to_filename(local_path.name)
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry (new connection to a **hopefully** new host, so no need to wait so long)
+                time.sleep(10)
+            log.Info("Downloading %s/%s" % (self.straight_url, remote_filename))
+            try:
+                self.resetConnection()
+                key.get_contents_to_filename(local_path.name)
+                local_path.setdata()
+                return
+            except Exception, e:
+                log.Warn("Download %s/%s failed (attempt #%d, reason: %s: %s)"
+                         "" % (self.straight_url,
+                               remote_filename,
+                               n,
+                               e.__class__.__name__,
+                               str(e)), 1)
+                log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
+
+        log.Warn("Giving up trying to download %s/%s after %d attempts" %
+                (self.straight_url, remote_filename, globals.num_retries))
+        raise BackendException("Error downloading %s/%s" % (self.straight_url, remote_filename))
 
     def _list(self):
         if not self.bucket:
             raise BackendException("No connection to backend")
-        return self.list_filenames_in_bucket()
-
-    def list_filenames_in_bucket(self):
+
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry
+                time.sleep(30)
+                self.resetConnection()
+            log.Info("Listing %s" % self.straight_url)
+            try:
+                return self._list_filenames_in_bucket()
+            except Exception, e:
+                log.Warn("List %s failed (attempt #%d, reason: %s: %s)"
+                         "" % (self.straight_url,
+                               n,
+                               e.__class__.__name__,
+                               str(e)), 1)
+                log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
+        log.Warn("Giving up trying to list %s after %d attempts" %
+                (self.straight_url, globals.num_retries))
+        raise BackendException("Error listng %s" % self.straight_url)
+
+    def _list_filenames_in_bucket(self):
         # We add a 'd' to the prefix to make sure it is not null (for boto) and
         # to optimize the listing of our filenames, which always begin with 'd'.
         # This will cause a failure in the regression tests as below:
@@ -266,37 +336,76 @@
                 pass
         return filename_list
 
-    def _delete(self, filename):
-        self.bucket.delete_key(self.key_prefix + filename)
+    def delete(self, filename_list):
+        for filename in filename_list:
+            self.bucket.delete_key(self.key_prefix + filename)
+            log.Debug("Deleted %s/%s" % (self.straight_url, filename))
 
-    def _query(self, filename):
-        key = self.bucket.lookup(self.key_prefix + filename)
-        if key is None:
-            return {'size': -1}
-        return {'size': key.size}
+    @retry
+    def _query_file_info(self, filename, raise_errors=False):
+        try:
+            key = self.bucket.lookup(self.key_prefix + filename)
+            if key is None:
+                return {'size': -1}
+            return {'size': key.size}
+        except Exception, e:
+            log.Warn("Query %s/%s failed: %s"
+                     "" % (self.straight_url,
+                           filename,
+                           str(e)))
+            self.resetConnection()
+            if raise_errors:
+                raise e
+            else:
+                return {'size': None}
 
     def upload(self, filename, key, headers):
-        key.set_contents_from_filename(filename, headers,
-                                       cb=progress.report_transfer,
-                                       num_cb=(max(2, 8 * globals.volsize / (1024 * 1024)))
-                                       )  # Max num of callbacks = 8 times x megabyte
-        key.close()
+            key.set_contents_from_filename(filename, headers,
+                                           cb=progress.report_transfer,
+                                           num_cb=(max(2, 8 * globals.volsize / (1024 * 1024)))
+                                           )  # Max num of callbacks = 8 times x megabyte
+            key.close()
 
-    def pre_process_download(self, remote_filename, wait=False):
+    def pre_process_download(self, files_to_download, wait=False):
         # Used primarily to move files in Glacier to S3
-        key_name = self.key_prefix + remote_filename
-        if not self._listed_keys.get(key_name, False):
-            self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
-        key = self._listed_keys[key_name]
+        if isinstance(files_to_download, basestring):
+            files_to_download = [files_to_download]
 
-        if key.storage_class == "GLACIER":
-            # We need to move the file out of glacier
-            if not self.bucket.get_key(key.key).ongoing_restore:
-                log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
-                key.restore(days=1)  # Shouldn't need this again after 1 day
-            if wait:
-                log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
-                while self.bucket.get_key(key.key).ongoing_restore:
-                    time.sleep(60)
+        for remote_filename in files_to_download:
+            success = False
+            for n in range(1, globals.num_retries+1):
+                if n > 1:
+                    # sleep before retry (new connection to a **hopefully** new host, so no need to wait so long)
+                    time.sleep(10)
                     self.resetConnection()
-                log.Info("File %s was successfully restored from Glacier" % remote_filename)
+                try:
+                    key_name = self.key_prefix + remote_filename
+                    if not self._listed_keys.get(key_name, False):
+                        self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
+                    key = self._listed_keys[key_name]
+
+                    if key.storage_class == "GLACIER":
+                        # We need to move the file out of glacier
+                        if not self.bucket.get_key(key.key).ongoing_restore:
+                            log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
+                            key.restore(days=1)  # Shouldn't need this again after 1 day
+                        if wait:
+                            log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
+                            while self.bucket.get_key(key.key).ongoing_restore:
+                                time.sleep(60)
+                                self.resetConnection()
+                            log.Info("File %s was successfully restored from Glacier" % remote_filename)
+                    success = True
+                    break
+                except Exception, e:
+                    log.Warn("Restoration from Glacier for file %s/%s failed (attempt #%d, reason: %s: %s)"
+                             "" % (self.straight_url,
+                                   remote_filename,
+                                   n,
+                                   e.__class__.__name__,
+                                   str(e)), 1)
+                    log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
+            if not success:
+                log.Warn("Giving up trying to restore %s/%s after %d attempts" %
+                        (self.straight_url, remote_filename, globals.num_retries))
+                raise BackendException("Error restoring %s/%s from Glacier to S3" % (self.straight_url, remote_filename))

=== modified file 'duplicity/backends/_cf_cloudfiles.py'
--- duplicity/backends/_cf_cloudfiles.py	2014-04-29 23:49:01 +0000
+++ duplicity/backends/_cf_cloudfiles.py	2014-06-14 13:58:30 +0000
@@ -19,11 +19,14 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
+import time
 
 import duplicity.backend
+from duplicity import globals
 from duplicity import log
-from duplicity import util
-from duplicity.errors import BackendException
+from duplicity.errors import * #@UnusedWildImport
+from duplicity.util import exception_traceback
+from duplicity.backend import retry
 
 class CloudFilesBackend(duplicity.backend.Backend):
     """
@@ -41,17 +44,17 @@
         self.resp_exc = ResponseError
         conn_kwargs = {}
 
-        if 'CLOUDFILES_USERNAME' not in os.environ:
+        if not os.environ.has_key('CLOUDFILES_USERNAME'):
             raise BackendException('CLOUDFILES_USERNAME environment variable'
                                    'not set.')
 
-        if 'CLOUDFILES_APIKEY' not in os.environ:
+        if not os.environ.has_key('CLOUDFILES_APIKEY'):
             raise BackendException('CLOUDFILES_APIKEY environment variable not set.')
 
         conn_kwargs['username'] = os.environ['CLOUDFILES_USERNAME']
         conn_kwargs['api_key'] = os.environ['CLOUDFILES_APIKEY']
 
-        if 'CLOUDFILES_AUTHURL' in os.environ:
+        if os.environ.has_key('CLOUDFILES_AUTHURL'):
             conn_kwargs['authurl'] = os.environ['CLOUDFILES_AUTHURL']
         else:
             conn_kwargs['authurl'] = consts.default_authurl
@@ -60,43 +63,130 @@
 
         try:
             conn = Connection(**conn_kwargs)
-        except Exception as e:
+        except Exception, e:
             log.FatalError("Connection failed, please check your credentials: %s %s"
-                           % (e.__class__.__name__, util.uexc(e)),
+                           % (e.__class__.__name__, str(e)),
                            log.ErrorCode.connection_failed)
         self.container = conn.create_container(container)
 
-    def _error_code(self, operation, e):
+    def put(self, source_path, remote_filename = None):
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+
+        for n in range(1, globals.num_retries+1):
+            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
+            try:
+                sobject = self.container.create_object(remote_filename)
+                sobject.load_from_filename(source_path.name)
+                return
+            except self.resp_exc, error:
+                log.Warn("Upload of '%s' failed (attempt %d): CloudFiles returned: %s %s"
+                         % (remote_filename, n, error.status, error.reason))
+            except Exception, e:
+                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
+                        % (remote_filename, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up uploading '%s' after %s attempts"
+                 % (remote_filename, globals.num_retries))
+        raise BackendException("Error uploading '%s'" % remote_filename)
+
+    def get(self, remote_filename, local_path):
+        for n in range(1, globals.num_retries+1):
+            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
+            try:
+                sobject = self.container.create_object(remote_filename)
+                f = open(local_path.name, 'w')
+                for chunk in sobject.stream():
+                    f.write(chunk)
+                local_path.setdata()
+                return
+            except self.resp_exc, resperr:
+                log.Warn("Download of '%s' failed (attempt %s): CloudFiles returned: %s %s"
+                         % (remote_filename, n, resperr.status, resperr.reason))
+            except Exception, e:
+                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
+                         % (remote_filename, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up downloading '%s' after %s attempts"
+                 % (remote_filename, globals.num_retries))
+        raise BackendException("Error downloading '%s/%s'"
+                               % (self.container, remote_filename))
+
+    def _list(self):
+        for n in range(1, globals.num_retries+1):
+            log.Info("Listing '%s'" % (self.container))
+            try:
+                # Cloud Files will return a max of 10,000 objects.  We have
+                # to make multiple requests to get them all.
+                objs = self.container.list_objects()
+                keys = objs
+                while len(objs) == 10000:
+                    objs = self.container.list_objects(marker=keys[-1])
+                    keys += objs
+                return keys
+            except self.resp_exc, resperr:
+                log.Warn("Listing of '%s' failed (attempt %s): CloudFiles returned: %s %s"
+                         % (self.container, n, resperr.status, resperr.reason))
+            except Exception, e:
+                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
+                         % (self.container, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up listing of '%s' after %s attempts"
+                 % (self.container, globals.num_retries))
+        raise BackendException("Error listing '%s'"
+                               % (self.container))
+
+    def delete_one(self, remote_filename):
+        for n in range(1, globals.num_retries+1):
+            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
+            try:
+                self.container.delete_object(remote_filename)
+                return
+            except self.resp_exc, resperr:
+                if n > 1 and resperr.status == 404:
+                    # We failed on a timeout, but delete succeeded on the server
+                    log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename )
+                    return
+                log.Warn("Delete of '%s' failed (attempt %s): CloudFiles returned: %s %s"
+                         % (remote_filename, n, resperr.status, resperr.reason))
+            except Exception, e:
+                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
+                         % (remote_filename, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up deleting '%s' after %s attempts"
+                 % (remote_filename, globals.num_retries))
+        raise BackendException("Error deleting '%s/%s'"
+                               % (self.container, remote_filename))
+
+    def delete(self, filename_list):
+        for file in filename_list:
+            self.delete_one(file)
+            log.Debug("Deleted '%s/%s'" % (self.container, file))
+
+    @retry
+    def _query_file_info(self, filename, raise_errors=False):
         from cloudfiles.errors import NoSuchObject
-        if isinstance(e, NoSuchObject):
-            return log.ErrorCode.backend_not_found
-        elif isinstance(e, self.resp_exc):
-            if e.status == 404:
-                return log.ErrorCode.backend_not_found
-
-    def _put(self, source_path, remote_filename):
-        sobject = self.container.create_object(remote_filename)
-        sobject.load_from_filename(source_path.name)
-
-    def _get(self, remote_filename, local_path):
-        sobject = self.container.create_object(remote_filename)
-        with open(local_path.name, 'wb') as f:
-            for chunk in sobject.stream():
-                f.write(chunk)
-
-    def _list(self):
-        # Cloud Files will return a max of 10,000 objects.  We have
-        # to make multiple requests to get them all.
-        objs = self.container.list_objects()
-        keys = objs
-        while len(objs) == 10000:
-            objs = self.container.list_objects(marker=keys[-1])
-            keys += objs
-        return keys
-
-    def _delete(self, filename):
-        self.container.delete_object(filename)
-
-    def _query(self, filename):
-        sobject = self.container.get_object(filename)
-        return {'size': sobject.size}
+        try:
+            sobject = self.container.get_object(filename)
+            return {'size': sobject.size}
+        except NoSuchObject:
+            return {'size': -1}
+        except Exception, e:
+            log.Warn("Error querying '%s/%s': %s"
+                     "" % (self.container,
+                           filename,
+                           str(e)))
+            if raise_errors:
+                raise e
+            else:
+                return {'size': None}
+
+duplicity.backend.register_backend("cf+http", CloudFilesBackend)

=== modified file 'duplicity/backends/_cf_pyrax.py'
--- duplicity/backends/_cf_pyrax.py	2014-04-29 23:49:01 +0000
+++ duplicity/backends/_cf_pyrax.py	2014-06-14 13:58:30 +0000
@@ -19,12 +19,14 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
+import time
 
 import duplicity.backend
+from duplicity import globals
 from duplicity import log
-from duplicity import util
-from duplicity.errors import BackendException
-
+from duplicity.errors import *  # @UnusedWildImport
+from duplicity.util import exception_traceback
+from duplicity.backend import retry
 
 class PyraxBackend(duplicity.backend.Backend):
     """
@@ -43,63 +45,150 @@
 
         conn_kwargs = {}
 
-        if 'CLOUDFILES_USERNAME' not in os.environ:
+        if not os.environ.has_key('CLOUDFILES_USERNAME'):
             raise BackendException('CLOUDFILES_USERNAME environment variable'
                                    'not set.')
 
-        if 'CLOUDFILES_APIKEY' not in os.environ:
+        if not os.environ.has_key('CLOUDFILES_APIKEY'):
             raise BackendException('CLOUDFILES_APIKEY environment variable not set.')
 
         conn_kwargs['username'] = os.environ['CLOUDFILES_USERNAME']
         conn_kwargs['api_key'] = os.environ['CLOUDFILES_APIKEY']
 
-        if 'CLOUDFILES_REGION' in os.environ:
+        if os.environ.has_key('CLOUDFILES_REGION'):
             conn_kwargs['region'] = os.environ['CLOUDFILES_REGION']
 
         container = parsed_url.path.lstrip('/')
 
         try:
             pyrax.set_credentials(**conn_kwargs)
-        except Exception as e:
+        except Exception, e:
             log.FatalError("Connection failed, please check your credentials: %s %s"
-                           % (e.__class__.__name__, util.uexc(e)),
+                           % (e.__class__.__name__, str(e)),
                            log.ErrorCode.connection_failed)
 
         self.client_exc = pyrax.exceptions.ClientException
         self.nso_exc = pyrax.exceptions.NoSuchObject
+        self.cloudfiles = pyrax.cloudfiles
         self.container = pyrax.cloudfiles.create_container(container)
 
-    def _error_code(self, operation, e):
-        if isinstance(e, self.nso_exc):
-            return log.ErrorCode.backend_not_found
-        elif isinstance(e, self.client_exc):
-            if e.status == 404:
-                return log.ErrorCode.backend_not_found
-        elif hasattr(e, 'http_status'):
-            if e.http_status == 404:
-                return log.ErrorCode.backend_not_found
-
-    def _put(self, source_path, remote_filename):
-        self.container.upload_file(source_path.name, remote_filename)
-
-    def _get(self, remote_filename, local_path):
-        sobject = self.container.get_object(remote_filename)
-        with open(local_path.name, 'wb') as f:
-            f.write(sobject.get())
+    def put(self, source_path, remote_filename = None):
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+
+        for n in range(1, globals.num_retries + 1):
+            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
+            try:
+                self.container.upload_file(source_path.name, remote_filename)
+                return
+            except self.client_exc, error:
+                log.Warn("Upload of '%s' failed (attempt %d): pyrax returned: %s %s"
+                         % (remote_filename, n, error.__class__.__name__, error.message))
+            except Exception, e:
+                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
+                        % (remote_filename, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up uploading '%s' after %s attempts"
+                 % (remote_filename, globals.num_retries))
+        raise BackendException("Error uploading '%s'" % remote_filename)
+
+    def get(self, remote_filename, local_path):
+        for n in range(1, globals.num_retries + 1):
+            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
+            try:
+                sobject = self.container.get_object(remote_filename)
+                f = open(local_path.name, 'w')
+                f.write(sobject.get())
+                local_path.setdata()
+                return
+            except self.nso_exc:
+                return
+            except self.client_exc, resperr:
+                log.Warn("Download of '%s' failed (attempt %s): pyrax returned: %s %s"
+                         % (remote_filename, n, resperr.__class__.__name__, resperr.message))
+            except Exception, e:
+                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
+                         % (remote_filename, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up downloading '%s' after %s attempts"
+                 % (remote_filename, globals.num_retries))
+        raise BackendException("Error downloading '%s/%s'"
+                               % (self.container, remote_filename))
 
     def _list(self):
-        # Cloud Files will return a max of 10,000 objects.  We have
-        # to make multiple requests to get them all.
-        objs = self.container.get_object_names()
-        keys = objs
-        while len(objs) == 10000:
-            objs = self.container.get_object_names(marker = keys[-1])
-            keys += objs
-        return keys
-
-    def _delete(self, filename):
-        self.container.delete_object(filename)
-
-    def _query(self, filename):
-        sobject = self.container.get_object(filename)
-        return {'size': sobject.total_bytes}
+        for n in range(1, globals.num_retries + 1):
+            log.Info("Listing '%s'" % (self.container))
+            try:
+                # Cloud Files will return a max of 10,000 objects.  We have
+                # to make multiple requests to get them all.
+                objs = self.container.get_object_names()
+                keys = objs
+                while len(objs) == 10000:
+                    objs = self.container.get_object_names(marker = keys[-1])
+                    keys += objs
+                return keys
+            except self.client_exc, resperr:
+                log.Warn("Listing of '%s' failed (attempt %s): pyrax returned: %s %s"
+                         % (self.container, n, resperr.__class__.__name__, resperr.message))
+            except Exception, e:
+                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
+                         % (self.container, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up listing of '%s' after %s attempts"
+                 % (self.container, globals.num_retries))
+        raise BackendException("Error listing '%s'"
+                               % (self.container))
+
+    def delete_one(self, remote_filename):
+        for n in range(1, globals.num_retries + 1):
+            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
+            try:
+                self.container.delete_object(remote_filename)
+                return
+            except self.client_exc, resperr:
+                if n > 1 and resperr.status == 404:
+                    # We failed on a timeout, but delete succeeded on the server
+                    log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename)
+                    return
+                log.Warn("Delete of '%s' failed (attempt %s): pyrax returned: %s %s"
+                         % (remote_filename, n, resperr.__class__.__name__, resperr.message))
+            except Exception, e:
+                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
+                         % (remote_filename, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up deleting '%s' after %s attempts"
+                 % (remote_filename, globals.num_retries))
+        raise BackendException("Error deleting '%s/%s'"
+                               % (self.container, remote_filename))
+
+    def delete(self, filename_list):
+        for file_ in filename_list:
+            self.delete_one(file_)
+            log.Debug("Deleted '%s/%s'" % (self.container, file_))
+
+    @retry
+    def _query_file_info(self, filename, raise_errors = False):
+        try:
+            sobject = self.container.get_object(filename)
+            return {'size': sobject.total_bytes}
+        except self.nso_exc:
+            return {'size': -1}
+        except Exception, e:
+            log.Warn("Error querying '%s/%s': %s"
+                     "" % (self.container,
+                           filename,
+                           str(e)))
+            if raise_errors:
+                raise e
+            else:
+                return {'size': None}
+
+duplicity.backend.register_backend("cf+http", PyraxBackend)

=== modified file 'duplicity/backends/_ssh_paramiko.py'
--- duplicity/backends/_ssh_paramiko.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/_ssh_paramiko.py	2014-06-14 13:58:30 +0000
@@ -28,6 +28,7 @@
 import os
 import errno
 import sys
+import time
 import getpass
 import logging
 from binascii import hexlify
@@ -35,7 +36,8 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import BackendException
+from duplicity import util
+from duplicity.errors import *
 
 read_blocksize=65635            # for doing scp retrievals, where we need to read ourselves
 
@@ -133,7 +135,7 @@
         try:
             if os.path.isfile("/etc/ssh/ssh_known_hosts"):
                 self.client.load_system_host_keys("/etc/ssh/ssh_known_hosts")
-        except Exception as e:
+        except Exception, e:
             raise BackendException("could not load /etc/ssh/ssh_known_hosts, maybe corrupt?")
         try:
             # use load_host_keys() to signal it's writable to paramiko
@@ -143,7 +145,7 @@
                 self.client.load_host_keys(file)
             else:
                 self.client._host_keys_filename = file
-        except Exception as e:
+        except Exception, e:
             raise BackendException("could not load ~/.ssh/known_hosts, maybe corrupt?")
 
         """ the next block reorganizes all host parameters into a
@@ -210,7 +212,7 @@
                                 allow_agent=True, 
                                 look_for_keys=True,
                                 key_filename=self.config['identityfile'])
-        except Exception as e:
+        except Exception, e:
             raise BackendException("ssh connection to %s@%s:%d failed: %s" % (
                                     self.config['user'],
                                     self.config['hostname'],
@@ -228,9 +230,10 @@
         else:
             try:
                 self.sftp=self.client.open_sftp()
-            except Exception as e:
+            except Exception, e:
                 raise BackendException("sftp negotiation failed: %s" % e)
 
+
             # move to the appropriate directory, possibly after creating it and its parents
             dirs = self.remote_dir.split(os.sep)
             if len(dirs) > 0:
@@ -242,104 +245,170 @@
                         continue
                     try:
                         attrs=self.sftp.stat(d)
-                    except IOError as e:
+                    except IOError, e:
                         if e.errno == errno.ENOENT:
                             try:
                                 self.sftp.mkdir(d)
-                            except Exception as e:
+                            except Exception, e:
                                 raise BackendException("sftp mkdir %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
                         else:
                             raise BackendException("sftp stat %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
                     try:
                         self.sftp.chdir(d)
-                    except Exception as e:
+                    except Exception, e:
                         raise BackendException("sftp chdir to %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
 
-    def _put(self, source_path, remote_filename):
-        if globals.use_scp:
-            f=file(source_path.name,'rb')
-            try:
-                chan=self.client.get_transport().open_session()
-                chan.settimeout(globals.timeout)
-                chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
-            except Exception as e:
-                raise BackendException("scp execution failed: %s" % e)
-            # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
-            # if there's a problem: 0x1 or 0x02 and some error text
-            response=chan.recv(1)
-            if (response!="\0"):
-                raise BackendException("scp remote error: %s" % chan.recv(-1))
-            fstat=os.stat(source_path.name)
-            chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
-            response=chan.recv(1)
-            if (response!="\0"):
-                raise BackendException("scp remote error: %s" % chan.recv(-1))
-            chan.sendall(f.read()+'\0')
-            f.close()
-            response=chan.recv(1)
-            if (response!="\0"):
-                raise BackendException("scp remote error: %s" % chan.recv(-1))
-            chan.close()
-        else:
-            self.sftp.put(source_path.name,remote_filename)
-
-    def _get(self, remote_filename, local_path):
-        if globals.use_scp:
-            try:
-                chan=self.client.get_transport().open_session()
-                chan.settimeout(globals.timeout)
-                chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
-            except Exception as e:
-                raise BackendException("scp execution failed: %s" % e)
-
-            chan.send('\0')     # overall ready indicator
-            msg=chan.recv(-1)
-            m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
-            if (m==None or m.group(3)!=remote_filename):
-                raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
-            chan.recv(1)        # dispose of the newline trailing the C message
-
-            size=int(m.group(2))
-            togo=size
-            f=file(local_path.name,'wb')
-            chan.send('\0')     # ready for data
-            try:
-                while togo>0:
-                    if togo>read_blocksize:
-                        blocksize = read_blocksize
-                    else:
-                        blocksize = togo
-                    buff=chan.recv(blocksize)
-                    f.write(buff)
-                    togo-=len(buff)
-            except Exception as e:
-                raise BackendException("scp get %s failed: %s" % (remote_filename,e))
-
-            msg=chan.recv(1)    # check the final status
-            if msg!='\0':
-                raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
-            f.close()
-            chan.send('\0')     # send final done indicator
-            chan.close()
-        else:
-            self.sftp.get(remote_filename,local_path.name)
+    def put(self, source_path, remote_filename = None):
+        """transfers a single file to the remote side.
+        In scp mode unavoidable quoting issues will make this fail if the remote directory or file name
+        contains single quotes."""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+        
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry
+                time.sleep(self.retry_delay)
+            try:
+                if (globals.use_scp):
+                    f=file(source_path.name,'rb')
+                    try:
+                        chan=self.client.get_transport().open_session()
+                        chan.settimeout(globals.timeout)
+                        chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
+                    except Exception, e:
+                        raise BackendException("scp execution failed: %s" % e)
+                    # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
+                    # if there's a problem: 0x1 or 0x02 and some error text
+                    response=chan.recv(1)
+                    if (response!="\0"):
+                        raise BackendException("scp remote error: %s" % chan.recv(-1))
+                    fstat=os.stat(source_path.name)
+                    chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
+                    response=chan.recv(1)
+                    if (response!="\0"):
+                        raise BackendException("scp remote error: %s" % chan.recv(-1))
+                    chan.sendall(f.read()+'\0')
+                    f.close()
+                    response=chan.recv(1)
+                    if (response!="\0"):
+                        raise BackendException("scp remote error: %s" % chan.recv(-1))
+                    chan.close()
+                    return
+                else:
+                    try:
+                        self.sftp.put(source_path.name,remote_filename)
+                        return
+                    except Exception, e:
+                        raise BackendException("sftp put of %s (as %s) failed: %s" % (source_path.name,remote_filename,e))
+            except Exception, e:
+                log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
+        raise BackendException("Giving up trying to upload '%s' after %d attempts" % (remote_filename,n))
+
+
+    def get(self, remote_filename, local_path):
+        """retrieves a single file from the remote side.
+        In scp mode unavoidable quoting issues will make this fail if the remote directory or file names
+        contain single quotes."""
+        
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry
+                time.sleep(self.retry_delay)
+            try:
+                if (globals.use_scp):
+                    try:
+                        chan=self.client.get_transport().open_session()
+                        chan.settimeout(globals.timeout)
+                        chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
+                    except Exception, e:
+                        raise BackendException("scp execution failed: %s" % e)
+
+                    chan.send('\0')     # overall ready indicator
+                    msg=chan.recv(-1)
+                    m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
+                    if (m==None or m.group(3)!=remote_filename):
+                        raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
+                    chan.recv(1)        # dispose of the newline trailing the C message
+
+                    size=int(m.group(2))
+                    togo=size
+                    f=file(local_path.name,'wb')
+                    chan.send('\0')     # ready for data
+                    try:
+                        while togo>0:
+                            if togo>read_blocksize:
+                                blocksize = read_blocksize
+                            else:
+                                blocksize = togo
+                            buff=chan.recv(blocksize)
+                            f.write(buff)
+                            togo-=len(buff)
+                    except Exception, e:
+                        raise BackendException("scp get %s failed: %s" % (remote_filename,e))
+
+                    msg=chan.recv(1)    # check the final status
+                    if msg!='\0':
+                        raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
+                    f.close()
+                    chan.send('\0')     # send final done indicator
+                    chan.close()
+                    return
+                else:
+                    try:
+                        self.sftp.get(remote_filename,local_path.name)
+                        return
+                    except Exception, e:
+                        raise BackendException("sftp get of %s (to %s) failed: %s" % (remote_filename,local_path.name,e))
+                local_path.setdata()
+            except Exception, e:
+                log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
+        raise BackendException("Giving up trying to download '%s' after %d attempts" % (remote_filename,n))
 
     def _list(self):
-        # In scp mode unavoidable quoting issues will make this fail if the
-        # directory name contains single quotes.
-        if globals.use_scp:
-            output = self.runremote("ls -1 '%s'" % self.remote_dir, False, "scp dir listing ")
-            return output.splitlines()
-        else:
-            return self.sftp.listdir()
-
-    def _delete(self, filename):
-        # In scp mode unavoidable quoting issues will cause failures if
-        # filenames containing single quotes are encountered.
-        if globals.use_scp:
-            self.runremote("rm '%s/%s'" % (self.remote_dir, filename), False, "scp rm ")
-        else:
-            self.sftp.remove(filename)
+        """lists the contents of the one-and-only duplicity dir on the remote side.
+        In scp mode unavoidable quoting issues will make this fail if the directory name
+        contains single quotes."""
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry
+                time.sleep(self.retry_delay)
+            try:
+                if (globals.use_scp):
+                    output=self.runremote("ls -1 '%s'" % self.remote_dir,False,"scp dir listing ")
+                    return output.splitlines()
+                else:
+                    try:
+                        return self.sftp.listdir()
+                    except Exception, e:
+                        raise BackendException("sftp listing of %s failed: %s" % (self.sftp.getcwd(),e))
+            except Exception, e:
+                log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
+        raise BackendException("Giving up trying to list '%s' after %d attempts" % (self.remote_dir,n))
+
+    def delete(self, filename_list):
+        """deletes all files in the list on the remote side. In scp mode unavoidable quoting issues
+        will cause failures if filenames containing single quotes are encountered."""
+        for fn in filename_list:
+            # Try to delete each file several times before giving up completely.
+            for n in range(1, globals.num_retries+1):
+                try:
+                    if (globals.use_scp):
+                        self.runremote("rm '%s/%s'" % (self.remote_dir,fn),False,"scp rm ")
+                    else:
+                        try:
+                            self.sftp.remove(fn)
+                        except Exception, e:
+                            raise BackendException("sftp rm %s failed: %s" % (fn,e))
+
+                    # If we get here, we deleted this file successfully. Move on to the next one.
+                    break
+                except Exception, e:
+                    if n == globals.num_retries:
+                        log.FatalError(util.uexc(e), log.ErrorCode.backend_error)
+                    else:
+                        log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
+                        time.sleep(self.retry_delay)
 
     def runremote(self,cmd,ignoreexitcode=False,errorprefix=""):
         """small convenience function that opens a shell channel, runs remote command and returns
@@ -348,7 +417,7 @@
             chan=self.client.get_transport().open_session()
             chan.settimeout(globals.timeout)
             chan.exec_command(cmd)
-        except Exception as e:
+        except Exception, e:
             raise BackendException("%sexecution failed: %s" % (errorprefix,e))
         output=chan.recv(-1)
         res=chan.recv_exit_status()
@@ -366,7 +435,11 @@
         sshconfig = paramiko.SSHConfig()
         try:
             sshconfig.parse(open(file))
-        except Exception as e:
+        except Exception, e:
             raise BackendException("could not load '%s', maybe corrupt?" % (file))
         
         return sshconfig.lookup(host)
+
+duplicity.backend.register_backend("sftp", SSHParamikoBackend)
+duplicity.backend.register_backend("scp", SSHParamikoBackend)
+duplicity.backend.register_backend("ssh", SSHParamikoBackend)

=== modified file 'duplicity/backends/_ssh_pexpect.py'
--- duplicity/backends/_ssh_pexpect.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/_ssh_pexpect.py	2014-06-14 13:58:30 +0000
@@ -24,20 +24,19 @@
 # have the same syntax.  Also these strings will be executed by the
 # shell, so shouldn't have strange characters in them.
 
-from future_builtins import map
-
 import re
 import string
+import time
 import os
 
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import BackendException
+from duplicity import pexpect
+from duplicity.errors import * #@UnusedWildImport
 
 class SSHPExpectBackend(duplicity.backend.Backend):
-    """This backend copies files using scp.  List not supported.  Filenames
-       should not need any quoting or this will break."""
+    """This backend copies files using scp.  List not supported"""
     def __init__(self, parsed_url):
         """scpBackend initializer"""
         duplicity.backend.Backend.__init__(self, parsed_url)
@@ -77,72 +76,77 @@
 
     def run_scp_command(self, commandline):
         """ Run an scp command, responding to password prompts """
-        import pexpect
-        log.Info("Running '%s'" % commandline)
-        child = pexpect.spawn(commandline, timeout = None)
-        if globals.ssh_askpass:
-            state = "authorizing"
-        else:
-            state = "copying"
-        while 1:
-            if state == "authorizing":
-                match = child.expect([pexpect.EOF,
-                                      "(?i)timeout, server not responding",
-                                      "(?i)pass(word|phrase .*):",
-                                      "(?i)permission denied",
-                                      "authenticity"])
-                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
-                if match == 0:
-                    log.Warn("Failed to authenticate")
-                    break
-                elif match == 1:
-                    log.Warn("Timeout waiting to authenticate")
-                    break
-                elif match == 2:
-                    child.sendline(self.password)
-                    state = "copying"
-                elif match == 3:
-                    log.Warn("Invalid SSH password")
-                    break
-                elif match == 4:
-                    log.Warn("Remote host authentication failed (missing known_hosts entry?)")
-                    break
-            elif state == "copying":
-                match = child.expect([pexpect.EOF,
-                                      "(?i)timeout, server not responding",
-                                      "stalled",
-                                      "authenticity",
-                                      "ETA"])
-                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
-                if match == 0:
-                    break
-                elif match == 1:
-                    log.Warn("Timeout waiting for response")
-                    break
-                elif match == 2:
-                    state = "stalled"
-                elif match == 3:
-                    log.Warn("Remote host authentication failed (missing known_hosts entry?)")
-                    break
-            elif state == "stalled":
-                match = child.expect([pexpect.EOF,
-                                      "(?i)timeout, server not responding",
-                                      "ETA"])
-                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
-                if match == 0:
-                    break
-                elif match == 1:
-                    log.Warn("Stalled for too long, aborted copy")
-                    break
-                elif match == 2:
-                    state = "copying"
-        child.close(force = True)
-        if child.exitstatus != 0:
-            raise BackendException("Error running '%s'" % commandline)
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry
+                time.sleep(self.retry_delay)
+            log.Info("Running '%s' (attempt #%d)" % (commandline, n))
+            child = pexpect.spawn(commandline, timeout = None)
+            if globals.ssh_askpass:
+                state = "authorizing"
+            else:
+                state = "copying"
+            while 1:
+                if state == "authorizing":
+                    match = child.expect([pexpect.EOF,
+                                          "(?i)timeout, server not responding",
+                                          "(?i)pass(word|phrase .*):",
+                                          "(?i)permission denied",
+                                          "authenticity"])
+                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                    if match == 0:
+                        log.Warn("Failed to authenticate")
+                        break
+                    elif match == 1:
+                        log.Warn("Timeout waiting to authenticate")
+                        break
+                    elif match == 2:
+                        child.sendline(self.password)
+                        state = "copying"
+                    elif match == 3:
+                        log.Warn("Invalid SSH password")
+                        break
+                    elif match == 4:
+                        log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                        break
+                elif state == "copying":
+                    match = child.expect([pexpect.EOF,
+                                          "(?i)timeout, server not responding",
+                                          "stalled",
+                                          "authenticity",
+                                          "ETA"])
+                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                    if match == 0:
+                        break
+                    elif match == 1:
+                        log.Warn("Timeout waiting for response")
+                        break
+                    elif match == 2:
+                        state = "stalled"
+                    elif match == 3:
+                        log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                        break
+                elif state == "stalled":
+                    match = child.expect([pexpect.EOF,
+                                          "(?i)timeout, server not responding",
+                                          "ETA"])
+                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                    if match == 0:
+                        break
+                    elif match == 1:
+                        log.Warn("Stalled for too long, aborted copy")
+                        break
+                    elif match == 2:
+                        state = "copying"
+            child.close(force = True)
+            if child.exitstatus == 0:
+                return
+            log.Warn("Running '%s' failed (attempt #%d)" % (commandline, n))
+        log.Warn("Giving up trying to execute '%s' after %d attempts" % (commandline, globals.num_retries))
+        raise BackendException("Error running '%s'" % commandline)
 
     def run_sftp_command(self, commandline, commands):
         """ Run an sftp command, responding to password prompts, passing commands from list """
-        import pexpect
         maxread = 2000 # expected read buffer size
         responses = [pexpect.EOF,
                      "(?i)timeout, server not responding",
@@ -155,69 +159,76 @@
                      "Couldn't delete file",
                      "open(.*): Failure"]
         max_response_len = max([len(p) for p in responses[1:]])
-        log.Info("Running '%s'" % (commandline))
-        child = pexpect.spawn(commandline, timeout = None, maxread=maxread)
-        cmdloc = 0
-        passprompt = 0
-        while 1:
-            msg = ""
-            match = child.expect(responses,
-                                 searchwindowsize=maxread+max_response_len)
-            log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
-            if match == 0:
-                break
-            elif match == 1:
-                msg = "Timeout waiting for response"
-                break
-            if match == 2:
-                if cmdloc < len(commands):
-                    command = commands[cmdloc]
-                    log.Info("sftp command: '%s'" % (command,))
-                    child.sendline(command)
-                    cmdloc += 1
-                else:
-                    command = 'quit'
-                    child.sendline(command)
-                    res = child.before
-            elif match == 3:
-                passprompt += 1
-                child.sendline(self.password)
-                if (passprompt>1):
-                    raise BackendException("Invalid SSH password.")
-            elif match == 4:
-                if not child.before.strip().startswith("mkdir"):
-                    msg = "Permission denied"
-                    break
-            elif match == 5:
-                msg = "Host key authenticity could not be verified (missing known_hosts entry?)"
-                break
-            elif match == 6:
-                if not child.before.strip().startswith("rm"):
-                    msg = "Remote file or directory does not exist in command='%s'" % (commandline,)
-                    break
-            elif match == 7:
-                if not child.before.strip().startswith("Removing"):
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry
+                time.sleep(self.retry_delay)
+            log.Info("Running '%s' (attempt #%d)" % (commandline, n))
+            child = pexpect.spawn(commandline, timeout = None, maxread=maxread)
+            cmdloc = 0
+            passprompt = 0
+            while 1:
+                msg = ""
+                match = child.expect(responses,
+                                     searchwindowsize=maxread+max_response_len)
+                log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
+                if match == 0:
+                    break
+                elif match == 1:
+                    msg = "Timeout waiting for response"
+                    break
+                if match == 2:
+                    if cmdloc < len(commands):
+                        command = commands[cmdloc]
+                        log.Info("sftp command: '%s'" % (command,))
+                        child.sendline(command)
+                        cmdloc += 1
+                    else:
+                        command = 'quit'
+                        child.sendline(command)
+                        res = child.before
+                elif match == 3:
+                    passprompt += 1
+                    child.sendline(self.password)
+                    if (passprompt>1):
+                        raise BackendException("Invalid SSH password.")
+                elif match == 4:
+                    if not child.before.strip().startswith("mkdir"):
+                        msg = "Permission denied"
+                        break
+                elif match == 5:
+                    msg = "Host key authenticity could not be verified (missing known_hosts entry?)"
+                    break
+                elif match == 6:
+                    if not child.before.strip().startswith("rm"):
+                        msg = "Remote file or directory does not exist in command='%s'" % (commandline,)
+                        break
+                elif match == 7:
+                    if not child.before.strip().startswith("Removing"):
+                        msg = "Could not delete file in command='%s'" % (commandline,)
+                        break
+                elif match == 8:
                     msg = "Could not delete file in command='%s'" % (commandline,)
-                    break;
-            elif match == 8:
-                msg = "Could not delete file in command='%s'" % (commandline,)
-                break
-            elif match == 9:
-                msg = "Could not open file in command='%s'" % (commandline,)
-                break
-        child.close(force = True)
-        if child.exitstatus == 0:
-            return res
-        else:
-            raise BackendException("Error running '%s': %s" % (commandline, msg))
+                    break
+                elif match == 9:
+                    msg = "Could not open file in command='%s'" % (commandline,)
+                    break
+            child.close(force = True)
+            if child.exitstatus == 0:
+                return res
+            log.Warn("Running '%s' with commands:\n %s\n failed (attempt #%d): %s" % (commandline, "\n ".join(commands), n, msg))
+        raise BackendException("Giving up trying to execute '%s' with commands:\n %s\n after %d attempts" % (commandline, "\n ".join(commands), globals.num_retries))
 
-    def _put(self, source_path, remote_filename):
+    def put(self, source_path, remote_filename = None):
         if globals.use_scp:
-            self.put_scp(source_path, remote_filename)
+            self.put_scp(source_path, remote_filename = remote_filename)
         else:
-            self.put_sftp(source_path, remote_filename)
+            self.put_sftp(source_path, remote_filename = remote_filename)
 
-    def put_sftp(self, source_path, remote_filename):
+    def put_sftp(self, source_path, remote_filename = None):
+        """Use sftp to copy source_dir/filename to remote computer"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
         commands = ["put \"%s\" \"%s.%s.part\"" %
                     (source_path.name, self.remote_prefix, remote_filename),
                     "rename \"%s.%s.part\" \"%s%s\"" %
@@ -227,36 +238,53 @@
                                      self.host_string))
         self.run_sftp_command(commandline, commands)
 
-    def put_scp(self, source_path, remote_filename):
+    def put_scp(self, source_path, remote_filename = None):
+        """Use scp to copy source_dir/filename to remote computer"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
         commandline = "%s %s %s %s:%s%s" % \
             (self.scp_command, globals.ssh_options, source_path.name, self.host_string,
              self.remote_prefix, remote_filename)
         self.run_scp_command(commandline)
 
-    def _get(self, remote_filename, local_path):
+    def get(self, remote_filename, local_path):
         if globals.use_scp:
             self.get_scp(remote_filename, local_path)
         else:
             self.get_sftp(remote_filename, local_path)
 
     def get_sftp(self, remote_filename, local_path):
+        """Use sftp to get a remote file"""
         commands = ["get \"%s%s\" \"%s\"" %
                     (self.remote_prefix, remote_filename, local_path.name)]
         commandline = ("%s %s %s" % (self.sftp_command,
                                      globals.ssh_options,
                                      self.host_string))
         self.run_sftp_command(commandline, commands)
+        local_path.setdata()
+        if not local_path.exists():
+            raise BackendException("File %s not found locally after get "
+                                   "from backend" % local_path.name)
 
     def get_scp(self, remote_filename, local_path):
+        """Use scp to get a remote file"""
         commandline = "%s %s %s:%s%s %s" % \
             (self.scp_command, globals.ssh_options, self.host_string, self.remote_prefix,
              remote_filename, local_path.name)
         self.run_scp_command(commandline)
+        local_path.setdata()
+        if not local_path.exists():
+            raise BackendException("File %s not found locally after get "
+                                   "from backend" % local_path.name)
 
     def _list(self):
-        # Note that this command can get confused when dealing with
-        # files with newlines in them, as the embedded newlines cannot
-        # be distinguished from the file boundaries.
+        """
+        List files available for scp
+
+        Note that this command can get confused when dealing with
+        files with newlines in them, as the embedded newlines cannot
+        be distinguished from the file boundaries.
+        """
         dirs = self.remote_dir.split(os.sep)
         if len(dirs) > 0:
             if not dirs[0] :
@@ -273,10 +301,18 @@
 
         l = self.run_sftp_command(commandline, commands).split('\n')[1:]
 
-        return [x for x in map(string.strip, l) if x]
+        return filter(lambda x: x, map(string.strip, l))
 
-    def _delete(self, filename):
+    def delete(self, filename_list):
+        """
+        Runs sftp rm to delete files.  Files must not require quoting.
+        """
         commands = ["cd \"%s\"" % (self.remote_dir,)]
-        commands.append("rm \"%s\"" % filename)
+        for fn in filename_list:
+            commands.append("rm \"%s\"" % fn)
         commandline = ("%s %s %s" % (self.sftp_command, globals.ssh_options, self.host_string))
         self.run_sftp_command(commandline, commands)
+
+duplicity.backend.register_backend("ssh", SSHPExpectBackend)
+duplicity.backend.register_backend("scp", SSHPExpectBackend)
+duplicity.backend.register_backend("sftp", SSHPExpectBackend)

=== modified file 'duplicity/backends/botobackend.py'
--- duplicity/backends/botobackend.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/botobackend.py	2014-06-14 13:58:30 +0000
@@ -22,12 +22,18 @@
 
 import duplicity.backend
 from duplicity import globals
+import sys
 
 if globals.s3_use_multiprocessing:
-    from ._boto_multi import BotoBackend
+    if sys.version_info[:2] < (2, 6):
+        print "Sorry, S3 multiprocessing requires version 2.6 or later of python"
+        sys.exit(1)
+    from _boto_multi import BotoBackend as BotoMultiUploadBackend
+    duplicity.backend.register_backend("gs", BotoMultiUploadBackend)
+    duplicity.backend.register_backend("s3", BotoMultiUploadBackend)
+    duplicity.backend.register_backend("s3+http", BotoMultiUploadBackend)
 else:
-    from ._boto_single import BotoBackend
-
-duplicity.backend.register_backend("gs", BotoBackend)
-duplicity.backend.register_backend("s3", BotoBackend)
-duplicity.backend.register_backend("s3+http", BotoBackend)
+    from _boto_single import BotoBackend as BotoSingleUploadBackend
+    duplicity.backend.register_backend("gs", BotoSingleUploadBackend)
+    duplicity.backend.register_backend("s3", BotoSingleUploadBackend)
+    duplicity.backend.register_backend("s3+http", BotoSingleUploadBackend)

=== modified file 'duplicity/backends/cfbackend.py'
--- duplicity/backends/cfbackend.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/cfbackend.py	2014-06-14 13:58:30 +0000
@@ -18,13 +18,10 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-import duplicity.backend
 from duplicity import globals
 
 if (globals.cf_backend and
     globals.cf_backend.lower().strip() == 'pyrax'):
-    from ._cf_pyrax import PyraxBackend as CFBackend
+    import _cf_pyrax
 else:
-    from ._cf_cloudfiles import CloudFilesBackend as CFBackend
-
-duplicity.backend.register_backend("cf+http", CFBackend)
+    import _cf_cloudfiles

=== modified file 'duplicity/backends/dpbxbackend.py'
--- duplicity/backends/dpbxbackend.py	2014-05-10 10:49:20 +0000
+++ duplicity/backends/dpbxbackend.py	2014-06-14 13:58:30 +0000
@@ -29,14 +29,16 @@
 import urllib
 import re
 import locale, sys
-from functools import reduce
 
 import traceback, StringIO
+from exceptions import Exception
 
 import duplicity.backend
+from duplicity import globals
 from duplicity import log
-from duplicity import util
-from duplicity.errors import BackendException
+from duplicity.errors import *
+from duplicity import tempdir
+from duplicity.backend import retry_fatal
 
 
 # This application key is registered in my name (jno at pisem dot net).
@@ -73,19 +75,24 @@
         def wrapper(self, *args):
             from dropbox import rest
             if login_required and not self.sess.is_linked():
+<<<<<<< TREE
                 raise BackendException("dpbx Cannot login: check your credentials", log.ErrorCode.dpbx_nologin)
                 return
+=======
+              log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
+              return
+>>>>>>> MERGE-SOURCE
 
             try:
                 return f(self, *args)
-            except TypeError as e:
+            except TypeError, e:
                 log_exception(e)
-                raise BackendException('dpbx type error "%s"' % (e,))
-            except rest.ErrorResponse as e:
-                msg = e.user_error_msg or util.uexc(e)
+                log.FatalError('dpbx type error "%s"' % (str(e),), log.ErrorCode.backend_code_error)
+            except rest.ErrorResponse, e:
+                msg = e.user_error_msg or str(e)
                 log.Error('dpbx error: %s' % (msg,), log.ErrorCode.backend_command_error)
                 raise e
-            except Exception as e:
+            except Exception, e:
                 log_exception(e)
                 log.Error('dpbx code error "%s"' % (e,), log.ErrorCode.backend_code_error)
                 raise e
@@ -117,7 +124,7 @@
 
             def write_creds(self, token):
                 open(self.TOKEN_FILE, 'w').close() # create/reset file
-                os.chmod(self.TOKEN_FILE, 0o600)     # set it -rw------ (NOOP in Windows?)
+                os.chmod(self.TOKEN_FILE,0600)     # set it -rw------ (NOOP in Windows?)
                 # now write the content
                 f = open(self.TOKEN_FILE, 'w')
                 f.write("|".join([token.key, token.secret]))
@@ -155,29 +162,41 @@
 
     def login(self):
         if not self.sess.is_linked():
+<<<<<<< TREE
             try: # to login to the box
                 self.sess.link()
             except rest.ErrorResponse as e:
                 log.FatalError('dpbx Error: %s\n' % util.uexc(e), log.ErrorCode.dpbx_nologin)
             if not self.sess.is_linked(): # stil not logged in
                 log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
-
-    def _error_code(self, operation, e):
-        from dropbox import rest
-        if isinstance(e, rest.ErrorResponse):
-            if e.status == 404:
-                return log.ErrorCode.backend_not_found
-
+=======
+          try: # to login to the box
+            self.sess.link()
+          except rest.ErrorResponse, e:
+            log.FatalError('dpbx Error: %s\n' % str(e), log.ErrorCode.dpbx_nologin)
+          if not self.sess.is_linked(): # still not logged in
+            log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
+>>>>>>> MERGE-SOURCE
+
+    @retry_fatal
     @command()
-    def _put(self, source_path, remote_filename):
+    def put(self, source_path, remote_filename = None):
+        """Transfer source_path to remote_filename"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+
         remote_dir  = urllib.unquote(self.parsed_url.path.lstrip('/'))
         remote_path = os.path.join(remote_dir, remote_filename).rstrip()
+
         from_file = open(source_path.name, "rb")
+
         resp = self.api_client.put_file(remote_path, from_file)
         log.Debug( 'dpbx,put(%s,%s): %s'%(source_path.name, remote_path, resp))
 
+    @retry_fatal
     @command()
-    def _get(self, remote_filename, local_path):
+    def get(self, remote_filename, local_path):
+        """Get remote filename, saving it to local_path"""
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
 
         to_file = open( local_path.name, 'wb' )
@@ -190,8 +209,10 @@
 
         local_path.setdata()
 
+    @retry_fatal
     @command()
-    def _list(self):
+    def _list(self,none=None):
+        """List files in directory"""
         # Do a long listing to avoid connection reset
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
         resp = self.api_client.metadata(remote_dir)
@@ -206,14 +227,21 @@
                 l.append(name.encode(encoding))
         return l
 
+    @retry_fatal
     @command()
-    def _delete(self, filename):
+    def delete(self, filename_list):
+        """Delete files in filename_list"""
+        if not filename_list :
+          log.Debug('dpbx.delete(): no op')
+          return
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
-        remote_name = os.path.join( remote_dir, filename )
-        resp = self.api_client.file_delete( remote_name )
-        log.Debug('dpbx.delete(%s): %s'%(remote_name,resp))
+        for filename in filename_list:
+          remote_name = os.path.join( remote_dir, filename )
+          resp = self.api_client.file_delete( remote_name )
+          log.Debug('dpbx.delete(%s): %s'%(remote_name,resp))
 
     @command()
+<<<<<<< TREE
     def _close(self):
         """close backend session? no! just "flush" the data"""
         info = self.api_client.account_info()
@@ -237,6 +265,31 @@
             if meta :
                 for k in meta :
                     log.Debug(':: :: :: %s=[%s]' % (k, meta[k]))
+=======
+    def close(self):
+      """close backend session? no! just "flush" the data"""
+      info = self.api_client.account_info()
+      log.Debug('dpbx.close():')
+      for k in info :
+        log.Debug(':: %s=[%s]'%(k,info[k]))
+      entries = []
+      more = True
+      cursor = None
+      while more :
+        info = self.api_client.delta(cursor)
+        if info.get('reset',False) :
+          log.Debug("delta returned True value for \"reset\", no matter")
+        cursor = info.get('cursor',None)
+        more   = info.get('more',False)
+        entr   = info.get('entries',[])
+        entries += entr
+      for path,meta in entries:
+        mm = meta and 'ok' or 'DELETE'
+        log.Info(':: :: [%s] %s'%(path,mm))
+        if meta :
+          for k in meta :
+            log.Debug(':: :: :: %s=[%s]'%(k,meta[k]))
+>>>>>>> MERGE-SOURCE
 
     def _mkdir(self, path):
         """create a new directory"""

=== modified file 'duplicity/backends/ftpbackend.py'
--- duplicity/backends/ftpbackend.py	2014-04-26 12:54:37 +0000
+++ duplicity/backends/ftpbackend.py	2014-06-14 13:58:30 +0000
@@ -25,6 +25,7 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
+from duplicity.errors import * #@UnusedWildImport
 from duplicity import tempdir
 
 class FTPBackend(duplicity.backend.Backend):
@@ -64,7 +65,7 @@
         # This squelches the "file not found" result from ncftpls when
         # the ftp backend looks for a collection that does not exist.
         # version 3.2.2 has error code 5, 1280 is some legacy value
-        self.popen_breaks[ 'ncftpls' ] = [ 5, 1280 ]
+        self.popen_persist_breaks[ 'ncftpls' ] = [ 5, 1280 ]
 
         # Use an explicit directory name.
         if self.url_string[-1] != '/':
@@ -87,28 +88,37 @@
         if parsed_url.port != None and parsed_url.port != 21:
             self.flags += " -P '%s'" % (parsed_url.port)
 
-    def _put(self, source_path, remote_filename):
+    def put(self, source_path, remote_filename = None):
+        """Transfer source_path to remote_filename"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip()
         commandline = "ncftpput %s -m -V -C '%s' '%s'" % \
             (self.flags, source_path.name, remote_path)
-        self.subprocess_popen(commandline)
+        self.run_command_persist(commandline)
 
-    def _get(self, remote_filename, local_path):
+    def get(self, remote_filename, local_path):
+        """Get remote filename, saving it to local_path"""
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
         commandline = "ncftpget %s -V -C '%s' '%s' '%s'" % \
             (self.flags, self.parsed_url.hostname, remote_path.lstrip('/'), local_path.name)
-        self.subprocess_popen(commandline)
+        self.run_command_persist(commandline)
+        local_path.setdata()
 
     def _list(self):
+        """List files in directory"""
         # Do a long listing to avoid connection reset
         commandline = "ncftpls %s -l '%s'" % (self.flags, self.url_string)
-        _, l, _ = self.subprocess_popen(commandline)
+        l = self.popen_persist(commandline).split('\n')
+        l = filter(lambda x: x, l)
         # Look for our files as the last element of a long list line
-        return [x.split()[-1] for x in l.split('\n') if x and not x.startswith("total ")]
+        return [x.split()[-1] for x in l if not x.startswith("total ")]
 
-    def _delete(self, filename):
-        commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \
-            (self.flags, filename, self.url_string)
-        self.subprocess_popen(commandline)
+    def delete(self, filename_list):
+        """Delete files in filename_list"""
+        for filename in filename_list:
+            commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \
+                (self.flags, filename, self.url_string)
+            self.popen_persist(commandline)
 
 duplicity.backend.register_backend("ftp", FTPBackend)

=== modified file 'duplicity/backends/ftpsbackend.py'
--- duplicity/backends/ftpsbackend.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/ftpsbackend.py	2014-06-14 13:58:30 +0000
@@ -28,6 +28,7 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
+from duplicity.errors import *
 from duplicity import tempdir
 
 class FTPSBackend(duplicity.backend.Backend):
@@ -84,29 +85,43 @@
             os.write(self.tempfile, "user %s %s\n" % (self.parsed_url.username, self.password))
         os.close(self.tempfile)
 
-    def _put(self, source_path, remote_filename):
+        self.flags = "-f %s" % self.tempname
+
+    def put(self, source_path, remote_filename = None):
+        """Transfer source_path to remote_filename"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip()
         commandline = "lftp -c 'source %s;put \'%s\' -o \'%s\''" % \
             (self.tempname, source_path.name, remote_path)
-        self.subprocess_popen(commandline)
+        l = self.run_command_persist(commandline)
 
-    def _get(self, remote_filename, local_path):
+    def get(self, remote_filename, local_path):
+        """Get remote filename, saving it to local_path"""
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
         commandline = "lftp -c 'source %s;get %s -o %s'" % \
             (self.tempname, remote_path.lstrip('/'), local_path.name)
-        self.subprocess_popen(commandline)
+        self.run_command_persist(commandline)
+        local_path.setdata()
 
     def _list(self):
+        """List files in directory"""
         # Do a long listing to avoid connection reset
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
         commandline = "lftp -c 'source %s;ls \'%s\''" % (self.tempname, remote_dir)
-        _, l, _ = self.subprocess_popen(commandline)
+        l = self.popen_persist(commandline).split('\n')
+        l = filter(lambda x: x, l)
         # Look for our files as the last element of a long list line
-        return [x.split()[-1] for x in l.split('\n') if x]
+        return [x.split()[-1] for x in l]
 
-    def _delete(self, filename):
-        remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
-        commandline = "lftp -c 'source %s;cd \'%s\';rm \'%s\''" % (self.tempname, remote_dir, filename)
-        self.subprocess_popen(commandline)
+    def delete(self, filename_list):
+        """Delete files in filename_list"""
+        filelist = ""
+        for filename in filename_list:
+            filelist += "\'%s\' " % filename
+        if filelist.rstrip():
+            remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
+            commandline = "lftp -c 'source %s;cd \'%s\';rm %s'" % (self.tempname, remote_dir, filelist.rstrip())
+            self.popen_persist(commandline)
 
 duplicity.backend.register_backend("ftps", FTPSBackend)

=== modified file 'duplicity/backends/gdocsbackend.py'
--- duplicity/backends/gdocsbackend.py	2014-04-21 19:21:45 +0000
+++ duplicity/backends/gdocsbackend.py	2014-06-14 13:58:30 +0000
@@ -23,7 +23,9 @@
 import urllib
 
 import duplicity.backend
-from duplicity.errors import BackendException
+from duplicity.backend import retry
+from duplicity import log
+from duplicity.errors import * #@UnusedWildImport
 
 
 class GDocsBackend(duplicity.backend.Backend):
@@ -51,14 +53,14 @@
         self.client = gdata.docs.client.DocsClient(source='duplicity $version')
         self.client.ssl = True
         self.client.http_client.debug = False
-        self._authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password())
+        self.__authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password())
 
         # Fetch destination folder entry (and crete hierarchy if required).
         folder_names = string.split(parsed_url.path[1:], '/')
         parent_folder = None
         parent_folder_id = GDocsBackend.ROOT_FOLDER_ID
         for folder_name in folder_names:
-            entries = self._fetch_entries(parent_folder_id, 'folder', folder_name)
+            entries = self.__fetch_entries(parent_folder_id, 'folder', folder_name)
             if entries is not None:
                 if len(entries) == 1:
                     parent_folder = entries[0]
@@ -75,54 +77,106 @@
                 raise BackendException("Error while fetching destination folder '%s'." % folder_name)
         self.folder = parent_folder
 
-    def _put(self, source_path, remote_filename):
-        self._delete(remote_filename)
-
-        # Set uploader instance. Note that resumable uploads are required in order to
-        # enable uploads for all file types.
-        # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html)
-        file = source_path.open()
-        uploader = gdata.client.ResumableUploader(
-          self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name),
-          chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE,
-          desired_class=gdata.docs.data.Resource)
-        if uploader:
-            # Chunked upload.
-            entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename))
-            uri = self.folder.get_resumable_create_media_link().href + '?convert=false'
-            entry = uploader.UploadFile(uri, entry=entry)
-            if not entry:
-                raise BackendException("Failed to upload file '%s' to remote folder '%s'"
-                                       % (source_path.get_filename(), self.folder.title.text))
-        else:
-            raise BackendException("Failed to initialize upload of file '%s' to remote folder '%s'"
-                                   % (source_path.get_filename(), self.folder.title.text))
-        assert not file.close()
-
-    def _get(self, remote_filename, local_path):
-        entries = self._fetch_entries(self.folder.resource_id.text,
-                                      GDocsBackend.BACKUP_DOCUMENT_TYPE,
-                                      remote_filename)
-        if len(entries) == 1:
-            entry = entries[0]
-            self.client.DownloadResource(entry, local_path.name)
-        else:
-            raise BackendException("Failed to find file '%s' in remote folder '%s'"
-                                   % (remote_filename, self.folder.title.text))
-
-    def _list(self):
-        entries = self._fetch_entries(self.folder.resource_id.text,
-                                      GDocsBackend.BACKUP_DOCUMENT_TYPE)
-        return [entry.title.text for entry in entries]
-
-    def _delete(self, filename):
-        entries = self._fetch_entries(self.folder.resource_id.text,
-                                      GDocsBackend.BACKUP_DOCUMENT_TYPE,
-                                      filename)
-        for entry in entries:
-            self.client.delete(entry.get_edit_link().href + '?delete=true', force=True)
-
-    def _authorize(self, email, password, captcha_token=None, captcha_response=None):
+    @retry
+    def put(self, source_path, remote_filename=None, raise_errors=False):
+        """Transfer source_path to remote_filename"""
+        # Default remote file name.
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+
+        # Upload!
+        try:
+            # If remote file already exists in destination folder, remove it.
+            entries = self.__fetch_entries(self.folder.resource_id.text,
+                                           GDocsBackend.BACKUP_DOCUMENT_TYPE,
+                                           remote_filename)
+            for entry in entries:
+                self.client.delete(entry.get_edit_link().href + '?delete=true', force=True)
+
+            # Set uploader instance. Note that resumable uploads are required in order to
+            # enable uploads for all file types.
+            # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html)
+            file = source_path.open()
+            uploader = gdata.client.ResumableUploader(
+              self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name),
+              chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE,
+              desired_class=gdata.docs.data.Resource)
+            if uploader:
+                # Chunked upload.
+                entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename))
+                uri = self.folder.get_resumable_create_media_link().href + '?convert=false'
+                entry = uploader.UploadFile(uri, entry=entry)
+                if not entry:
+                    self.__handle_error("Failed to upload file '%s' to remote folder '%s'"
+                                        % (source_path.get_filename(), self.folder.title.text), raise_errors)
+            else:
+                self.__handle_error("Failed to initialize upload of file '%s' to remote folder '%s'"
+                         % (source_path.get_filename(), self.folder.title.text), raise_errors)
+            assert not file.close()
+        except Exception, e:
+            self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s"
+                                % (source_path.get_filename(), self.folder.title.text, str(e)), raise_errors)
+
+    @retry
+    def get(self, remote_filename, local_path, raise_errors=False):
+        """Get remote filename, saving it to local_path"""
+        try:
+            entries = self.__fetch_entries(self.folder.resource_id.text,
+                                           GDocsBackend.BACKUP_DOCUMENT_TYPE,
+                                           remote_filename)
+            if len(entries) == 1:
+                entry = entries[0]
+                self.client.DownloadResource(entry, local_path.name)
+                local_path.setdata()
+                return
+            else:
+                self.__handle_error("Failed to find file '%s' in remote folder '%s'"
+                                    % (remote_filename, self.folder.title.text), raise_errors)
+        except Exception, e:
+            self.__handle_error("Failed to download file '%s' in remote folder '%s': %s"
+                                 % (remote_filename, self.folder.title.text, str(e)), raise_errors)
+
+    @retry
+    def _list(self, raise_errors=False):
+        """List files in folder"""
+        try:
+            entries = self.__fetch_entries(self.folder.resource_id.text,
+                                           GDocsBackend.BACKUP_DOCUMENT_TYPE)
+            return [entry.title.text for entry in entries]
+        except Exception, e:
+            self.__handle_error("Failed to fetch list of files in remote folder '%s': %s"
+                                % (self.folder.title.text, str(e)), raise_errors)
+
+    @retry
+    def delete(self, filename_list, raise_errors=False):
+        """Delete files in filename_list"""
+        for filename in filename_list:
+            try:
+                entries = self.__fetch_entries(self.folder.resource_id.text,
+                                               GDocsBackend.BACKUP_DOCUMENT_TYPE,
+                                               filename)
+                if len(entries) > 0:
+                    success = True
+                    for entry in entries:
+                        if not self.client.delete(entry.get_edit_link().href + '?delete=true', force=True):
+                            success = False
+                    if not success:
+                        self.__handle_error("Failed to remove file '%s' in remote folder '%s'"
+                                            % (filename, self.folder.title.text), raise_errors)
+                else:
+                    log.Warn("Failed to fetch file '%s' in remote folder '%s'"
+                             % (filename, self.folder.title.text))
+            except Exception, e:
+                self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s"
+                                    % (filename, self.folder.title.text, str(e)), raise_errors)
+
+    def __handle_error(self, message, raise_errors=True):
+        if raise_errors:
+            raise BackendException(message)
+        else:
+            log.FatalError(message, log.ErrorCode.backend_error)
+
+    def __authorize(self, email, password, captcha_token=None, captcha_response=None):
         try:
             self.client.client_login(email,
                                      password,
@@ -130,20 +184,22 @@
                                      service='writely',
                                      captcha_token=captcha_token,
                                      captcha_response=captcha_response)
-        except gdata.client.CaptchaChallenge as challenge:
+        except gdata.client.CaptchaChallenge, challenge:
             print('A captcha challenge in required. Please visit ' + challenge.captcha_url)
             answer = None
             while not answer:
                 answer = raw_input('Answer to the challenge? ')
-            self._authorize(email, password, challenge.captcha_token, answer)
+            self.__authorize(email, password, challenge.captcha_token, answer)
         except gdata.client.BadAuthentication:
-            raise BackendException('Invalid user credentials given. Be aware that accounts '
-                                   'that use 2-step verification require creating an application specific '
-                                   'access code for using this Duplicity backend. Follow the instruction in '
-                                   'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '
-                                   'and create your application-specific password to run duplicity backups.')
+            self.__handle_error('Invalid user credentials given. Be aware that accounts '
+                                'that use 2-step verification require creating an application specific '
+                                'access code for using this Duplicity backend. Follow the instruction in '
+                                'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '
+                                'and create your application-specific password to run duplicity backups.')
+        except Exception, e:
+            self.__handle_error('Error while authenticating client: %s.' % str(e))
 
-    def _fetch_entries(self, folder_id, type, title=None):
+    def __fetch_entries(self, folder_id, type, title=None):
         # Build URI.
         uri = '/feeds/default/private/full/%s/contents' % folder_id
         if type == 'folder':
@@ -155,31 +211,34 @@
         if title:
             uri += '&title=' + urllib.quote(title) + '&title-exact=true'
 
-        # Fetch entries.
-        entries = self.client.get_all_resources(uri=uri)
-
-        # When filtering by entry title, API is returning (don't know why) documents in other
-        # folders (apart from folder_id) matching the title, so some extra filtering is required.
-        if title:
-            result = []
-            for entry in entries:
-                resource_type = entry.get_resource_type()
-                if (not type) \
-                   or (type == 'folder' and resource_type == 'folder') \
-                   or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'):
-
-                    if folder_id != GDocsBackend.ROOT_FOLDER_ID:
-                        for link in entry.in_collections():
-                            folder_entry = self.client.get_entry(link.href, None, None,
-                                                                 desired_class=gdata.docs.data.Resource)
-                            if folder_entry and (folder_entry.resource_id.text == folder_id):
-                                result.append(entry)
-                    elif len(entry.in_collections()) == 0:
-                        result.append(entry)
-        else:
-            result = entries
-
-        # Done!
-        return result
+        try:
+            # Fetch entries.
+            entries = self.client.get_all_resources(uri=uri)
+
+            # When filtering by entry title, API is returning (don't know why) documents in other
+            # folders (apart from folder_id) matching the title, so some extra filtering is required.
+            if title:
+                result = []
+                for entry in entries:
+                    resource_type = entry.get_resource_type()
+                    if (not type) \
+                       or (type == 'folder' and resource_type == 'folder') \
+                       or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'):
+
+                        if folder_id != GDocsBackend.ROOT_FOLDER_ID:
+                            for link in entry.in_collections():
+                                folder_entry = self.client.get_entry(link.href, None, None,
+                                                                     desired_class=gdata.docs.data.Resource)
+                                if folder_entry and (folder_entry.resource_id.text == folder_id):
+                                    result.append(entry)
+                        elif len(entry.in_collections()) == 0:
+                            result.append(entry)
+            else:
+                result = entries
+
+            # Done!
+            return result
+        except Exception, e:
+            self.__handle_error('Error while fetching remote entries: %s.' % str(e))
 
 duplicity.backend.register_backend('gdocs', GDocsBackend)

=== modified file 'duplicity/backends/giobackend.py'
--- duplicity/backends/giobackend.py	2014-04-29 23:49:01 +0000
+++ duplicity/backends/giobackend.py	2014-06-14 13:58:30 +0000
@@ -19,13 +19,18 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
+import types
 import subprocess
 import atexit
 import signal
+from gi.repository import Gio #@UnresolvedImport
+from gi.repository import GLib #@UnresolvedImport
 
 import duplicity.backend
+from duplicity.backend import retry
 from duplicity import log
 from duplicity import util
+from duplicity.errors import * #@UnusedWildImport
 
 def ensure_dbus():
     # GIO requires a dbus session bus which can start the gvfs daemons
@@ -41,39 +46,36 @@
                     atexit.register(os.kill, int(parts[1]), signal.SIGTERM)
                 os.environ[parts[0]] = parts[1]
 
+class DupMountOperation(Gio.MountOperation):
+    """A simple MountOperation that grabs the password from the environment
+       or the user.
+    """
+    def __init__(self, backend):
+        Gio.MountOperation.__init__(self)
+        self.backend = backend
+        self.connect('ask-password', self.ask_password_cb)
+        self.connect('ask-question', self.ask_question_cb)
+
+    def ask_password_cb(self, *args, **kwargs):
+        self.set_password(self.backend.get_password())
+        self.reply(Gio.MountOperationResult.HANDLED)
+
+    def ask_question_cb(self, *args, **kwargs):
+        # Obviously just always answering with the first choice is a naive
+        # approach.  But there's no easy way to allow for answering questions
+        # in duplicity's typical run-from-cron mode with environment variables.
+        # And only a couple gvfs backends ask questions: 'sftp' does about
+        # new hosts and 'afc' does if the device is locked.  0 should be a
+        # safe choice.
+        self.set_choice(0)
+        self.reply(Gio.MountOperationResult.HANDLED)
+
 class GIOBackend(duplicity.backend.Backend):
     """Use this backend when saving to a GIO URL.
        This is a bit of a meta-backend, in that it can handle multiple schemas.
        URLs look like schema://user@server/path.
     """
     def __init__(self, parsed_url):
-        from gi.repository import Gio #@UnresolvedImport
-        from gi.repository import GLib #@UnresolvedImport
-
-        class DupMountOperation(Gio.MountOperation):
-            """A simple MountOperation that grabs the password from the environment
-               or the user.
-            """
-            def __init__(self, backend):
-                Gio.MountOperation.__init__(self)
-                self.backend = backend
-                self.connect('ask-password', self.ask_password_cb)
-                self.connect('ask-question', self.ask_question_cb)
-
-            def ask_password_cb(self, *args, **kwargs):
-                self.set_password(self.backend.get_password())
-                self.reply(Gio.MountOperationResult.HANDLED)
-
-            def ask_question_cb(self, *args, **kwargs):
-                # Obviously just always answering with the first choice is a naive
-                # approach.  But there's no easy way to allow for answering questions
-                # in duplicity's typical run-from-cron mode with environment variables.
-                # And only a couple gvfs backends ask questions: 'sftp' does about
-                # new hosts and 'afc' does if the device is locked.  0 should be a
-                # safe choice.
-                self.set_choice(0)
-                self.reply(Gio.MountOperationResult.HANDLED)
-
         duplicity.backend.Backend.__init__(self, parsed_url)
 
         ensure_dbus()
@@ -84,86 +86,118 @@
         op = DupMountOperation(self)
         loop = GLib.MainLoop()
         self.remote_file.mount_enclosing_volume(Gio.MountMountFlags.NONE,
-                                                op, None,
-                                                self.__done_with_mount, loop)
+                                                op, None, self.done_with_mount,
+                                                loop)
         loop.run() # halt program until we're done mounting
 
         # Now make the directory if it doesn't exist
         try:
             self.remote_file.make_directory_with_parents(None)
-        except GLib.GError as e:
+        except GLib.GError, e:
             if e.code != Gio.IOErrorEnum.EXISTS:
                 raise
 
-    def __done_with_mount(self, fileobj, result, loop):
-        from gi.repository import Gio #@UnresolvedImport
-        from gi.repository import GLib #@UnresolvedImport
+    def done_with_mount(self, fileobj, result, loop):
         try:
             fileobj.mount_enclosing_volume_finish(result)
-        except GLib.GError as e:
+        except GLib.GError, e:
             # check for NOT_SUPPORTED because some schemas (e.g. file://) validly don't
             if e.code != Gio.IOErrorEnum.ALREADY_MOUNTED and e.code != Gio.IOErrorEnum.NOT_SUPPORTED:
                 log.FatalError(_("Connection failed, please check your password: %s")
                                % util.uexc(e), log.ErrorCode.connection_failed)
         loop.quit()
 
-    def __copy_progress(self, *args, **kwargs):
-        pass
-
-    def __copy_file(self, source, target):
-        from gi.repository import Gio #@UnresolvedImport
-        source.copy(target,
-                    Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS,
-                    None, self.__copy_progress, None)
-
-    def _error_code(self, operation, e):
-        from gi.repository import Gio #@UnresolvedImport
-        from gi.repository import GLib #@UnresolvedImport
+    def handle_error(self, raise_error, e, op, file1=None, file2=None):
+        if raise_error:
+            raise e
+        code = log.ErrorCode.backend_error
         if isinstance(e, GLib.GError):
-            if e.code == Gio.IOErrorEnum.FAILED and operation == 'delete':
-                # Sometimes delete will return a generic failure on a file not
-                # found (notably the FTP does that)
-                return log.ErrorCode.backend_not_found
-            elif e.code == Gio.IOErrorEnum.PERMISSION_DENIED:
-                return log.ErrorCode.backend_permission_denied
+            if e.code == Gio.IOErrorEnum.PERMISSION_DENIED:
+                code = log.ErrorCode.backend_permission_denied
             elif e.code == Gio.IOErrorEnum.NOT_FOUND:
-                return log.ErrorCode.backend_not_found
+                code = log.ErrorCode.backend_not_found
             elif e.code == Gio.IOErrorEnum.NO_SPACE:
-                return log.ErrorCode.backend_no_space
-
-    def _put(self, source_path, remote_filename):
-        from gi.repository import Gio #@UnresolvedImport
+                code = log.ErrorCode.backend_no_space
+        extra = ' '.join([util.escape(x) for x in [file1, file2] if x])
+        extra = ' '.join([op, extra])
+        log.FatalError(util.uexc(e), code, extra)
+
+    def copy_progress(self, *args, **kwargs):
+        pass
+
+    @retry
+    def copy_file(self, op, source, target, raise_errors=False):
+        log.Info(_("Writing %s") % util.ufn(target.get_parse_name()))
+        try:
+            source.copy(target,
+                        Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS,
+                        None, self.copy_progress, None)
+        except Exception, e:
+            self.handle_error(raise_errors, e, op, source.get_parse_name(),
+                              target.get_parse_name())
+
+    def put(self, source_path, remote_filename = None):
+        """Copy file to remote"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
         source_file = Gio.File.new_for_path(source_path.name)
         target_file = self.remote_file.get_child(remote_filename)
-        self.__copy_file(source_file, target_file)
+        self.copy_file('put', source_file, target_file)
 
-    def _get(self, filename, local_path):
-        from gi.repository import Gio #@UnresolvedImport
+    def get(self, filename, local_path):
+        """Get file and put in local_path (Path object)"""
         source_file = self.remote_file.get_child(filename)
         target_file = Gio.File.new_for_path(local_path.name)
-        self.__copy_file(source_file, target_file)
+        self.copy_file('get', source_file, target_file)
+        local_path.setdata()
 
-    def _list(self):
-        from gi.repository import Gio #@UnresolvedImport
+    @retry
+    def _list(self, raise_errors=False):
+        """List files in that directory"""
         files = []
-        enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME,
-                                                   Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
-                                                   None)
-        info = enum.next_file(None)
-        while info:
-            files.append(info.get_name())
+        try:
+            enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME,
+                                                       Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
+                                                       None)
             info = enum.next_file(None)
+            while info:
+                files.append(info.get_name())
+                info = enum.next_file(None)
+        except Exception, e:
+            self.handle_error(raise_errors, e, 'list',
+                              self.remote_file.get_parse_name())
         return files
 
-    def _delete(self, filename):
-        target_file = self.remote_file.get_child(filename)
-        target_file.delete(None)
-
-    def _query(self, filename):
-        from gi.repository import Gio #@UnresolvedImport
-        target_file = self.remote_file.get_child(filename)
-        info = target_file.query_info(Gio.FILE_ATTRIBUTE_STANDARD_SIZE,
-                                      Gio.FileQueryInfoFlags.NONE, None)
-        return {'size': info.get_size()}
-
-duplicity.backend.register_backend_prefix('gio', GIOBackend)
+    @retry
+    def delete(self, filename_list, raise_errors=False):
+        """Delete all files in filename list"""
+        assert type(filename_list) is not types.StringType
+        for filename in filename_list:
+            target_file = self.remote_file.get_child(filename)
+            try:
+                target_file.delete(None)
+            except Exception, e:
+                if isinstance(e, GLib.GError):
+                    if e.code == Gio.IOErrorEnum.NOT_FOUND:
+                        continue
+                self.handle_error(raise_errors, e, 'delete',
+                                  target_file.get_parse_name())
+                return
+
+    @retry
+    def _query_file_info(self, filename, raise_errors=False):
+        """Query attributes on filename"""
+        target_file = self.remote_file.get_child(filename)
+        attrs = Gio.FILE_ATTRIBUTE_STANDARD_SIZE
+        try:
+            info = target_file.query_info(attrs, Gio.FileQueryInfoFlags.NONE,
+                                          None)
+            return {'size': info.get_size()}
+        except Exception, e:
+            if isinstance(e, GLib.GError):
+                if e.code == Gio.IOErrorEnum.NOT_FOUND:
+                    return {'size': -1} # early exit, no need to retry
+            if raise_errors:
+                raise e
+            else:
+                return {'size': None}

=== modified file 'duplicity/backends/hsibackend.py'
--- duplicity/backends/hsibackend.py	2014-04-26 12:54:37 +0000
+++ duplicity/backends/hsibackend.py	2014-06-14 13:58:30 +0000
@@ -20,7 +20,9 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
+
 import duplicity.backend
+from duplicity.errors import * #@UnusedWildImport
 
 hsi_command = "hsi"
 class HSIBackend(duplicity.backend.Backend):
@@ -33,23 +35,36 @@
         else:
             self.remote_prefix = ""
 
-    def _put(self, source_path, remote_filename):
+    def put(self, source_path, remote_filename = None):
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
         commandline = '%s "put %s : %s%s"' % (hsi_command,source_path.name,self.remote_prefix,remote_filename)
-        self.subprocess_popen(commandline)
+        try:
+            self.run_command(commandline)
+        except Exception:
+            print commandline
 
-    def _get(self, remote_filename, local_path):
+    def get(self, remote_filename, local_path):
         commandline = '%s "get %s : %s%s"' % (hsi_command, local_path.name, self.remote_prefix, remote_filename)
-        self.subprocess_popen(commandline)
+        self.run_command(commandline)
+        local_path.setdata()
+        if not local_path.exists():
+            raise BackendException("File %s not found" % local_path.name)
 
-    def _list(self):
+    def list(self):
         commandline = '%s "ls -l %s"' % (hsi_command, self.remote_dir)
         l = os.popen3(commandline)[2].readlines()[3:]
         for i in range(0,len(l)):
             l[i] = l[i].split()[-1]
-        return [x for x in l if x]
+        print filter(lambda x: x, l)
+        return filter(lambda x: x, l)
 
-    def _delete(self, filename):
-        commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, filename)
-        self.subprocess_popen(commandline)
+    def delete(self, filename_list):
+        assert len(filename_list) > 0
+        for fn in filename_list:
+            commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, fn)
+            self.run_command(commandline)
 
 duplicity.backend.register_backend("hsi", HSIBackend)
+
+

=== modified file 'duplicity/backends/imapbackend.py'
--- duplicity/backends/imapbackend.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/imapbackend.py	2014-06-14 13:58:30 +0000
@@ -44,7 +44,7 @@
                   (self.__class__.__name__, parsed_url.scheme, parsed_url.hostname, parsed_url.username))
 
         #  Store url for reconnection on error
-        self.url = parsed_url
+        self._url = parsed_url
 
         #  Set the username
         if ( parsed_url.username is None ):
@@ -54,19 +54,19 @@
 
         #  Set the password
         if ( not parsed_url.password ):
-            if 'IMAP_PASSWORD' in os.environ:
+            if os.environ.has_key('IMAP_PASSWORD'):
                 password = os.environ.get('IMAP_PASSWORD')
             else:
                 password = getpass.getpass("Enter account password: ")
         else:
             password = parsed_url.password
 
-        self.username = username
-        self.password = password
-        self.resetConnection()
+        self._username = username
+        self._password = password
+        self._resetConnection()
 
-    def resetConnection(self):
-        parsed_url = self.url
+    def _resetConnection(self):
+        parsed_url = self._url
         try:
             imap_server = os.environ['IMAP_SERVER']
         except KeyError:
@@ -74,32 +74,32 @@
 
         #  Try to close the connection cleanly
         try:
-            self.conn.close()
+            self._conn.close()
         except Exception:
             pass
 
         if (parsed_url.scheme == "imap"):
             cl = imaplib.IMAP4
-            self.conn = cl(imap_server, 143)
+            self._conn = cl(imap_server, 143)
         elif (parsed_url.scheme == "imaps"):
             cl = imaplib.IMAP4_SSL
-            self.conn = cl(imap_server, 993)
+            self._conn = cl(imap_server, 993)
 
         log.Debug("Type of imap class: %s" % (cl.__name__))
         self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
 
         #  Login
         if (not(globals.imap_full_address)):
-            self.conn.login(self.username, self.password)
-            self.conn.select(globals.imap_mailbox)
+            self._conn.login(self._username, self._password)
+            self._conn.select(globals.imap_mailbox)
             log.Info("IMAP connected")
         else:
-            self.conn.login(self.username + "@" + parsed_url.hostname, self.password)
-            self.conn.select(globals.imap_mailbox)
+            self._conn.login(self._username + "@" + parsed_url.hostname, self._password)
+            self._conn.select(globals.imap_mailbox)
             log.Info("IMAP connected")
 
 
-    def prepareBody(self,f,rname):
+    def _prepareBody(self,f,rname):
         mp = email.MIMEMultipart.MIMEMultipart()
 
         # I am going to use the remote_dir as the From address so that
@@ -117,7 +117,9 @@
 
         return mp.as_string()
 
-    def _put(self, source_path, remote_filename):
+    def put(self, source_path, remote_filename = None):
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
         f=source_path.open("rb")
         allowedTimeout = globals.timeout
         if (allowedTimeout == 0):
@@ -125,12 +127,12 @@
             allowedTimeout = 2880
         while allowedTimeout > 0:
             try:
-                self.conn.select(remote_filename)
-                body=self.prepareBody(f,remote_filename)
+                self._conn.select(remote_filename)
+                body=self._prepareBody(f,remote_filename)
                 # If we don't select the IMAP folder before
                 # append, the message goes into the INBOX.
-                self.conn.select(globals.imap_mailbox)
-                self.conn.append(globals.imap_mailbox, None, None, body)
+                self._conn.select(globals.imap_mailbox)
+                self._conn.append(globals.imap_mailbox, None, None, body)
                 break
             except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                 allowedTimeout -= 1
@@ -138,7 +140,7 @@
                 time.sleep(30)
                 while allowedTimeout > 0:
                     try:
-                        self.resetConnection()
+                        self._resetConnection()
                         break
                     except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                         allowedTimeout -= 1
@@ -147,15 +149,15 @@
 
         log.Info("IMAP mail with '%s' subject stored" % remote_filename)
 
-    def _get(self, remote_filename, local_path):
+    def get(self, remote_filename, local_path):
         allowedTimeout = globals.timeout
         if (allowedTimeout == 0):
             # Allow a total timeout of 1 day
             allowedTimeout = 2880
         while allowedTimeout > 0:
             try:
-                self.conn.select(globals.imap_mailbox)
-                (result,list) = self.conn.search(None, 'Subject', remote_filename)
+                self._conn.select(globals.imap_mailbox)
+                (result,list) = self._conn.search(None, 'Subject', remote_filename)
                 if result != "OK":
                     raise Exception(list[0])
 
@@ -163,7 +165,7 @@
                 if list[0] == '':
                     raise Exception("no mail with subject %s")
 
-                (result,list) = self.conn.fetch(list[0],"(RFC822)")
+                (result,list) = self._conn.fetch(list[0],"(RFC822)")
 
                 if result != "OK":
                     raise Exception(list[0])
@@ -183,7 +185,7 @@
                 time.sleep(30)
                 while allowedTimeout > 0:
                     try:
-                        self.resetConnection()
+                        self._resetConnection()
                         break
                     except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                         allowedTimeout -= 1
@@ -197,7 +199,7 @@
 
     def _list(self):
         ret = []
-        (result,list) = self.conn.select(globals.imap_mailbox)
+        (result,list) = self._conn.select(globals.imap_mailbox)
         if result != "OK":
             raise BackendException(list[0])
 
@@ -205,14 +207,14 @@
         # address
 
         # Search returns an error if you haven't selected an IMAP folder.
-        (result,list) = self.conn.search(None, 'FROM', self.remote_dir)
+        (result,list) = self._conn.search(None, 'FROM', self.remote_dir)
         if result!="OK":
             raise Exception(list[0])
         if list[0]=='':
             return ret
         nums=list[0].split(" ")
         set="%s:%s"%(nums[0],nums[-1])
-        (result,list) = self.conn.fetch(set,"(BODY[HEADER])")
+        (result,list) = self._conn.fetch(set,"(BODY[HEADER])")
         if result!="OK":
             raise Exception(list[0])
 
@@ -230,32 +232,34 @@
                     log.Info("IMAP LIST: %s %s" % (subj,header_from))
         return ret
 
-    def imapf(self,fun,*args):
+    def _imapf(self,fun,*args):
         (ret,list)=fun(*args)
         if ret != "OK":
             raise Exception(list[0])
         return list
 
-    def delete_single_mail(self,i):
-        self.imapf(self.conn.store,i,"+FLAGS",'\\DELETED')
-
-    def expunge(self):
-        list=self.imapf(self.conn.expunge)
-
-    def _delete_list(self, filename_list):
+    def _delete_single_mail(self,i):
+        self._imapf(self._conn.store,i,"+FLAGS",'\\DELETED')
+
+    def _expunge(self):
+        list=self._imapf(self._conn.expunge)
+
+    def delete(self, filename_list):
+        assert len(filename_list) > 0
         for filename in filename_list:
-            list = self.imapf(self.conn.search,None,"(SUBJECT %s)"%filename)
+            list = self._imapf(self._conn.search,None,"(SUBJECT %s)"%filename)
             list = list[0].split()
-            if len(list) > 0 and list[0] != "":
-                self.delete_single_mail(list[0])
-                log.Notice("marked %s to be deleted" % filename)
-        self.expunge()
-        log.Notice("IMAP expunged %s files" % len(filename_list))
+            if len(list) == 0 or list[0] == "":
+                raise Exception("no such mail with subject '%s'" % filename)
+            self._delete_single_mail(list[0])
+            log.Notice("marked %s to be deleted" % filename)
+        self._expunge()
+        log.Notice("IMAP expunged %s files" % len(list))
 
-    def _close(self):
-        self.conn.select(globals.imap_mailbox)
-        self.conn.close()
-        self.conn.logout()
+    def close(self):
+        self._conn.select(globals.imap_mailbox)
+        self._conn.close()
+        self._conn.logout()
 
 duplicity.backend.register_backend("imap", ImapBackend);
 duplicity.backend.register_backend("imaps", ImapBackend);
+

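Each IMAP operation in the hunk above repeats the same retry scheme: catch imaplib/socket errors, decrement an attempt counter, sleep 30 seconds, rebuild the connection and try again, giving up after roughly a day. A minimal standalone sketch of that scheme (the helper name and its callable arguments are placeholders, not duplicity APIs):

    import socket
    import time

    def with_retries(operation, reset_connection, attempts=2880, delay=30):
        # Keep calling operation(); on a connection-level error, sleep,
        # rebuild the connection and retry.  2880 attempts of 30s each is
        # the "total timeout of 1 day" used when globals.timeout is 0.
        while True:
            try:
                return operation()
            except (socket.error, IOError):
                attempts -= 1
                if attempts <= 0:
                    raise
                time.sleep(delay)
                reset_connection()

In the backend the two callables correspond to the actual IMAP call and _resetConnection(); the sketch drops the backend's inner retry loop around reconnection for brevity.
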
=== modified file 'duplicity/backends/localbackend.py'
--- duplicity/backends/localbackend.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/localbackend.py	2014-06-14 13:58:30 +0000
@@ -20,11 +20,14 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
+import types
+import errno
 
 import duplicity.backend
 from duplicity import log
 from duplicity import path
-from duplicity.errors import BackendException
+from duplicity import util
+from duplicity.errors import * #@UnusedWildImport
 
 
 class LocalBackend(duplicity.backend.Backend):
@@ -40,37 +43,90 @@
         if not parsed_url.path.startswith('//'):
             raise BackendException("Bad file:// path syntax.")
         self.remote_pathdir = path.Path(parsed_url.path[2:])
-        try:
-            os.makedirs(self.remote_pathdir.base)
-        except Exception:
-            pass
-
-    def _move(self, source_path, remote_filename):
-        target_path = self.remote_pathdir.append(remote_filename)
-        try:
-            source_path.rename(target_path)
-            return True
-        except OSError:
-            return False
-
-    def _put(self, source_path, remote_filename):
-        target_path = self.remote_pathdir.append(remote_filename)
-        target_path.writefileobj(source_path.open("rb"))
-
-    def _get(self, filename, local_path):
+
+    def handle_error(self, e, op, file1 = None, file2 = None):
+        code = log.ErrorCode.backend_error
+        if hasattr(e, 'errno'):
+            if e.errno == errno.EACCES:
+                code = log.ErrorCode.backend_permission_denied
+            elif e.errno == errno.ENOENT:
+                code = log.ErrorCode.backend_not_found
+            elif e.errno == errno.ENOSPC:
+                code = log.ErrorCode.backend_no_space
+        extra = ' '.join([util.escape(x) for x in [file1, file2] if x])
+        extra = ' '.join([op, extra])
+        if op != 'delete' and op != 'query':
+            log.FatalError(util.uexc(e), code, extra)
+        else:
+            log.Warn(util.uexc(e), code, extra)
+
+    def move(self, source_path, remote_filename = None):
+        self.put(source_path, remote_filename, rename_instead = True)
+
+    def put(self, source_path, remote_filename = None, rename_instead = False):
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+        target_path = self.remote_pathdir.append(remote_filename)
+        log.Info("Writing %s" % target_path.name)
+        """Try renaming first (if allowed to), copying if doesn't work"""
+        if rename_instead:
+            try:
+                source_path.rename(target_path)
+            except OSError:
+                pass
+            except Exception, e:
+                self.handle_error(e, 'put', source_path.name, target_path.name)
+            else:
+                return
+        try:
+            target_path.writefileobj(source_path.open("rb"))
+        except Exception, e:
+            self.handle_error(e, 'put', source_path.name, target_path.name)
+
+        """If we get here, renaming failed previously"""
+        if rename_instead:
+            """We need to simulate its behaviour"""
+            source_path.delete()
+
+    def get(self, filename, local_path):
+        """Get file and put in local_path (Path object)"""
         source_path = self.remote_pathdir.append(filename)
-        local_path.writefileobj(source_path.open("rb"))
+        try:
+            local_path.writefileobj(source_path.open("rb"))
+        except Exception, e:
+            self.handle_error(e, 'get', source_path.name, local_path.name)
 
     def _list(self):
-        return self.remote_pathdir.listdir()
-
-    def _delete(self, filename):
-        self.remote_pathdir.append(filename).delete()
-
-    def _query(self, filename):
-        target_file = self.remote_pathdir.append(filename)
-        target_file.setdata()
-        size = target_file.getsize() if target_file.exists() else -1
-        return {'size': size}
+        """List files in that directory"""
+        try:
+            os.makedirs(self.remote_pathdir.base)
+        except Exception:
+            pass
+        try:
+            return self.remote_pathdir.listdir()
+        except Exception, e:
+            self.handle_error(e, 'list', self.remote_pathdir.name)
+
+    def delete(self, filename_list):
+        """Delete all files in filename list"""
+        assert type(filename_list) is not types.StringType
+        for filename in filename_list:
+            try:
+                self.remote_pathdir.append(filename).delete()
+            except Exception, e:
+                self.handle_error(e, 'delete', self.remote_pathdir.append(filename).name)
+
+    def _query_file_info(self, filename):
+        """Query attributes on filename"""
+        try:
+            target_file = self.remote_pathdir.append(filename)
+            if not os.path.exists(target_file.name):
+                return {'size': -1}
+            target_file.setdata()
+            size = target_file.getsize()
+            return {'size': size}
+        except Exception, e:
+            self.handle_error(e, 'query', target_file.name)
+            return {'size': None}
 
 duplicity.backend.register_backend("file", LocalBackend)

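handle_error() above maps the errno of an OS-level exception onto duplicity's coarse backend error codes before logging a warning or a fatal error. A self-contained sketch of just that mapping (the string codes and the function name are illustrative stand-ins for the log.ErrorCode constants):

    import errno

    ERRNO_TO_CODE = {
        errno.EACCES: 'backend_permission_denied',
        errno.ENOENT: 'backend_not_found',
        errno.ENOSPC: 'backend_no_space',
    }

    def classify_backend_error(exc, default='backend_error'):
        # Translate an OSError/IOError into a coarse backend error code.
        return ERRNO_TO_CODE.get(getattr(exc, 'errno', None), default)

    # try:
    #     open('/no/such/file')
    # except IOError as e:
    #     print(classify_backend_error(e))   # -> 'backend_not_found'
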
=== modified file 'duplicity/backends/megabackend.py'
--- duplicity/backends/megabackend.py	2014-04-26 12:35:04 +0000
+++ duplicity/backends/megabackend.py	2014-06-14 13:58:30 +0000
@@ -22,8 +22,9 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import duplicity.backend
+from duplicity.backend import retry
 from duplicity import log
-from duplicity.errors import BackendException
+from duplicity.errors import * #@UnusedWildImport
 
 
 class MegaBackend(duplicity.backend.Backend):
@@ -62,64 +63,113 @@
 
         self.folder = parent_folder
 
-    def _put(self, source_path, remote_filename):
-        try:
-            self._delete(remote_filename)
-        except Exception:
-            pass
-        self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename)
-
-    def _get(self, remote_filename, local_path):
-        files = self.client.get_files()
-        entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
-        if len(entries):
-            # get first matching remote file
-            entry = entries.keys()[0]
-            self.client.download((entry, entries[entry]), dest_filename=local_path.name)
-        else:
-            raise BackendException("Failed to find file '%s' in remote folder '%s'"
-                                   % (remote_filename, self.__get_node_name(self.folder)),
-                                   code=log.ErrorCode.backend_not_found)
-
-    def _list(self):
-        entries = self.client.get_files_in_node(self.folder)
-        return [self.client.get_name_from_file({entry:entries[entry]}) for entry in entries]
-
-    def _delete(self, filename):
-        files = self.client.get_files()
-        entries = self.__filter_entries(files, self.folder, filename, 'file')
-        if len(entries):
-            self.client.destroy(entries.keys()[0])
-        else:
-            raise BackendException("Failed to find file '%s' in remote folder '%s'"
-                                   % (filename, self.__get_node_name(self.folder)),
-                                   code=log.ErrorCode.backend_not_found)
+    @retry
+    def put(self, source_path, remote_filename=None, raise_errors=False):
+        """Transfer source_path to remote_filename"""
+        # Default remote file name.
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+
+        try:
+            # If remote file already exists in destination folder, remove it.
+            files = self.client.get_files()
+            entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
+
+            for entry in entries:
+                self.client.delete(entry)
+
+            self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename)
+
+        except Exception, e:
+            self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s"
+                                % (source_path.get_canonical(), self.__get_node_name(self.folder), str(e)), raise_errors)
+
+    @retry
+    def get(self, remote_filename, local_path, raise_errors=False):
+        """Get remote filename, saving it to local_path"""
+        try:
+            files = self.client.get_files()
+            entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
+
+            if len(entries):
+                # get first matching remote file
+                entry = entries.keys()[0]
+                self.client.download((entry, entries[entry]), dest_filename=local_path.name)
+                local_path.setdata()
+                return
+            else:
+                self.__handle_error("Failed to find file '%s' in remote folder '%s'"
+                                    % (remote_filename, self.__get_node_name(self.folder)), raise_errors)
+        except Exception, e:
+            self.__handle_error("Failed to download file '%s' in remote folder '%s': %s"
+                                 % (remote_filename, self.__get_node_name(self.folder), str(e)), raise_errors)
+
+    @retry
+    def _list(self, raise_errors=False):
+        """List files in folder"""
+        try:
+            entries = self.client.get_files_in_node(self.folder)
+            return [ self.client.get_name_from_file({entry:entries[entry]}) for entry in entries]
+        except Exception, e:
+            self.__handle_error("Failed to fetch list of files in remote folder '%s': %s"
+                                % (self.__get_node_name(self.folder), str(e)), raise_errors)
+
+    @retry
+    def delete(self, filename_list, raise_errors=False):
+        """Delete files in filename_list"""
+        files = self.client.get_files()
+        for filename in filename_list:
+            entries = self.__filter_entries(files, self.folder, filename)
+            try:
+                if len(entries) > 0:
+                    for entry in entries:
+                        if self.client.destroy(entry):
+                            self.__handle_error("Failed to remove file '%s' in remote folder '%s'"
+                                % (filename, self.__get_node_name(self.folder)), raise_errors)
+                else:
+                    log.Warn("Failed to fetch file '%s' in remote folder '%s'"
+                             % (filename, self.__get_node_name(self.folder)))
+            except Exception, e:
+                self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s"
+                                    % (filename, self.__get_node_name(self.folder), str(e)), raise_errors)
 
     def __get_node_name(self, handle):
         """get node name from public handle"""
         files = self.client.get_files()
         return self.client.get_name_from_file({handle:files[handle]})
+        
+    def __handle_error(self, message, raise_errors=True):
+        if raise_errors:
+            raise BackendException(message)
+        else:
+            log.FatalError(message, log.ErrorCode.backend_error)
 
     def __authorize(self, email, password):
-        self.client.login(email, password)
+        try:
+            self.client.login(email, password)
+        except Exception, e:
+            self.__handle_error('Error while authenticating client: %s.' % str(e))
 
     def __filter_entries(self, entries, parent_id=None, title=None, type=None):
         result = {}
         type_map = { 'folder': 1, 'file': 0 }
 
-        for k, v in entries.items():
-            try:
-                if parent_id != None:
-                    assert(v['p'] == parent_id)
-                if title != None:
-                    assert(v['a']['n'] == title)
-                if type != None:
-                    assert(v['t'] == type_map[type])
-            except AssertionError:
-                continue
-
-            result.update({k:v})
-
-        return result
+        try:
+            for k, v in entries.items():
+                try:
+                    if parent_id != None:
+                        assert(v['p'] == parent_id)
+                    if title != None:
+                        assert(v['a']['n'] == title)
+                    if type != None:
+                        assert(v['t'] == type_map[type])
+                except AssertionError:
+                    continue
+
+                result.update({k:v})
+
+            return result
+        except Exception, e:
+            self.__handle_error('Error while fetching remote entries: %s.' % str(e))
 
 duplicity.backend.register_backend('mega', MegaBackend)

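__filter_entries() above selects Mega node entries by parent handle, display name and node type, using assertions that it immediately catches. The same selection written without the assert/continue idiom, as a rough sketch (the node layout with 'p', 'a'/'n' and 't' keys is taken from the hunk; the sample data is made up):

    def filter_entries(entries, parent_id=None, title=None, node_type=None):
        # Keep only nodes whose parent, name and type match the given filters.
        type_map = {'folder': 1, 'file': 0}
        result = {}
        for handle, node in entries.items():
            if parent_id is not None and node.get('p') != parent_id:
                continue
            if title is not None and node.get('a', {}).get('n') != title:
                continue
            if node_type is not None and node.get('t') != type_map[node_type]:
                continue
            result[handle] = node
        return result

    sample = {'h1': {'p': 'root', 'a': {'n': 'duplicity-full.vol1'}, 't': 0}}
    assert filter_entries(sample, parent_id='root', node_type='file') == sample
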
=== modified file 'duplicity/backends/rsyncbackend.py'
--- duplicity/backends/rsyncbackend.py	2014-04-26 12:54:37 +0000
+++ duplicity/backends/rsyncbackend.py	2014-06-14 13:58:30 +0000
@@ -23,7 +23,7 @@
 import tempfile
 
 import duplicity.backend
-from duplicity.errors import InvalidBackendURL
+from duplicity.errors import * #@UnusedWildImport
 from duplicity import globals, tempdir, util
 
 class RsyncBackend(duplicity.backend.Backend):
@@ -58,13 +58,12 @@
             if port:
                 port = " --port=%s" % port
         else:
-            host_string = host + ":" if host else ""
             if parsed_url.path.startswith("//"):
                 # its an absolute path
-                self.url_string = "%s/%s" % (host_string, parsed_url.path.lstrip('/'))
+                self.url_string = "%s:/%s" % (host, parsed_url.path.lstrip('/'))
             else:
                 # its a relative path
-                self.url_string = "%s%s" % (host_string, parsed_url.path.lstrip('/'))
+                self.url_string = "%s:%s" % (host, parsed_url.path.lstrip('/'))
             if parsed_url.port:
                 port = " -p %s" % parsed_url.port
         # add trailing slash if missing
@@ -106,17 +105,29 @@
         raise InvalidBackendURL("Could not determine rsync path: %s"
                                     "" % self.munge_password( url ) )
 
-    def _put(self, source_path, remote_filename):
+    def run_command(self, commandline):
+        result, stdout, stderr = self.subprocess_popen_persist(commandline)
+        return result, stdout
+
+    def put(self, source_path, remote_filename = None):
+        """Use rsync to copy source_dir/filename to remote computer"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
         remote_path = os.path.join(self.url_string, remote_filename)
         commandline = "%s %s %s" % (self.cmd, source_path.name, remote_path)
-        self.subprocess_popen(commandline)
+        self.run_command(commandline)
 
-    def _get(self, remote_filename, local_path):
+    def get(self, remote_filename, local_path):
+        """Use rsync to get a remote file"""
         remote_path = os.path.join (self.url_string, remote_filename)
         commandline = "%s %s %s" % (self.cmd, remote_path, local_path.name)
-        self.subprocess_popen(commandline)
+        self.run_command(commandline)
+        local_path.setdata()
+        if not local_path.exists():
+            raise BackendException("File %s not found" % local_path.name)
 
-    def _list(self):
+    def list(self):
+        """List files"""
         def split (str):
             line = str.split ()
             if len (line) > 4 and line[4] != '.':
@@ -124,17 +135,20 @@
             else:
                 return None
         commandline = "%s %s" % (self.cmd, self.url_string)
-        result, stdout, stderr = self.subprocess_popen(commandline)
-        return [x for x in map (split, stdout.split('\n')) if x]
+        result, stdout = self.run_command(commandline)
+        return filter(lambda x: x, map (split, stdout.split('\n')))
 
-    def _delete_list(self, filename_list):
+    def delete(self, filename_list):
+        """Delete files."""
         delete_list = filename_list
         dont_delete_list = []
-        for file in self._list ():
+        for file in self.list ():
             if file in delete_list:
                 delete_list.remove (file)
             else:
                 dont_delete_list.append (file)
+        if len (delete_list) > 0:
+            raise BackendException("Files %s not found" % str (delete_list))
 
         dir = tempfile.mkdtemp()
         exclude, exclude_name = tempdir.default().mkstemp_file()
@@ -148,7 +162,7 @@
         exclude.close()
         commandline = ("%s --recursive --delete --exclude-from=%s %s/ %s" %
                                    (self.cmd, exclude_name, dir, self.url_string))
-        self.subprocess_popen(commandline)
+        self.run_command(commandline)
         for file in to_delete:
             util.ignore_missing(os.unlink, file)
         os.rmdir (dir)

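The delete() method above removes remote files by syncing an empty local directory over the remote one with --delete, while an exclude file protects every file that must survive. A rough standalone sketch of that trick, assuming a plain rsync binary on PATH (the paths and the subprocess call are simplifications of the backend's command handling):

    import os
    import subprocess
    import tempfile

    def rsync_delete(url_string, keep_files, rsync_cmd='rsync'):
        # Sync an empty directory over url_string with --delete; everything
        # listed in keep_files is excluded and therefore preserved.
        empty_dir = tempfile.mkdtemp()
        fd, exclude_name = tempfile.mkstemp()
        exclude = os.fdopen(fd, 'w')
        for name in keep_files:
            exclude.write('/%s\n' % name)
        exclude.close()
        subprocess.check_call([rsync_cmd, '--recursive', '--delete',
                               '--exclude-from=%s' % exclude_name,
                               empty_dir + '/', url_string])
        os.unlink(exclude_name)
        os.rmdir(empty_dir)
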
=== modified file 'duplicity/backends/sshbackend.py'
--- duplicity/backends/sshbackend.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/sshbackend.py	2014-06-14 13:58:30 +0000
@@ -18,7 +18,6 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-import duplicity.backend
 from duplicity import globals, log
 
 def warn_option(option, optionvar):
@@ -27,15 +26,11 @@
 
 if (globals.ssh_backend and
     globals.ssh_backend.lower().strip() == 'pexpect'):
-    from ._ssh_pexpect import SSHPExpectBackend as SSHBackend
+    import _ssh_pexpect
 else:
     # take user by the hand to prevent typo driven bug reports
     if globals.ssh_backend.lower().strip() != 'paramiko':
         log.Warn(_("Warning: Selected ssh backend '%s' is neither 'paramiko nor 'pexpect'. Will use default paramiko instead.") % globals.ssh_backend)
     warn_option("--scp-command", globals.scp_command)
     warn_option("--sftp-command", globals.sftp_command)
-    from ._ssh_paramiko import SSHParamikoBackend as SSHBackend
-
-duplicity.backend.register_backend("sftp", SSHBackend)
-duplicity.backend.register_backend("scp", SSHBackend)
-duplicity.backend.register_backend("ssh", SSHBackend)
+    import _ssh_paramiko

=== modified file 'duplicity/backends/swiftbackend.py'
--- duplicity/backends/swiftbackend.py	2014-04-29 23:49:01 +0000
+++ duplicity/backends/swiftbackend.py	2014-06-14 13:58:30 +0000
@@ -19,12 +19,14 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
+import time
 
 import duplicity.backend
+from duplicity import globals
 from duplicity import log
-from duplicity import util
-from duplicity.errors import BackendException
-
+from duplicity.errors import * #@UnusedWildImport
+from duplicity.util import exception_traceback
+from duplicity.backend import retry
 
 class SwiftBackend(duplicity.backend.Backend):
     """
@@ -42,20 +44,20 @@
         conn_kwargs = {}
 
         # if the user has already authenticated
-        if 'SWIFT_PREAUTHURL' in os.environ and 'SWIFT_PREAUTHTOKEN' in os.environ:
+        if os.environ.has_key('SWIFT_PREAUTHURL') and os.environ.has_key('SWIFT_PREAUTHTOKEN'):
             conn_kwargs['preauthurl'] = os.environ['SWIFT_PREAUTHURL']
             conn_kwargs['preauthtoken'] = os.environ['SWIFT_PREAUTHTOKEN']           
         
         else:
-            if 'SWIFT_USERNAME' not in os.environ:
+            if not os.environ.has_key('SWIFT_USERNAME'):
                 raise BackendException('SWIFT_USERNAME environment variable '
                                        'not set.')
 
-            if 'SWIFT_PASSWORD' not in os.environ:
+            if not os.environ.has_key('SWIFT_PASSWORD'):
                 raise BackendException('SWIFT_PASSWORD environment variable '
                                        'not set.')
 
-            if 'SWIFT_AUTHURL' not in os.environ:
+            if not os.environ.has_key('SWIFT_AUTHURL'):
                 raise BackendException('SWIFT_AUTHURL environment variable '
                                        'not set.')
 
@@ -63,11 +65,11 @@
             conn_kwargs['key'] = os.environ['SWIFT_PASSWORD']
             conn_kwargs['authurl'] = os.environ['SWIFT_AUTHURL']
 
-        if 'SWIFT_AUTHVERSION' in os.environ:
+        if os.environ.has_key('SWIFT_AUTHVERSION'):
             conn_kwargs['auth_version'] = os.environ['SWIFT_AUTHVERSION']
         else:
             conn_kwargs['auth_version'] = '1'
-        if 'SWIFT_TENANTNAME' in os.environ:
+        if os.environ.has_key('SWIFT_TENANTNAME'):
             conn_kwargs['tenant_name'] = os.environ['SWIFT_TENANTNAME']
             
         self.container = parsed_url.path.lstrip('/')
@@ -75,35 +77,126 @@
         try:
             self.conn = Connection(**conn_kwargs)
             self.conn.put_container(self.container)
-        except Exception as e:
+        except Exception, e:
             log.FatalError("Connection failed: %s %s"
-                           % (e.__class__.__name__, util.uexc(e)),
+                           % (e.__class__.__name__, str(e)),
                            log.ErrorCode.connection_failed)
 
-    def _error_code(self, operation, e):
-        if isinstance(e, self.resp_exc):
-            if e.http_status == 404:
-                return log.ErrorCode.backend_not_found
-
-    def _put(self, source_path, remote_filename):
-        self.conn.put_object(self.container, remote_filename,
-                             file(source_path.name))
-
-    def _get(self, remote_filename, local_path):
-        headers, body = self.conn.get_object(self.container, remote_filename)
-        with open(local_path.name, 'wb') as f:
-            for chunk in body:
-                f.write(chunk)
+    def put(self, source_path, remote_filename = None):
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+
+        for n in range(1, globals.num_retries+1):
+            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
+            try:
+                self.conn.put_object(self.container,
+                                     remote_filename, 
+                                     file(source_path.name))
+                return
+            except self.resp_exc, error:
+                log.Warn("Upload of '%s' failed (attempt %d): Swift server returned: %s %s"
+                         % (remote_filename, n, error.http_status, error.message))
+            except Exception, e:
+                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
+                        % (remote_filename, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up uploading '%s' after %s attempts"
+                 % (remote_filename, globals.num_retries))
+        raise BackendException("Error uploading '%s'" % remote_filename)
+
+    def get(self, remote_filename, local_path):
+        for n in range(1, globals.num_retries+1):
+            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
+            try:
+                headers, body = self.conn.get_object(self.container,
+                                                     remote_filename)
+                f = open(local_path.name, 'wb')
+                for chunk in body:
+                    f.write(chunk)
+                f.close()
+                local_path.setdata()
+                return
+            except self.resp_exc, resperr:
+                log.Warn("Download of '%s' failed (attempt %s): Swift server returned: %s %s"
+                         % (remote_filename, n, resperr.http_status, resperr.message))
+            except Exception, e:
+                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
+                         % (remote_filename, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up downloading '%s' after %s attempts"
+                 % (remote_filename, globals.num_retries))
+        raise BackendException("Error downloading '%s/%s'"
+                               % (self.container, remote_filename))
 
     def _list(self):
-        headers, objs = self.conn.get_container(self.container)
-        return [ o['name'] for o in objs ]
-
-    def _delete(self, filename):
-        self.conn.delete_object(self.container, filename)
-
-    def _query(self, filename):
-        sobject = self.conn.head_object(self.container, filename)
-        return {'size': int(sobject['content-length'])}
+        for n in range(1, globals.num_retries+1):
+            log.Info("Listing '%s'" % (self.container))
+            try:
+                # Cloud Files will return a max of 10,000 objects.  We have
+                # to make multiple requests to get them all.
+                headers, objs = self.conn.get_container(self.container)
+                return [ o['name'] for o in objs ]
+            except self.resp_exc, resperr:
+                log.Warn("Listing of '%s' failed (attempt %s): Swift server returned: %s %s"
+                         % (self.container, n, resperr.http_status, resperr.message))
+            except Exception, e:
+                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
+                         % (self.container, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up listing of '%s' after %s attempts"
+                 % (self.container, globals.num_retries))
+        raise BackendException("Error listing '%s'"
+                               % (self.container))
+
+    def delete_one(self, remote_filename):
+        for n in range(1, globals.num_retries+1):
+            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
+            try:
+                self.conn.delete_object(self.container, remote_filename)
+                return
+            except self.resp_exc, resperr:
+                if n > 1 and resperr.http_status == 404:
+                    # We failed on a timeout, but delete succeeded on the server
+                    log.Warn("Delete of '%s' missing after retry - must have succeded earlier" % remote_filename )
+                    return
+                log.Warn("Delete of '%s' failed (attempt %s): Swift server returned: %s %s"
+                         % (remote_filename, n, resperr.http_status, resperr.message))
+            except Exception, e:
+                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
+                         % (remote_filename, n, e.__class__.__name__, str(e)))
+                log.Debug("Backtrace of previous error: %s"
+                          % exception_traceback())
+            time.sleep(30)
+        log.Warn("Giving up deleting '%s' after %s attempts"
+                 % (remote_filename, globals.num_retries))
+        raise BackendException("Error deleting '%s/%s'"
+                               % (self.container, remote_filename))
+
+    def delete(self, filename_list):
+        for file in filename_list:
+            self.delete_one(file)
+            log.Debug("Deleted '%s/%s'" % (self.container, file))
+
+    @retry
+    def _query_file_info(self, filename, raise_errors=False):
+        try:
+            sobject = self.conn.head_object(self.container, filename)
+            return {'size': long(sobject['content-length'])}
+        except self.resp_exc:
+            return {'size': -1}
+        except Exception, e:
+            log.Warn("Error querying '%s/%s': %s"
+                     "" % (self.container,
+                           filename,
+                           str(e)))
+            if raise_errors:
+                raise e
+            else:
+                return {'size': None}
 
 duplicity.backend.register_backend("swift", SwiftBackend)

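The constructor changes above rebuild the swiftclient Connection() keyword arguments from environment variables, preferring a pre-authenticated URL/token pair and otherwise requiring username, password and auth URL. A sketch of that precedence on its own (raising ValueError here stands in for BackendException):

    import os

    def swift_connection_kwargs(env=os.environ):
        kwargs = {}
        if 'SWIFT_PREAUTHURL' in env and 'SWIFT_PREAUTHTOKEN' in env:
            # Already authenticated: reuse the storage URL and token.
            kwargs['preauthurl'] = env['SWIFT_PREAUTHURL']
            kwargs['preauthtoken'] = env['SWIFT_PREAUTHTOKEN']
        else:
            for var, key in (('SWIFT_USERNAME', 'user'),
                             ('SWIFT_PASSWORD', 'key'),
                             ('SWIFT_AUTHURL', 'authurl')):
                if var not in env:
                    raise ValueError('%s environment variable not set.' % var)
                kwargs[key] = env[var]
        kwargs['auth_version'] = env.get('SWIFT_AUTHVERSION', '1')
        if 'SWIFT_TENANTNAME' in env:
            kwargs['tenant_name'] = env['SWIFT_TENANTNAME']
        return kwargs
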
=== modified file 'duplicity/backends/tahoebackend.py'
--- duplicity/backends/tahoebackend.py	2014-04-22 15:33:00 +0000
+++ duplicity/backends/tahoebackend.py	2014-06-14 13:58:30 +0000
@@ -20,8 +20,9 @@
 
 import duplicity.backend
 from duplicity import log
-from duplicity.errors import BackendException
+from duplicity.errors import * #@UnusedWildImport
 
+from commands import getstatusoutput
 
 class TAHOEBackend(duplicity.backend.Backend):
     """
@@ -35,8 +36,10 @@
 
         self.alias = url[0]
 
-        if len(url) > 1:
+        if len(url) > 2:
             self.directory = "/".join(url[1:])
+        elif len(url) == 2:
+            self.directory = url[1]
         else:
             self.directory = ""
 
@@ -56,20 +59,28 @@
 
     def run(self, *args):
         cmd = " ".join(args)
-        _, output, _ = self.subprocess_popen(cmd)
-        return output
-
-    def _put(self, source_path, remote_filename):
+        log.Debug("tahoe execute: %s" % cmd)
+        (status, output) = getstatusoutput(cmd)
+
+        if status != 0:
+            raise BackendException("Error running %s" % cmd)
+        else:
+            return output
+
+    def put(self, source_path, remote_filename=None):
         self.run("tahoe", "cp", source_path.name, self.get_remote_path(remote_filename))
 
-    def _get(self, remote_filename, local_path):
+    def get(self, remote_filename, local_path):
         self.run("tahoe", "cp", self.get_remote_path(remote_filename), local_path.name)
+        local_path.setdata()
 
     def _list(self):
-        output = self.run("tahoe", "ls", self.get_remote_path())
-        return output.split('\n') if output else []
+        log.Debug("tahoe: List")
+        return self.run("tahoe", "ls", self.get_remote_path()).split('\n')
 
-    def _delete(self, filename):
-        self.run("tahoe", "rm", self.get_remote_path(filename))
+    def delete(self, filename_list):
+        log.Debug("tahoe: delete(%s)" % filename_list)
+        for filename in filename_list:
+            self.run("tahoe", "rm", self.get_remote_path(filename))
 
 duplicity.backend.register_backend("tahoe", TAHOEBackend)

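run() above now shells out through commands.getstatusoutput and raises when the tahoe CLI exits non-zero. A minimal sketch of that wrapper (RuntimeError stands in for BackendException; the commands module is Python 2 only, subprocess.getstatusoutput being the Python 3 equivalent):

    from commands import getstatusoutput   # Python 2; use subprocess on Python 3

    def run_tahoe(*args):
        # Join the arguments into one shell command, run it, and return its
        # output; any non-zero exit status becomes an exception.
        cmd = ' '.join(args)
        status, output = getstatusoutput(cmd)
        if status != 0:
            raise RuntimeError('Error running %s' % cmd)
        return output

    # e.g. run_tahoe('tahoe', 'ls', 'backup:some/dir').split('\n')
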
=== modified file 'duplicity/backends/webdavbackend.py'
--- duplicity/backends/webdavbackend.py	2014-05-07 21:41:31 +0000
+++ duplicity/backends/webdavbackend.py	2014-06-14 13:58:30 +0000
@@ -26,14 +26,14 @@
 import re
 import urllib
 import urllib2
-import urlparse
 import xml.dom.minidom
 
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity import util
-from duplicity.errors import BackendException, FatalBackendException
+from duplicity.errors import * #@UnusedWildImport
+from duplicity import urlparse_2_5 as urlparser
+from duplicity.backend import retry_fatal
 
 class CustomMethodRequest(urllib2.Request):
     """
@@ -54,7 +54,7 @@
                 global socket, ssl
                 import socket, ssl
             except ImportError:
-                raise FatalBackendException("Missing socket or ssl libraries.")
+                raise FatalBackendError("Missing socket or ssl libraries.")
 
             httplib.HTTPSConnection.__init__(self, *args, **kwargs)
 
@@ -71,21 +71,21 @@
                         break
             # still no cacert file, inform user
             if not self.cacert_file:
-                raise FatalBackendException("""For certificate verification a cacert database file is needed in one of these locations: %s
+                raise FatalBackendError("""For certificate verification a cacert database file is needed in one of these locations: %s
 Hints:
   Consult the man page, chapter 'SSL Certificate Verification'.
   Consider using the options --ssl-cacert-file, --ssl-no-check-certificate .""" % ", ".join(cacert_candidates) )
             # check if file is accessible (libssl errors are not very detailed)
             if not os.access(self.cacert_file, os.R_OK):
-                raise FatalBackendException("Cacert database file '%s' is not readable." % cacert_file)
+                raise FatalBackendError("Cacert database file '%s' is not readable." % cacert_file)
 
         def connect(self):
             # create new socket
             sock = socket.create_connection((self.host, self.port),
                                             self.timeout)
-            if self.tunnel_host:
+            if self._tunnel_host:
                 self.sock = sock
-                self.tunnel()
+                self._tunnel()
 
             # wrap the socket in ssl using verification
             self.sock = ssl.wrap_socket(sock,
@@ -96,9 +96,9 @@
         def request(self, *args, **kwargs):
             try:
                 return httplib.HTTPSConnection.request(self, *args, **kwargs)
-            except ssl.SSLError as e:
+            except ssl.SSLError, e:
                 # encapsulate ssl errors
-                raise BackendException("SSL failed: %s" % util.uexc(e),log.ErrorCode.backend_error)
+                raise BackendException("SSL failed: %s" % str(e),log.ErrorCode.backend_error)
 
 
 class WebDAVBackend(duplicity.backend.Backend):
@@ -126,7 +126,7 @@
 
         self.username = parsed_url.username
         self.password = self.get_password()
-        self.directory = self.sanitize_path(parsed_url.path)
+        self.directory = self._sanitize_path(parsed_url.path)
 
         log.Info("Using WebDAV protocol %s" % (globals.webdav_proto,))
         log.Info("Using WebDAV host %s port %s" % (parsed_url.hostname, parsed_url.port))
@@ -134,33 +134,30 @@
 
         self.conn = None
 
-    def sanitize_path(self,path):
+    def _sanitize_path(self,path):
         if path:
             foldpath = re.compile('/+')
             return foldpath.sub('/', path + '/' )
         else:
             return '/'
 
-    def getText(self,nodelist):
+    def _getText(self,nodelist):
         rc = ""
         for node in nodelist:
             if node.nodeType == node.TEXT_NODE:
                 rc = rc + node.data
         return rc
 
-    def _retry_cleanup(self):
-        self.connect(forced=True)
-
-    def connect(self, forced=False):
+    def _connect(self, forced=False):
         """
         Connect or re-connect to the server, updates self.conn
         # reconnect on errors as a precaution, there are errors e.g.
         # "[Errno 32] Broken pipe" or SSl errors that render the connection unusable
         """
-        if not forced and self.conn \
+        if self.retry_count<=1 and self.conn \
             and self.conn.host == self.parsed_url.hostname: return
 
-        log.Info("WebDAV create connection on '%s'" % (self.parsed_url.hostname))
+        log.Info("WebDAV create connection on '%s' (retry %s) " % (self.parsed_url.hostname,self.retry_count) )
         if self.conn: self.conn.close()
         # http schemes needed for redirect urls from servers
         if self.parsed_url.scheme in ['webdav','http']:
@@ -171,9 +168,9 @@
             else:
                 self.conn = VerifiedHTTPSConnection(self.parsed_url.hostname, self.parsed_url.port)
         else:
-            raise FatalBackendException("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme))
+            raise FatalBackendError("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme))
 
-    def _close(self):
+    def close(self):
         self.conn.close()
 
     def request(self, method, path, data=None, redirected=0):
@@ -181,7 +178,7 @@
         Wraps the connection.request method to retry once if authentication is
         required
         """
-        self.connect()
+        self._connect()
 
         quoted_path = urllib.quote(path,"/:~")
 
@@ -200,12 +197,12 @@
             if redirect_url:
                 log.Notice("WebDAV redirect to: %s " % urllib.unquote(redirect_url) )
                 if redirected > 10:
-                    raise FatalBackendException("WebDAV redirected 10 times. Giving up.")
+                    raise FatalBackendError("WebDAV redirected 10 times. Giving up.")
                 self.parsed_url = duplicity.backend.ParsedUrl(redirect_url)
-                self.directory = self.sanitize_path(self.parsed_url.path)
+                self.directory = self._sanitize_path(self.parsed_url.path)
                 return self.request(method,self.directory,data,redirected+1)
             else:
-                raise FatalBackendException("WebDAV missing location header in redirect response.")
+                raise FatalBackendError("WebDAV missing location header in redirect response.")
         elif response.status == 401:
             response.read()
             response.close()
@@ -265,7 +262,10 @@
         auth_string = self.digest_auth_handler.get_authorization(dummy_req, self.digest_challenge)
         return 'Digest %s' % auth_string
 
+    @retry_fatal
     def _list(self):
+        """List files in directory"""
+        log.Info("Listing directory %s on WebDAV server" % (self.directory,))
         response = None
         try:
             self.headers['Depth'] = "1"
@@ -290,14 +290,14 @@
             dom = xml.dom.minidom.parseString(document)
             result = []
             for href in dom.getElementsByTagName('d:href') + dom.getElementsByTagName('D:href'):
-                filename = self.taste_href(href)
+                filename = self.__taste_href(href)
                 if filename:
                     result.append(filename)
+            if response: response.close()
             return result
-        except Exception as e:
+        except Exception, e:
+            if response: response.close()
             raise e
-        finally:
-            if response: response.close()
 
     def makedir(self):
         """Make (nested) directories on the server."""
@@ -309,7 +309,7 @@
         for i in range(1,len(dirs)):
             d="/".join(dirs[0:i+1])+"/"
 
-            self._close() # or we get previous request's data or exception
+            self.close() # or we get previous request's data or exception
             self.headers['Depth'] = "1"
             response = self.request("PROPFIND", d)
             del self.headers['Depth']
@@ -318,22 +318,22 @@
 
             if response.status == 404:
                 log.Info("Creating missing directory %s" % d)
-                self._close() # or we get previous request's data or exception
+                self.close() # or we get previous request's data or exception
 
                 res = self.request("MKCOL", d)
                 if res.status != 201:
                     raise BackendException("WebDAV MKCOL %s failed: %s %s" % (d,res.status,res.reason))
-                self._close()
+                self.close()
 
-    def taste_href(self, href):
+    def __taste_href(self, href):
         """
         Internal helper to taste the given href node and, if
         it is a duplicity file, collect it as a result file.
 
         @return: A matching filename, or None if the href did not match.
         """
-        raw_filename = self.getText(href.childNodes).strip()
-        parsed_url = urlparse.urlparse(urllib.unquote(raw_filename))
+        raw_filename = self._getText(href.childNodes).strip()
+        parsed_url = urlparser.urlparse(urllib.unquote(raw_filename))
         filename = parsed_url.path
         log.Debug("webdav path decoding and translation: "
                   "%s -> %s" % (raw_filename, filename))
@@ -362,8 +362,11 @@
         else:
             return None
 
-    def _get(self, remote_filename, local_path):
+    @retry_fatal
+    def get(self, remote_filename, local_path):
+        """Get remote filename, saving it to local_path"""
         url = self.directory + remote_filename
+        log.Info("Retrieving %s from WebDAV server" % (url ,))
         response = None
         try:
             target_file = local_path.open("wb")
@@ -374,24 +377,31 @@
                 #import hashlib
                 #log.Info("WebDAV GOT %s bytes with md5=%s" % (len(data),hashlib.md5(data).hexdigest()) )
                 assert not target_file.close()
+                local_path.setdata()
                 response.close()
             else:
                 status = response.status
                 reason = response.reason
                 response.close()
                 raise BackendException("Bad status code %s reason %s." % (status,reason))
-        except Exception as e:
+            if response: response.close()
+        except Exception, e:
+            if response: response.close()
             raise e
-        finally:
-            if response: response.close()
 
-    def _put(self, source_path, remote_filename):
+    @retry_fatal
+    def put(self, source_path, remote_filename = None):
+        """Transfer source_path to remote_filename"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
         url = self.directory + remote_filename
+        log.Info("Saving %s on WebDAV server" % (url ,))
         response = None
         try:
             source_file = source_path.open("rb")
             response = self.request("PUT", url, source_file.read())
-            if response.status in [201, 204]:
+            # 200 is returned if a file is overwritten during restarting
+            if response.status in [200, 201, 204]:
                 response.read()
                 response.close()
             else:
@@ -399,28 +409,32 @@
                 reason = response.reason
                 response.close()
                 raise BackendException("Bad status code %s reason %s." % (status,reason))
-        except Exception as e:
+            if response: response.close()
+        except Exception, e:
+            if response: response.close()
             raise e
-        finally:
-            if response: response.close()
 
-    def _delete(self, filename):
-        url = self.directory + filename
-        response = None
-        try:
-            response = self.request("DELETE", url)
-            if response.status in [200, 204]:
-                response.read()
-                response.close()
-            else:
-                status = response.status
-                reason = response.reason
-                response.close()
-                raise BackendException("Bad status code %s reason %s." % (status,reason))
-        except Exception as e:
-            raise e
-        finally:
-            if response: response.close()
+    @retry_fatal
+    def delete(self, filename_list):
+        """Delete files in filename_list"""
+        for filename in filename_list:
+            url = self.directory + filename
+            log.Info("Deleting %s from WebDAV server" % (url ,))
+            response = None
+            try:
+                response = self.request("DELETE", url)
+                if response.status in [200, 204]:
+                    response.read()
+                    response.close()
+                else:
+                    status = response.status
+                    reason = response.reason
+                    response.close()
+                    raise BackendException("Bad status code %s reason %s." % (status,reason))
+                if response: response.close()
+            except Exception, e:
+                if response: response.close()
+                raise e
 
 duplicity.backend.register_backend("webdav", WebDAVBackend)
 duplicity.backend.register_backend("webdavs", WebDAVBackend)

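The put() change above is the core of this branch: a WebDAV PUT normally answers 201 (Created) or 204 (No Content), but when a restarted backup overwrites an already uploaded volume some servers answer 200 (OK), which previously made duplicity abort. A tiny sketch of the accepted-status check (RuntimeError stands in for BackendException):

    def check_put_status(status, reason):
        # 201/204 are the usual success answers for PUT; 200 shows up when
        # an existing file is overwritten during a restarted backup.
        if status not in (200, 201, 204):
            raise RuntimeError('Bad status code %s reason %s.' % (status, reason))

    check_put_status(201, 'Created')
    check_put_status(200, 'OK')          # accepted with this fix
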
=== renamed file 'duplicity/backends/par2backend.py' => 'duplicity/backends/~par2wrapperbackend.py'
--- duplicity/backends/par2backend.py	2014-04-28 02:49:39 +0000
+++ duplicity/backends/~par2wrapperbackend.py	2014-06-14 13:58:30 +0000
@@ -16,16 +16,15 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-from future_builtins import filter
-
 import os
 import re
 from duplicity import backend
-from duplicity.errors import BackendException
+from duplicity.errors import UnsupportedBackendScheme, BackendException
+from duplicity.pexpect import run
 from duplicity import log
 from duplicity import globals
 
-class Par2Backend(backend.Backend):
+class Par2WrapperBackend(backend.Backend):
     """This backend wrap around other backends and create Par2 recovery files
     before the file and the Par2 files are transfered with the wrapped backend.
     
@@ -39,15 +38,13 @@
         except AttributeError:
             self.redundancy = 10
 
-        self.wrapped_backend = backend.get_backend_object(parsed_url.url_string)
-
-        for attr in ['_get', '_put', '_list', '_delete', '_delete_list',
-                     '_query', '_query_list', '_retry_cleanup', '_error_code',
-                     '_move', '_close']:
-            if hasattr(self.wrapped_backend, attr):
-                setattr(self, attr, getattr(self, attr[1:]))
-
-    def transfer(self, method, source_path, remote_filename):
+        try:
+            url_string = self.parsed_url.url_string.lstrip('par2+')
+            self.wrapped_backend = backend.get_backend(url_string)
+        except:
+            raise UnsupportedBackendScheme(self.parsed_url.url_string)
+
+    def put(self, source_path, remote_filename = None):
         """create Par2 files and transfer the given file and the Par2 files
         with the wrapped backend.
         
@@ -55,37 +52,34 @@
         temp-filename later on. So first of all create a tempdir and symlink
         the soure_path with remote_filename into this. 
         """
-        import pexpect
+        if remote_filename is None:
+            remote_filename = source_path.get_filename()
 
         par2temp = source_path.get_temp_in_same_dir()
         par2temp.mkdir()
         source_symlink = par2temp.append(remote_filename)
-        source_target = source_path.get_canonical()
-        if not os.path.isabs(source_target):
-            source_target = os.path.join(os.getcwd(), source_target)
-        os.symlink(source_target, source_symlink.get_canonical())
+        os.symlink(source_path.get_canonical(), source_symlink.get_canonical())
         source_symlink.setdata()
 
         log.Info("Create Par2 recovery files")
         par2create = 'par2 c -r%d -n1 -q -q %s' % (self.redundancy, source_symlink.get_canonical())
-        out, returncode = pexpect.run(par2create, -1, True)
+        out, returncode = run(par2create, -1, True)
         source_symlink.delete()
         files_to_transfer = []
         if not returncode:
             for file in par2temp.listdir():
                 files_to_transfer.append(par2temp.append(file))
 
-        method(source_path, remote_filename)
+        ret = self.wrapped_backend.put(source_path, remote_filename)
         for file in files_to_transfer:
-            method(file, file.get_filename())
+            self.wrapped_backend.put(file, file.get_filename())
 
         par2temp.deltree()
-
-    def put(self, local, remote):
-        self.transfer(self.wrapped_backend._put, local, remote)
-
-    def move(self, local, remote):
-        self.transfer(self.wrapped_backend._move, local, remote)
+        return ret
+
+    def move(self, source_path, remote_filename = None):
+        self.put(source_path, remote_filename)
+        source_path.delete()
 
     def get(self, remote_filename, local_path):
         """transfer remote_filename and the related .par2 file into
@@ -95,31 +89,29 @@
         If "par2 verify" detect an error transfer the Par2-volumes into the
         temp-dir and try to repair.
         """
-        import pexpect
         par2temp = local_path.get_temp_in_same_dir()
         par2temp.mkdir()
         local_path_temp = par2temp.append(remote_filename)
 
-        self.wrapped_backend._get(remote_filename, local_path_temp)
+        ret = self.wrapped_backend.get(remote_filename, local_path_temp)
 
         try:
             par2file = par2temp.append(remote_filename + '.par2')
-            self.wrapped_backend._get(par2file.get_filename(), par2file)
+            self.wrapped_backend.get(par2file.get_filename(), par2file)
 
             par2verify = 'par2 v -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical())
-            out, returncode = pexpect.run(par2verify, -1, True)
+            out, returncode = run(par2verify, -1, True)
 
             if returncode:
                 log.Warn("File is corrupt. Try to repair %s" % remote_filename)
-                par2volumes = filter(re.compile((r'%s\.vol[\d+]*\.par2' % remote_filename).match,
-                                     self.wrapped_backend._list()))
+                par2volumes = self.list(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename))
 
                 for filename in par2volumes:
                     file = par2temp.append(filename)
-                    self.wrapped_backend._get(filename, file)
+                    self.wrapped_backend.get(filename, file)
 
                 par2repair = 'par2 r -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical())
-                out, returncode = pexpect.run(par2repair, -1, True)
+                out, returncode = run(par2repair, -1, True)
 
                 if returncode:
                     log.Error("Failed to repair %s" % remote_filename)
@@ -128,26 +120,27 @@
         except BackendException:
             #par2 file not available
             pass
-        finally:
-            local_path_temp.rename(local_path)
-            par2temp.deltree()
+        local_path_temp.rename(local_path)
+        par2temp.deltree()
+        return ret
 
-    def delete(self, filename):
-        """delete given filename and its .par2 files
+    def list(self, filter = re.compile(r'(?!.*\.par2$)')):
+        """default filter all files that ends with ".par"
+        filter can be a re.compile instance or False for all remote files
         """
-        self.wrapped_backend._delete(filename)
-
-        remote_list = self.list()
-        filename_list = [filename]
-        c =  re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
-        for remote_filename in remote_list:
-            if c.match(remote_filename):
-                self.wrapped_backend._delete(remote_filename)
-
-    def delete_list(self, filename_list):
+        list = self.wrapped_backend.list()
+        if not filter:
+            return list
+        filtered_list = []
+        for item in list:
+            if filter.match(item):
+                filtered_list.append(item)
+        return filtered_list
+
+    def delete(self, filename_list):
         """delete given filename_list and all .par2 files that belong to them
         """
-        remote_list = self.list()
+        remote_list = self.list(False)
 
         for filename in filename_list[:]:
             c =  re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
@@ -155,25 +148,46 @@
                 if c.match(remote_filename):
                     filename_list.append(remote_filename)
 
-        return self.wrapped_backend._delete_list(filename_list)
-
-
-    def list(self):
-        return self.wrapped_backend._list()
-
-    def retry_cleanup(self):
-        self.wrapped_backend._retry_cleanup()
-
-    def error_code(self, operation, e):
-        return self.wrapped_backend._error_code(operation, e)
-
-    def query(self, filename):
-        return self.wrapped_backend._query(filename)
-
-    def query_list(self, filename_list):
-        return self.wrapped_backend._query(filename_list)
+        return self.wrapped_backend.delete(filename_list)
+
+    """just return the output of coresponding wrapped backend
+    for all other functions
+    """
+    def query_info(self, filename_list, raise_errors=True):
+        return self.wrapped_backend.query_info(filename_list, raise_errors)
+
+    def get_password(self):
+        return self.wrapped_backend.get_password()
+
+    def munge_password(self, commandline):
+        return self.wrapped_backend.munge_password(commandline)
+
+    def run_command(self, commandline):
+        return self.wrapped_backend.run_command(commandline)
+    def run_command_persist(self, commandline):
+        return self.wrapped_backend.run_command_persist(commandline)
+
+    def popen(self, commandline):
+        return self.wrapped_backend.popen(commandline)
+    def popen_persist(self, commandline):
+        return self.wrapped_backend.popen_persist(commandline)
+
+    def _subprocess_popen(self, commandline):
+        return self.wrapped_backend._subprocess_popen(commandline)
+
+    def subprocess_popen(self, commandline):
+        return self.wrapped_backend.subprocess_popen(commandline)
+
+    def subprocess_popen_persist(self, commandline):
+        return self.wrapped_backend.subprocess_popen_persist(commandline)
 
     def close(self):
-        self.wrapped_backend._close()
-
-backend.register_backend_prefix('par2', Par2Backend)
+        return self.wrapped_backend.close()
+
+"""register this backend with leading "par2+" for all already known backends
+
+files must be sorted in duplicity.backend.import_backends to catch
+all supported backends
+"""
+for item in backend._backends.keys():
+    backend.register_backend('par2+' + item, Par2WrapperBackend)

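The wrapper above drives the par2 command-line tool for create, verify and repair. The three command shapes it builds, with a hypothetical archive path filled in ('-r' sets the redundancy percentage, '-n1' limits output to one recovery file, '-q -q' keeps par2 quiet):

    redundancy = 10                                        # wrapper's default redundancy
    archive = '/tmp/duplicity-full.vol1.difftar.gpg'       # hypothetical path
    par2file = archive + '.par2'

    par2create = 'par2 c -r%d -n1 -q -q %s' % (redundancy, archive)
    par2verify = 'par2 v -q -q %s %s' % (par2file, archive)
    par2repair = 'par2 r -q -q %s %s' % (par2file, archive)
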
=== modified file 'duplicity/cached_ops.py'
--- duplicity/cached_ops.py	2014-04-17 20:50:57 +0000
+++ duplicity/cached_ops.py	2014-06-14 13:58:30 +0000
@@ -34,7 +34,7 @@
     def __call__(self, *args):
         try:
             return self.cache[args]
-        except (KeyError, TypeError) as e:
+        except (KeyError, TypeError), e:
             result = self.f(*args)
             if not isinstance(e, TypeError):
                 # TypeError most likely means that args is not hashable

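The cached_ops hunk only swaps the except syntax, but the surrounding decorator is worth seeing whole: memoise f(*args) in a dict and, when the arguments are unhashable (TypeError from the dict lookup), call through without caching. A reconstructed sketch of that decorator (the class name here is arbitrary, and it uses the 'except ... as' spelling where the 0.6 tree uses the older comma form):

    class cached(object):
        def __init__(self, f):
            self.f = f
            self.cache = {}

        def __call__(self, *args):
            try:
                return self.cache[args]
            except (KeyError, TypeError) as e:
                result = self.f(*args)
                if not isinstance(e, TypeError):
                    # TypeError means args is unhashable; skip caching then.
                    self.cache[args] = result
                return result

    @cached
    def square(x):
        return x * x

    assert square(4) == 16 and square(4) == 16   # second call hits the cache
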
=== modified file 'duplicity/collections.py'
--- duplicity/collections.py	2014-04-25 23:53:46 +0000
+++ duplicity/collections.py	2014-06-14 13:58:30 +0000
@@ -21,8 +21,6 @@
 
 """Classes and functions on collections of backup volumes"""
 
-from future_builtins import filter, map
-
 import types
 import gettext
 
@@ -98,7 +96,7 @@
             self.set_manifest(filename)
         else:
             assert pr.volume_number is not None
-            assert pr.volume_number not in self.volume_name_dict, \
+            assert not self.volume_name_dict.has_key(pr.volume_number), \
                    (self.volume_name_dict, filename)
             self.volume_name_dict[pr.volume_number] = filename
 
@@ -149,7 +147,7 @@
         try:
             self.backend.delete(rfn)
         except Exception:
-            log.Debug(_("BackupSet.delete: missing %s") % [util.ufn(f) for f in rfn])
+            log.Debug(_("BackupSet.delete: missing %s") % map(util.ufn, rfn))
             pass
         for lfn in globals.archive_dir.listdir():
             pr = file_naming.parse(lfn)
@@ -160,7 +158,7 @@
                 try:
                     globals.archive_dir.append(lfn).delete()
                 except Exception:
-                    log.Debug(_("BackupSet.delete: missing %s") % [util.ufn(f) for f in lfn])
+                    log.Debug(_("BackupSet.delete: missing %s") % map(util.ufn, lfn))
                     pass
         util.release_lockfile()
 
@@ -224,7 +222,7 @@
         # public key w/o secret key
         try:
             manifest_buffer = self.backend.get_data(self.remote_manifest_name)
-        except GPGError as message:
+        except GPGError, message:
             #TODO: We check for gpg v1 and v2 messages, should be an error code.
             if ("secret key not available" in message.args[0] or
                 "No secret key" in message.args[0]):
@@ -249,7 +247,8 @@
         assert self.info_set
         volume_num_list = self.volume_name_dict.keys()
         volume_num_list.sort()
-        volume_filenames = [self.volume_name_dict[x] for x in volume_num_list]
+        volume_filenames = map(lambda x: self.volume_name_dict[x],
+                               volume_num_list)
         if self.remote_manifest_name:
             # For convenience of implementation for restart support, we treat
             # local partial manifests as this set's remote manifest.  But
@@ -339,7 +338,7 @@
         """
         Return a list of sets in chain earlier or equal to time
         """
-        older_incsets = [s for s in self.incset_list if s.end_time <= time]
+        older_incsets = filter(lambda s: s.end_time <= time, self.incset_list)
         return [self.fullset] + older_incsets
 
     def get_last(self):
@@ -528,7 +527,7 @@
                 return sig_dp.filtered_open("rb")
         else:
             filename_to_fileobj = self.backend.get_fileobj_read
-        return [filename_to_fileobj(f) for f in self.get_filenames(time)]
+        return map(filename_to_fileobj, self.get_filenames(time))
 
     def delete(self, keep_full=False):
         """
@@ -799,7 +798,7 @@
         missing files.
         """
         log.Debug(_("Extracting backup chains from list of files: %s")
-                  % [util.ufn(f) for f in filename_list])
+                  % map(util.ufn, filename_list))
         # First put filenames in set form
         sets = []
         def add_to_sets(filename):
@@ -817,8 +816,7 @@
                     sets.append(new_set)
                 else:
                     log.Debug(_("Ignoring file (rejected by backup set) '%s'") % util.ufn(filename))
-        for f in filename_list:
-            add_to_sets(f)
+        map(add_to_sets, filename_list)
         sets, incomplete_sets = self.get_sorted_sets(sets)
 
         chains, orphaned_sets = [], []
@@ -841,8 +839,7 @@
                 else:
                     log.Debug(_("Found orphaned set %s") % (set.get_timestr(),))
                     orphaned_sets.append(set)
-        for s in sets:
-            add_to_chains(s)
+        map(add_to_chains, sets)
         return (chains, orphaned_sets, incomplete_sets)
 
     def get_sorted_sets(self, set_list):
@@ -858,7 +855,7 @@
             else:
                 time_set_pairs.append((set.end_time, set))
         time_set_pairs.sort()
-        return ([p[1] for p in time_set_pairs], incomplete_sets)
+        return (map(lambda p: p[1], time_set_pairs), incomplete_sets)
 
     def get_signature_chains(self, local, filelist = None):
         """
@@ -919,7 +916,7 @@
         # Build dictionary from end_times to lists of corresponding chains
         endtime_chain_dict = {}
         for chain in chain_list:
-            if chain.end_time in endtime_chain_dict:
+            if endtime_chain_dict.has_key(chain.end_time):
                 endtime_chain_dict[chain.end_time].append(chain)
             else:
                 endtime_chain_dict[chain.end_time] = [chain]
@@ -954,14 +951,15 @@
         if not self.all_backup_chains:
             raise CollectionsError("No backup chains found")
 
-        covering_chains = [c for c in self.all_backup_chains
-                           if c.start_time <= time <= c.end_time]
+        covering_chains = filter(lambda c: c.start_time <= time <= c.end_time,
+                                 self.all_backup_chains)
         if len(covering_chains) > 1:
             raise CollectionsError("Two chains cover the given time")
         elif len(covering_chains) == 1:
             return covering_chains[0]
 
-        old_chains = [c for c in self.all_backup_chains if c.end_time < time]
+        old_chains = filter(lambda c: c.end_time < time,
+                            self.all_backup_chains)
         if old_chains:
             return old_chains[-1]
         else:
@@ -978,12 +976,13 @@
         if not self.all_sig_chains:
             raise CollectionsError("No signature chains found")
 
-        covering_chains = [c for c in self.all_sig_chains
-                           if c.start_time <= time <= c.end_time]
+        covering_chains = filter(lambda c: c.start_time <= time <= c.end_time,
+                                 self.all_sig_chains)
         if covering_chains:
             return covering_chains[-1] # prefer local if multiple sig chains
 
-        old_chains = [c for c in self.all_sig_chains if c.end_time < time]
+        old_chains = filter(lambda c: c.end_time < time,
+                            self.all_sig_chains)
         if old_chains:
             return old_chains[-1]
         else:
@@ -1025,9 +1024,9 @@
 
     def sort_sets(self, setlist):
         """Return new list containing same elems of setlist, sorted by time"""
-        pairs = [(s.get_time(), s) for s in setlist]
+        pairs = map(lambda s: (s.get_time(), s), setlist)
         pairs.sort()
-        return [p[1] for p in pairs]
+        return map(lambda p: p[1], pairs)
 
     def get_chains_older_than(self, t):
         """

=== modified file 'duplicity/commandline.py'
--- duplicity/commandline.py	2014-04-28 02:49:39 +0000
+++ duplicity/commandline.py	2014-06-14 13:58:30 +0000
@@ -21,8 +21,6 @@
 
 """Parse command line, check for consistency, and set globals"""
 
-from future_builtins import filter
-
 from copy import copy
 import optparse
 import os
@@ -112,7 +110,7 @@
 def check_time(option, opt, value):
     try:
         return dup_time.genstrtotime(value)
-    except dup_time.TimeException as e:
+    except dup_time.TimeException, e:
         raise optparse.OptionValueError(str(e))
 
 def check_verbosity(option, opt, value):
@@ -210,6 +208,13 @@
     global select_opts, select_files, full_backup
     global list_current, collection_status, cleanup, remove_time, verify
 
+    def use_gio(*args):
+        try:
+            import duplicity.backends.giobackend
+            backend.force_backend(duplicity.backends.giobackend.GIOBackend)
+        except ImportError:
+            log.FatalError(_("Unable to load gio backend: %s") % str(sys.exc_info()[1]), log.ErrorCode.gio_not_available)
+
     def set_log_fd(fd):
         if fd < 1:
             raise optparse.OptionValueError("log-fd must be greater than zero.")
@@ -358,9 +363,7 @@
     # the time specified
     parser.add_option("--full-if-older-than", type = "time", dest = "full_force_time", metavar = _("time"))
 
-    parser.add_option("--gio",action = "callback", dest = "use_gio",
-                      callback = lambda o, s, v, p: (setattr(p.values, o.dest, True),
-                                                     old_fn_deprecation(s)))
+    parser.add_option("--gio", action = "callback", callback = use_gio)
 
     parser.add_option("--gpg-options", action = "extend", metavar = _("options"))
 
@@ -505,7 +508,9 @@
     parser.add_option("--s3_multipart_max_timeout", type="int", metavar=_("number"))
 
     # Option to allow the s3/boto backend use the multiprocessing version.
-    parser.add_option("--s3-use-multiprocessing", action = "store_true")
+    # By default it is off since it does not work for Python 2.4 or 2.5.
+    if sys.version_info[:2] >= (2, 6):
+        parser.add_option("--s3-use-multiprocessing", action = "store_true")
 
     # Option to allow use of server side encryption in s3
     parser.add_option("--s3-use-server-side-encryption", action="store_true", dest="s3_use_sse")
@@ -516,7 +521,7 @@
     # sftp command to use (ssh pexpect backend)
     parser.add_option("--sftp-command", metavar = _("command"))
 
-    # allow the user to switch cloudfiles backend
+    # sftp command to use (ssh pexpect backend)
     parser.add_option("--cf-backend", metavar = _("pyrax|cloudfiles"))
 
     # If set, use short (< 30 char) filenames for all the remote files.
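
The --gio handling above is replaced by a use_gio() callback that switches the backend
as soon as optparse sees the flag. A minimal optparse callback sketch, with hypothetical
names (force_backend, forced) standing in for the duplicity-specific parts::

    import optparse

    def force_backend(option, opt_str, value, parser):
        # an optparse callback runs immediately when its flag is parsed
        parser.values.forced = True

    parser = optparse.OptionParser()
    parser.add_option("--gio", action="callback", callback=force_backend)
    options, args = parser.parse_args(["--gio"])
    print(getattr(options, "forced", False))   # -> True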

=== modified file 'duplicity/diffdir.py'
--- duplicity/diffdir.py	2014-04-25 23:53:46 +0000
+++ duplicity/diffdir.py	2014-06-14 13:58:30 +0000
@@ -27,8 +27,6 @@
 the second, the ROPath iterator is put into tar block form.
 """
 
-from future_builtins import map
-
 import cStringIO, types, math
 from duplicity import statistics
 from duplicity import util
@@ -81,8 +79,8 @@
     global stats
     stats = statistics.StatsDeltaProcess()
     if type(dirsig_fileobj_list) is types.ListType:
-        sig_iter = combine_path_iters([sigtar2path_iter(x) for x
-                                       in dirsig_fileobj_list])
+        sig_iter = combine_path_iters(map(sigtar2path_iter,
+                                          dirsig_fileobj_list))
     else:
         sig_iter = sigtar2path_iter(dirsig_fileobj_list)
     delta_iter = get_delta_iter(path_iter, sig_iter)
@@ -344,7 +342,8 @@
             else:
                 break # assumed triple_list sorted, so can exit now
 
-    triple_list = [x for x in map(get_triple, range(len(path_iter_list))) if x]
+    triple_list = filter(lambda x: x, map(get_triple,
+                                          range(len(path_iter_list))))
     while triple_list:
         triple_list.sort()
         yield triple_list[0][2]
@@ -376,7 +375,7 @@
     """
     Return path iter combining signatures in list of open sig files
     """
-    return combine_path_iters([sigtar2path_iter(x) for x in sig_infp_list])
+    return combine_path_iters(map(sigtar2path_iter, sig_infp_list))
 
 
 class FileWithReadCounter:
@@ -390,7 +389,7 @@
     def read(self, length = -1):
         try:
             buf = self.infile.read(length)
-        except IOError as ex:
+        except IOError, ex:
             buf = ""
             log.Warn(_("Error %s getting delta for %s") % (str(ex), util.ufn(self.infile.name)))
         if stats:
@@ -462,7 +461,7 @@
         TarBlockIter initializer
         """
         self.input_iter = input_iter
-        self.offset = 0                     # total length of data read
+        self.offset = 0l                    # total length of data read
         self.process_waiting = False        # process_continued has more blocks
         self.process_next_vol_number = None # next volume number to write in multivol
         self.previous_index = None          # holds index of last block returned
@@ -565,7 +564,7 @@
         Return closing string for tarfile, reset offset
         """
         blocks, remainder = divmod(self.offset, tarfile.RECORDSIZE) #@UnusedVariable
-        self.offset = 0
+        self.offset = 0l
         return '\0' * (tarfile.RECORDSIZE - remainder) # remainder can be 0
 
     def __iter__(self):
@@ -737,5 +736,5 @@
         return 512 # set minimum of 512 bytes
     else:
         # Split file into about 2000 pieces, rounding to 512
-        file_blocksize = int((file_len / (2000 * 512)) * 512)
+        file_blocksize = long((file_len / (2000 * 512)) * 512)
         return min(file_blocksize, globals.max_blocksize)
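
The arithmetic above aims at roughly 2000 blocks per file, rounded down to a multiple
of 512 and then capped by globals.max_blocksize (2048 by default, per the globals.py
hunk later in this diff). A worked example for a 100 MiB file::

    file_len = 104857600                                    # 100 MiB
    file_blocksize = long((file_len / (2000 * 512)) * 512)  # 102 * 512 = 52224
    print(min(file_blocksize, 2048))                        # -> 2048, the cap applies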

=== modified file 'duplicity/dup_temp.py'
--- duplicity/dup_temp.py	2014-04-17 20:53:21 +0000
+++ duplicity/dup_temp.py	2014-06-14 13:58:30 +0000
@@ -179,9 +179,9 @@
         tgt = self.dirpath.append(self.remname)
         src_iter = SrcIter(src)
         if pr.compressed:
-            gpg.GzipWriteFile(src_iter, tgt.name, size = sys.maxsize)
+            gpg.GzipWriteFile(src_iter, tgt.name, size = sys.maxint)
         elif pr.encrypted:
-            gpg.GPGWriteFile(src_iter, tgt.name, globals.gpg_profile, size = sys.maxsize)
+            gpg.GPGWriteFile(src_iter, tgt.name, globals.gpg_profile, size = sys.maxint)
         else:
             os.system("cp -p \"%s\" \"%s\"" % (src.name, tgt.name))
         globals.backend.move(tgt) #@UndefinedVariable
@@ -195,7 +195,7 @@
         src_iter = SrcIter(src)
         pr = file_naming.parse(self.permname)
         if pr.compressed:
-            gpg.GzipWriteFile(src_iter, tgt.name, size = sys.maxsize)
+            gpg.GzipWriteFile(src_iter, tgt.name, size = sys.maxint)
             os.unlink(src.name)
         else:
             os.rename(src.name, tgt.name)

=== modified file 'duplicity/dup_threading.py'
--- duplicity/dup_threading.py	2014-04-17 21:13:48 +0000
+++ duplicity/dup_threading.py	2014-06-14 13:58:30 +0000
@@ -192,7 +192,7 @@
             if state['error'] is None:
                 return state['value']
             else:
-                raise state['error'].with_traceback(state['trace'])
+                raise state['error'], None, state['trace']
         finally:
             cv.release()
 
@@ -207,7 +207,7 @@
             cv.release()
 
             return (True, waiter)
-        except Exception as e:
+        except Exception, e:
             cv.acquire()
             state['done'] = True
             state['error'] = e
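
The first hunk in this file reverts raise state['error'].with_traceback(...) to the
Python 2 three-argument raise, which re-raises a stored exception together with its
captured traceback. A minimal sketch of that pattern, outside of duplicity::

    import sys

    def work():
        return 1 / 0

    try:
        work()
    except Exception, e:
        saved_error = e
        saved_trace = sys.exc_info()[2]

    # later, re-raise with the original traceback (Python 2 syntax):
    #     raise saved_error, None, saved_trace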

=== modified file 'duplicity/dup_time.py'
--- duplicity/dup_time.py	2014-04-25 23:53:46 +0000
+++ duplicity/dup_time.py	2014-06-14 13:58:30 +0000
@@ -21,8 +21,6 @@
 
 """Provide time related exceptions and functions"""
 
-from future_builtins import map
-
 import time, types, re, calendar
 from duplicity import globals
 
@@ -64,7 +62,7 @@
 def setcurtime(time_in_secs = None):
     """Sets the current time in curtime and curtimestr"""
     global curtime, curtimestr
-    t = time_in_secs or int(time.time())
+    t = time_in_secs or long(time.time())
     assert type(t) in (types.LongType, types.IntType)
     curtime, curtimestr = t, timetostring(t)
 
@@ -139,9 +137,9 @@
         # even when we're not in the same timezone that wrote the
         # string
         if len(timestring) == 16:
-            return int(utc_in_secs)
+            return long(utc_in_secs)
         else:
-            return int(utc_in_secs + tzdtoseconds(timestring[19:]))
+            return long(utc_in_secs + tzdtoseconds(timestring[19:]))
     except (TypeError, ValueError, AssertionError):
         return None
 
@@ -171,7 +169,7 @@
     if seconds == 1:
         partlist.append("1 second")
     elif not partlist or seconds > 1:
-        if isinstance(seconds, (types.LongType, types.IntType)):
+        if isinstance(seconds, int) or isinstance(seconds, long):
             partlist.append("%s seconds" % seconds)
         else:
             partlist.append("%.2f seconds" % seconds)

=== modified file 'duplicity/errors.py'
--- duplicity/errors.py	2014-04-21 19:21:45 +0000
+++ duplicity/errors.py	2014-06-14 13:58:30 +0000
@@ -23,8 +23,6 @@
 Error/exception classes that do not fit naturally anywhere else.
 """
 
-from duplicity import log
-
 class DuplicityError(Exception):
     pass
 
@@ -70,11 +68,9 @@
     """
     Raised to indicate a backend specific problem.
     """
-    def __init__(self, msg, code=log.ErrorCode.backend_error):
-        super(BackendException, self).__init__(msg)
-        self.code = code
+    pass
 
-class FatalBackendException(BackendException):
+class FatalBackendError(DuplicityError):
     """
     Raised to indicate a backend failed fatally.
     """

=== modified file 'duplicity/file_naming.py'
--- duplicity/file_naming.py	2014-04-17 21:49:37 +0000
+++ duplicity/file_naming.py	2014-06-14 13:58:30 +0000
@@ -158,7 +158,7 @@
     """
     Convert string s in base 36 to long int
     """
-    total = 0
+    total = 0L
     for i in range(len(s)):
         total *= 36
         digit_ord = ord(s[i])

=== modified file 'duplicity/globals.py'
--- duplicity/globals.py	2014-04-22 15:01:38 +0000
+++ duplicity/globals.py	2014-06-14 13:58:30 +0000
@@ -87,7 +87,7 @@
 gpg_options = ''
 
 # Maximum file blocksize
-max_blocksize = 2048
+max_blocksize = 2048L
 
 # If true, filelists and directory statistics will be split on
 # nulls instead of newlines.
@@ -284,6 +284,3 @@
 
 # Level of Redundancy in % for Par2 files
 par2_redundancy = 10
-
-# Whether to enable gio backend
-use_gio = False

=== modified file 'duplicity/gpg.py'
--- duplicity/gpg.py	2014-04-20 06:06:34 +0000
+++ duplicity/gpg.py	2014-06-14 13:58:30 +0000
@@ -215,7 +215,7 @@
                 msg += unicode(line.strip(), locale.getpreferredencoding(), 'replace') + u"\n"
         msg += u"===== End GnuPG log =====\n"
         if not (msg.find(u"invalid packet (ctb=14)") > -1):
-            raise GPGError(msg)
+            raise GPGError, msg
         else:
             return ""
 

=== modified file 'duplicity/gpginterface.py'
--- duplicity/gpginterface.py	2014-04-17 22:03:10 +0000
+++ duplicity/gpginterface.py	2014-06-14 13:58:30 +0000
@@ -353,14 +353,14 @@
         if attach_fhs == None: attach_fhs = {}
 
         for std in _stds:
-            if std not in attach_fhs \
+            if not attach_fhs.has_key(std) \
                and std not in create_fhs:
                 attach_fhs.setdefault(std, getattr(sys, std))
 
         handle_passphrase = 0
 
         if self.passphrase != None \
-           and 'passphrase' not in attach_fhs \
+           and not attach_fhs.has_key('passphrase') \
            and 'passphrase' not in create_fhs:
             handle_passphrase = 1
             create_fhs.append('passphrase')
@@ -384,18 +384,18 @@
         process = Process()
 
         for fh_name in create_fhs + attach_fhs.keys():
-            if fh_name not in _fd_modes:
-                raise KeyError(
+            if not _fd_modes.has_key(fh_name):
+                raise KeyError, \
                       "unrecognized filehandle name '%s'; must be one of %s" \
-                      % (fh_name, _fd_modes.keys()))
+                      % (fh_name, _fd_modes.keys())
 
         for fh_name in create_fhs:
             # make sure the user doesn't specify a filehandle
             # to be created *and* attached
-            if fh_name in attach_fhs:
-                raise ValueError(
+            if attach_fhs.has_key(fh_name):
+                raise ValueError, \
                       "cannot have filehandle '%s' in both create_fhs and attach_fhs" \
-                      % fh_name)
+                      % fh_name
 
             pipe = os.pipe()
             # fix by drt@xxxxxxxxxxxxx noting
@@ -660,7 +660,7 @@
         if self.returned == None:
             self.thread.join()
         if self.returned != 0:
-            raise IOError("GnuPG exited non-zero, with code %d" % (self.returned >> 8))
+            raise IOError, "GnuPG exited non-zero, with code %d" % (self.returned >> 8)
 
 
 def threaded_waitpid(process):

=== modified file 'duplicity/lazy.py'
--- duplicity/lazy.py	2014-04-18 14:32:30 +0000
+++ duplicity/lazy.py	2014-06-14 13:58:30 +0000
@@ -23,51 +23,46 @@
 
 import os
 
+from duplicity.static import * #@UnusedWildImport
+
 
 class Iter:
     """Hold static methods for the manipulation of lazy iterators"""
 
-    @staticmethod
     def filter(predicate, iterator): #@NoSelf
         """Like filter in a lazy functional programming language"""
         for i in iterator:
             if predicate(i):
                 yield i
 
-    @staticmethod
     def map(function, iterator): #@NoSelf
         """Like map in a lazy functional programming language"""
         for i in iterator:
             yield function(i)
 
-    @staticmethod
     def foreach(function, iterator): #@NoSelf
         """Run function on each element in iterator"""
         for i in iterator:
             function(i)
 
-    @staticmethod
     def cat(*iters): #@NoSelf
         """Lazily concatenate iterators"""
         for iter in iters:
             for i in iter:
                 yield i
 
-    @staticmethod
     def cat2(iter_of_iters): #@NoSelf
         """Lazily concatenate iterators, iterated by big iterator"""
         for iter in iter_of_iters:
             for i in iter:
                 yield i
 
-    @staticmethod
     def empty(iter): #@NoSelf
         """True if iterator has length 0"""
         for i in iter: #@UnusedVariable
             return None
         return 1
 
-    @staticmethod
     def equal(iter1, iter2, verbose = None, operator = lambda x, y: x == y): #@NoSelf
         """True if iterator 1 has same elements as iterator 2
 
@@ -93,7 +88,6 @@
             print "End when i2 = %s" % (i2,)
         return None
 
-    @staticmethod
     def Or(iter): #@NoSelf
         """True if any element in iterator is true.  Short circuiting"""
         i = None
@@ -102,7 +96,6 @@
                 return i
         return i
 
-    @staticmethod
     def And(iter): #@NoSelf
         """True if all elements in iterator are true.  Short circuiting"""
         i = 1
@@ -111,7 +104,6 @@
                 return i
         return i
 
-    @staticmethod
     def len(iter): #@NoSelf
         """Return length of iterator"""
         i = 0
@@ -122,7 +114,6 @@
                 return i
             i = i+1
 
-    @staticmethod
     def foldr(f, default, iter): #@NoSelf
         """foldr the "fundamental list recursion operator"?"""
         try:
@@ -131,7 +122,6 @@
             return default
         return f(next, Iter.foldr(f, default, iter))
 
-    @staticmethod
     def foldl(f, default, iter): #@NoSelf
         """the fundamental list iteration operator.."""
         while 1:
@@ -141,7 +131,6 @@
                 return default
             default = f(default, next)
 
-    @staticmethod
     def multiplex(iter, num_of_forks, final_func = None, closing_func = None): #@NoSelf
         """Split a single iterater into a number of streams
 
@@ -202,6 +191,8 @@
 
         return tuple(map(make_iterator, range(num_of_forks)))
 
+MakeStatic(Iter)
+
 
 class IterMultiplex2:
     """Multiplex an iterator into 2 parts

=== modified file 'duplicity/librsync.py'
--- duplicity/librsync.py	2014-04-17 21:54:04 +0000
+++ duplicity/librsync.py	2014-06-14 13:58:30 +0000
@@ -26,7 +26,7 @@
 
 """
 
-from . import _librsync
+import _librsync
 import types, array
 
 blocksize = _librsync.RS_JOB_BLOCKSIZE
@@ -90,7 +90,7 @@
             self._add_to_inbuf()
         try:
             self.eof, len_inbuf_read, cycle_out = self.maker.cycle(self.inbuf)
-        except _librsync.librsyncError as e:
+        except _librsync.librsyncError, e:
             raise librsyncError(str(e))
         self.inbuf = self.inbuf[len_inbuf_read:]
         self.outbuf.fromstring(cycle_out)
@@ -126,7 +126,7 @@
         LikeFile.__init__(self, infile)
         try:
             self.maker = _librsync.new_sigmaker(blocksize)
-        except _librsync.librsyncError as e:
+        except _librsync.librsyncError, e:
             raise librsyncError(str(e))
 
 class DeltaFile(LikeFile):
@@ -148,7 +148,7 @@
             assert not signature.close()
         try:
             self.maker = _librsync.new_deltamaker(sig_string)
-        except _librsync.librsyncError as e:
+        except _librsync.librsyncError, e:
             raise librsyncError(str(e))
 
 
@@ -167,7 +167,7 @@
             raise TypeError("basis_file must be a (true) file")
         try:
             self.maker = _librsync.new_patchmaker(basis_file)
-        except _librsync.librsyncError as e:
+        except _librsync.librsyncError, e:
             raise librsyncError(str(e))
 
 
@@ -182,7 +182,7 @@
         """Return new signature instance"""
         try:
             self.sig_maker = _librsync.new_sigmaker(blocksize)
-        except _librsync.librsyncError as e:
+        except _librsync.librsyncError, e:
             raise librsyncError(str(e))
         self.gotsig = None
         self.buffer = ""
@@ -201,7 +201,7 @@
         """Run self.buffer through sig_maker, add to self.sig_string"""
         try:
             eof, len_buf_read, cycle_out = self.sig_maker.cycle(self.buffer)
-        except _librsync.librsyncError as e:
+        except _librsync.librsyncError, e:
             raise librsyncError(str(e))
         self.buffer = self.buffer[len_buf_read:]
         self.sigstring_list.append(cycle_out)

=== modified file 'duplicity/log.py'
--- duplicity/log.py	2014-04-16 20:45:09 +0000
+++ duplicity/log.py	2014-06-14 13:58:30 +0000
@@ -49,6 +49,7 @@
     return DupToLoggerLevel(verb)
 
 def LevelName(level):
+    level = LoggerToDupLevel(level)
     if   level >= 9: return "DEBUG"
     elif level >= 5: return "INFO"
     elif level >= 3: return "NOTICE"
@@ -58,10 +59,12 @@
 def Log(s, verb_level, code=1, extra=None, force_print=False):
     """Write s to stderr if verbosity level low enough"""
     global _logger
+    # controlLine is a terrible hack until duplicity depends on Python 2.5
+    # and its logging 'extra' keyword that allows a custom record dictionary.
     if extra:
-        controlLine = '%d %s' % (code, extra)
+        _logger.controlLine = '%d %s' % (code, extra)
     else:
-        controlLine = '%d' % (code)
+        _logger.controlLine = '%d' % (code)
     if not s:
         s = '' # If None is passed, standard logging would render it as 'None'
 
@@ -76,9 +79,8 @@
     if not isinstance(s, unicode):
         s = s.decode("utf8", "replace")
 
-    _logger.log(DupToLoggerLevel(verb_level), s,
-                extra={'levelName': LevelName(verb_level),
-                       'controlLine': controlLine})
+    _logger.log(DupToLoggerLevel(verb_level), s)
+    _logger.controlLine = None
 
     if force_print:
         _logger.setLevel(initial_level)
@@ -303,6 +305,22 @@
     shutdown()
     sys.exit(code)
 
+class DupLogRecord(logging.LogRecord):
+    """Custom log record that holds a message code"""
+    def __init__(self, controlLine, *args, **kwargs):
+        global _logger
+        logging.LogRecord.__init__(self, *args, **kwargs)
+        self.controlLine = controlLine
+        self.levelName = LevelName(self.levelno)
+
+class DupLogger(logging.Logger):
+    """Custom logger that creates special code-bearing records"""
+    # controlLine is a terrible hack until duplicity depends on Python 2.5
+    # and its logging 'extra' keyword that allows a custom record dictionary.
+    controlLine = None
+    def makeRecord(self, name, lvl, fn, lno, msg, args, exc_info, *argv, **kwargs):
+        return DupLogRecord(self.controlLine, name, lvl, fn, lno, msg, args, exc_info)
+
 class OutFilter(logging.Filter):
     """Filter that only allows warning or less important messages"""
     def filter(self, record):
@@ -319,6 +337,7 @@
     if _logger:
         return
 
+    logging.setLoggerClass(DupLogger)
     _logger = logging.getLogger("duplicity")
 
     # Default verbosity allows notices and above
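
The controlLine attribute that DupLogger carries is, as the comments above say, a
stand-in for the logging module's 'extra' keyword (available since Python 2.5), which
attaches arbitrary attributes to each LogRecord. For comparison, the extra-based form
that the hunk removes looks roughly like this::

    import logging

    logging.basicConfig(format='%(levelname)s %(controlLine)s %(message)s')
    log = logging.getLogger('duplicity')

    # every key passed via extra= becomes an attribute on the LogRecord
    log.warning('example message', extra={'controlLine': '42'})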

=== modified file 'duplicity/manifest.py'
--- duplicity/manifest.py	2014-04-25 23:20:12 +0000
+++ duplicity/manifest.py	2014-06-14 13:58:30 +0000
@@ -21,8 +21,6 @@
 
 """Create and edit manifest for session contents"""
 
-from future_builtins import filter
-
 import re
 
 from duplicity import log

=== modified file 'duplicity/patchdir.py'
--- duplicity/patchdir.py	2014-04-29 23:49:01 +0000
+++ duplicity/patchdir.py	2014-06-14 13:58:30 +0000
@@ -19,8 +19,6 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-from future_builtins import filter, map
-
 import re #@UnusedImport
 import types
 import os
@@ -505,7 +503,7 @@
             if final_ropath.exists():
                 # otherwise final patch was delete
                 yield final_ropath
-        except Exception as e:
+        except Exception, e:
             filename = normalized[-1].get_ropath().get_relative_path()
             log.Warn(_("Error '%s' patching %s") % 
                      (util.uexc(e), util.ufn(filename)),
@@ -519,10 +517,11 @@
     the restrict_index.
 
     """
-    diff_iters = [difftar2path_iter(x) for x in tarfile_list]
+    diff_iters = map( difftar2path_iter, tarfile_list )
     if restrict_index:
         # Apply filter before integration
-        diff_iters = [filter_path_iter(x, restrict_index) for x in diff_iters]
+        diff_iters = map( lambda i: filter_path_iter( i, restrict_index ),
+                         diff_iters )
     return integrate_patch_iters( diff_iters )
 
 def Write_ROPaths( base_path, rop_iter ):

=== modified file 'duplicity/path.py'
--- duplicity/path.py	2014-04-25 23:20:12 +0000
+++ duplicity/path.py	2014-06-14 13:58:30 +0000
@@ -26,8 +26,6 @@
 
 """
 
-from future_builtins import filter
-
 import stat, errno, socket, time, re, gzip
 
 from duplicity import tarfile
@@ -502,7 +500,7 @@
         """Refresh stat cache"""
         try:
             self.stat = os.lstat(self.name)
-        except OSError as e:
+        except OSError, e:
             err_string = errno.errorcode[e[0]]
             if err_string in ["ENOENT", "ENOTDIR", "ELOOP", "ENOTCONN"]:
                 self.stat, self.type = None, None # file doesn't exist

=== added file 'duplicity/pexpect.py'
--- duplicity/pexpect.py	1970-01-01 00:00:00 +0000
+++ duplicity/pexpect.py	2014-06-14 13:58:30 +0000
@@ -0,0 +1,1845 @@
+"""Pexpect is a Python module for spawning child applications and controlling
+them automatically. Pexpect can be used for automating interactive applications
+such as ssh, ftp, passwd, telnet, etc. It can be used to a automate setup
+scripts for duplicating software package installations on different servers. It
+can be used for automated software testing. Pexpect is in the spirit of Don
+Libes' Expect, but Pexpect is pure Python. Other Expect-like modules for Python
+require TCL and Expect or require C extensions to be compiled. Pexpect does not
+use C, Expect, or TCL extensions. It should work on any platform that supports
+the standard Python pty module. The Pexpect interface focuses on ease of use so
+that simple tasks are easy.
+
+There are two main interfaces to Pexpect -- the function, run() and the class,
+spawn. You can call the run() function to execute a command and return the
+output. This is a handy replacement for os.system().
+
+For example::
+
+    pexpect.run('ls -la')
+
+The more powerful interface is the spawn class. You can use this to spawn an
+external child command and then interact with the child by sending lines and
+expecting responses.
+
+For example::
+
+    child = pexpect.spawn('scp foo myname@xxxxxxxxxxxxxxxx:.')
+    child.expect ('Password:')
+    child.sendline (mypassword)
+
+This works even for commands that ask for passwords or other input outside of
+the normal stdio streams.
+
+Credits: Noah Spurrier, Richard Holden, Marco Molteni, Kimberley Burchett,
+Robert Stone, Hartmut Goebel, Chad Schroeder, Erick Tryzelaar, Dave Kirby, Ids
+vander Molen, George Todd, Noel Taylor, Nicolas D. Cesar, Alexander Gattin,
+Geoffrey Marshall, Francisco Lourenco, Glen Mabey, Karthik Gurusamy, Fernando
+Perez, Corey Minyard, Jon Cohen, Guillaume Chazarain, Andrew Ryan, Nick
+Craig-Wood, Andrew Stone, Jorgen Grahn (Let me know if I forgot anyone.)
+
+Free, open source, and all that good stuff.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+Pexpect Copyright (c) 2008 Noah Spurrier
+http://pexpect.sourceforge.net/
+
+$Id: pexpect.py,v 1.1 2009/01/06 22:11:37 loafman Exp $
+"""
+
+try:
+    import os, sys, time
+    import select
+    import string
+    import re
+    import struct
+    import resource
+    import types
+    import pty
+    import tty
+    import termios
+    import fcntl
+    import errno
+    import traceback
+    import signal
+except ImportError, e:
+    raise ImportError (str(e) + """
+
+A critical module was not found. Probably this operating system does not
+support it. Pexpect is intended for UNIX-like operating systems.""")
+
+__version__ = '2.3'
+__revision__ = '$Revision: 1.1 $'
+__all__ = ['ExceptionPexpect', 'EOF', 'TIMEOUT', 'spawn', 'run', 'which',
+    'split_command_line', '__version__', '__revision__']
+
+# Exception classes used by this module.
+class ExceptionPexpect(Exception):
+
+    """Base class for all exceptions raised by this module.
+    """
+
+    def __init__(self, value):
+
+        self.value = value
+
+    def __str__(self):
+
+        return str(self.value)
+
+    def get_trace(self):
+
+        """This returns an abbreviated stack trace with lines that only concern
+        the caller. In other words, the stack trace inside the Pexpect module
+        is not included. """
+
+        tblist = traceback.extract_tb(sys.exc_info()[2])
+        #tblist = filter(self.__filter_not_pexpect, tblist)
+        tblist = [item for item in tblist if self.__filter_not_pexpect(item)]
+        tblist = traceback.format_list(tblist)
+        return ''.join(tblist)
+
+    def __filter_not_pexpect(self, trace_list_item):
+
+        """This returns True if list item 0 the string 'pexpect.py' in it. """
+
+        if trace_list_item[0].find('pexpect.py') == -1:
+            return True
+        else:
+            return False
+
+class EOF(ExceptionPexpect):
+
+    """Raised when EOF is read from a child. This usually means the child has exited."""
+
+class TIMEOUT(ExceptionPexpect):
+
+    """Raised when a read time exceeds the timeout. """
+
+##class TIMEOUT_PATTERN(TIMEOUT):
+##    """Raised when the pattern match time exceeds the timeout.
+##    This is different than a read TIMEOUT because the child process may
+##    give output, thus never give a TIMEOUT, but the output
+##    may never match a pattern.
+##    """
+##class MAXBUFFER(ExceptionPexpect):
+##    """Raised when a scan buffer fills before matching an expected pattern."""
+
+def run (command, timeout=-1, withexitstatus=False, events=None, extra_args=None, logfile=None, cwd=None, env=None):
+
+    """
+    This function runs the given command; waits for it to finish; then
+    returns all output as a string. STDERR is included in output. If the full
+    path to the command is not given then the path is searched.
+
+    Note that lines are terminated by CR/LF (\\r\\n) combination even on
+    UNIX-like systems because this is the standard for pseudo ttys. If you set
+    'withexitstatus' to true, then run will return a tuple of (command_output,
+    exitstatus). If 'withexitstatus' is false then this returns just
+    command_output.
+
+    The run() function can often be used instead of creating a spawn instance.
+    For example, the following code uses spawn::
+
+        from pexpect import * #@UnusedWildImport
+        child = spawn('scp foo myname@xxxxxxxxxxxxxxxx:.')
+        child.expect ('(?i)password')
+        child.sendline (mypassword)
+
+    The previous code can be replaced with the following::
+
+        from pexpect import * #@UnusedWildImport
+        run ('scp foo myname@xxxxxxxxxxxxxxxx:.', events={'(?i)password': mypassword})
+
+    Examples
+    ========
+
+    Start the apache daemon on the local machine::
+
+        from pexpect import * #@UnusedWildImport
+        run ("/usr/local/apache/bin/apachectl start")
+
+    Check in a file using SVN::
+
+        from pexpect import * #@UnusedWildImport
+        run ("svn ci -m 'automatic commit' my_file.py")
+
+    Run a command and capture exit status::
+
+        from pexpect import * #@UnusedWildImport
+        (command_output, exitstatus) = run ('ls -l /bin', withexitstatus=1)
+
+    Tricky Examples
+    ===============
+
+    The following will run SSH and execute 'ls -l' on the remote machine. The
+    password 'secret' will be sent if the '(?i)password' pattern is ever seen::
+
+        run ("ssh username@xxxxxxxxxxxxxxxxxxx 'ls -l'", events={'(?i)password':'secret\\n'})
+
+    This will start mencoder to rip a video from DVD. This will also display
+    progress ticks every 5 seconds as it runs. For example::
+
+        from pexpect import * #@UnusedWildImport
+        def print_ticks(d):
+            print d['event_count'],
+        run ("mencoder dvd://1 -o video.avi -oac copy -ovc copy", events={TIMEOUT:print_ticks}, timeout=5)
+
+    The 'events' argument should be a dictionary of patterns and responses.
+    Whenever one of the patterns is seen in the command output, run() will send
+    the associated response string. Note that you should put newlines in your
+    string if Enter is necessary. The responses may also contain callback
+    functions. Any callback is a function that takes a dictionary as an argument.
+    The dictionary contains all the locals from the run() function, so you can
+    access the child spawn object or any other variable defined in run()
+    (event_count, child, and extra_args are the most useful). A callback may
+    return True to stop the current run process; otherwise run() continues until
+    the next event. A callback may also return a string which will be sent to
+    the child. 'extra_args' is not used directly by run(). It provides a way to
+    pass data to a callback function through run(), via the locals
+    dictionary passed to the callback. """
+
+    if timeout == -1:
+        child = spawn(command, maxread=2000, logfile=logfile, cwd=cwd, env=env)
+    else:
+        child = spawn(command, timeout=timeout, maxread=2000, logfile=logfile, cwd=cwd, env=env)
+    if events is not None:
+        patterns = events.keys()
+        responses = events.values()
+    else:
+        patterns=None # We assume that EOF or TIMEOUT will save us.
+        responses=None
+    child_result_list = []
+    event_count = 0
+    while 1:
+        try:
+            index = child.expect (patterns)
+            if type(child.after) in types.StringTypes:
+                child_result_list.append(child.before + child.after)
+            else: # child.after may have been a TIMEOUT or EOF, so don't cat those.
+                child_result_list.append(child.before)
+            if type(responses[index]) in types.StringTypes:
+                child.send(responses[index])
+            elif type(responses[index]) is types.FunctionType:
+                callback_result = responses[index](locals())
+                sys.stdout.flush()
+                if type(callback_result) in types.StringTypes:
+                    child.send(callback_result)
+                elif callback_result:
+                    break
+            else:
+                raise TypeError ('The callback must be a string or function type.')
+            event_count = event_count + 1
+        except TIMEOUT, e:
+            child_result_list.append(child.before)
+            break
+        except EOF, e:
+            child_result_list.append(child.before)
+            break
+    child_result = ''.join(child_result_list)
+    if withexitstatus:
+        child.close()
+        return (child_result, child.exitstatus)
+    else:
+        return child_result
+
+class spawn (object):
+
+    """This is the main class interface for Pexpect. Use this class to start
+    and control child applications. """
+
+    def __init__(self, command, args=[], timeout=30, maxread=2000, searchwindowsize=None, logfile=None, cwd=None, env=None):
+
+        """This is the constructor. The command parameter may be a string that
+        includes a command and any arguments to the command. For example::
+
+            child = pexpect.spawn ('/usr/bin/ftp')
+            child = pexpect.spawn ('/usr/bin/ssh user@xxxxxxxxxxx')
+            child = pexpect.spawn ('ls -latr /tmp')
+
+        You may also construct it with a list of arguments like so::
+
+            child = pexpect.spawn ('/usr/bin/ftp', [])
+            child = pexpect.spawn ('/usr/bin/ssh', ['user@xxxxxxxxxxx'])
+            child = pexpect.spawn ('ls', ['-latr', '/tmp'])
+
+        After this the child application will be created and will be ready to
+        talk to. For normal use, see expect() and send() and sendline().
+
+        Remember that Pexpect does NOT interpret shell meta characters such as
+        redirect, pipe, or wild cards (>, |, or *). This is a common mistake.
+        If you want to run a command and pipe it through another command then
+        you must also start a shell. For example::
+
+            child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > log_list.txt"')
+            child.expect(pexpect.EOF)
+
+        The second form of spawn (where you pass a list of arguments) is useful
+        in situations where you wish to spawn a command and pass it its own
+        argument list. This can make syntax more clear. For example, the
+        following is equivalent to the previous example::
+
+            shell_cmd = 'ls -l | grep LOG > log_list.txt'
+            child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
+            child.expect(pexpect.EOF)
+
+        The maxread attribute sets the read buffer size. This is the maximum number
+        of bytes that Pexpect will try to read from a TTY at one time. Setting
+        the maxread size to 1 will turn off buffering. Setting the maxread
+        value higher may help performance in cases where large amounts of
+        output are read back from the child. This feature is useful in
+        conjunction with searchwindowsize.
+
+        The searchwindowsize attribute sets how far back in the incoming
+        search buffer Pexpect will search for pattern matches. Every time
+        Pexpect reads some data from the child it will append the data to the
+        incoming buffer. The default is to search from the beginning of the
+        incoming buffer each time new data is read from the child. But this is
+        very inefficient if you are running a command that generates a large
+        amount of data and you only want to match near the end of it. The
+        searchwindowsize does not affect the size of the incoming data buffer.
+        You will still have access to the full buffer after expect() returns.
+
+        The logfile member turns on or off logging. All input and output will
+        be copied to the given file object. Set logfile to None to stop
+        logging. This is the default. Set logfile to sys.stdout to echo
+        everything to standard output. The logfile is flushed after each write.
+
+        Example log input and output to a file::
+
+            child = pexpect.spawn('some_command')
+            fout = file('mylog.txt','w')
+            child.logfile = fout
+
+        Example log to stdout::
+
+            child = pexpect.spawn('some_command')
+            child.logfile = sys.stdout
+
+        The logfile_read and logfile_send members can be used to separately log
+        the input from the child and output sent to the child. Sometimes you
+        don't want to see everything you write to the child. You only want to
+        log what the child sends back. For example::
+
+            child = pexpect.spawn('some_command')
+            child.logfile_read = sys.stdout
+
+        To separately log output sent to the child use logfile_send::
+
+            self.logfile_send = fout
+
+        The delaybeforesend helps overcome a weird behavior that many users
+        were experiencing. The typical problem was that a user would expect() a
+        "Password:" prompt and then immediately call sendline() to send the
+        password. The user would then see that their password was echoed back
+        to them. Passwords don't normally echo. The problem is caused by the
+        fact that most applications print out the "Password" prompt and then
+        turn off stdin echo, but if you send your password before the
+        application turned off echo, then you get your password echoed.
+        Normally this wouldn't be a problem when interacting with a human at a
+        real keyboard. If you introduce a slight delay just before writing then
+        this seems to clear up the problem. This was such a common problem for
+        many users that I decided that the default pexpect behavior should be
+        to sleep just before writing to the child application. 1/20th of a
+        second (50 ms) seems to be enough to clear up the problem. You can set
+        delaybeforesend to 0 to return to the old behavior. Most Linux machines
+        don't like this to be below 0.03. I don't know why.
+
+        Note that spawn is clever about finding commands on your path.
+        It uses the same logic that "which" uses to find executables.
+
+        If you wish to get the exit status of the child you must call the
+        close() method. The exit or signal status of the child will be stored
+        in self.exitstatus or self.signalstatus. If the child exited normally
+        then exitstatus will store the exit return code and signalstatus will
+        be None. If the child was terminated abnormally with a signal then
+        signalstatus will store the signal value and exitstatus will be None.
+        If you need more detail you can also read the self.status member which
+        stores the status returned by os.waitpid. You can interpret this using
+        os.WIFEXITED/os.WEXITSTATUS or os.WIFSIGNALED/os.WTERMSIG. """
+
+        self.STDIN_FILENO = pty.STDIN_FILENO
+        self.STDOUT_FILENO = pty.STDOUT_FILENO
+        self.STDERR_FILENO = pty.STDERR_FILENO
+        self.stdin = sys.stdin
+        self.stdout = sys.stdout
+        self.stderr = sys.stderr
+
+        self.searcher = None
+        self.ignorecase = False
+        self.before = None
+        self.after = None
+        self.match = None
+        self.match_index = None
+        self.terminated = True
+        self.exitstatus = None
+        self.signalstatus = None
+        self.status = None # status returned by os.waitpid
+        self.flag_eof = False
+        self.pid = None
+        self.child_fd = -1 # initially closed
+        self.timeout = timeout
+        self.delimiter = EOF
+        self.logfile = logfile
+        self.logfile_read = None # input from child (read_nonblocking)
+        self.logfile_send = None # output to send (send, sendline)
+        self.maxread = maxread # max bytes to read at one time into buffer
+        self.buffer = '' # This is the read buffer. See maxread.
+        self.searchwindowsize = searchwindowsize # Anything before searchwindowsize point is preserved, but not searched.
+        # Most Linux machines don't like delaybeforesend to be below 0.03 (30 ms).
+        self.delaybeforesend = 0.05 # Sets sleep time used just before sending data to child. Time in seconds.
+        self.delayafterclose = 0.1 # Sets delay in close() method to allow kernel time to update process status. Time in seconds.
+        self.delayafterterminate = 0.1 # Sets delay in terminate() method to allow kernel time to update process status. Time in seconds.
+        self.softspace = False # File-like object.
+        self.name = '<' + repr(self) + '>' # File-like object.
+        self.encoding = None # File-like object.
+        self.closed = True # File-like object.
+        self.cwd = cwd
+        self.env = env
+        self.__irix_hack = (sys.platform.lower().find('irix')>=0) # This flags if we are running on irix
+        # Solaris uses internal __fork_pty(). All others use pty.fork().
+        if (sys.platform.lower().find('solaris')>=0) or (sys.platform.lower().find('sunos5')>=0):
+            self.use_native_pty_fork = False
+        else:
+            self.use_native_pty_fork = True
+
+
+        # allow dummy instances for subclasses that may not use command or args.
+        if command is None:
+            self.command = None
+            self.args = None
+            self.name = '<pexpect factory incomplete>'
+        else:
+            self._spawn (command, args)
+
+    def __del__(self):
+
+        """This makes sure that no system resources are left open. Python only
+        garbage collects Python objects. OS file descriptors are not Python
+        objects, so they must be handled explicitly. If the child file
+        descriptor was opened outside of this class (passed to the constructor)
+        then this does not close it. """
+
+        if not self.closed:
+            # It is possible for __del__ methods to execute during the
+            # teardown of the Python VM itself. Thus self.close() may
+            # trigger an exception because os.close may be None.
+            # -- Fernando Perez
+            try:
+                self.close()
+            except AttributeError:
+                pass
+
+    def __str__(self):
+
+        """This returns a human-readable string that represents the state of
+        the object. """
+
+        s = []
+        s.append(repr(self))
+        s.append('version: ' + __version__ + ' (' + __revision__ + ')')
+        s.append('command: ' + str(self.command))
+        s.append('args: ' + str(self.args))
+        s.append('searcher: ' + str(self.searcher))
+        s.append('buffer (last 100 chars): ' + str(self.buffer)[-100:])
+        s.append('before (last 100 chars): ' + str(self.before)[-100:])
+        s.append('after: ' + str(self.after))
+        s.append('match: ' + str(self.match))
+        s.append('match_index: ' + str(self.match_index))
+        s.append('exitstatus: ' + str(self.exitstatus))
+        s.append('flag_eof: ' + str(self.flag_eof))
+        s.append('pid: ' + str(self.pid))
+        s.append('child_fd: ' + str(self.child_fd))
+        s.append('closed: ' + str(self.closed))
+        s.append('timeout: ' + str(self.timeout))
+        s.append('delimiter: ' + str(self.delimiter))
+        s.append('logfile: ' + str(self.logfile))
+        s.append('logfile_read: ' + str(self.logfile_read))
+        s.append('logfile_send: ' + str(self.logfile_send))
+        s.append('maxread: ' + str(self.maxread))
+        s.append('ignorecase: ' + str(self.ignorecase))
+        s.append('searchwindowsize: ' + str(self.searchwindowsize))
+        s.append('delaybeforesend: ' + str(self.delaybeforesend))
+        s.append('delayafterclose: ' + str(self.delayafterclose))
+        s.append('delayafterterminate: ' + str(self.delayafterterminate))
+        return '\n'.join(s)
+
+    def _spawn(self,command,args=[]):
+
+        """This starts the given command in a child process. This does all the
+        fork/exec type of stuff for a pty. This is called by __init__. If args
+        is empty then command will be parsed (split on spaces) and args will be
+        set to parsed arguments. """
+
+        # The pid and child_fd of this object get set by this method.
+        # Note that it is difficult for this method to fail.
+        # You cannot detect if the child process cannot start.
+        # So the only way you can tell if the child process started
+        # or not is to try to read from the file descriptor. If you get
+        # EOF immediately then it means that the child is already dead.
+        # That may not necessarily be bad because you may have spawned a child
+        # that performs some task; creates no stdout output; and then dies.
+
+        # If command is an int type then it may represent a file descriptor.
+        if type(command) == type(0):
+            raise ExceptionPexpect ('Command is an int type. If this is a file descriptor then maybe you want to use fdpexpect.fdspawn which takes an existing file descriptor instead of a command string.')
+
+        if type (args) != type([]):
+            raise TypeError ('The argument, args, must be a list.')
+
+        if args == []:
+            self.args = split_command_line(command)
+            self.command = self.args[0]
+        else:
+            self.args = args[:] # work with a copy
+            self.args.insert (0, command)
+            self.command = command
+
+        command_with_path = which(self.command)
+        if command_with_path is None:
+            raise ExceptionPexpect ('The command was not found or was not executable: %s.' % self.command)
+        self.command = command_with_path
+        self.args[0] = self.command
+
+        self.name = '<' + ' '.join (self.args) + '>'
+
+        assert self.pid is None, 'The pid member should be None.'
+        assert self.command is not None, 'The command member should not be None.'
+
+        if self.use_native_pty_fork:
+            try:
+                self.pid, self.child_fd = pty.fork()
+            except OSError, e:
+                raise ExceptionPexpect('Error! pty.fork() failed: ' + str(e))
+        else: # Use internal __fork_pty
+            self.pid, self.child_fd = self.__fork_pty()
+
+        if self.pid == 0: # Child
+            try:
+                self.child_fd = sys.stdout.fileno() # used by setwinsize()
+                self.setwinsize(24, 80)
+            except Exception:
+                # Some platforms do not like setwinsize (Cygwin).
+                # This will cause problem when running applications that
+                # are very picky about window size.
+                # This is a serious limitation, but not a show stopper.
+                pass
+            # Do not allow child to inherit open file descriptors from parent.
+            max_fd = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
+            for i in range (3, max_fd):
+                try:
+                    os.close (i)
+                except OSError:
+                    pass
+
+            # I don't know why this works, but ignoring SIGHUP fixes a
+            # problem when trying to start a Java daemon with sudo
+            # (specifically, Tomcat).
+            signal.signal(signal.SIGHUP, signal.SIG_IGN)
+
+            if self.cwd is not None:
+                os.chdir(self.cwd)
+            if self.env is None:
+                os.execv(self.command, self.args)
+            else:
+                os.execvpe(self.command, self.args, self.env)
+
+        # Parent
+        self.terminated = False
+        self.closed = False
+
+    def __fork_pty(self):
+
+        """This implements a substitute for the forkpty system call. This
+        should be more portable than the pty.fork() function. Specifically,
+        this should work on Solaris.
+
+        Modified 10.06.05 by Geoff Marshall: Implemented __fork_pty() method to
+        resolve the issue with Python's pty.fork() not supporting Solaris,
+        particularly ssh. Based on patch to posixmodule.c authored by Noah
+        Spurrier::
+
+            http://mail.python.org/pipermail/python-dev/2003-May/035281.html
+
+        """
+
+        parent_fd, child_fd = os.openpty()
+        if parent_fd < 0 or child_fd < 0:
+            raise ExceptionPexpect, "Error! Could not open pty with os.openpty()."
+
+        pid = os.fork()
+        if pid < 0:
+            raise ExceptionPexpect, "Error! Failed os.fork()."
+        elif pid == 0:
+            # Child.
+            os.close(parent_fd)
+            self.__pty_make_controlling_tty(child_fd)
+
+            os.dup2(child_fd, 0)
+            os.dup2(child_fd, 1)
+            os.dup2(child_fd, 2)
+
+            if child_fd > 2:
+                os.close(child_fd)
+        else:
+            # Parent.
+            os.close(child_fd)
+
+        return pid, parent_fd
+
+    def __pty_make_controlling_tty(self, tty_fd):
+
+        """This makes the pseudo-terminal the controlling tty. This should be
+        more portable than the pty.fork() function. Specifically, this should
+        work on Solaris. """
+
+        child_name = os.ttyname(tty_fd)
+
+        # Disconnect from controlling tty if still connected.
+        fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
+        if fd >= 0:
+            os.close(fd)
+
+        os.setsid()
+
+        # Verify we are disconnected from controlling tty
+        try:
+            fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
+            if fd >= 0:
+                os.close(fd)
+                raise ExceptionPexpect, "Error! We are not disconnected from a controlling tty."
+        except Exception:
+            # Good! We are disconnected from a controlling tty.
+            pass
+
+        # Verify we can open child pty.
+        fd = os.open(child_name, os.O_RDWR)
+        if fd < 0:
+            raise ExceptionPexpect, "Error! Could not open child pty, " + child_name
+        else:
+            os.close(fd)
+
+        # Verify we now have a controlling tty.
+        fd = os.open("/dev/tty", os.O_WRONLY)
+        if fd < 0:
+            raise ExceptionPexpect, "Error! Could not open controlling tty, /dev/tty"
+        else:
+            os.close(fd)
+
+    def fileno (self):   # File-like object.
+
+        """This returns the file descriptor of the pty for the child.
+        """
+
+        return self.child_fd
+
+    def close (self, force=True):   # File-like object.
+
+        """This closes the connection with the child application. Note that
+        calling close() more than once is valid. This emulates standard Python
+        behavior with files. Set force to True if you want to make sure that
+        the child is terminated (SIGKILL is sent if the child ignores SIGHUP
+        and SIGINT). """
+
+        if not self.closed:
+            self.flush()
+            os.close (self.child_fd)
+            time.sleep(self.delayafterclose) # Give kernel time to update process status.
+            if self.isalive():
+                if not self.terminate(force):
+                    raise ExceptionPexpect ('close() could not terminate the child using terminate()')
+            self.child_fd = -1
+            self.closed = True
+            #self.pid = None
+
+    def flush (self):   # File-like object.
+
+        """This does nothing. It is here to support the interface for a
+        File-like object. """
+
+        pass
+
+    def isatty (self):   # File-like object.
+
+        """This returns True if the file descriptor is open and connected to a
+        tty(-like) device, else False. """
+
+        return os.isatty(self.child_fd)
+
+    def waitnoecho (self, timeout=-1):
+
+        """This waits until the terminal ECHO flag is set False. This returns
+        True if the echo mode is off. This returns False if the ECHO flag was
+        not set False before the timeout. This can be used to detect when the
+        child is waiting for a password. Usually a child application will turn
+        off echo mode when it is waiting for the user to enter a password. For
+        example, instead of expecting the "password:" prompt you can wait for
+        the child to set ECHO off::
+
+            p = pexpect.spawn ('ssh user@xxxxxxxxxxx')
+            p.waitnoecho()
+            p.sendline(mypassword)
+
+        If timeout is None then this method will block until the ECHO flag is
+        False.
+
+        """
+
+        if timeout == -1:
+            timeout = self.timeout
+        if timeout is not None:
+            end_time = time.time() + timeout
+        while True:
+            if not self.getecho():
+                return True
+            if timeout < 0 and timeout is not None:
+                return False
+            if timeout is not None:
+                timeout = end_time - time.time()
+            time.sleep(0.1)
+
+    def getecho (self):
+
+        """This returns the terminal echo mode. This returns True if echo is
+        on or False if echo is off. Child applications that are expecting you
+        to enter a password often set ECHO False. See waitnoecho(). """
+
+        attr = termios.tcgetattr(self.child_fd)
+        if attr[3] & termios.ECHO:
+            return True
+        return False
+
+    def setecho (self, state):
+
+        """This sets the terminal echo mode on or off. Note that anything the
+        child sent before the echo will be lost, so you should be sure that
+        your input buffer is empty before you call setecho(). For example, the
+        following will work as expected::
+
+            p = pexpect.spawn('cat')
+            p.sendline ('1234') # We will see this twice (once from tty echo and again from cat).
+            p.expect (['1234'])
+            p.expect (['1234'])
+            p.setecho(False) # Turn off tty echo
+            p.sendline ('abcd') # We will see this only once (echoed by cat).
+            p.sendline ('wxyz') # We will see this only once (echoed by cat).
+            p.expect (['abcd'])
+            p.expect (['wxyz'])
+
+        The following WILL NOT WORK because the lines sent before the setecho
+        will be lost::
+
+            p = pexpect.spawn('cat')
+            p.sendline ('1234') # We will see this twice (once from tty echo and again from cat).
+            p.setecho(False) # Turn off tty echo
+            p.sendline ('abcd') # We will see this only once (echoed by cat).
+            p.sendline ('wxyz') # We will see this only once (echoed by cat).
+            p.expect (['1234'])
+            p.expect (['1234'])
+            p.expect (['abcd'])
+            p.expect (['wxyz'])
+        """
+
+        attr = termios.tcgetattr(self.child_fd)
+        if state:
+            attr[3] = attr[3] | termios.ECHO
+        else:
+            attr[3] = attr[3] & ~termios.ECHO
+        # I tried TCSADRAIN and TCSAFLUSH, but these were inconsistent
+        # and blocked on some platforms. TCSADRAIN is probably ideal if it worked.
+        termios.tcsetattr(self.child_fd, termios.TCSANOW, attr)
+
+    def read_nonblocking (self, size = 1, timeout = -1):
+
+        """This reads at most size characters from the child application. It
+        includes a timeout. If the read does not complete within the timeout
+        period then a TIMEOUT exception is raised. If the end of file is read
+        then an EOF exception will be raised. If a log file was set using
+        setlog() then all data will also be written to the log file.
+
+        If timeout is None then the read may block indefinitely. If timeout is -1
+        then the self.timeout value is used. If timeout is 0 then the child is
+        polled and if there was no data immediately ready then this will raise
+        a TIMEOUT exception.
+
+        The timeout refers only to the amount of time to read at least one
+        character. This is not affected by the 'size' parameter, so if you call
+        read_nonblocking(size=100, timeout=30) and only one character is
+        available right away then one character will be returned immediately.
+        It will not wait for 30 seconds for another 99 characters to come in.
+
+        This is a wrapper around os.read(). It uses select.select() to
+        implement the timeout. """
+
+        if self.closed:
+            raise ValueError ('I/O operation on closed file in read_nonblocking().')
+
+        if timeout == -1:
+            timeout = self.timeout
+
+        # Note that some systems such as Solaris do not give an EOF when
+        # the child dies. In fact, you can still try to read
+        # from the child_fd -- it will block forever or until TIMEOUT.
+        # For this case, I test isalive() before doing any reading.
+        # If isalive() is false, then I pretend that this is the same as EOF.
+        if not self.isalive():
+            r,w,e = self.__select([self.child_fd], [], [], 0) # timeout of 0 means "poll" @UnusedVariable
+            if not r:
+                self.flag_eof = True
+                raise EOF ('End Of File (EOF) in read_nonblocking(). Braindead platform.')
+        elif self.__irix_hack:
+            # This is a hack for Irix. It seems that Irix requires a long delay before checking isalive.
+            # This adds a 2 second delay, but only when the child is terminated.
+            r, w, e = self.__select([self.child_fd], [], [], 2) #@UnusedVariable
+            if not r and not self.isalive():
+                self.flag_eof = True
+                raise EOF ('End Of File (EOF) in read_nonblocking(). Pokey platform.')
+
+        r,w,e = self.__select([self.child_fd], [], [], timeout) #@UnusedVariable
+
+        if not r:
+            if not self.isalive():
+                # Some platforms, such as Irix, will claim that their processes are alive;
+                # then timeout on the select; and then finally admit that they are not alive.
+                self.flag_eof = True
+                raise EOF ('End of File (EOF) in read_nonblocking(). Very pokey platform.')
+            else:
+                raise TIMEOUT ('Timeout exceeded in read_nonblocking().')
+
+        if self.child_fd in r:
+            try:
+                s = os.read(self.child_fd, size)
+            except OSError, e: # Linux does this
+                self.flag_eof = True
+                raise EOF ('End Of File (EOF) in read_nonblocking(). Exception style platform.')
+            if s == '': # BSD style
+                self.flag_eof = True
+                raise EOF ('End Of File (EOF) in read_nonblocking(). Empty string style platform.')
+
+            if self.logfile is not None:
+                self.logfile.write (s)
+                self.logfile.flush()
+            if self.logfile_read is not None:
+                self.logfile_read.write (s)
+                self.logfile_read.flush()
+
+            return s
+
+        raise ExceptionPexpect ('Reached an unexpected state in read_nonblocking().')
+
+    def read (self, size = -1):   # File-like object.
+
+        """This reads at most "size" bytes from the file (less if the read hits
+        EOF before obtaining size bytes). If the size argument is negative or
+        omitted, read all data until EOF is reached. The bytes are returned as
+        a string object. An empty string is returned when EOF is encountered
+        immediately. """
+
+        if size == 0:
+            return ''
+        if size < 0:
+            self.expect (self.delimiter) # delimiter default is EOF
+            return self.before
+
+        # I could have done this more directly by not using expect(), but
+        # I deliberately decided to couple read() to expect() so that
+        # I would catch any bugs early and ensure consistent behavior.
+        # It's a little less efficient, but there is less for me to
+        # worry about if I have to later modify read() or expect().
+        # Note, it's OK if size==-1 in the regex. That just means it
+        # will never match anything in which case we stop only on EOF.
+        cre = re.compile('.{%d}' % size, re.DOTALL)
+        index = self.expect ([cre, self.delimiter]) # delimiter default is EOF
+        if index == 0:
+            return self.after ### self.before should be ''. Should I assert this?
+        return self.before
+
+    def readline (self, size = -1):    # File-like object.
+
+        """This reads and returns one entire line. A trailing newline is kept
+        in the string, but may be absent when a file ends with an incomplete
+        line. Note: This readline() looks for a \\r\\n pair even on UNIX
+        because this is what the pseudo tty device returns. So contrary to what
+        you may expect you will receive the newline as \\r\\n. An empty string
+        is returned when EOF is hit immediately. Currently, the size argument is
+        mostly ignored, so this behavior is not standard for a file-like
+        object. If size is 0 then an empty string is returned. """
+
+        if size == 0:
+            return ''
+        index = self.expect (['\r\n', self.delimiter]) # delimiter default is EOF
+        if index == 0:
+            return self.before + '\r\n'
+        else:
+            return self.before
+
+    def __iter__ (self):    # File-like object.
+
+        """This is to support iterators over a file-like object.
+        """
+
+        return self
+
+    def next (self):    # File-like object.
+
+        """This is to support iterators over a file-like object.
+        """
+
+        result = self.readline()
+        if result == "":
+            raise StopIteration
+        return result
+
+    def readlines (self, sizehint = -1):    # File-like object.
+
+        """This reads until EOF using readline() and returns a list containing
+        the lines thus read. The optional "sizehint" argument is ignored. """
+
+        lines = []
+        while True:
+            line = self.readline()
+            if not line:
+                break
+            lines.append(line)
+        return lines
+
+    def write(self, s):   # File-like object.
+
+        """This is similar to send() except that there is no return value.
+        """
+
+        self.send (s)
+
+    def writelines (self, sequence):   # File-like object.
+
+        """This calls write() for each element in the sequence. The sequence
+        can be any iterable object producing strings, typically a list of
+        strings. This does not add line separators. There is no return value.
+        """
+
+        for s in sequence:
+            self.write (s)
+
+    def send(self, s):
+
+        """This sends a string to the child process. This returns the number of
+        bytes written. If a log file was set then the data is also written to
+        the log. """
+
+        time.sleep(self.delaybeforesend)
+        if self.logfile is not None:
+            self.logfile.write (s)
+            self.logfile.flush()
+        if self.logfile_send is not None:
+            self.logfile_send.write (s)
+            self.logfile_send.flush()
+        c = os.write(self.child_fd, s)
+        return c
+
+    def sendline(self, s=''):
+
+        """This is like send(), but it adds a line feed (os.linesep). This
+        returns the number of bytes written. """
+
+        n = self.send(s)
+        n = n + self.send (os.linesep)
+        return n
+
+    def sendcontrol(self, char):
+
+        """This sends a control character to the child such as Ctrl-C or
+        Ctrl-D. For example, to send a Ctrl-G (ASCII 7)::
+
+            child.sendcontrol('g')
+
+        See also, sendintr() and sendeof().
+        """
+
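+        # For letters the control code is computed arithmetically, e.g.
+        # sendcontrol('g') yields ord('g') - ord('a') + 1 == 7 and sends chr(7)
+        # (BEL, Ctrl-G); punctuation such as '[' (ESC, 27) or '?' (DEL, 127) is
+        # looked up in the table below.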
+        char = char.lower()
+        a = ord(char)
+        if a>=97 and a<=122:
+            a = a - ord('a') + 1
+            return self.send (chr(a))
+        d = {'@':0, '`':0,
+            '[':27, '{':27,
+            '\\':28, '|':28,
+            ']':29, '}': 29,
+            '^':30, '~':30,
+            '_':31,
+            '?':127}
+        if char not in d:
+            return 0
+        return self.send (chr(d[char]))
+
+    def sendeof(self):
+
+        """This sends an EOF to the child. This sends a character which causes
+        the pending parent output buffer to be sent to the waiting child
+        program without waiting for end-of-line. If it is the first character
+        of the line, the read() in the user program returns 0, which signifies
+        end-of-file. This means that, to work as expected, sendeof() has to be
+        called at the beginning of a line. This method does not send a newline.
+        It is the responsibility of the caller to ensure that EOF is sent at the
+        beginning of a line. """
+
+        ### Hmmm... how do I send an EOF?
+        ###C  if ((m = write(pty, *buf, p - *buf)) < 0)
+        ###C      return (errno == EWOULDBLOCK) ? n : -1;
+        #fd = sys.stdin.fileno()
+        #old = termios.tcgetattr(fd) # remember current state
+        #attr = termios.tcgetattr(fd)
+        #attr[3] = attr[3] | termios.ICANON # ICANON must be set to recognize EOF
+        #try: # use try/finally to ensure state gets restored
+        #    termios.tcsetattr(fd, termios.TCSADRAIN, attr)
+        #    if hasattr(termios, 'CEOF'):
+        #        os.write (self.child_fd, '%c' % termios.CEOF)
+        #    else:
+        #        # Silly platform does not define CEOF so assume CTRL-D
+        #        os.write (self.child_fd, '%c' % 4)
+        #finally: # restore state
+        #    termios.tcsetattr(fd, termios.TCSADRAIN, old)
+        if hasattr(termios, 'VEOF'):
+            char = termios.tcgetattr(self.child_fd)[6][termios.VEOF]
+        else:
+            # platform does not define VEOF so assume CTRL-D
+            char = chr(4)
+        self.send(char)
+
+    def sendintr(self):
+
+        """This sends a SIGINT to the child. It does not require
+        the SIGINT to be the first character on a line. """
+
+        if hasattr(termios, 'VINTR'):
+            char = termios.tcgetattr(self.child_fd)[6][termios.VINTR]
+        else:
+            # platform does not define VINTR so assume CTRL-C
+            char = chr(3)
+        self.send (char)
+
+    def eof (self):
+
+        """This returns True if the EOF exception was ever raised.
+        """
+
+        return self.flag_eof
+
+    def terminate(self, force=False):
+
+        """This forces a child process to terminate. It starts nicely with
+        SIGHUP and SIGINT. If "force" is True then it moves on to SIGKILL. This
+        returns True if the child was terminated. This returns False if the
+        child could not be terminated. """
+
+        if not self.isalive():
+            return True
+        try:
+            self.kill(signal.SIGHUP)
+            time.sleep(self.delayafterterminate)
+            if not self.isalive():
+                return True
+            self.kill(signal.SIGCONT)
+            time.sleep(self.delayafterterminate)
+            if not self.isalive():
+                return True
+            self.kill(signal.SIGINT)
+            time.sleep(self.delayafterterminate)
+            if not self.isalive():
+                return True
+            if force:
+                self.kill(signal.SIGKILL)
+                time.sleep(self.delayafterterminate)
+                if not self.isalive():
+                    return True
+                else:
+                    return False
+            return False
+        except OSError, e:
+            # I think there are kernel timing issues that sometimes cause
+            # this to happen. I think isalive() reports True, but the
+            # process is dead to the kernel.
+            # Make one last attempt to see if the kernel is up to date.
+            time.sleep(self.delayafterterminate)
+            if not self.isalive():
+                return True
+            else:
+                return False
+
+    def wait(self):
+
+        """This waits until the child exits. This is a blocking call. This will
+        not read any data from the child, so this will block forever if the
+        child has unread output and has terminated. In other words, the child
+        may have printed output then called exit(); but, technically, the child
+        is still alive until its output is read. """
+
+        if self.isalive():
+            pid, status = os.waitpid(self.pid, 0) #@UnusedVariable
+        else:
+            raise ExceptionPexpect ('Cannot wait for dead child process.')
+        self.exitstatus = os.WEXITSTATUS(status)
+        if os.WIFEXITED (status):
+            self.status = status
+            self.exitstatus = os.WEXITSTATUS(status)
+            self.signalstatus = None
+            self.terminated = True
+        elif os.WIFSIGNALED (status):
+            self.status = status
+            self.exitstatus = None
+            self.signalstatus = os.WTERMSIG(status)
+            self.terminated = True
+        elif os.WIFSTOPPED (status):
+            raise ExceptionPexpect ('Wait was called for a child process that is stopped. This is not supported. Is some other process attempting job control with our child pid?')
+        return self.exitstatus
+
+    def isalive(self):
+
+        """This tests if the child process is running or not. This is
+        non-blocking. If the child was terminated then this will read the
+        exitstatus or signalstatus of the child. This returns True if the child
+        process appears to be running or False if not. It can take literally
+        SECONDS for Solaris to return the right status. """
+
+        if self.terminated:
+            return False
+
+        if self.flag_eof:
+            # This is for Linux, which requires the blocking form of waitpid to get
+            # status of a defunct process. This is super-lame. The flag_eof would have
+            # been set in read_nonblocking(), so this should be safe.
+            waitpid_options = 0
+        else:
+            waitpid_options = os.WNOHANG
+
+        try:
+            pid, status = os.waitpid(self.pid, waitpid_options)
+        except OSError, e: # No child processes
+            if e[0] == errno.ECHILD:
+                raise ExceptionPexpect ('isalive() encountered condition where "terminated" is 0, but there was no child process. Did someone else call waitpid() on our process?')
+            else:
+                raise e
+
+        # I have to do this twice for Solaris. I can't even believe that I figured this out...
+        # If waitpid() returns 0 it means that no child process wishes to
+        # report, and the value of status is undefined.
+        if pid == 0:
+            try:
+                pid, status = os.waitpid(self.pid, waitpid_options) ### os.WNOHANG) # Solaris!
+            except OSError, e: # This should never happen...
+                if e[0] == errno.ECHILD:
+                    raise ExceptionPexpect ('isalive() encountered condition that should never happen. There was no child process. Did someone else call waitpid() on our process?')
+                else:
+                    raise e
+
+            # If pid is still 0 after two calls to waitpid() then
+            # the process really is alive. This seems to work on all platforms, except
+            # for Irix which seems to require a blocking call on waitpid or select, so I let read_nonblocking
+            # take care of this situation (unfortunately, this requires waiting through the timeout).
+            if pid == 0:
+                return True
+
+        if pid == 0:
+            return True
+
+        if os.WIFEXITED (status):
+            self.status = status
+            self.exitstatus = os.WEXITSTATUS(status)
+            self.signalstatus = None
+            self.terminated = True
+        elif os.WIFSIGNALED (status):
+            self.status = status
+            self.exitstatus = None
+            self.signalstatus = os.WTERMSIG(status)
+            self.terminated = True
+        elif os.WIFSTOPPED (status):
+            raise ExceptionPexpect ('isalive() encountered condition where child process is stopped. This is not supported. Is some other process attempting job control with our child pid?')
+        return False
+
+    def kill(self, sig):
+
+        """This sends the given signal to the child application. In keeping
+        with UNIX tradition it has a misleading name. It does not necessarily
+        kill the child unless you send the right signal. """
+
+        # Same as os.kill, but the pid is given for you.
+        if self.isalive():
+            os.kill(self.pid, sig)
+
+    def compile_pattern_list(self, patterns):
+
+        """This compiles a pattern-string or a list of pattern-strings.
+        Patterns must be a StringType, EOF, TIMEOUT, SRE_Pattern, or a list of
+        those. Patterns may also be None which results in an empty list (you
+        might do this if waiting for an EOF or TIMEOUT condition without
+        expecting any pattern).
+
+        This is used by expect() when calling expect_list(). Thus expect() is
+        nothing more than::
+
+             cpl = self.compile_pattern_list(pl)
+             return self.expect_list(cpl, timeout)
+
+        If you are using expect() within a loop it may be more
+        efficient to compile the patterns first and then call expect_list().
+        This avoids repeated calls to compile_pattern_list() inside the loop::
+
+             cpl = self.compile_pattern_list(my_pattern)
+             while some_condition:
+                ...
+                i = self.expect_list(cpl, timeout)
+                ...
+        """
+
+        if patterns is None:
+            return []
+        if type(patterns) is not types.ListType:
+            patterns = [patterns]
+
+        compile_flags = re.DOTALL # Allow dot to match \n
+        if self.ignorecase:
+            compile_flags = compile_flags | re.IGNORECASE
+        compiled_pattern_list = []
+        for p in patterns:
+            if type(p) in types.StringTypes:
+                compiled_pattern_list.append(re.compile(p, compile_flags))
+            elif p is EOF:
+                compiled_pattern_list.append(EOF)
+            elif p is TIMEOUT:
+                compiled_pattern_list.append(TIMEOUT)
+            elif type(p) is type(re.compile('')):
+                compiled_pattern_list.append(p)
+            else:
+                raise TypeError ('Argument must be one of StringTypes, EOF, TIMEOUT, SRE_Pattern, or a list of those types. %s' % str(type(p)))
+
+        return compiled_pattern_list
+
+    def expect(self, pattern, timeout = -1, searchwindowsize=None):
+
+        """This seeks through the stream until a pattern is matched. The
+        pattern is overloaded and may take several types. The pattern can be a
+        StringType, EOF, a compiled re, or a list of any of those types.
+        Strings will be compiled to re types. This returns the index into the
+        pattern list. If the pattern was not a list this returns index 0 on a
+        successful match. This may raise exceptions for EOF or TIMEOUT. To
+        avoid the EOF or TIMEOUT exceptions add EOF or TIMEOUT to the pattern
+        list. That will cause expect to match an EOF or TIMEOUT condition
+        instead of raising an exception.
+
+        If you pass a list of patterns and more than one matches, the first match
+        in the stream is chosen. If more than one pattern matches at that point,
+        the leftmost in the pattern list is chosen. For example::
+
+            # the input is 'foobar'
+            index = p.expect (['bar', 'foo', 'foobar'])
+            # returns 1 ('foo') even though 'foobar' is a "better" match
+
+        Please note, however, that buffering can affect this behavior, since
+        input arrives in unpredictable chunks. For example::
+
+            # the input is 'foobar'
+            index = p.expect (['foobar', 'foo'])
+            # returns 0 ('foobar') if all input is available at once,
+            # but returns 1 ('foo') if parts of the final 'bar' arrive late
+
+        After a match is found the instance attributes 'before', 'after' and
+        'match' will be set. You can see all the data read before the match in
+        'before'. You can see the data that was matched in 'after'. The
+        re.MatchObject used in the re match will be in 'match'. If an error
+        occurred then 'before' will be set to all the data read so far and
+        'after' and 'match' will be None.
+
+        If timeout is -1 then timeout will be set to the self.timeout value.
+
+        A list entry may be EOF or TIMEOUT instead of a string. This will
+        catch these exceptions and return the index of the list entry instead
+        of raising the exception. The attribute 'after' will be set to the
+        exception type. The attribute 'match' will be None. This allows you to
+        write code like this::
+
+                index = p.expect (['good', 'bad', pexpect.EOF, pexpect.TIMEOUT])
+                if index == 0:
+                    do_something()
+                elif index == 1:
+                    do_something_else()
+                elif index == 2:
+                    do_some_other_thing()
+                elif index == 3:
+                    do_something_completely_different()
+
+        instead of code like this::
+
+                try:
+                    index = p.expect (['good', 'bad'])
+                    if index == 0:
+                        do_something()
+                    elif index == 1:
+                        do_something_else()
+                except EOF:
+                    do_some_other_thing()
+                except TIMEOUT:
+                    do_something_completely_different()
+
+        These two forms are equivalent. It all depends on what you want. You
+        can also just expect the EOF if you are waiting for all output of a
+        child to finish. For example::
+
+                p = pexpect.spawn('/bin/ls')
+                p.expect (pexpect.EOF)
+                print p.before
+
+        If you are trying to optimize for speed then see expect_list().
+        """
+
+        compiled_pattern_list = self.compile_pattern_list(pattern)
+        return self.expect_list(compiled_pattern_list, timeout, searchwindowsize)
+
+    def expect_list(self, pattern_list, timeout = -1, searchwindowsize = -1):
+
+        """This takes a list of compiled regular expressions and returns the
+        index into the pattern_list that matched the child output. The list may
+        also contain EOF or TIMEOUT (which are not compiled regular
+        expressions). This method is similar to the expect() method except that
+        expect_list() does not recompile the pattern list on every call. This
+        may help if you are trying to optimize for speed, otherwise just use
+        the expect() method.  This is called by expect(). If timeout==-1 then
+        the self.timeout value is used. If searchwindowsize==-1 then the
+        self.searchwindowsize value is used. """
+
+        return self.expect_loop(searcher_re(pattern_list), timeout, searchwindowsize)
+
+    def expect_exact(self, pattern_list, timeout = -1, searchwindowsize = -1):
+
+        """This is similar to expect(), but uses plain string matching instead
+        of compiled regular expressions in 'pattern_list'. The 'pattern_list'
+        may be a string; a list or other sequence of strings; or TIMEOUT and
+        EOF.
+
+        This call might be faster than expect() for two reasons: string
+        searching is faster than RE matching and it is possible to limit the
+        search to just the end of the input buffer.
+
+        This method is also useful when you don't want to have to worry about
+        escaping regular expression characters that you want to match."""
+
+        if type(pattern_list) in types.StringTypes or pattern_list in (TIMEOUT, EOF):
+            pattern_list = [pattern_list]
+        return self.expect_loop(searcher_string(pattern_list), timeout, searchwindowsize)
+
+    def expect_loop(self, searcher, timeout = -1, searchwindowsize = -1):
+
+        """This is the common loop used inside expect. The 'searcher' should be
+        an instance of searcher_re or searcher_string, which describes how and what
+        to search for in the input.
+
+        See expect() for other arguments, return value and exceptions. """
+
+        self.searcher = searcher
+
+        if timeout == -1:
+            timeout = self.timeout
+        if timeout is not None:
+            end_time = time.time() + timeout
+        if searchwindowsize == -1:
+            searchwindowsize = self.searchwindowsize
+
+        try:
+            incoming = self.buffer
+            freshlen = len(incoming)
+            while True: # Keep reading until exception or return.
+                index = searcher.search(incoming, freshlen, searchwindowsize)
+                if index >= 0:
+                    self.buffer = incoming[searcher.end : ]
+                    self.before = incoming[ : searcher.start]
+                    self.after = incoming[searcher.start : searcher.end]
+                    self.match = searcher.match
+                    self.match_index = index
+                    return self.match_index
+                # No match at this point
+                if timeout < 0 and timeout is not None:
+                    raise TIMEOUT ('Timeout exceeded in expect_any().')
+                # Still have time left, so read more data
+                c = self.read_nonblocking (self.maxread, timeout)
+                freshlen = len(c)
+                time.sleep (0.0001)
+                incoming = incoming + c
+                if timeout is not None:
+                    timeout = end_time - time.time()
+        except EOF, e:
+            self.buffer = ''
+            self.before = incoming
+            self.after = EOF
+            index = searcher.eof_index
+            if index >= 0:
+                self.match = EOF
+                self.match_index = index
+                return self.match_index
+            else:
+                self.match = None
+                self.match_index = None
+                raise EOF (str(e) + '\n' + str(self))
+        except TIMEOUT, e:
+            self.buffer = incoming
+            self.before = incoming
+            self.after = TIMEOUT
+            index = searcher.timeout_index
+            if index >= 0:
+                self.match = TIMEOUT
+                self.match_index = index
+                return self.match_index
+            else:
+                self.match = None
+                self.match_index = None
+                raise TIMEOUT (str(e) + '\n' + str(self))
+        except Exception:
+            self.before = incoming
+            self.after = None
+            self.match = None
+            self.match_index = None
+            raise
+
+    def getwinsize(self):
+
+        """This returns the terminal window size of the child tty. The return
+        value is a tuple of (rows, cols). """
+
+        TIOCGWINSZ = getattr(termios, 'TIOCGWINSZ', 1074295912L)
+        s = struct.pack('HHHH', 0, 0, 0, 0)
+        x = fcntl.ioctl(self.fileno(), TIOCGWINSZ, s)
+        return struct.unpack('HHHH', x)[0:2]
+
+    def setwinsize(self, r, c):
+
+        """This sets the terminal window size of the child tty. This will cause
+        a SIGWINCH signal to be sent to the child. This does not change the
+        physical window size. It changes the size reported to TTY-aware
+        applications like vi or curses -- applications that respond to the
+        SIGWINCH signal. """
+
+        # Check for buggy platforms. Some Python versions on some platforms
+        # (notably OSF1 Alpha and RedHat 7.1) truncate the value for
+        # termios.TIOCSWINSZ. It is not clear why this happens.
+        # These platforms don't seem to handle the signed int very well;
+        # yet other platforms like OpenBSD have a large negative value for
+        # TIOCSWINSZ and they don't have a truncate problem.
+        # Newer versions of Linux have totally different values for TIOCSWINSZ.
+        # Note that this fix is a hack.
+        TIOCSWINSZ = getattr(termios, 'TIOCSWINSZ', -2146929561)
+        if TIOCSWINSZ == 2148037735L: # L is not required in Python >= 2.2.
+            TIOCSWINSZ = -2146929561 # Same bits, but with sign.
+        # Note, assume ws_xpixel and ws_ypixel are zero.
+        s = struct.pack('HHHH', r, c, 0, 0)
+        fcntl.ioctl(self.fileno(), TIOCSWINSZ, s)
+
+    def interact(self, escape_character = chr(29), input_filter = None, output_filter = None):
+
+        """This gives control of the child process to the interactive user (the
+        human at the keyboard). Keystrokes are sent to the child process, and
+        the stdout and stderr output of the child process is printed. This
+        simply echoes the child stdout and child stderr to the real stdout and
+        it echoes the real stdin to the child stdin. When the user types the
+        escape_character this method will stop. The default for
+        escape_character is ^]. This should not be confused with ASCII 27 --
+        the ESC character. ASCII 29 was chosen for historical merit because
+        this is the character used by 'telnet' as the escape character. The
+        escape_character will not be sent to the child process.
+
+        You may pass in optional input and output filter functions. These
+        functions should take a string and return a string. The output_filter
+        will be passed all the output from the child process. The input_filter
+        will be passed all the keyboard input from the user. The input_filter
+        is run BEFORE the check for the escape_character.
+
+        Note that if you change the window size of the parent the SIGWINCH
+        signal will not be passed through to the child. If you want the child
+        window size to change when the parent's window size changes then do
+        something like the following example::
+
+            import pexpect, struct, fcntl, termios, signal, sys
+            def sigwinch_passthrough (sig, data):
+                s = struct.pack("HHHH", 0, 0, 0, 0)
+                a = struct.unpack('hhhh', fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ , s))
+                global p
+                p.setwinsize(a[0],a[1])
+            p = pexpect.spawn('/bin/bash') # Note this is global and used in sigwinch_passthrough.
+            signal.signal(signal.SIGWINCH, sigwinch_passthrough)
+            p.interact()
+        """
+
+        # Flush the buffer.
+        self.stdout.write (self.buffer)
+        self.stdout.flush()
+        self.buffer = ''
+        mode = tty.tcgetattr(self.STDIN_FILENO)
+        tty.setraw(self.STDIN_FILENO)
+        try:
+            self.__interact_copy(escape_character, input_filter, output_filter)
+        finally:
+            tty.tcsetattr(self.STDIN_FILENO, tty.TCSAFLUSH, mode)
+
+    def __interact_writen(self, fd, data):
+
+        """This is used by the interact() method.
+        """
+
+        while data != '' and self.isalive():
+            n = os.write(fd, data)
+            data = data[n:]
+
+    def __interact_read(self, fd):
+
+        """This is used by the interact() method.
+        """
+
+        return os.read(fd, 1000)
+
+    def __interact_copy(self, escape_character = None, input_filter = None, output_filter = None):
+
+        """This is used by the interact() method.
+        """
+
+        while self.isalive():
+            r,w,e = self.__select([self.child_fd, self.STDIN_FILENO], [], []) #@UnusedVariable
+            if self.child_fd in r:
+                data = self.__interact_read(self.child_fd)
+                if output_filter: data = output_filter(data)
+                if self.logfile is not None:
+                    self.logfile.write (data)
+                    self.logfile.flush()
+                os.write(self.STDOUT_FILENO, data)
+            if self.STDIN_FILENO in r:
+                data = self.__interact_read(self.STDIN_FILENO)
+                if input_filter: data = input_filter(data)
+                i = data.rfind(escape_character)
+                if i != -1:
+                    data = data[:i]
+                    self.__interact_writen(self.child_fd, data)
+                    break
+                self.__interact_writen(self.child_fd, data)
+
+    def __select (self, iwtd, owtd, ewtd, timeout=None):
+
+        """This is a wrapper around select.select() that ignores signals. If
+        select.select raises a select.error exception and errno is an EINTR
+        error then it is ignored. Mainly this is used to ignore sigwinch
+        (terminal resize). """
+
+        # if select() is interrupted by a signal (errno==EINTR) then
+        # we loop back and enter the select() again.
+        if timeout is not None:
+            end_time = time.time() + timeout
+        while True:
+            try:
+                return select.select (iwtd, owtd, ewtd, timeout)
+            except select.error, e:
+                if e[0] == errno.EINTR:
+                    # if we loop back we have to subtract the amount of time we already waited.
+                    if timeout is not None:
+                        timeout = end_time - time.time()
+                        if timeout < 0:
+                            return ([],[],[])
+                else: # something else caused the select.error, so this really is an exception
+                    raise
+
+##############################################################################
+# The following methods are no longer supported or allowed.
+
+    def setmaxread (self, maxread):
+
+        """This method is no longer supported or allowed. I don't like getters
+        and setters without a good reason. """
+
+        raise ExceptionPexpect ('This method is no longer supported or allowed. Just assign a value to the maxread member variable.')
+
+    def setlog (self, fileobject):
+
+        """This method is no longer supported or allowed.
+        """
+
+        raise ExceptionPexpect ('This method is no longer supported or allowed. Just assign a value to the logfile member variable.')
+
+##############################################################################
+# End of spawn class
+##############################################################################
+
+class searcher_string (object):
+
+    """This is a plain string search helper for the spawn.expect_exact() method.
+
+    Attributes:
+
+        eof_index     - index of EOF, or -1
+        timeout_index - index of TIMEOUT, or -1
+
+    After a successful match by the search() method the following attributes
+    are available:
+
+        start - index into the buffer, first byte of match
+        end   - index into the buffer, first byte after match
+        match - the matching string itself
+    """
+
+    def __init__(self, strings):
+
+        """This creates an instance of searcher_string. This argument 'strings'
+        may be a list; a sequence of strings; or the EOF or TIMEOUT types. """
+
+        self.eof_index = -1
+        self.timeout_index = -1
+        self._strings = []
+        for n, s in zip(range(len(strings)), strings):
+            if s is EOF:
+                self.eof_index = n
+                continue
+            if s is TIMEOUT:
+                self.timeout_index = n
+                continue
+            self._strings.append((n, s))
+
+    def __str__(self):
+
+        """This returns a human-readable string that represents the state of
+        the object."""
+
+        ss =  [ (ns[0],'    %d: "%s"' % ns) for ns in self._strings ]
+        ss.append((-1,'searcher_string:'))
+        if self.eof_index >= 0:
+            ss.append ((self.eof_index,'    %d: EOF' % self.eof_index))
+        if self.timeout_index >= 0:
+            ss.append ((self.timeout_index,'    %d: TIMEOUT' % self.timeout_index))
+        ss.sort()
+        ss = zip(*ss)[1]
+        return '\n'.join(ss)
+
+    def search(self, buffer, freshlen, searchwindowsize=None):
+
+        """This searches 'buffer' for the first occurrence of one of the search
+        strings.  'freshlen' must indicate the number of bytes at the end of
+        'buffer' which have not been searched before. It helps to avoid
+        searching the same, possibly big, buffer over and over again.
+
+        See class spawn for the 'searchwindowsize' argument.
+
+        If there is a match this returns the index of that string, and sets
+        'start', 'end' and 'match'. Otherwise, this returns -1. """
+
+        absurd_match = len(buffer)
+        first_match = absurd_match
+
+        # 'freshlen' helps a lot here. Further optimizations could
+        # possibly include:
+        #
+        # using something like the Boyer-Moore Fast String Searching
+        # Algorithm; pre-compiling the search through a list of
+        # strings into something that can scan the input once to
+        # search for all N strings; realize that if we search for
+        # ['bar', 'baz'] and the input is '...foo' we need not bother
+        # rescanning until we've read three more bytes.
+        #
+        # Sadly, I don't know enough about this interesting topic. /grahn
+
+        for index, s in self._strings:
+            if searchwindowsize is None:
+                # the match, if any, can only be in the fresh data,
+                # or at the very end of the old data
+                offset = -(freshlen+len(s))
+            else:
+                # better obey searchwindowsize
+                offset = -searchwindowsize
+            n = buffer.find(s, offset)
+            if n >= 0 and n < first_match:
+                first_match = n
+                best_index, best_match = index, s
+        if first_match == absurd_match:
+            return -1
+        self.match = best_match
+        self.start = first_match
+        self.end = self.start + len(self.match)
+        return best_index
+
+class searcher_re (object):
+
+    """This is a regular expression search helper for the
+    spawn.expect() and spawn.expect_list() methods.
+
+    Attributes:
+
+        eof_index     - index of EOF, or -1
+        timeout_index - index of TIMEOUT, or -1
+
+    After a successful match by the search() method the following attributes
+    are available:
+
+        start - index into the buffer, first byte of match
+        end   - index into the buffer, first byte after match
+        match - the re.match object returned by a successful re.search
+
+    """
+
+    def __init__(self, patterns):
+
+        """This creates an instance that searches for 'patterns', where
+        'patterns' may be a list or other sequence of compiled regular
+        expressions, or the EOF or TIMEOUT types."""
+
+        self.eof_index = -1
+        self.timeout_index = -1
+        self._searches = []
+        for n, s in zip(range(len(patterns)), patterns):
+            if s is EOF:
+                self.eof_index = n
+                continue
+            if s is TIMEOUT:
+                self.timeout_index = n
+                continue
+            self._searches.append((n, s))
+
+    def __str__(self):
+
+        """This returns a human-readable string that represents the state of
+        the object."""
+
+        ss =  [ (n,'    %d: re.compile("%s")' % (n,str(s.pattern))) for n,s in self._searches]
+        ss.append((-1,'searcher_re:'))
+        if self.eof_index >= 0:
+            ss.append ((self.eof_index,'    %d: EOF' % self.eof_index))
+        if self.timeout_index >= 0:
+            ss.append ((self.timeout_index,'    %d: TIMEOUT' % self.timeout_index))
+        ss.sort()
+        ss = zip(*ss)[1]
+        return '\n'.join(ss)
+
+    def search(self, buffer, freshlen, searchwindowsize=None):
+
+        """This searches 'buffer' for the first occurrence of one of the regular
+        expressions. 'freshlen' must indicate the number of bytes at the end of
+        'buffer' which have not been searched before.
+
+        See class spawn for the 'searchwindowsize' argument.
+
+        If there is a match this returns the index of that string, and sets
+        'start', 'end' and 'match'. Otherwise, returns -1."""
+
+        absurd_match = len(buffer)
+        first_match = absurd_match
+        # 'freshlen' doesn't help here -- we cannot predict the
+        # length of a match, and the re module provides no help.
+        if searchwindowsize is None:
+            searchstart = 0
+        else:
+            searchstart = max(0, len(buffer)-searchwindowsize)
+        for index, s in self._searches:
+            match = s.search(buffer, searchstart)
+            if match is None:
+                continue
+            n = match.start()
+            if n < first_match:
+                first_match = n
+                the_match = match
+                best_index = index
+        if first_match == absurd_match:
+            return -1
+        self.start = first_match
+        self.match = the_match
+        self.end = self.match.end()
+        return best_index
+
+def which (filename):
+
+    """This takes a given filename; tries to find it in the environment path;
+    then checks if it is executable. This returns the full path to the filename
+    if found and executable. Otherwise this returns None."""
+
+    # Special case where filename already contains a path.
+    if os.path.dirname(filename) != '':
+        if os.access (filename, os.X_OK):
+            return filename
+
+    if not os.environ.has_key('PATH') or os.environ['PATH'] == '':
+        p = os.defpath
+    else:
+        p = os.environ['PATH']
+
+    # Oddly enough this was the one line that made Pexpect
+    # incompatible with Python 1.5.2.
+    #pathlist = p.split (os.pathsep)
+    pathlist = string.split (p, os.pathsep)
+
+    for path in pathlist:
+        f = os.path.join(path, filename)
+        if os.access(f, os.X_OK):
+            return f
+    return None
+
+def split_command_line(command_line):
+
+    """This splits a command line into a list of arguments. It splits arguments
+    on spaces, but handles embedded quotes, doublequotes, and escaped
+    characters. It's impossible to do this with a regular expression, so I
+    wrote a little state machine to parse the command line. """
+
+    arg_list = []
+    arg = ''
+
+    # Constants to name the states we can be in.
+    state_basic = 0
+    state_esc = 1
+    state_singlequote = 2
+    state_doublequote = 3
+    state_whitespace = 4 # The state of consuming whitespace between commands.
+    state = state_basic
+
+    for c in command_line:
+        if state == state_basic or state == state_whitespace:
+            if c == '\\': # Escape the next character
+                state = state_esc
+            elif c == r"'": # Handle single quote
+                state = state_singlequote
+            elif c == r'"': # Handle double quote
+                state = state_doublequote
+            elif c.isspace():
+                # Add arg to arg_list if we aren't in the middle of whitespace.
+                if state == state_whitespace:
+                    pass # Do nothing.
+                else:
+                    arg_list.append(arg)
+                    arg = ''
+                    state = state_whitespace
+            else:
+                arg = arg + c
+                state = state_basic
+        elif state == state_esc:
+            arg = arg + c
+            state = state_basic
+        elif state == state_singlequote:
+            if c == r"'":
+                state = state_basic
+            else:
+                arg = arg + c
+        elif state == state_doublequote:
+            if c == r'"':
+                state = state_basic
+            else:
+                arg = arg + c
+
+    if arg != '':
+        arg_list.append(arg)
+    return arg_list
+
+# vi:ts=4:sw=4:expandtab:ft=python:
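
A minimal usage sketch for the vendored module above, following the idioms
documented in the expect() and waitnoecho() docstrings; the host name and
password below are hypothetical:

    import pexpect  # the vendored copy; the exact import path depends on packaging

    # Collect all output of a short-lived command.
    child = pexpect.spawn('/bin/ls')
    child.expect(pexpect.EOF)
    print child.before

    # Password-style interaction: wait for echo to be switched off, then answer.
    child = pexpect.spawn('ssh backup@host.example true')
    child.waitnoecho()
    child.sendline('secret')
    child.expect(pexpect.EOF)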

=== modified file 'duplicity/progress.py'
--- duplicity/progress.py	2014-04-20 14:02:34 +0000
+++ duplicity/progress.py	2014-06-14 13:58:30 +0000
@@ -32,9 +32,6 @@
 This is a forecast based on gathered evidence.
 """
 
-from __future__ import absolute_import
-
-import collections as sys_collections
 import math
 import threading
 import time
@@ -44,6 +41,29 @@
 import pickle
 import os
 
+def import_non_local(name, custom_name=None):
+    """
+    This function is needed to play a trick: a local "collections" module
+    shadows the standard library module of the same name.
+    """
+    import imp, sys
+
+    custom_name = custom_name or name
+
+    f, pathname, desc = imp.find_module(name, sys.path[1:])
+    module = imp.load_module(custom_name, f, pathname, desc)
+    if f: f.close()  # f can be None (e.g. when a package was found)
+
+    return module
+
+"""
+Import the non-local (standard library) module under a custom name to
+differentiate it from the local module of the same name. The custom name is
+only used internally; locally we simply bind the result to sys_collections.
+"""
+sys_collections = import_non_local('collections','sys_collections')
+
+
 tracker = None
 progress_thread = None
 
@@ -103,7 +123,7 @@
 
 
 
-class ProgressTracker():
+class ProgressTracker:
 
     def __init__(self):
         self.total_stats = None
@@ -242,7 +262,7 @@
         projection = 1.0
         if self.progress_estimation > 0:
             projection = (1.0 - self.progress_estimation) / self.progress_estimation
-        self.time_estimation = int(projection * float(self.elapsed_sum.total_seconds()))
+        self.time_estimation = long(projection * float(self.elapsed_sum.total_seconds()))
 
         # Apply values only when monotonic, so the estimates look more consistent to the human eye
         if self.progress_estimation < last_progress_estimation:
@@ -277,7 +297,7 @@
         volume and for the current volume
         """
         changing = max(bytecount - self.last_bytecount, 0)
-        self.total_bytecount += int(changing) # Annotate only changing bytes since last probe
+        self.total_bytecount += long(changing) # Annotate only changing bytes since last probe
         self.last_bytecount = bytecount
         if changing > 0:
             self.stall_last_time = datetime.now()
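
The import_non_local() helper added in the hunk above exists because a local
"collections" module shadows the standard library one. A minimal sketch of the
same trick (assuming such a local collections.py sits on sys.path[0]):

    import imp, sys

    # Search everywhere except sys.path[0] (the script/package directory), so
    # the standard library module wins over the same-named local module.
    f, pathname, desc = imp.find_module('collections', sys.path[1:])
    try:
        sys_collections = imp.load_module('sys_collections', f, pathname, desc)
    finally:
        if f:
            f.close()

    print sys_collections.deque('abc')  # deque(['a', 'b', 'c']) from the stdlib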

=== modified file 'duplicity/robust.py'
--- duplicity/robust.py	2014-04-17 20:50:57 +0000
+++ duplicity/robust.py	2014-06-14 13:58:30 +0000
@@ -39,7 +39,7 @@
     #       RPathException, Rdiff.RdiffException,
     #       librsync.librsyncError, C.UnknownFileTypeError), exc:
     #   TracebackArchive.add()
-    except (IOError, EnvironmentError, librsync.librsyncError, path.PathException) as exc:
+    except (IOError, EnvironmentError, librsync.librsyncError, path.PathException), exc:
         if (not isinstance(exc, EnvironmentError) or
             ((exc[0] in errno.errorcode)
              and errno.errorcode[exc[0]] in

=== modified file 'duplicity/selection.py'
--- duplicity/selection.py	2014-04-25 23:53:46 +0000
+++ duplicity/selection.py	2014-06-14 13:58:30 +0000
@@ -19,8 +19,6 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-from future_builtins import filter, map
-
 import os #@UnusedImport
 import re #@UnusedImport
 import stat #@UnusedImport
@@ -237,8 +235,8 @@
                         filelists[filelists_index], 0, arg))
                     filelists_index += 1
                 elif opt == "--exclude-globbing-filelist":
-                    for sf in self.filelist_globbing_get_sfs(filelists[filelists_index], 0, arg):
-                        self.add_selection_func(sf)
+                    map(self.add_selection_func,
+                        self.filelist_globbing_get_sfs(filelists[filelists_index], 0, arg))
                     filelists_index += 1
                 elif opt == "--exclude-other-filesystems":
                     self.add_selection_func(self.other_filesystems_get_sf(0))
@@ -251,14 +249,14 @@
                         filelists[filelists_index], 1, arg))
                     filelists_index += 1
                 elif opt == "--include-globbing-filelist":
-                    for sf in self.filelist_globbing_get_sfs(filelists[filelists_index], 1, arg):
-                        self.add_selection_func(sf)
+                    map(self.add_selection_func,
+                        self.filelist_globbing_get_sfs(filelists[filelists_index], 1, arg))
                     filelists_index += 1
                 elif opt == "--include-regexp":
                     self.add_selection_func(self.regexp_get_sf(arg, 1))
                 else:
                     assert 0, "Bad selection option %s" % opt
-        except SelectError as e:
+        except SelectError, e:
             self.parse_catch_error(e)
         assert filelists_index == len(filelists)
         self.parse_last_excludes()
@@ -353,7 +351,7 @@
                 continue # skip blanks
             try:
                 tuple = self.filelist_parse_line(line, include)
-            except FilePrefixError as exc:
+            except FilePrefixError, exc:
                 incr_warnings(exc)
                 continue
             tuple_list.append(tuple)
@@ -628,7 +626,8 @@
             raise GlobbingError("Consecutive '/'s found in globbing string "
                                 + glob_str)
 
-        prefixes = ["/".join(glob_parts[:i+1]) for i in range(len(glob_parts))]
+        prefixes = map(lambda i: "/".join(glob_parts[:i+1]),
+                       range(len(glob_parts)))
         # we must make exception for root "/", only dir to end in slash
         if prefixes[0] == "":
             prefixes[0] = "/"
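
For illustration, what the map-based prefixes construction in the hunk above
produces for a hypothetical globbing string (a sketch, Python 2):

    glob_parts = "/home/user/docs".split("/")   # ['', 'home', 'user', 'docs']
    prefixes = map(lambda i: "/".join(glob_parts[:i+1]),
                   range(len(glob_parts)))
    # prefixes == ['', '/home', '/home/user', '/home/user/docs']
    # the code then special-cases the root so that prefixes[0] becomes "/"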

=== added file 'duplicity/static.py'
--- duplicity/static.py	1970-01-01 00:00:00 +0000
+++ duplicity/static.py	2014-06-14 13:58:30 +0000
@@ -0,0 +1,46 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+#
+# Copyright 2002 Ben Escoto <ben@xxxxxxxxxxx>
+# Copyright 2007 Kenneth Loafman <kenneth@xxxxxxxxxxx>
+#
+# This file is part of duplicity.
+#
+# Duplicity is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; either version 2 of the License, or (at your
+# option) any later version.
+#
+# Duplicity is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with duplicity; if not, write to the Free Software Foundation,
+# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+"""MakeStatic and MakeClass
+
+These functions are used to make all the instance methods in a class
+into static or class methods.
+
+"""
+
+class StaticMethodsError(Exception): pass
+
+def MakeStatic(cls):
+    """turn instance methods into static ones
+
+    The methods (that don't begin with _) of any class that
+    subclasses this will be turned into static methods.
+
+    """
+    for name in dir(cls):
+        if name[0] != "_":
+            cls.__dict__[name] = staticmethod(cls.__dict__[name])
+
+def MakeClass(cls):
+    """Turn instance methods into classmethods.  Ignore _ like above"""
+    for name in dir(cls):
+        if name[0] != "_":
+            cls.__dict__[name] = classmethod(cls.__dict__[name])

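A hypothetical usage sketch for the new static.py module (Example is illustrative, not a class from duplicity; MakeStatic assumes a classic class, whose __dict__ is writable):

from duplicity.static import MakeStatic

class Example:
    # note: no 'self' -- the method is meant to be callable without an instance
    def double(x):
        return x * 2

MakeStatic(Example)
# every public method is now wrapped in staticmethod()
assert isinstance(Example.__dict__["double"], staticmethod)
assert Example.double(21) == 42
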
=== modified file 'duplicity/statistics.py'
--- duplicity/statistics.py	2014-04-25 23:53:46 +0000
+++ duplicity/statistics.py	2014-06-14 13:58:30 +0000
@@ -21,8 +21,6 @@
 
 """Generate and process backup statistics"""
 
-from future_builtins import map
-
 import re, time, os
 
 from duplicity import dup_time
@@ -101,11 +99,12 @@
 
     def get_stats_line(self, index, use_repr = 1):
         """Return one line abbreviated version of full stats string"""
-        file_attrs = [str(self.get_stat(a)) for a in self.stat_file_attrs]
+        file_attrs = map(lambda attr: str(self.get_stat(attr)),
+                         self.stat_file_attrs)
         if not index:
             filename = "."
         else:
-            filename = os.path.join(*index)
+            filename = apply(os.path.join, index)
             if use_repr:
                 # use repr to quote newlines in relative filename, then
                 # take off leading and trailing quote and quote spaces.
@@ -124,7 +123,7 @@
         for attr, val_string in zip(self.stat_file_attrs,
                                     lineparts[-len(self.stat_file_attrs):]):
             try:
-                val = int(val_string)
+                val = long(val_string)
             except ValueError:
                 try:
                     val = float(val_string)
@@ -231,7 +230,7 @@
                 error(line)
             try:
                 try:
-                    val1 = int(value_string)
+                    val1 = long(value_string)
                 except ValueError:
                     val1 = None
                 val2 = float(value_string)

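The long()/float() fallback restored in the two statistics.py hunks above reads integer-valued statistics as arbitrary-precision longs and only falls back to float for non-integral strings. A standalone sketch of the same parse order (parse_stat is illustrative, not a duplicity function):

def parse_stat(value_string):
    try:
        return long(value_string)        # Python 2 arbitrary-precision integer
    except ValueError:
        return float(value_string)

assert parse_stat("12345678901234567890") == 12345678901234567890L
assert parse_stat("3.5") == 3.5
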
=== modified file 'duplicity/tarfile.py'
--- duplicity/tarfile.py	2014-04-16 20:45:09 +0000
+++ duplicity/tarfile.py	2014-06-14 13:58:30 +0000
@@ -1,35 +1,2594 @@
-# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
-#
-# Copyright 2013 Michael Terry <mike@xxxxxxxxxxx>
-#
-# This file is part of duplicity.
-#
-# Duplicity is free software; you can redistribute it and/or modify it
-# under the terms of the GNU General Public License as published by the
-# Free Software Foundation; either version 2 of the License, or (at your
-# option) any later version.
-#
-# Duplicity is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-# General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with duplicity; if not, write to the Free Software Foundation,
-# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-"""Like system tarfile but with caching."""
-
-from __future__ import absolute_import
-
-import tarfile
-
-# Grab all symbols in tarfile, to try to reproduce its API exactly.
-# from <> import * wouldn't get everything we want, since tarfile defines
-# __all__.  So we do it ourselves.
-for sym in dir(tarfile):
-    globals()[sym] = getattr(tarfile, sym)
-
-# Now make sure that we cache the grp/pwd ops
+#! /usr/bin/python2.7
+# -*- coding: iso-8859-1 -*-
+#-------------------------------------------------------------------
+# tarfile.py
+#-------------------------------------------------------------------
+# Copyright (C) 2002 Lars Gustäbel <lars@xxxxxxxxxxxx>
+# All rights reserved.
+#
+# Permission  is  hereby granted,  free  of charge,  to  any person
+# obtaining a  copy of  this software  and associated documentation
+# files  (the  "Software"),  to   deal  in  the  Software   without
+# restriction,  including  without limitation  the  rights to  use,
+# copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies  of  the  Software,  and to  permit  persons  to  whom the
+# Software  is  furnished  to  do  so,  subject  to  the  following
+# conditions:
+#
+# The above copyright  notice and this  permission notice shall  be
+# included in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS  IS", WITHOUT WARRANTY OF ANY  KIND,
+# EXPRESS OR IMPLIED, INCLUDING  BUT NOT LIMITED TO  THE WARRANTIES
+# OF  MERCHANTABILITY,  FITNESS   FOR  A  PARTICULAR   PURPOSE  AND
+# NONINFRINGEMENT.  IN  NO  EVENT SHALL  THE  AUTHORS  OR COPYRIGHT
+# HOLDERS  BE LIABLE  FOR ANY  CLAIM, DAMAGES  OR OTHER  LIABILITY,
+# WHETHER  IN AN  ACTION OF  CONTRACT, TORT  OR OTHERWISE,  ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+# OTHER DEALINGS IN THE SOFTWARE.
+#
+"""Read from and write to tar format archives.
+"""
+
+__version__ = "$Revision: 85213 $"
+# $Source$
+
+version     = "0.9.0"
+__author__  = "Lars Gustäbel (lars@xxxxxxxxxxxx)"
+__date__    = "$Date: 2010-10-04 10:37:53 -0500 (Mon, 04 Oct 2010) $"
+__cvsid__   = "$Id: tarfile.py 85213 2010-10-04 15:37:53Z lars.gustaebel $"
+__credits__ = "Gustavo Niemeyer, Niels Gustäbel, Richard Townsend."
+
+#---------
+# Imports
+#---------
+import sys
+import os
+import shutil
+import stat
+import errno
+import time
+import struct
+import copy
+import re
+import operator
+
 from duplicity import cached_ops
 grp = pwd = cached_ops
+
+# from tarfile import *
+__all__ = ["TarFile", "TarInfo", "is_tarfile", "TarError"]
+
+#---------------------------------------------------------
+# tar constants
+#---------------------------------------------------------
+NUL = "\0"                      # the null character
+BLOCKSIZE = 512                 # length of processing blocks
+RECORDSIZE = BLOCKSIZE * 20     # length of records
+GNU_MAGIC = "ustar  \0"         # magic gnu tar string
+POSIX_MAGIC = "ustar\x0000"     # magic posix tar string
+
+LENGTH_NAME = 100               # maximum length of a filename
+LENGTH_LINK = 100               # maximum length of a linkname
+LENGTH_PREFIX = 155             # maximum length of the prefix field
+
+REGTYPE = "0"                   # regular file
+AREGTYPE = "\0"                 # regular file
+LNKTYPE = "1"                   # link (inside tarfile)
+SYMTYPE = "2"                   # symbolic link
+CHRTYPE = "3"                   # character special device
+BLKTYPE = "4"                   # block special device
+DIRTYPE = "5"                   # directory
+FIFOTYPE = "6"                  # fifo special device
+CONTTYPE = "7"                  # contiguous file
+
+GNUTYPE_LONGNAME = "L"          # GNU tar longname
+GNUTYPE_LONGLINK = "K"          # GNU tar longlink
+GNUTYPE_SPARSE = "S"            # GNU tar sparse file
+
+XHDTYPE = "x"                   # POSIX.1-2001 extended header
+XGLTYPE = "g"                   # POSIX.1-2001 global header
+SOLARIS_XHDTYPE = "X"           # Solaris extended header
+
+USTAR_FORMAT = 0                # POSIX.1-1988 (ustar) format
+GNU_FORMAT = 1                  # GNU tar format
+PAX_FORMAT = 2                  # POSIX.1-2001 (pax) format
+DEFAULT_FORMAT = GNU_FORMAT
+
+#---------------------------------------------------------
+# tarfile constants
+#---------------------------------------------------------
+# File types that tarfile supports:
+SUPPORTED_TYPES = (REGTYPE, AREGTYPE, LNKTYPE,
+                   SYMTYPE, DIRTYPE, FIFOTYPE,
+                   CONTTYPE, CHRTYPE, BLKTYPE,
+                   GNUTYPE_LONGNAME, GNUTYPE_LONGLINK,
+                   GNUTYPE_SPARSE)
+
+# File types that will be treated as a regular file.
+REGULAR_TYPES = (REGTYPE, AREGTYPE,
+                 CONTTYPE, GNUTYPE_SPARSE)
+
+# File types that are part of the GNU tar format.
+GNU_TYPES = (GNUTYPE_LONGNAME, GNUTYPE_LONGLINK,
+             GNUTYPE_SPARSE)
+
+# Fields from a pax header that override a TarInfo attribute.
+PAX_FIELDS = ("path", "linkpath", "size", "mtime",
+              "uid", "gid", "uname", "gname")
+
+# Fields in a pax header that are numbers, all other fields
+# are treated as strings.
+PAX_NUMBER_FIELDS = {
+    "atime": float,
+    "ctime": float,
+    "mtime": float,
+    "uid": int,
+    "gid": int,
+    "size": int
+}
+
+#---------------------------------------------------------
+# Bits used in the mode field, values in octal.
+#---------------------------------------------------------
+S_IFLNK = 0120000        # symbolic link
+S_IFREG = 0100000        # regular file
+S_IFBLK = 0060000        # block device
+S_IFDIR = 0040000        # directory
+S_IFCHR = 0020000        # character device
+S_IFIFO = 0010000        # fifo
+
+TSUID   = 04000          # set UID on execution
+TSGID   = 02000          # set GID on execution
+TSVTX   = 01000          # reserved
+
+TUREAD  = 0400           # read by owner
+TUWRITE = 0200           # write by owner
+TUEXEC  = 0100           # execute/search by owner
+TGREAD  = 0040           # read by group
+TGWRITE = 0020           # write by group
+TGEXEC  = 0010           # execute/search by group
+TOREAD  = 0004           # read by other
+TOWRITE = 0002           # write by other
+TOEXEC  = 0001           # execute/search by other
+
+#---------------------------------------------------------
+# initialization
+#---------------------------------------------------------
+ENCODING = sys.getfilesystemencoding()
+if ENCODING is None:
+    ENCODING = sys.getdefaultencoding()
+
+#---------------------------------------------------------
+# Some useful functions
+#---------------------------------------------------------
+
+def stn(s, length):
+    """Convert a python string to a null-terminated string buffer.
+    """
+    return s[:length] + (length - len(s)) * NUL
+
+def nts(s):
+    """Convert a null-terminated string field to a python string.
+    """
+    # Use the string up to the first null char.
+    p = s.find("\0")
+    if p == -1:
+        return s
+    return s[:p]
+
+def nti(s):
+    """Convert a number field to a python number.
+    """
+    # There are two possible encodings for a number field, see
+    # itn() below.
+    if s[0] != chr(0200):
+        try:
+            n = int(nts(s) or "0", 8)
+        except ValueError:
+            raise InvalidHeaderError("invalid header")
+    else:
+        n = 0L
+        for i in xrange(len(s) - 1):
+            n <<= 8
+            n += ord(s[i + 1])
+    return n
+
+def itn(n, digits=8, format=DEFAULT_FORMAT):
+    """Convert a python number to a number field.
+    """
+    # POSIX 1003.1-1988 requires numbers to be encoded as a string of
+    # octal digits followed by a null-byte, this allows values up to
+    # (8**(digits-1))-1. GNU tar allows storing numbers greater than
+    # that if necessary. A leading 0200 byte indicates this particular
+    # encoding, the following digits-1 bytes are a big-endian
+    # representation. This allows values up to (256**(digits-1))-1.
+    if 0 <= n < 8 ** (digits - 1):
+        s = "%0*o" % (digits - 1, n) + NUL
+    else:
+        if format != GNU_FORMAT or n >= 256 ** (digits - 1):
+            raise ValueError("overflow in number field")
+
+        if n < 0:
+            # XXX We mimic GNU tar's behaviour with negative numbers,
+            # this could raise OverflowError.
+            n = struct.unpack("L", struct.pack("l", n))[0]
+
+        s = ""
+        for i in xrange(digits - 1):
+            s = chr(n & 0377) + s
+            n >>= 8
+        s = chr(0200) + s
+    return s
+
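A standalone illustration (Python 2) of the two number-field encodings described in the comment above, mirroring itn() rather than importing it:

# small values: zero-padded octal text plus a terminating NUL (12-byte field)
assert "%0*o" % (11, 1000000) + "\0" == "00003641100\0"

# values >= 8**(digits-1) switch to GNU base-256: a leading \200 byte
# followed by digits-1 big-endian bytes
big = 8 ** 11                        # just too large for a 12-byte octal field
s, n = "", big
for _ in xrange(11):
    s = chr(n & 0377) + s
    n >>= 8
assert chr(0200) + s == "\x80" + "\x00" * 6 + "\x02" + "\x00" * 4
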
+def uts(s, encoding, errors):
+    """Convert a unicode object to a string.
+    """
+    if errors == "utf-8":
+        # An extra error handler similar to the -o invalid=UTF-8 option
+        # in POSIX.1-2001. Replace untranslatable characters with their
+        # UTF-8 representation.
+        try:
+            return s.encode(encoding, "strict")
+        except UnicodeEncodeError:
+            x = []
+            for c in s:
+                try:
+                    x.append(c.encode(encoding, "strict"))
+                except UnicodeEncodeError:
+                    x.append(c.encode("utf8"))
+            return "".join(x)
+    else:
+        return s.encode(encoding, errors)
+
+def calc_chksums(buf):
+    """Calculate the checksum for a member's header by summing up all
+       characters except for the chksum field which is treated as if
+       it was filled with spaces. According to the GNU tar sources,
+       some tars (Sun and NeXT) calculate chksum with signed char,
+       which will be different if there are chars in the buffer with
+       the high bit set. So we calculate two checksums, unsigned and
+       signed.
+    """
+    unsigned_chksum = 256 + sum(struct.unpack("148B", buf[:148]) + struct.unpack("356B", buf[156:512]))
+    signed_chksum = 256 + sum(struct.unpack("148b", buf[:148]) + struct.unpack("356b", buf[156:512]))
+    return unsigned_chksum, signed_chksum
+
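The fixed 256 added by calc_chksums() above is just eight ASCII spaces (8 * 32), i.e. the chksum field counted as if it were blank. A standalone check on a synthetic 512-byte header block:

import struct

block = "a" * 148 + " " * 8 + "b" * 356    # header whose chksum field is blank
unsigned = 256 + sum(struct.unpack("148B", block[:148]) +
                     struct.unpack("356B", block[156:512]))
assert len(block) == 512
assert unsigned == sum(ord(c) for c in block)    # identical to summing every byte
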
+def copyfileobj(src, dst, length=None):
+    """Copy length bytes from fileobj src to fileobj dst.
+       If length is None, copy the entire content.
+    """
+    if length == 0:
+        return
+    if length is None:
+        shutil.copyfileobj(src, dst)
+        return
+
+    BUFSIZE = 16 * 1024
+    blocks, remainder = divmod(length, BUFSIZE)
+    for b in xrange(blocks):
+        buf = src.read(BUFSIZE)
+        if len(buf) < BUFSIZE:
+            raise IOError("end of file reached")
+        dst.write(buf)
+
+    if remainder != 0:
+        buf = src.read(remainder)
+        if len(buf) < remainder:
+            raise IOError("end of file reached")
+        dst.write(buf)
+    return
+
+filemode_table = (
+    ((S_IFLNK,      "l"),
+     (S_IFREG,      "-"),
+     (S_IFBLK,      "b"),
+     (S_IFDIR,      "d"),
+     (S_IFCHR,      "c"),
+     (S_IFIFO,      "p")),
+
+    ((TUREAD,       "r"),),
+    ((TUWRITE,      "w"),),
+    ((TUEXEC|TSUID, "s"),
+     (TSUID,        "S"),
+     (TUEXEC,       "x")),
+
+    ((TGREAD,       "r"),),
+    ((TGWRITE,      "w"),),
+    ((TGEXEC|TSGID, "s"),
+     (TSGID,        "S"),
+     (TGEXEC,       "x")),
+
+    ((TOREAD,       "r"),),
+    ((TOWRITE,      "w"),),
+    ((TOEXEC|TSVTX, "t"),
+     (TSVTX,        "T"),
+     (TOEXEC,       "x"))
+)
+
+def filemode(mode):
+    """Convert a file's mode to a string of the form
+       -rwxrwxrwx.
+       Used by TarFile.list()
+    """
+    perm = []
+    for table in filemode_table:
+        for bit, char in table:
+            if mode & bit == bit:
+                perm.append(char)
+                break
+        else:
+            perm.append("-")
+    return "".join(perm)
+
+class TarError(Exception):
+    """Base exception."""
+    pass
+class ExtractError(TarError):
+    """General exception for extract errors."""
+    pass
+class ReadError(TarError):
+    """Exception for unreadble tar archives."""
+    pass
+class CompressionError(TarError):
+    """Exception for unavailable compression methods."""
+    pass
+class StreamError(TarError):
+    """Exception for unsupported operations on stream-like TarFiles."""
+    pass
+class HeaderError(TarError):
+    """Base exception for header errors."""
+    pass
+class EmptyHeaderError(HeaderError):
+    """Exception for empty headers."""
+    pass
+class TruncatedHeaderError(HeaderError):
+    """Exception for truncated headers."""
+    pass
+class EOFHeaderError(HeaderError):
+    """Exception for end of file headers."""
+    pass
+class InvalidHeaderError(HeaderError):
+    """Exception for invalid headers."""
+    pass
+class SubsequentHeaderError(HeaderError):
+    """Exception for missing and invalid extended headers."""
+    pass
+
+#---------------------------
+# internal stream interface
+#---------------------------
+class _LowLevelFile:
+    """Low-level file object. Supports reading and writing.
+       It is used instead of a regular file object for streaming
+       access.
+    """
+
+    def __init__(self, name, mode):
+        mode = {
+            "r": os.O_RDONLY,
+            "w": os.O_WRONLY | os.O_CREAT | os.O_TRUNC,
+        }[mode]
+        if hasattr(os, "O_BINARY"):
+            mode |= os.O_BINARY
+        self.fd = os.open(name, mode, 0666)
+
+    def close(self):
+        os.close(self.fd)
+
+    def read(self, size):
+        return os.read(self.fd, size)
+
+    def write(self, s):
+        os.write(self.fd, s)
+
+class _Stream:
+    """Class that serves as an adapter between TarFile and
+       a stream-like object.  The stream-like object only
+       needs to have a read() or write() method and is accessed
+       blockwise.  Use of gzip or bzip2 compression is possible.
+       A stream-like object could be for example: sys.stdin,
+       sys.stdout, a socket, a tape device etc.
+
+       _Stream is intended to be used only internally.
+    """
+
+    def __init__(self, name, mode, comptype, fileobj, bufsize):
+        """Construct a _Stream object.
+        """
+        self._extfileobj = True
+        if fileobj is None:
+            fileobj = _LowLevelFile(name, mode)
+            self._extfileobj = False
+
+        if comptype == '*':
+            # Enable transparent compression detection for the
+            # stream interface
+            fileobj = _StreamProxy(fileobj)
+            comptype = fileobj.getcomptype()
+
+        self.name     = name or ""
+        self.mode     = mode
+        self.comptype = comptype
+        self.fileobj  = fileobj
+        self.bufsize  = bufsize
+        self.buf      = ""
+        self.pos      = 0L
+        self.closed   = False
+
+        if comptype == "gz":
+            try:
+                import zlib
+            except ImportError:
+                raise CompressionError("zlib module is not available")
+            self.zlib = zlib
+            self.crc = zlib.crc32("") & 0xffffffffL
+            if mode == "r":
+                self._init_read_gz()
+            else:
+                self._init_write_gz()
+
+        if comptype == "bz2":
+            try:
+                import bz2
+            except ImportError:
+                raise CompressionError("bz2 module is not available")
+            if mode == "r":
+                self.dbuf = ""
+                self.cmp = bz2.BZ2Decompressor()
+            else:
+                self.cmp = bz2.BZ2Compressor()
+
+    def __del__(self):
+        if hasattr(self, "closed") and not self.closed:
+            self.close()
+
+    def _init_write_gz(self):
+        """Initialize for writing with gzip compression.
+        """
+        self.cmp = self.zlib.compressobj(9, self.zlib.DEFLATED,
+                                            -self.zlib.MAX_WBITS,
+                                            self.zlib.DEF_MEM_LEVEL,
+                                            0)
+        timestamp = struct.pack("<L", long(time.time()))
+        self.__write("\037\213\010\010%s\002\377" % timestamp)
+        if self.name.endswith(".gz"):
+            self.name = self.name[:-3]
+        self.__write(self.name + NUL)
+
+    def write(self, s):
+        """Write string s to the stream.
+        """
+        if self.comptype == "gz":
+            self.crc = self.zlib.crc32(s, self.crc) & 0xffffffffL
+        self.pos += len(s)
+        if self.comptype != "tar":
+            s = self.cmp.compress(s)
+        self.__write(s)
+
+    def __write(self, s):
+        """Write string s to the stream if a whole new block
+           is ready to be written.
+        """
+        self.buf += s
+        while len(self.buf) > self.bufsize:
+            self.fileobj.write(self.buf[:self.bufsize])
+            self.buf = self.buf[self.bufsize:]
+
+    def close(self):
+        """Close the _Stream object. No operation should be
+           done on it afterwards.
+        """
+        if self.closed:
+            return
+
+        if self.mode == "w" and self.comptype != "tar":
+            self.buf += self.cmp.flush()
+
+        if self.mode == "w" and self.buf:
+            self.fileobj.write(self.buf)
+            self.buf = ""
+            if self.comptype == "gz":
+                # The native zlib crc is an unsigned 32-bit integer, but
+                # the Python wrapper implicitly casts that to a signed C
+                # long.  So, on a 32-bit box self.crc may "look negative",
+                # while the same crc on a 64-bit box may "look positive".
+                # To avoid irksome warnings from the `struct` module, force
+                # it to look positive on all boxes.
+                self.fileobj.write(struct.pack("<L", self.crc & 0xffffffffL))
+                self.fileobj.write(struct.pack("<L", self.pos & 0xffffFFFFL))
+
+        if not self._extfileobj:
+            self.fileobj.close()
+
+        self.closed = True
+
+    def _init_read_gz(self):
+        """Initialize for reading a gzip compressed fileobj.
+        """
+        self.cmp = self.zlib.decompressobj(-self.zlib.MAX_WBITS)
+        self.dbuf = ""
+
+        # taken from gzip.GzipFile with some alterations
+        if self.__read(2) != "\037\213":
+            raise ReadError("not a gzip file")
+        if self.__read(1) != "\010":
+            raise CompressionError("unsupported compression method")
+
+        flag = ord(self.__read(1))
+        self.__read(6)
+
+        if flag & 4:
+            xlen = ord(self.__read(1)) + 256 * ord(self.__read(1))
+            self.read(xlen)
+        if flag & 8:
+            while True:
+                s = self.__read(1)
+                if not s or s == NUL:
+                    break
+        if flag & 16:
+            while True:
+                s = self.__read(1)
+                if not s or s == NUL:
+                    break
+        if flag & 2:
+            self.__read(2)
+
+    def tell(self):
+        """Return the stream's file pointer position.
+        """
+        return self.pos
+
+    def seek(self, pos=0):
+        """Set the stream's file pointer to pos. Negative seeking
+           is forbidden.
+        """
+        if pos - self.pos >= 0:
+            blocks, remainder = divmod(pos - self.pos, self.bufsize)
+            for i in xrange(blocks):
+                self.read(self.bufsize)
+            self.read(remainder)
+        else:
+            raise StreamError("seeking backwards is not allowed")
+        return self.pos
+
+    def read(self, size=None):
+        """Return the next size number of bytes from the stream.
+           If size is not defined, return all bytes of the stream
+           up to EOF.
+        """
+        if size is None:
+            t = []
+            while True:
+                buf = self._read(self.bufsize)
+                if not buf:
+                    break
+                t.append(buf)
+            buf = "".join(t)
+        else:
+            buf = self._read(size)
+        self.pos += len(buf)
+        return buf
+
+    def _read(self, size):
+        """Return size bytes from the stream.
+        """
+        if self.comptype == "tar":
+            return self.__read(size)
+
+        c = len(self.dbuf)
+        t = [self.dbuf]
+        while c < size:
+            buf = self.__read(self.bufsize)
+            if not buf:
+                break
+            try:
+                buf = self.cmp.decompress(buf)
+            except IOError:
+                raise ReadError("invalid compressed data")
+            t.append(buf)
+            c += len(buf)
+        t = "".join(t)
+        self.dbuf = t[size:]
+        return t[:size]
+
+    def __read(self, size):
+        """Return size bytes from stream. If internal buffer is empty,
+           read another block from the stream.
+        """
+        c = len(self.buf)
+        t = [self.buf]
+        while c < size:
+            buf = self.fileobj.read(self.bufsize)
+            if not buf:
+                break
+            t.append(buf)
+            c += len(buf)
+        t = "".join(t)
+        self.buf = t[size:]
+        return t[:size]
+# class _Stream
+
+class _StreamProxy(object):
+    """Small proxy class that enables transparent compression
+       detection for the Stream interface (mode 'r|*').
+    """
+
+    def __init__(self, fileobj):
+        self.fileobj = fileobj
+        self.buf = self.fileobj.read(BLOCKSIZE)
+
+    def read(self, size):
+        self.read = self.fileobj.read
+        return self.buf
+
+    def getcomptype(self):
+        if self.buf.startswith("\037\213\010"):
+            return "gz"
+        if self.buf.startswith("BZh91"):
+            return "bz2"
+        return "tar"
+
+    def close(self):
+        self.fileobj.close()
+# class StreamProxy
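The magic bytes tested by getcomptype() above can be reproduced with the standard compression modules; a standalone Python 2 sketch:

import gzip, bz2, StringIO

buf = StringIO.StringIO()
gz = gzip.GzipFile(fileobj=buf, mode="w")
gz.write("x")
gz.close()
assert buf.getvalue().startswith("\037\213\010")    # gzip magic plus deflate method byte
assert bz2.compress("x").startswith("BZh91")        # "BZh", level 9, first block magic byte '1'
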
+
+class _BZ2Proxy(object):
+    """Small proxy class that enables external file object
+       support for "r:bz2" and "w:bz2" modes. This is actually
+       a workaround for a limitation in bz2 module's BZ2File
+       class which (unlike gzip.GzipFile) has no support for
+       a file object argument.
+    """
+
+    blocksize = 16 * 1024
+
+    def __init__(self, fileobj, mode):
+        self.fileobj = fileobj
+        self.mode = mode
+        self.name = getattr(self.fileobj, "name", None)
+        self.init()
+
+    def init(self):
+        import bz2
+        self.pos = 0
+        if self.mode == "r":
+            self.bz2obj = bz2.BZ2Decompressor()
+            self.fileobj.seek(0)
+            self.buf = ""
+        else:
+            self.bz2obj = bz2.BZ2Compressor()
+
+    def read(self, size):
+        b = [self.buf]
+        x = len(self.buf)
+        while x < size:
+            raw = self.fileobj.read(self.blocksize)
+            if not raw:
+                break
+            data = self.bz2obj.decompress(raw)
+            b.append(data)
+            x += len(data)
+        self.buf = "".join(b)
+
+        buf = self.buf[:size]
+        self.buf = self.buf[size:]
+        self.pos += len(buf)
+        return buf
+
+    def seek(self, pos):
+        if pos < self.pos:
+            self.init()
+        self.read(pos - self.pos)
+
+    def tell(self):
+        return self.pos
+
+    def write(self, data):
+        self.pos += len(data)
+        raw = self.bz2obj.compress(data)
+        self.fileobj.write(raw)
+
+    def close(self):
+        if self.mode == "w":
+            raw = self.bz2obj.flush()
+            self.fileobj.write(raw)
+# class _BZ2Proxy
+
+#------------------------
+# Extraction file object
+#------------------------
+class _FileInFile(object):
+    """A thin wrapper around an existing file object that
+       provides a part of its data as an individual file
+       object.
+    """
+
+    def __init__(self, fileobj, offset, size, sparse=None):
+        self.fileobj = fileobj
+        self.offset = offset
+        self.size = size
+        self.sparse = sparse
+        self.position = 0
+
+    def tell(self):
+        """Return the current file position.
+        """
+        return self.position
+
+    def seek(self, position):
+        """Seek to a position in the file.
+        """
+        self.position = position
+
+    def read(self, size=None):
+        """Read data from the file.
+        """
+        if size is None:
+            size = self.size - self.position
+        else:
+            size = min(size, self.size - self.position)
+
+        if self.sparse is None:
+            return self.readnormal(size)
+        else:
+            return self.readsparse(size)
+
+    def readnormal(self, size):
+        """Read operation for regular files.
+        """
+        self.fileobj.seek(self.offset + self.position)
+        self.position += size
+        return self.fileobj.read(size)
+
+    def readsparse(self, size):
+        """Read operation for sparse files.
+        """
+        data = []
+        while size > 0:
+            buf = self.readsparsesection(size)
+            if not buf:
+                break
+            size -= len(buf)
+            data.append(buf)
+        return "".join(data)
+
+    def readsparsesection(self, size):
+        """Read a single section of a sparse file.
+        """
+        section = self.sparse.find(self.position)
+
+        if section is None:
+            return ""
+
+        size = min(size, section.offset + section.size - self.position)
+
+        if isinstance(section, _data):
+            realpos = section.realpos + self.position - section.offset
+            self.fileobj.seek(self.offset + realpos)
+            self.position += size
+            return self.fileobj.read(size)
+        else:
+            self.position += size
+            return NUL * size
+#class _FileInFile
+
+
+class ExFileObject(object):
+    """File-like object for reading an archive member.
+       Is returned by TarFile.extractfile().
+    """
+    blocksize = 1024
+
+    def __init__(self, tarfile, tarinfo):
+        self.fileobj = _FileInFile(tarfile.fileobj,
+                                   tarinfo.offset_data,
+                                   tarinfo.size,
+                                   getattr(tarinfo, "sparse", None))
+        self.name = tarinfo.name
+        self.mode = "r"
+        self.closed = False
+        self.size = tarinfo.size
+
+        self.position = 0
+        self.buffer = ""
+
+    def read(self, size=None):
+        """Read at most size bytes from the file. If size is not
+           present or None, read all data until EOF is reached.
+        """
+        if self.closed:
+            raise ValueError("I/O operation on closed file")
+
+        buf = ""
+        if self.buffer:
+            if size is None:
+                buf = self.buffer
+                self.buffer = ""
+            else:
+                buf = self.buffer[:size]
+                self.buffer = self.buffer[size:]
+
+        if size is None:
+            buf += self.fileobj.read()
+        else:
+            buf += self.fileobj.read(size - len(buf))
+
+        self.position += len(buf)
+        return buf
+
+    def readline(self, size=-1):
+        """Read one entire line from the file. If size is present
+           and non-negative, return a string with at most that
+           size, which may be an incomplete line.
+        """
+        if self.closed:
+            raise ValueError("I/O operation on closed file")
+
+        if "\n" in self.buffer:
+            pos = self.buffer.find("\n") + 1
+        else:
+            buffers = [self.buffer]
+            while True:
+                buf = self.fileobj.read(self.blocksize)
+                buffers.append(buf)
+                if not buf or "\n" in buf:
+                    self.buffer = "".join(buffers)
+                    pos = self.buffer.find("\n") + 1
+                    if pos == 0:
+                        # no newline found.
+                        pos = len(self.buffer)
+                    break
+
+        if size != -1:
+            pos = min(size, pos)
+
+        buf = self.buffer[:pos]
+        self.buffer = self.buffer[pos:]
+        self.position += len(buf)
+        return buf
+
+    def readlines(self):
+        """Return a list with all remaining lines.
+        """
+        result = []
+        while True:
+            line = self.readline()
+            if not line: break
+            result.append(line)
+        return result
+
+    def tell(self):
+        """Return the current file position.
+        """
+        if self.closed:
+            raise ValueError("I/O operation on closed file")
+
+        return self.position
+
+    def seek(self, pos, whence=0):
+        """Seek to a position in the file.
+        """
+        if self.closed:
+            raise ValueError("I/O operation on closed file")
+
+        if whence == 0:
+            self.position = min(max(pos, 0), self.size)
+        elif whence == 1:
+            if pos < 0:
+                self.position = max(self.position + pos, 0)
+            else:
+                self.position = min(self.position + pos, self.size)
+        elif whence == 2:
+            self.position = max(min(self.size + pos, self.size), 0)
+        else:
+            raise ValueError("Invalid argument")
+
+        self.buffer = ""
+        self.fileobj.seek(self.position)
+
+    def close(self):
+        """Close the file object.
+        """
+        self.closed = True
+
+    def __iter__(self):
+        """Get an iterator over the file's lines.
+        """
+        while True:
+            line = self.readline()
+            if not line:
+                break
+            yield line
+#class ExFileObject
+
+#------------------
+# Exported Classes
+#------------------
+class TarInfo(object):
+    """Informational class which holds the details about an
+       archive member given by a tar header block.
+       TarInfo objects are returned by TarFile.getmember(),
+       TarFile.getmembers() and TarFile.gettarinfo() and are
+       usually created internally.
+    """
+
+    def __init__(self, name=""):
+        """Construct a TarInfo object. name is the optional name
+           of the member.
+        """
+        self.name = name        # member name
+        self.mode = 0644        # file permissions
+        self.uid = 0            # user id
+        self.gid = 0            # group id
+        self.size = 0           # file size
+        self.mtime = 0          # modification time
+        self.chksum = 0         # header checksum
+        self.type = REGTYPE     # member type
+        self.linkname = ""      # link name
+        self.uname = ""         # user name
+        self.gname = ""         # group name
+        self.devmajor = 0       # device major number
+        self.devminor = 0       # device minor number
+
+        self.offset = 0         # the tar header starts here
+        self.offset_data = 0    # the file's data starts here
+
+        self.pax_headers = {}   # pax header information
+
+    # In pax headers the "name" and "linkname" field are called
+    # "path" and "linkpath".
+    def _getpath(self):
+        return self.name
+    def _setpath(self, name):
+        self.name = name
+    path = property(_getpath, _setpath)
+
+    def _getlinkpath(self):
+        return self.linkname
+    def _setlinkpath(self, linkname):
+        self.linkname = linkname
+    linkpath = property(_getlinkpath, _setlinkpath)
+
+    def __repr__(self):
+        return "<%s %r at %#x>" % (self.__class__.__name__,self.name,id(self))
+
+    def get_info(self, encoding, errors):
+        """Return the TarInfo's attributes as a dictionary.
+        """
+        info = {
+            "name":     self.name,
+            "mode":     self.mode & 07777,
+            "uid":      self.uid,
+            "gid":      self.gid,
+            "size":     self.size,
+            "mtime":    self.mtime,
+            "chksum":   self.chksum,
+            "type":     self.type,
+            "linkname": self.linkname,
+            "uname":    self.uname,
+            "gname":    self.gname,
+            "devmajor": self.devmajor,
+            "devminor": self.devminor
+        }
+
+        if info["type"] == DIRTYPE and not info["name"].endswith("/"):
+            info["name"] += "/"
+
+        for key in ("name", "linkname", "uname", "gname"):
+            if type(info[key]) is unicode:
+                info[key] = info[key].encode(encoding, errors)
+
+        return info
+
+    def tobuf(self, format=DEFAULT_FORMAT, encoding=ENCODING, errors="strict"):
+        """Return a tar header as a string of 512 byte blocks.
+        """
+        info = self.get_info(encoding, errors)
+
+        if format == USTAR_FORMAT:
+            return self.create_ustar_header(info)
+        elif format == GNU_FORMAT:
+            return self.create_gnu_header(info)
+        elif format == PAX_FORMAT:
+            return self.create_pax_header(info, encoding, errors)
+        else:
+            raise ValueError("invalid format")
+
+    def create_ustar_header(self, info):
+        """Return the object as a ustar header block.
+        """
+        info["magic"] = POSIX_MAGIC
+
+        if len(info["linkname"]) > LENGTH_LINK:
+            raise ValueError("linkname is too long")
+
+        if len(info["name"]) > LENGTH_NAME:
+            info["prefix"], info["name"] = self._posix_split_name(info["name"])
+
+        return self._create_header(info, USTAR_FORMAT)
+
+    def create_gnu_header(self, info):
+        """Return the object as a GNU header block sequence.
+        """
+        info["magic"] = GNU_MAGIC
+
+        buf = ""
+        if len(info["linkname"]) > LENGTH_LINK:
+            buf += self._create_gnu_long_header(info["linkname"], GNUTYPE_LONGLINK)
+
+        if len(info["name"]) > LENGTH_NAME:
+            buf += self._create_gnu_long_header(info["name"], GNUTYPE_LONGNAME)
+
+        return buf + self._create_header(info, GNU_FORMAT)
+
+    def create_pax_header(self, info, encoding, errors):
+        """Return the object as a ustar header block. If it cannot be
+           represented this way, prepend a pax extended header sequence
+           with supplement information.
+        """
+        info["magic"] = POSIX_MAGIC
+        pax_headers = self.pax_headers.copy()
+
+        # Test string fields for values that exceed the field length or cannot
+        # be represented in ASCII encoding.
+        for name, hname, length in (
+                ("name", "path", LENGTH_NAME), ("linkname", "linkpath", LENGTH_LINK),
+                ("uname", "uname", 32), ("gname", "gname", 32)):
+
+            if hname in pax_headers:
+                # The pax header has priority.
+                continue
+
+            val = info[name].decode(encoding, errors)
+
+            # Try to encode the string as ASCII.
+            try:
+                val.encode("ascii")
+            except UnicodeEncodeError:
+                pax_headers[hname] = val
+                continue
+
+            if len(info[name]) > length:
+                pax_headers[hname] = val
+
+        # Test number fields for values that exceed the field limit or values
+        # that like to be stored as float.
+        for name, digits in (("uid", 8), ("gid", 8), ("size", 12), ("mtime", 12)):
+            if name in pax_headers:
+                # The pax header has priority. Avoid overflow.
+                info[name] = 0
+                continue
+
+            val = info[name]
+            if not 0 <= val < 8 ** (digits - 1) or isinstance(val, float):
+                pax_headers[name] = unicode(val)
+                info[name] = 0
+
+        # Create a pax extended header if necessary.
+        if pax_headers:
+            buf = self._create_pax_generic_header(pax_headers)
+        else:
+            buf = ""
+
+        return buf + self._create_header(info, USTAR_FORMAT)
+
+    @classmethod
+    def create_pax_global_header(cls, pax_headers):
+        """Return the object as a pax global header block sequence.
+        """
+        return cls._create_pax_generic_header(pax_headers, type=XGLTYPE)
+
+    def _posix_split_name(self, name):
+        """Split a name longer than 100 chars into a prefix
+           and a name part.
+        """
+        prefix = name[:LENGTH_PREFIX + 1]
+        while prefix and prefix[-1] != "/":
+            prefix = prefix[:-1]
+
+        name = name[len(prefix):]
+        prefix = prefix[:-1]
+
+        if not prefix or len(name) > LENGTH_NAME:
+            raise ValueError("name is too long")
+        return prefix, name
+
+    @staticmethod
+    def _create_header(info, format):
+        """Return a header block. info is a dictionary with file
+           information, format must be one of the *_FORMAT constants.
+        """
+        parts = [
+            stn(info.get("name", ""), 100),
+            itn(info.get("mode", 0) & 07777, 8, format),
+            itn(info.get("uid", 0), 8, format),
+            itn(info.get("gid", 0), 8, format),
+            itn(info.get("size", 0), 12, format),
+            itn(info.get("mtime", 0), 12, format),
+            "        ", # checksum field
+            info.get("type", REGTYPE),
+            stn(info.get("linkname", ""), 100),
+            stn(info.get("magic", POSIX_MAGIC), 8),
+            stn(info.get("uname", ""), 32),
+            stn(info.get("gname", ""), 32),
+            itn(info.get("devmajor", 0), 8, format),
+            itn(info.get("devminor", 0), 8, format),
+            stn(info.get("prefix", ""), 155)
+        ]
+
+        buf = struct.pack("%ds" % BLOCKSIZE, "".join(parts))
+        chksum = calc_chksums(buf[-BLOCKSIZE:])[0]
+        buf = buf[:-364] + "%06o\0" % chksum + buf[-357:]
+        return buf
+
+    @staticmethod
+    def _create_payload(payload):
+        """Return the string payload filled with zero bytes
+           up to the next 512 byte border.
+        """
+        blocks, remainder = divmod(len(payload), BLOCKSIZE)
+        if remainder > 0:
+            payload += (BLOCKSIZE - remainder) * NUL
+        return payload
+
+    @classmethod
+    def _create_gnu_long_header(cls, name, type):
+        """Return a GNUTYPE_LONGNAME or GNUTYPE_LONGLINK sequence
+           for name.
+        """
+        name += NUL
+
+        info = {}
+        info["name"] = "././@LongLink"
+        info["type"] = type
+        info["size"] = len(name)
+        info["magic"] = GNU_MAGIC
+
+        # create extended header + name blocks.
+        return cls._create_header(info, USTAR_FORMAT) + \
+                cls._create_payload(name)
+
+    @classmethod
+    def _create_pax_generic_header(cls, pax_headers, type=XHDTYPE):
+        """Return a POSIX.1-2001 extended or global header sequence
+           that contains a list of keyword, value pairs. The values
+           must be unicode objects.
+        """
+        records = []
+        for keyword, value in pax_headers.iteritems():
+            keyword = keyword.encode("utf8")
+            value = value.encode("utf8")
+            l = len(keyword) + len(value) + 3   # ' ' + '=' + '\n'
+            n = p = 0
+            while True:
+                n = l + len(str(p))
+                if n == p:
+                    break
+                p = n
+            records.append("%d %s=%s\n" % (p, keyword, value))
+        records = "".join(records)
+
+        # We use a hardcoded "././@PaxHeader" name like star does
+        # instead of the one that POSIX recommends.
+        info = {}
+        info["name"] = "././@PaxHeader"
+        info["type"] = type
+        info["size"] = len(records)
+        info["magic"] = POSIX_MAGIC
+
+        # Create pax header + record blocks.
+        return cls._create_header(info, USTAR_FORMAT) + \
+                cls._create_payload(records)
+
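A worked, standalone example of the self-referential record length computed above: the leading decimal counts the whole record, including its own digits, the separating space, the '=', and the trailing newline.

keyword, value = "path", "file.txt"
l = len(keyword) + len(value) + 3        # ' ' + '=' + '\n'
n = p = 0
while True:
    n = l + len(str(p))
    if n == p:
        break
    p = n
record = "%d %s=%s\n" % (p, keyword, value)
assert record == "17 path=file.txt\n"
assert len(record) == 17                 # the length field describes itself
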
+    @classmethod
+    def frombuf(cls, buf):
+        """Construct a TarInfo object from a 512 byte string buffer.
+        """
+        if len(buf) == 0:
+            raise EmptyHeaderError("empty header")
+        if len(buf) != BLOCKSIZE:
+            raise TruncatedHeaderError("truncated header")
+        if buf.count(NUL) == BLOCKSIZE:
+            raise EOFHeaderError("end of file header")
+
+        chksum = nti(buf[148:156])
+        if chksum not in calc_chksums(buf):
+            raise InvalidHeaderError("bad checksum")
+
+        obj = cls()
+        obj.buf = buf
+        obj.name = nts(buf[0:100])
+        obj.mode = nti(buf[100:108])
+        obj.uid = nti(buf[108:116])
+        obj.gid = nti(buf[116:124])
+        obj.size = nti(buf[124:136])
+        obj.mtime = nti(buf[136:148])
+        obj.chksum = chksum
+        obj.type = buf[156:157]
+        obj.linkname = nts(buf[157:257])
+        obj.uname = nts(buf[265:297])
+        obj.gname = nts(buf[297:329])
+        obj.devmajor = nti(buf[329:337])
+        obj.devminor = nti(buf[337:345])
+        prefix = nts(buf[345:500])
+
+        # Old V7 tar format represents a directory as a regular
+        # file with a trailing slash.
+        if obj.type == AREGTYPE and obj.name.endswith("/"):
+            obj.type = DIRTYPE
+
+        # Remove redundant slashes from directories.
+        if obj.isdir():
+            obj.name = obj.name.rstrip("/")
+
+        # Reconstruct a ustar longname.
+        if prefix and obj.type not in GNU_TYPES:
+            obj.name = prefix + "/" + obj.name
+        return obj
+
+    @classmethod
+    def fromtarfile(cls, tarfile):
+        """Return the next TarInfo object from TarFile object
+           tarfile.
+        """
+        buf = tarfile.fileobj.read(BLOCKSIZE)
+        obj = cls.frombuf(buf)
+        obj.offset = tarfile.fileobj.tell() - BLOCKSIZE
+        return obj._proc_member(tarfile)
+
+    #--------------------------------------------------------------------------
+    # The following are methods that are called depending on the type of a
+    # member. The entry point is _proc_member() which can be overridden in a
+    # subclass to add custom _proc_*() methods. A _proc_*() method MUST
+    # implement the following
+    # operations:
+    # 1. Set self.offset_data to the position where the data blocks begin,
+    #    if there is data that follows.
+    # 2. Set tarfile.offset to the position where the next member's header will
+    #    begin.
+    # 3. Return self or another valid TarInfo object.
+    def _proc_member(self, tarfile):
+        """Choose the right processing method depending on
+           the type and call it.
+        """
+        if self.type in (GNUTYPE_LONGNAME, GNUTYPE_LONGLINK):
+            return self._proc_gnulong(tarfile)
+        elif self.type == GNUTYPE_SPARSE:
+            return self._proc_sparse(tarfile)
+        elif self.type in (XHDTYPE, XGLTYPE, SOLARIS_XHDTYPE):
+            return self._proc_pax(tarfile)
+        else:
+            return self._proc_builtin(tarfile)
+
+    def _proc_builtin(self, tarfile):
+        """Process a builtin type or an unknown type which
+           will be treated as a regular file.
+        """
+        self.offset_data = tarfile.fileobj.tell()
+        offset = self.offset_data
+        if self.isreg() or self.type not in SUPPORTED_TYPES:
+            # Skip the following data blocks.
+            offset += self._block(self.size)
+        tarfile.offset = offset
+
+        # Patch the TarInfo object with saved global
+        # header information.
+        self._apply_pax_info(tarfile.pax_headers, tarfile.encoding, tarfile.errors)
+
+        return self
+
+    def _proc_gnulong(self, tarfile):
+        """Process the blocks that hold a GNU longname
+           or longlink member.
+        """
+        buf = tarfile.fileobj.read(self._block(self.size))
+
+        # Fetch the next header and process it.
+        try:
+            next = self.fromtarfile(tarfile)
+        except HeaderError:
+            raise SubsequentHeaderError("missing or bad subsequent header")
+
+        # Patch the TarInfo object from the next header with
+        # the longname information.
+        next.offset = self.offset
+        if self.type == GNUTYPE_LONGNAME:
+            next.name = nts(buf)
+        elif self.type == GNUTYPE_LONGLINK:
+            next.linkname = nts(buf)
+
+        return next
+
+    def _proc_sparse(self, tarfile):
+        """Process a GNU sparse header plus extra headers.
+        """
+        buf = self.buf
+        sp = _ringbuffer()
+        pos = 386
+        lastpos = 0L
+        realpos = 0L
+        # There are 4 possible sparse structs in the
+        # first header.
+        for i in xrange(4):
+            try:
+                offset = nti(buf[pos:pos + 12])
+                numbytes = nti(buf[pos + 12:pos + 24])
+            except ValueError:
+                break
+            if offset > lastpos:
+                sp.append(_hole(lastpos, offset - lastpos))
+            sp.append(_data(offset, numbytes, realpos))
+            realpos += numbytes
+            lastpos = offset + numbytes
+            pos += 24
+
+        isextended = ord(buf[482])
+        origsize = nti(buf[483:495])
+
+        # If the isextended flag is given,
+        # there are extra headers to process.
+        while isextended == 1:
+            buf = tarfile.fileobj.read(BLOCKSIZE)
+            pos = 0
+            for i in xrange(21):
+                try:
+                    offset = nti(buf[pos:pos + 12])
+                    numbytes = nti(buf[pos + 12:pos + 24])
+                except ValueError:
+                    break
+                if offset > lastpos:
+                    sp.append(_hole(lastpos, offset - lastpos))
+                sp.append(_data(offset, numbytes, realpos))
+                realpos += numbytes
+                lastpos = offset + numbytes
+                pos += 24
+            isextended = ord(buf[504])
+
+        if lastpos < origsize:
+            sp.append(_hole(lastpos, origsize - lastpos))
+
+        self.sparse = sp
+
+        self.offset_data = tarfile.fileobj.tell()
+        tarfile.offset = self.offset_data + self._block(self.size)
+        self.size = origsize
+
+        return self
+
+    def _proc_pax(self, tarfile):
+        """Process an extended or global header as described in
+           POSIX.1-2001.
+        """
+        # Read the header information.
+        buf = tarfile.fileobj.read(self._block(self.size))
+
+        # A pax header stores supplemental information for either
+        # the following file (extended) or all following files
+        # (global).
+        if self.type == XGLTYPE:
+            pax_headers = tarfile.pax_headers
+        else:
+            pax_headers = tarfile.pax_headers.copy()
+
+        # Parse pax header information. A record looks like that:
+        # "%d %s=%s\n" % (length, keyword, value). length is the size
+        # of the complete record including the length field itself and
+        # the newline. keyword and value are both UTF-8 encoded strings.
+        regex = re.compile(r"(\d+) ([^=]+)=", re.U)
+        pos = 0
+        while True:
+            match = regex.match(buf, pos)
+            if not match:
+                break
+
+            length, keyword = match.groups()
+            length = int(length)
+            value = buf[match.end(2) + 1:match.start(1) + length - 1]
+
+            keyword = keyword.decode("utf8")
+            value = value.decode("utf8")
+
+            pax_headers[keyword] = value
+            pos += length
+
+        # Fetch the next header.
+        try:
+            next = self.fromtarfile(tarfile)
+        except HeaderError:
+            raise SubsequentHeaderError("missing or bad subsequent header")
+
+        if self.type in (XHDTYPE, SOLARIS_XHDTYPE):
+            # Patch the TarInfo object with the extended header info.
+            next._apply_pax_info(pax_headers, tarfile.encoding, tarfile.errors)
+            next.offset = self.offset
+
+            if "size" in pax_headers:
+                # If the extended header replaces the size field,
+                # we need to recalculate the offset where the next
+                # header starts.
+                offset = next.offset_data
+                if next.isreg() or next.type not in SUPPORTED_TYPES:
+                    offset += next._block(next.size)
+                tarfile.offset = offset
+
+        return next
+
+    def _apply_pax_info(self, pax_headers, encoding, errors):
+        """Replace fields with supplemental information from a previous
+           pax extended or global header.
+        """
+        for keyword, value in pax_headers.iteritems():
+            if keyword not in PAX_FIELDS:
+                continue
+
+            if keyword == "path":
+                value = value.rstrip("/")
+
+            if keyword in PAX_NUMBER_FIELDS:
+                try:
+                    value = PAX_NUMBER_FIELDS[keyword](value)
+                except ValueError:
+                    value = 0
+            else:
+                value = uts(value, encoding, errors)
+
+            setattr(self, keyword, value)
+
+        self.pax_headers = pax_headers.copy()
+
+    def _block(self, count):
+        """Round up a byte count by BLOCKSIZE and return it,
+           e.g. _block(834) => 1024.
+        """
+        blocks, remainder = divmod(count, BLOCKSIZE)
+        if remainder:
+            blocks += 1
+        return blocks * BLOCKSIZE
+
+    def isreg(self):
+        return self.type in REGULAR_TYPES
+    def isfile(self):
+        return self.isreg()
+    def isdir(self):
+        return self.type == DIRTYPE
+    def issym(self):
+        return self.type == SYMTYPE
+    def islnk(self):
+        return self.type == LNKTYPE
+    def ischr(self):
+        return self.type == CHRTYPE
+    def isblk(self):
+        return self.type == BLKTYPE
+    def isfifo(self):
+        return self.type == FIFOTYPE
+    def issparse(self):
+        return self.type == GNUTYPE_SPARSE
+    def isdev(self):
+        return self.type in (CHRTYPE, BLKTYPE, FIFOTYPE)
+# class TarInfo
+
+class TarFile(object):
+    """The TarFile Class provides an interface to tar archives.
+    """
+
+    debug = 0                   # May be set from 0 (no msgs) to 3 (all msgs)
+
+    dereference = False         # If true, add content of linked file to the
+                                # tar file, else the link.
+
+    ignore_zeros = False        # If true, skips empty or invalid blocks and
+                                # continues processing.
+
+    errorlevel = 1              # If 0, fatal errors only appear in debug
+                                # messages (if debug >= 0). If > 0, errors
+                                # are passed to the caller as exceptions.
+
+    format = DEFAULT_FORMAT     # The format to use when creating an archive.
+
+    encoding = ENCODING         # Encoding for 8-bit character strings.
+
+    errors = None               # Error handler for unicode conversion.
+
+    tarinfo = TarInfo           # The default TarInfo class to use.
+
+    fileobject = ExFileObject   # The default ExFileObject class to use.
+
+    def __init__(self, name=None, mode="r", fileobj=None, format=None,
+            tarinfo=None, dereference=None, ignore_zeros=None, encoding=None,
+            errors=None, pax_headers=None, debug=None, errorlevel=None):
+        """Open an (uncompressed) tar archive `name'. `mode' is either 'r' to
+           read from an existing archive, 'a' to append data to an existing
+           file or 'w' to create a new file overwriting an existing one. `mode'
+           defaults to 'r'.
+           If `fileobj' is given, it is used for reading or writing data. If it
+           can be determined, `mode' is overridden by `fileobj's mode.
+           `fileobj' is not closed, when TarFile is closed.
+        """
+        if len(mode) > 1 or mode not in "raw":
+            raise ValueError("mode must be 'r', 'a' or 'w'")
+        self.mode = mode
+        self._mode = {"r": "rb", "a": "r+b", "w": "wb"}[mode]
+
+        if not fileobj:
+            if self.mode == "a" and not os.path.exists(name):
+                # Create nonexistent files in append mode.
+                self.mode = "w"
+                self._mode = "wb"
+            fileobj = bltn_open(name, self._mode)
+            self._extfileobj = False
+        else:
+            if name is None and hasattr(fileobj, "name"):
+                name = fileobj.name
+            if hasattr(fileobj, "mode"):
+                self._mode = fileobj.mode
+            self._extfileobj = True
+        if name:
+            self.name = os.path.abspath(name)
+        else:
+            self.name = None
+        self.fileobj = fileobj
+
+        # Init attributes.
+        if format is not None:
+            self.format = format
+        if tarinfo is not None:
+            self.tarinfo = tarinfo
+        if dereference is not None:
+            self.dereference = dereference
+        if ignore_zeros is not None:
+            self.ignore_zeros = ignore_zeros
+        if encoding is not None:
+            self.encoding = encoding
+
+        if errors is not None:
+            self.errors = errors
+        elif mode == "r":
+            self.errors = "utf-8"
+        else:
+            self.errors = "strict"
+
+        if pax_headers is not None and self.format == PAX_FORMAT:
+            self.pax_headers = pax_headers
+        else:
+            self.pax_headers = {}
+
+        if debug is not None:
+            self.debug = debug
+        if errorlevel is not None:
+            self.errorlevel = errorlevel
+
+        # Init datastructures.
+        self.closed = False
+        self.members = []       # list of members as TarInfo objects
+        self._loaded = False    # flag if all members have been read
+        self.offset = self.fileobj.tell()
+                                # current position in the archive file
+        self.inodes = {}        # dictionary caching the inodes of
+                                # archive members already added
+
+        try:
+            if self.mode == "r":
+                self.firstmember = None
+                self.firstmember = self.next()
+
+            if self.mode == "a":
+                # Move to the end of the archive,
+                # before the first empty block.
+                while True:
+                    self.fileobj.seek(self.offset)
+                    try:
+                        tarinfo = self.tarinfo.fromtarfile(self)
+                        self.members.append(tarinfo)
+                    except EOFHeaderError:
+                        self.fileobj.seek(self.offset)
+                        break
+                    except HeaderError, e:
+                        raise ReadError(str(e))
+
+            if self.mode in "aw":
+                self._loaded = True
+
+                if self.pax_headers:
+                    buf = self.tarinfo.create_pax_global_header(self.pax_headers.copy())
+                    self.fileobj.write(buf)
+                    self.offset += len(buf)
+        except:
+            if not self._extfileobj:
+                self.fileobj.close()
+            self.closed = True
+            raise
+
+    def _getposix(self):
+        return self.format == USTAR_FORMAT
+    def _setposix(self, value):
+        import warnings
+        warnings.warn("use the format attribute instead", DeprecationWarning,
+                      2)
+        if value:
+            self.format = USTAR_FORMAT
+        else:
+            self.format = GNU_FORMAT
+    posix = property(_getposix, _setposix)
+
+    #--------------------------------------------------------------------------
+    # Below are the classmethods which act as alternate constructors to the
+    # TarFile class. The open() method is the only one that is needed for
+    # public use; it is the "super"-constructor and is able to select an
+    # adequate "sub"-constructor for a particular compression using the mapping
+    # from OPEN_METH.
+    #
+    # This concept allows one to subclass TarFile without losing the comfort of
+    # the super-constructor. A sub-constructor is registered and made available
+    # by adding it to the mapping in OPEN_METH.
+
+    @classmethod
+    def open(cls, name=None, mode="r", fileobj=None, bufsize=RECORDSIZE, **kwargs):
+        """Open a tar archive for reading, writing or appending. Return
+           an appropriate TarFile class.
+
+           mode:
+           'r' or 'r:*' open for reading with transparent compression
+           'r:'         open for reading exclusively uncompressed
+           'r:gz'       open for reading with gzip compression
+           'r:bz2'      open for reading with bzip2 compression
+           'a' or 'a:'  open for appending, creating the file if necessary
+           'w' or 'w:'  open for writing without compression
+           'w:gz'       open for writing with gzip compression
+           'w:bz2'      open for writing with bzip2 compression
+
+           'r|*'        open a stream of tar blocks with transparent compression
+           'r|'         open an uncompressed stream of tar blocks for reading
+           'r|gz'       open a gzip compressed stream of tar blocks
+           'r|bz2'      open a bzip2 compressed stream of tar blocks
+           'w|'         open an uncompressed stream for writing
+           'w|gz'       open a gzip compressed stream for writing
+           'w|bz2'      open a bzip2 compressed stream for writing
+        """
+
+        if not name and not fileobj:
+            raise ValueError("nothing to open")
+
+        if mode in ("r", "r:*"):
+            # Find out which *open() is appropriate for opening the file.
+            for comptype in cls.OPEN_METH:
+                func = getattr(cls, cls.OPEN_METH[comptype])
+                if fileobj is not None:
+                    saved_pos = fileobj.tell()
+                try:
+                    return func(name, "r", fileobj, **kwargs)
+                except (ReadError, CompressionError), e:
+                    if fileobj is not None:
+                        fileobj.seek(saved_pos)
+                    continue
+            raise ReadError("file could not be opened successfully")
+
+        elif ":" in mode:
+            filemode, comptype = mode.split(":", 1)
+            filemode = filemode or "r"
+            comptype = comptype or "tar"
+
+            # Select the *open() function according to
+            # given compression.
+            if comptype in cls.OPEN_METH:
+                func = getattr(cls, cls.OPEN_METH[comptype])
+            else:
+                raise CompressionError("unknown compression type %r" % comptype)
+            return func(name, filemode, fileobj, **kwargs)
+
+        elif "|" in mode:
+            filemode, comptype = mode.split("|", 1)
+            filemode = filemode or "r"
+            comptype = comptype or "tar"
+
+            if filemode not in "rw":
+                raise ValueError("mode must be 'r' or 'w'")
+
+            t = cls(name, filemode,
+                    _Stream(name, filemode, comptype, fileobj, bufsize),
+                    **kwargs)
+            t._extfileobj = False
+            return t
+
+        elif mode in "aw":
+            return cls.taropen(name, mode, fileobj, **kwargs)
+
+        raise ValueError("undiscernible mode")
+
+    @classmethod
+    def taropen(cls, name, mode="r", fileobj=None, **kwargs):
+        """Open uncompressed tar archive name for reading or writing.
+        """
+        if len(mode) > 1 or mode not in "raw":
+            raise ValueError("mode must be 'r', 'a' or 'w'")
+        return cls(name, mode, fileobj, **kwargs)
+
+    @classmethod
+    def gzopen(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs):
+        """Open gzip compressed tar archive name for reading or writing.
+           Appending is not allowed.
+        """
+        if len(mode) > 1 or mode not in "rw":
+            raise ValueError("mode must be 'r' or 'w'")
+
+        try:
+            import gzip
+            gzip.GzipFile
+        except (ImportError, AttributeError):
+            raise CompressionError("gzip module is not available")
+
+        if fileobj is None:
+            fileobj = bltn_open(name, mode + "b")
+
+        try:
+            t = cls.taropen(name, mode,
+                gzip.GzipFile(name, mode, compresslevel, fileobj),
+                **kwargs)
+        except IOError:
+            raise ReadError("not a gzip file")
+        t._extfileobj = False
+        return t
+
+    @classmethod
+    def bz2open(cls, name, mode="r", fileobj=None, compresslevel=9, **kwargs):
+        """Open bzip2 compressed tar archive name for reading or writing.
+           Appending is not allowed.
+        """
+        if len(mode) > 1 or mode not in "rw":
+            raise ValueError("mode must be 'r' or 'w'.")
+
+        try:
+            import bz2
+        except ImportError:
+            raise CompressionError("bz2 module is not available")
+
+        if fileobj is not None:
+            fileobj = _BZ2Proxy(fileobj, mode)
+        else:
+            fileobj = bz2.BZ2File(name, mode, compresslevel=compresslevel)
+
+        try:
+            t = cls.taropen(name, mode, fileobj, **kwargs)
+        except (IOError, EOFError):
+            raise ReadError("not a bzip2 file")
+        t._extfileobj = False
+        return t
+
+    # All *open() methods are registered here.
+    OPEN_METH = {
+        "tar": "taropen",   # uncompressed tar
+        "gz":  "gzopen",    # gzip compressed tar
+        "bz2": "bz2open"    # bzip2 compressed tar
+    }
+
+    #--------------------------------------------------------------------------
+    # The public methods which TarFile provides:
+
+    def close(self):
+        """Close the TarFile. In write-mode, two finishing zero blocks are
+           appended to the archive.
+        """
+        if self.closed:
+            return
+
+        if self.mode in "aw":
+            self.fileobj.write(NUL * (BLOCKSIZE * 2))
+            self.offset += (BLOCKSIZE * 2)
+            # fill up the end with zero-blocks
+            # (like option -b20 for tar does)
+            blocks, remainder = divmod(self.offset, RECORDSIZE)
+            if remainder > 0:
+                self.fileobj.write(NUL * (RECORDSIZE - remainder))
+
+        if not self._extfileobj:
+            self.fileobj.close()
+        self.closed = True
+
+    def getmember(self, name):
+        """Return a TarInfo object for member `name'. If `name' cannot be
+           found in the archive, KeyError is raised. If a member occurs more
+           than once in the archive, its last occurrence is assumed to be the
+           most up-to-date version.
+        """
+        tarinfo = self._getmember(name)
+        if tarinfo is None:
+            raise KeyError("filename %r not found" % name)
+        return tarinfo
+
+    def getmembers(self):
+        """Return the members of the archive as a list of TarInfo objects. The
+           list has the same order as the members in the archive.
+        """
+        self._check()
+        if not self._loaded:    # if we want to obtain a list of
+            self._load()        # all members, we first have to
+                                # scan the whole archive.
+        return self.members
+
+    def getnames(self):
+        """Return the members of the archive as a list of their names. It has
+           the same order as the list returned by getmembers().
+        """
+        return [tarinfo.name for tarinfo in self.getmembers()]
+
+    def gettarinfo(self, name=None, arcname=None, fileobj=None):
+        """Create a TarInfo object for either the file `name' or the file
+           object `fileobj' (using os.fstat on its file descriptor). You can
+           modify some of the TarInfo's attributes before you add it using
+           addfile(). If given, `arcname' specifies an alternative name for the
+           file in the archive.
+        """
+        self._check("aw")
+
+        # When fileobj is given, replace name by
+        # fileobj's real name.
+        if fileobj is not None:
+            name = fileobj.name
+
+        # Building the name of the member in the archive.
+        # Backward slashes are converted to forward slashes,
+        # Absolute paths are turned to relative paths.
+        if arcname is None:
+            arcname = name
+        drv, arcname = os.path.splitdrive(arcname)
+        arcname = arcname.replace(os.sep, "/")
+        arcname = arcname.lstrip("/")
+
+        # Now, fill the TarInfo object with
+        # information specific for the file.
+        tarinfo = self.tarinfo()
+        tarinfo.tarfile = self
+
+        # Use os.stat or os.lstat, depending on platform
+        # and if symlinks shall be resolved.
+        if fileobj is None:
+            if hasattr(os, "lstat") and not self.dereference:
+                statres = os.lstat(name)
+            else:
+                statres = os.stat(name)
+        else:
+            statres = os.fstat(fileobj.fileno())
+        linkname = ""
+
+        stmd = statres.st_mode
+        if stat.S_ISREG(stmd):
+            inode = (statres.st_ino, statres.st_dev)
+            if not self.dereference and statres.st_nlink > 1 and \
+                    inode in self.inodes and arcname != self.inodes[inode]:
+                # Is it a hardlink to an already
+                # archived file?
+                type = LNKTYPE
+                linkname = self.inodes[inode]
+            else:
+                # The inode is added only if it's valid.
+                # For win32 it is always 0.
+                type = REGTYPE
+                if inode[0]:
+                    self.inodes[inode] = arcname
+        elif stat.S_ISDIR(stmd):
+            type = DIRTYPE
+        elif stat.S_ISFIFO(stmd):
+            type = FIFOTYPE
+        elif stat.S_ISLNK(stmd):
+            type = SYMTYPE
+            linkname = os.readlink(name)
+        elif stat.S_ISCHR(stmd):
+            type = CHRTYPE
+        elif stat.S_ISBLK(stmd):
+            type = BLKTYPE
+        else:
+            return None
+
+        # Fill the TarInfo object with all
+        # information we can get.
+        tarinfo.name = arcname
+        tarinfo.mode = stmd
+        tarinfo.uid = statres.st_uid
+        tarinfo.gid = statres.st_gid
+        if type == REGTYPE:
+            tarinfo.size = statres.st_size
+        else:
+            tarinfo.size = 0L
+        tarinfo.mtime = statres.st_mtime
+        tarinfo.type = type
+        tarinfo.linkname = linkname
+        if pwd:
+            try:
+                tarinfo.uname = pwd.getpwuid(tarinfo.uid)[0]
+            except KeyError:
+                pass
+        if grp:
+            try:
+                tarinfo.gname = grp.getgrgid(tarinfo.gid)[0]
+            except KeyError:
+                pass
+
+        if type in (CHRTYPE, BLKTYPE):
+            if hasattr(os, "major") and hasattr(os, "minor"):
+                tarinfo.devmajor = os.major(statres.st_rdev)
+                tarinfo.devminor = os.minor(statres.st_rdev)
+        return tarinfo
+
+    def list(self, verbose=True):
+        """Print a table of contents to sys.stdout. If `verbose' is False, only
+           the names of the members are printed. If it is True, an `ls -l'-like
+           output is produced.
+        """
+        self._check()
+
+        for tarinfo in self:
+            if verbose:
+                print filemode(tarinfo.mode),
+                print "%s/%s" % (tarinfo.uname or tarinfo.uid,
+                                 tarinfo.gname or tarinfo.gid),
+                if tarinfo.ischr() or tarinfo.isblk():
+                    print "%10s" % ("%d,%d" \
+                                    % (tarinfo.devmajor, tarinfo.devminor)),
+                else:
+                    print "%10d" % tarinfo.size,
+                print "%d-%02d-%02d %02d:%02d:%02d" \
+                      % time.localtime(tarinfo.mtime)[:6],
+
+            if tarinfo.isdir():
+                print tarinfo.name + "/",
+            else:
+                print tarinfo.name,
+
+            if verbose:
+                if tarinfo.issym():
+                    print "->", tarinfo.linkname,
+                if tarinfo.islnk():
+                    print "link to", tarinfo.linkname,
+            print
+
+    def add(self, name, arcname=None, recursive=True, exclude=None, filter=None):
+        """Add the file `name' to the archive. `name' may be any type of file
+           (directory, fifo, symbolic link, etc.). If given, `arcname'
+           specifies an alternative name for the file in the archive.
+           Directories are added recursively by default. This can be avoided by
+           setting `recursive' to False. `exclude' is a function that should
+           return True for each filename to be excluded. `filter' is a function
+           that expects a TarInfo object argument and returns the changed
+           TarInfo object; if it returns None, the TarInfo object will be
+           excluded from the archive.
+        """
+        self._check("aw")
+
+        if arcname is None:
+            arcname = name
+
+        # Exclude pathnames.
+        if exclude is not None:
+            import warnings
+            warnings.warn("use the filter argument instead",
+                    DeprecationWarning, 2)
+            if exclude(name):
+                self._dbg(2, "tarfile: Excluded %r" % name)
+                return
+
+        # Skip if somebody tries to archive the archive...
+        if self.name is not None and os.path.abspath(name) == self.name:
+            self._dbg(2, "tarfile: Skipped %r" % name)
+            return
+
+        self._dbg(1, name)
+
+        # Create a TarInfo object from the file.
+        tarinfo = self.gettarinfo(name, arcname)
+
+        if tarinfo is None:
+            self._dbg(1, "tarfile: Unsupported type %r" % name)
+            return
+
+        # Change or exclude the TarInfo object.
+        if filter is not None:
+            tarinfo = filter(tarinfo)
+            if tarinfo is None:
+                self._dbg(2, "tarfile: Excluded %r" % name)
+                return
+
+        # Append the tar header and data to the archive.
+        if tarinfo.isreg():
+            f = bltn_open(name, "rb")
+            self.addfile(tarinfo, f)
+            f.close()
+
+        elif tarinfo.isdir():
+            self.addfile(tarinfo)
+            if recursive:
+                for f in os.listdir(name):
+                    self.add(os.path.join(name, f), os.path.join(arcname, f),
+                            recursive, exclude, filter)
+
+        else:
+            self.addfile(tarinfo)
+
+    def addfile(self, tarinfo, fileobj=None):
+        """Add the TarInfo object `tarinfo' to the archive. If `fileobj' is
+           given, tarinfo.size bytes are read from it and added to the archive.
+           You can create TarInfo objects using gettarinfo().
+           On Windows platforms, `fileobj' should always be opened with mode
+           'rb' to avoid problems with the reported file size.
+        """
+        self._check("aw")
+
+        tarinfo = copy.copy(tarinfo)
+
+        buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
+        self.fileobj.write(buf)
+        self.offset += len(buf)
+
+        # If there's data to follow, append it.
+        if fileobj is not None:
+            copyfileobj(fileobj, self.fileobj, tarinfo.size)
+            blocks, remainder = divmod(tarinfo.size, BLOCKSIZE)
+            if remainder > 0:
+                self.fileobj.write(NUL * (BLOCKSIZE - remainder))
+                blocks += 1
+            self.offset += blocks * BLOCKSIZE
+
+        self.members.append(tarinfo)
+
+    def extractall(self, path=".", members=None):
+        """Extract all members from the archive to the current working
+           directory and set owner, modification time and permissions on
+           directories afterwards. `path' specifies a different directory
+           to extract to. `members' is optional and must be a subset of the
+           list returned by getmembers().
+        """
+        directories = []
+
+        if members is None:
+            members = self
+
+        for tarinfo in members:
+            if tarinfo.isdir():
+                # Extract directories with a safe mode.
+                directories.append(tarinfo)
+                tarinfo = copy.copy(tarinfo)
+                tarinfo.mode = 0700
+            self.extract(tarinfo, path)
+
+        # Reverse sort directories.
+        directories.sort(key=operator.attrgetter('name'))
+        directories.reverse()
+
+        # Set correct owner, mtime and filemode on directories.
+        for tarinfo in directories:
+            dirpath = os.path.join(path, tarinfo.name)
+            try:
+                self.chown(tarinfo, dirpath)
+                self.utime(tarinfo, dirpath)
+                self.chmod(tarinfo, dirpath)
+            except ExtractError, e:
+                if self.errorlevel > 1:
+                    raise
+                else:
+                    self._dbg(1, "tarfile: %s" % e)
+
+    def extract(self, member, path=""):
+        """Extract a member from the archive to the current working directory,
+           using its full name. Its file information is extracted as accurately
+           as possible. `member' may be a filename or a TarInfo object. You can
+           specify a different directory using `path'.
+        """
+        self._check("r")
+
+        if isinstance(member, basestring):
+            tarinfo = self.getmember(member)
+        else:
+            tarinfo = member
+
+        # Prepare the link target for makelink().
+        if tarinfo.islnk():
+            tarinfo._link_target = os.path.join(path, tarinfo.linkname)
+
+        try:
+            self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
+        except EnvironmentError, e:
+            if self.errorlevel > 0:
+                raise
+            else:
+                if e.filename is None:
+                    self._dbg(1, "tarfile: %s" % e.strerror)
+                else:
+                    self._dbg(1, "tarfile: %s %r" % (e.strerror, e.filename))
+        except ExtractError, e:
+            if self.errorlevel > 1:
+                raise
+            else:
+                self._dbg(1, "tarfile: %s" % e)
+
+    def extractfile(self, member):
+        """Extract a member from the archive as a file object. `member' may be
+           a filename or a TarInfo object. If `member' is a regular file, a
+           file-like object is returned. If `member' is a link, a file-like
+           object is constructed from the link's target. If `member' is none of
+           the above, None is returned.
+           The file-like object is read-only and provides the following
+           methods: read(), readline(), readlines(), seek() and tell()
+        """
+        self._check("r")
+
+        if isinstance(member, basestring):
+            tarinfo = self.getmember(member)
+        else:
+            tarinfo = member
+
+        if tarinfo.isreg():
+            return self.fileobject(self, tarinfo)
+
+        elif tarinfo.type not in SUPPORTED_TYPES:
+            # If a member's type is unknown, it is treated as a
+            # regular file.
+            return self.fileobject(self, tarinfo)
+
+        elif tarinfo.islnk() or tarinfo.issym():
+            if isinstance(self.fileobj, _Stream):
+                # A small but ugly workaround for the case that someone tries
+                # to extract a (sym)link as a file-object from a non-seekable
+                # stream of tar blocks.
+                raise StreamError("cannot extract (sym)link as file object")
+            else:
+                # A (sym)link's file object is its target's file object.
+                return self.extractfile(self._find_link_target(tarinfo))
+        else:
+            # If there's no data associated with the member (directory, chrdev,
+            # blkdev, etc.), return None instead of a file object.
+            return None
+
+    def _extract_member(self, tarinfo, targetpath):
+        """Extract the TarInfo object tarinfo to a physical
+           file called targetpath.
+        """
+        # Fetch the TarInfo object for the given name
+        # and build the destination pathname, replacing
+        # forward slashes with platform-specific separators.
+        targetpath = targetpath.rstrip("/")
+        targetpath = targetpath.replace("/", os.sep)
+
+        # Create all upper directories.
+        upperdirs = os.path.dirname(targetpath)
+        if upperdirs and not os.path.exists(upperdirs):
+            # Create directories that are not part of the archive with
+            # default permissions.
+            os.makedirs(upperdirs)
+
+        if tarinfo.islnk() or tarinfo.issym():
+            self._dbg(1, "%s -> %s" % (tarinfo.name, tarinfo.linkname))
+        else:
+            self._dbg(1, tarinfo.name)
+
+        if tarinfo.isreg():
+            self.makefile(tarinfo, targetpath)
+        elif tarinfo.isdir():
+            self.makedir(tarinfo, targetpath)
+        elif tarinfo.isfifo():
+            self.makefifo(tarinfo, targetpath)
+        elif tarinfo.ischr() or tarinfo.isblk():
+            self.makedev(tarinfo, targetpath)
+        elif tarinfo.islnk() or tarinfo.issym():
+            self.makelink(tarinfo, targetpath)
+        elif tarinfo.type not in SUPPORTED_TYPES:
+            self.makeunknown(tarinfo, targetpath)
+        else:
+            self.makefile(tarinfo, targetpath)
+
+        self.chown(tarinfo, targetpath)
+        if not tarinfo.issym():
+            self.chmod(tarinfo, targetpath)
+            self.utime(tarinfo, targetpath)
+
+    #--------------------------------------------------------------------------
+    # Below are the different file methods. They are called via
+    # _extract_member() when extract() is called. They can be replaced in a
+    # subclass to implement other functionality.
+
+    def makedir(self, tarinfo, targetpath):
+        """Make a directory called targetpath.
+        """
+        try:
+            # Use a safe mode for the directory, the real mode is set
+            # later in _extract_member().
+            os.mkdir(targetpath, 0700)
+        except EnvironmentError, e:
+            if e.errno != errno.EEXIST:
+                raise
+
+    def makefile(self, tarinfo, targetpath):
+        """Make a file called targetpath.
+        """
+        source = self.extractfile(tarinfo)
+        target = bltn_open(targetpath, "wb")
+        copyfileobj(source, target)
+        source.close()
+        target.close()
+
+    def makeunknown(self, tarinfo, targetpath):
+        """Make a file from a TarInfo object with an unknown type
+           at targetpath.
+        """
+        self.makefile(tarinfo, targetpath)
+        self._dbg(1, "tarfile: Unknown file type %r, " \
+                     "extracted as regular file." % tarinfo.type)
+
+    def makefifo(self, tarinfo, targetpath):
+        """Make a fifo called targetpath.
+        """
+        if hasattr(os, "mkfifo"):
+            os.mkfifo(targetpath)
+        else:
+            raise ExtractError("fifo not supported by system")
+
+    def makedev(self, tarinfo, targetpath):
+        """Make a character or block device called targetpath.
+        """
+        if not hasattr(os, "mknod") or not hasattr(os, "makedev"):
+            raise ExtractError("special devices not supported by system")
+
+        mode = tarinfo.mode
+        if tarinfo.isblk():
+            mode |= stat.S_IFBLK
+        else:
+            mode |= stat.S_IFCHR
+
+        os.mknod(targetpath, mode,
+                 os.makedev(tarinfo.devmajor, tarinfo.devminor))
+
+    def makelink(self, tarinfo, targetpath):
+        """Make a (symbolic) link called targetpath. If it cannot be created
+          (platform limitation), we try to make a copy of the referenced file
+          instead of a link.
+        """
+        if hasattr(os, "symlink") and hasattr(os, "link"):
+            # For systems that support symbolic and hard links.
+            if tarinfo.issym():
+                os.symlink(tarinfo.linkname, targetpath)
+            else:
+                # See extract().
+                if os.path.exists(tarinfo._link_target):
+                    os.link(tarinfo._link_target, targetpath)
+                else:
+                    self._extract_member(self._find_link_target(tarinfo), targetpath)
+        else:
+            try:
+                self._extract_member(self._find_link_target(tarinfo), targetpath)
+            except KeyError:
+                raise ExtractError("unable to resolve link inside archive")
+
+    def chown(self, tarinfo, targetpath):
+        """Set owner of targetpath according to tarinfo.
+        """
+        if pwd and hasattr(os, "geteuid") and os.geteuid() == 0:
+            # We have to be root to do so.
+            try:
+                g = grp.getgrnam(tarinfo.gname)[2]
+            except KeyError:
+                try:
+                    g = grp.getgrgid(tarinfo.gid)[2]
+                except KeyError:
+                    g = os.getgid()
+            try:
+                u = pwd.getpwnam(tarinfo.uname)[2]
+            except KeyError:
+                try:
+                    u = pwd.getpwuid(tarinfo.uid)[2]
+                except KeyError:
+                    u = os.getuid()
+            try:
+                if tarinfo.issym() and hasattr(os, "lchown"):
+                    os.lchown(targetpath, u, g)
+                else:
+                    if sys.platform != "os2emx":
+                        os.chown(targetpath, u, g)
+            except EnvironmentError, e:
+                raise ExtractError("could not change owner to %d:%d" % (u, g))
+
+    def chmod(self, tarinfo, targetpath):
+        """Set file permissions of targetpath according to tarinfo.
+        """
+        if hasattr(os, 'chmod'):
+            try:
+                os.chmod(targetpath, tarinfo.mode)
+            except EnvironmentError, e:
+                raise ExtractError("could not change mode")
+
+    def utime(self, tarinfo, targetpath):
+        """Set modification time of targetpath according to tarinfo.
+        """
+        if not hasattr(os, 'utime'):
+            return
+        try:
+            os.utime(targetpath, (tarinfo.mtime, tarinfo.mtime))
+        except EnvironmentError, e:
+            raise ExtractError("could not change modification time")
+
+    #--------------------------------------------------------------------------
+    def next(self):
+        """Return the next member of the archive as a TarInfo object when
+           TarFile is opened for reading. Return None if no more members
+           are available.
+        """
+        self._check("ra")
+        if self.firstmember is not None:
+            m = self.firstmember
+            self.firstmember = None
+            return m
+
+        # Read the next block.
+        self.fileobj.seek(self.offset)
+        tarinfo = None
+        while True:
+            try:
+                tarinfo = self.tarinfo.fromtarfile(self)
+            except EOFHeaderError, e:
+                if self.ignore_zeros:
+                    self._dbg(2, "0x%X: %s" % (self.offset, e))
+                    self.offset += BLOCKSIZE
+                    continue
+            except InvalidHeaderError, e:
+                if self.ignore_zeros:
+                    self._dbg(2, "0x%X: %s" % (self.offset, e))
+                    self.offset += BLOCKSIZE
+                    continue
+                elif self.offset == 0:
+                    raise ReadError(str(e))
+            except EmptyHeaderError:
+                if self.offset == 0:
+                    raise ReadError("empty file")
+            except TruncatedHeaderError, e:
+                if self.offset == 0:
+                    raise ReadError(str(e))
+            except SubsequentHeaderError, e:
+                raise ReadError(str(e))
+            break
+
+        if tarinfo is not None:
+            self.members.append(tarinfo)
+        else:
+            self._loaded = True
+
+        return tarinfo
+
+    #--------------------------------------------------------------------------
+    # Little helper methods:
+
+    def _getmember(self, name, tarinfo=None, normalize=False):
+        """Find an archive member by name from bottom to top.
+           If tarinfo is given, it is used as the starting point.
+        """
+        # Ensure that all members have been loaded.
+        members = self.getmembers()
+
+        # Limit the member search list up to tarinfo.
+        if tarinfo is not None:
+            members = members[:members.index(tarinfo)]
+
+        if normalize:
+            name = os.path.normpath(name)
+
+        for member in reversed(members):
+            if normalize:
+                member_name = os.path.normpath(member.name)
+            else:
+                member_name = member.name
+
+            if name == member_name:
+                return member
+
+    def _load(self):
+        """Read through the entire archive file and look for readable
+           members.
+        """
+        while True:
+            tarinfo = self.next()
+            if tarinfo is None:
+                break
+        self._loaded = True
+
+    def _check(self, mode=None):
+        """Check if TarFile is still open, and if the operation's mode
+           corresponds to TarFile's mode.
+        """
+        if self.closed:
+            raise IOError("%s is closed" % self.__class__.__name__)
+        if mode is not None and self.mode not in mode:
+            raise IOError("bad operation for mode %r" % self.mode)
+
+    def _find_link_target(self, tarinfo):
+        """Find the target member of a symlink or hardlink member in the
+           archive.
+        """
+        if tarinfo.issym():
+            # Always search the entire archive.
+            linkname = os.path.dirname(tarinfo.name) + "/" + tarinfo.linkname
+            limit = None
+        else:
+            # Search the archive before the link, because a hard link is
+            # just a reference to an already archived file.
+            linkname = tarinfo.linkname
+            limit = tarinfo
+
+        member = self._getmember(linkname, tarinfo=limit, normalize=True)
+        if member is None:
+            raise KeyError("linkname %r not found" % linkname)
+        return member
+
+    def __iter__(self):
+        """Provide an iterator object.
+        """
+        if self._loaded:
+            return iter(self.members)
+        else:
+            return TarIter(self)
+
+    def _dbg(self, level, msg):
+        """Write debugging output to sys.stderr.
+        """
+        if level <= self.debug:
+            print >> sys.stderr, msg
+
+    def __enter__(self):
+        self._check()
+        return self
+
+    def __exit__(self, type, value, traceback):
+        if type is None:
+            self.close()
+        else:
+            # An exception occurred. We must not call close() because
+            # it would try to write end-of-archive blocks and padding.
+            if not self._extfileobj:
+                self.fileobj.close()
+            self.closed = True
+# class TarFile
+
+class TarIter:
+    """Iterator Class.
+
+       for tarinfo in TarFile(...):
+           suite...
+    """
+
+    def __init__(self, tarfile):
+        """Construct a TarIter object.
+        """
+        self.tarfile = tarfile
+        self.index = 0
+    def __iter__(self):
+        """Return iterator object.
+        """
+        return self
+    def next(self):
+        """Return the next item using TarFile's next() method.
+           When all members have been read, set TarFile as _loaded.
+        """
+        # Fix for SF #1100429: Under rare circumstances it can
+        # happen that getmembers() is called during iteration,
+        # which will cause TarIter to stop prematurely.
+        if not self.tarfile._loaded:
+            tarinfo = self.tarfile.next()
+            if not tarinfo:
+                self.tarfile._loaded = True
+                raise StopIteration
+        else:
+            try:
+                tarinfo = self.tarfile.members[self.index]
+            except IndexError:
+                raise StopIteration
+        self.index += 1
+        return tarinfo
+
+# Helper classes for sparse file support
+class _section:
+    """Base class for _data and _hole.
+    """
+    def __init__(self, offset, size):
+        self.offset = offset
+        self.size = size
+    def __contains__(self, offset):
+        return self.offset <= offset < self.offset + self.size
+
+class _data(_section):
+    """Represent a data section in a sparse file.
+    """
+    def __init__(self, offset, size, realpos):
+        _section.__init__(self, offset, size)
+        self.realpos = realpos
+
+class _hole(_section):
+    """Represent a hole section in a sparse file.
+    """
+    pass
+
+class _ringbuffer(list):
+    """Ringbuffer class which increases performance
+       over a regular list.
+    """
+    def __init__(self):
+        self.idx = 0
+    def find(self, offset):
+        idx = self.idx
+        while True:
+            item = self[idx]
+            if offset in item:
+                break
+            idx += 1
+            if idx == len(self):
+                idx = 0
+            if idx == self.idx:
+                # End of File
+                return None
+        self.idx = idx
+        return item
+
+#---------------------------------------------
+# zipfile compatible TarFile class
+#---------------------------------------------
+TAR_PLAIN = 0           # zipfile.ZIP_STORED
+TAR_GZIPPED = 8         # zipfile.ZIP_DEFLATED
+class TarFileCompat:
+    """TarFile class compatible with standard module zipfile's
+       ZipFile class.
+    """
+    def __init__(self, file, mode="r", compression=TAR_PLAIN):
+        from warnings import warnpy3k
+        warnpy3k("the TarFileCompat class has been removed in Python 3.0",
+                stacklevel=2)
+        if compression == TAR_PLAIN:
+            self.tarfile = TarFile.taropen(file, mode)
+        elif compression == TAR_GZIPPED:
+            self.tarfile = TarFile.gzopen(file, mode)
+        else:
+            raise ValueError("unknown compression constant")
+        if mode[0:1] == "r":
+            members = self.tarfile.getmembers()
+            for m in members:
+                m.filename = m.name
+                m.file_size = m.size
+                m.date_time = time.gmtime(m.mtime)[:6]
+    def namelist(self):
+        return map(lambda m: m.name, self.infolist())
+    def infolist(self):
+        return filter(lambda m: m.type in REGULAR_TYPES,
+                      self.tarfile.getmembers())
+    def printdir(self):
+        self.tarfile.list()
+    def testzip(self):
+        return
+    def getinfo(self, name):
+        return self.tarfile.getmember(name)
+    def read(self, name):
+        return self.tarfile.extractfile(self.tarfile.getmember(name)).read()
+    def write(self, filename, arcname=None, compress_type=None):
+        self.tarfile.add(filename, arcname)
+    def writestr(self, zinfo, bytes):
+        try:
+            from cStringIO import StringIO
+        except ImportError:
+            from StringIO import StringIO
+        import calendar
+        tinfo = TarInfo(zinfo.filename)
+        tinfo.size = len(bytes)
+        tinfo.mtime = calendar.timegm(zinfo.date_time)
+        self.tarfile.addfile(tinfo, StringIO(bytes))
+    def close(self):
+        self.tarfile.close()
+#class TarFileCompat
+
+#--------------------
+# exported functions
+#--------------------
+def is_tarfile(name):
+    """Return True if name points to a tar archive that we
+       are able to handle, else return False.
+    """
+    try:
+        t = open(name)
+        t.close()
+        return True
+    except TarError:
+        return False
+
+bltn_open = open
+open = TarFile.open
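
For orientation, here is a minimal usage sketch of the bundled module above. It is illustrative only and not part of the proposed diff; the import path (duplicity.tarfile) and the archive/directory names are assumptions.

# Illustrative sketch, assuming the vendored module imports as duplicity.tarfile;
# the paths below are made up.
from duplicity import tarfile

archive = tarfile.TarFile.open("example.tar.gz", "w:gz")  # new gzip-compressed archive
archive.add("some_dir")                                   # directories recurse by default
archive.close()

archive = tarfile.TarFile.open("example.tar.gz", "r:*")   # reopen, compression auto-detected
for member in archive:                                    # iteration goes through TarIter/next()
    print member.name, member.size
archive.close()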

=== modified file 'duplicity/tempdir.py'
--- duplicity/tempdir.py	2014-04-20 05:58:47 +0000
+++ duplicity/tempdir.py	2014-06-14 13:58:30 +0000
@@ -213,7 +213,7 @@
         """
         self.__lock.acquire()
         try:
-            if fname in self.__pending:
+            if self.__pending.has_key(fname):
                 log.Debug(_("Forgetting temporary file %s") % util.ufn(fname))
                 del(self.__pending[fname])
             else:
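
The hunk above swaps the `in` test for dict.has_key(). On Python 2 the two are equivalent for dictionaries; has_key() is simply the older spelling (and is gone in Python 3). A throwaway check, not part of the diff:

# Illustrative only: both membership tests agree on a Python 2 dict.
pending = {"duplicity-tmp-0001": None}   # stand-in for self.__pending
assert pending.has_key("duplicity-tmp-0001") == ("duplicity-tmp-0001" in pending)
assert pending.has_key("missing") == ("missing" in pending)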

=== added file 'duplicity/urlparse_2_5.py'
--- duplicity/urlparse_2_5.py	1970-01-01 00:00:00 +0000
+++ duplicity/urlparse_2_5.py	2014-06-14 13:58:30 +0000
@@ -0,0 +1,385 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+
+"""Parse (absolute and relative) URLs.
+
+See RFC 1808: "Relative Uniform Resource Locators", by R. Fielding,
+UC Irvine, June 1995.
+"""
+
+__all__ = ["urlparse", "urlunparse", "urljoin", "urldefrag",
+           "urlsplit", "urlunsplit"]
+
+# A classification of schemes ('' means apply by default)
+uses_relative = ['ftp', 'ftps', 'http', 'gopher', 'nntp',
+                 'wais', 'file', 'https', 'shttp', 'mms',
+                 'prospero', 'rtsp', 'rtspu', '', 'sftp', 'imap', 'imaps']
+uses_netloc = ['ftp', 'ftps', 'http', 'gopher', 'nntp', 'telnet',
+               'wais', 'file', 'mms', 'https', 'shttp',
+               'snews', 'prospero', 'rtsp', 'rtspu', 'rsync', '',
+               'svn', 'svn+ssh', 'sftp', 'imap', 'imaps']
+non_hierarchical = ['gopher', 'hdl', 'mailto', 'news',
+                    'telnet', 'wais', 'snews', 'sip', 'sips', 'imap', 'imaps']
+uses_params = ['ftp', 'ftps', 'hdl', 'prospero', 'http',
+               'https', 'shttp', 'rtsp', 'rtspu', 'sip', 'sips',
+               'mms', '', 'sftp', 'imap', 'imaps']
+uses_query = ['http', 'wais', 'https', 'shttp', 'mms',
+              'gopher', 'rtsp', 'rtspu', 'sip', 'sips', 'imap', 'imaps', '']
+uses_fragment = ['ftp', 'ftps', 'hdl', 'http', 'gopher', 'news',
+                 'nntp', 'wais', 'https', 'shttp', 'snews',
+                 'file', 'prospero', '']
+
+# Characters valid in scheme names
+scheme_chars = ('abcdefghijklmnopqrstuvwxyz'
+                'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
+                '0123456789'
+                '+-.')
+
+MAX_CACHE_SIZE = 20
+_parse_cache = {}
+
+def clear_cache():
+    """Clear the parse cache."""
+    global _parse_cache
+    _parse_cache = {}
+
+import string
+def _rsplit(str, delim, numsplit):
+    parts = string.split(str, delim)
+    if len(parts) <= numsplit + 1:
+        return parts
+    else:
+        left = string.join(parts[0:-numsplit], delim)
+        right = string.join(parts[len(parts)-numsplit:], delim)
+        return [left, right]
+
+class BaseResult(tuple):
+    """Base class for the parsed result objects.
+
+    This provides the attributes shared by the two derived result
+    objects as read-only properties.  The derived classes are
+    responsible for checking the right number of arguments were
+    supplied to the constructor.
+
+    """
+
+    __slots__ = ()
+
+    # Attributes that access the basic components of the URL:
+
+    def get_scheme(self):
+        return self[0]
+    scheme = property(get_scheme)
+
+    def get_netloc(self):
+        return self[1]
+    netloc = property(get_netloc)
+
+    def get_path(self):
+        return self[2]
+    path = property(get_path)
+
+    def get_query(self):
+        return self[-2]
+    query = property(get_query)
+
+    def get_fragment(self):
+        return self[-1]
+    fragment = property(get_fragment)
+
+    # Additional attributes that provide access to parsed-out portions
+    # of the netloc:
+
+    def get_username(self):
+        netloc = self.netloc
+        if "@" in netloc:
+            userinfo = _rsplit(netloc, "@", 1)[0]
+            if ":" in userinfo:
+                userinfo = userinfo.split(":", 1)[0]
+            return userinfo
+        return None
+    username = property(get_username)
+
+    def get_password(self):
+        netloc = self.netloc
+        if "@" in netloc:
+            userinfo = _rsplit(netloc, "@", 1)[0]
+            if ":" in userinfo:
+                return userinfo.split(":", 1)[1]
+        return None
+    password = property(get_password)
+
+    def get_hostname(self):
+        netloc = self.netloc.split('@')[-1]
+        if '[' in netloc and ']' in netloc:
+            return netloc.split(']')[0][1:].lower()
+        elif ':' in netloc:
+            return netloc.split(':')[0].lower()
+        elif netloc == '':
+            return None
+        else:
+            return netloc.lower()
+    hostname = property(get_hostname)
+
+    def get_port(self):
+        netloc = self.netloc.split('@')[-1].split(']')[-1]
+        if ":" in netloc:
+            port = netloc.split(":", 1)[1]
+            return int(port, 10)
+        return None
+    port = property(get_port)
+
+
+class SplitResult(BaseResult):
+
+    __slots__ = ()
+
+    def __new__(cls, scheme, netloc, path, query, fragment):
+        return BaseResult.__new__(
+            cls, (scheme, netloc, path, query, fragment))
+
+    def geturl(self):
+        return urlunsplit(self)
+
+
+class ParseResult(BaseResult):
+
+    __slots__ = ()
+
+    def __new__(cls, scheme, netloc, path, params, query, fragment):
+        return BaseResult.__new__(
+            cls, (scheme, netloc, path, params, query, fragment))
+
+    def get_params(self):
+        return self[3]
+    params = property(get_params)
+
+    def geturl(self):
+        return urlunparse(self)
+
+
+def urlparse(url, scheme='', allow_fragments=True):
+    """Parse a URL into 6 components:
+    <scheme>://<netloc>/<path>;<params>?<query>#<fragment>
+    Return a 6-tuple: (scheme, netloc, path, params, query, fragment).
+    Note that we don't break the components up into smaller bits
+    (e.g. netloc is a single string) and we don't expand % escapes."""
+    tuple = urlsplit(url, scheme, allow_fragments)
+    scheme, netloc, url, query, fragment = tuple
+    if scheme in uses_params and ';' in url:
+        url, params = _splitparams(url)
+    else:
+        params = ''
+    return ParseResult(scheme, netloc, url, params, query, fragment)
+
+def _splitparams(url):
+    if '/'  in url:
+        i = url.find(';', url.rfind('/'))
+        if i < 0:
+            return url, ''
+    else:
+        i = url.find(';')
+    return url[:i], url[i+1:]
+
+def _splitnetloc(url, start=0):
+    for c in '/?#': # the order is important!
+        delim = url.find(c, start)
+        if delim >= 0:
+            break
+    else:
+        delim = len(url)
+    return url[start:delim], url[delim:]
+
+def urlsplit(url, scheme='', allow_fragments=True):
+    """Parse a URL into 5 components:
+    <scheme>://<netloc>/<path>?<query>#<fragment>
+    Return a 5-tuple: (scheme, netloc, path, query, fragment).
+    Note that we don't break the components up into smaller bits
+    (e.g. netloc is a single string) and we don't expand % escapes."""
+    allow_fragments = bool(allow_fragments)
+    key = url, scheme, allow_fragments
+    cached = _parse_cache.get(key, None)
+    if cached:
+        return cached
+    if len(_parse_cache) >= MAX_CACHE_SIZE: # avoid runaway growth
+        clear_cache()
+    netloc = query = fragment = ''
+    i = url.find(':')
+    if i > 0:
+        if url[:i] == 'http': # optimize the common case
+            scheme = url[:i].lower()
+            url = url[i+1:]
+            if url[:2] == '//':
+                netloc, url = _splitnetloc(url, 2)
+            if allow_fragments and '#' in url:
+                url, fragment = url.split('#', 1)
+            if '?' in url:
+                url, query = url.split('?', 1)
+            v = SplitResult(scheme, netloc, url, query, fragment)
+            _parse_cache[key] = v
+            return v
+        for c in url[:i]:
+            if c not in scheme_chars:
+                break
+        else:
+            scheme, url = url[:i].lower(), url[i+1:]
+    if scheme in uses_netloc and url[:2] == '//':
+        netloc, url = _splitnetloc(url, 2)
+    if allow_fragments and scheme in uses_fragment and '#' in url:
+        url, fragment = url.split('#', 1)
+    if scheme in uses_query and '?' in url:
+        url, query = url.split('?', 1)
+    v = SplitResult(scheme, netloc, url, query, fragment)
+    _parse_cache[key] = v
+    return v
+
+def urlunparse((scheme, netloc, url, params, query, fragment)):
+    """Put a parsed URL back together again.  This may result in a
+    slightly different, but equivalent URL, if the URL that was parsed
+    originally had redundant delimiters, e.g. a ? with an empty query
+    (the draft states that these are equivalent)."""
+    if params:
+        url = "%s;%s" % (url, params)
+    return urlunsplit((scheme, netloc, url, query, fragment))
+
+def urlunsplit((scheme, netloc, url, query, fragment)):
+    if netloc or (scheme and scheme in uses_netloc and url[:2] != '//'):
+        if url and url[:1] != '/': url = '/' + url
+        url = '//' + (netloc or '') + url
+    if scheme:
+        url = scheme + ':' + url
+    if query:
+        url = url + '?' + query
+    if fragment:
+        url = url + '#' + fragment
+    return url
+
+def urljoin(base, url, allow_fragments=True):
+    """Join a base URL and a possibly relative URL to form an absolute
+    interpretation of the latter."""
+    if not base:
+        return url
+    if not url:
+        return base
+    bscheme, bnetloc, bpath, bparams, bquery, bfragment = urlparse(base, '', allow_fragments) #@UnusedVariable
+    scheme, netloc, path, params, query, fragment = urlparse(url, bscheme, allow_fragments)
+    if scheme != bscheme or scheme not in uses_relative:
+        return url
+    if scheme in uses_netloc:
+        if netloc:
+            return urlunparse((scheme, netloc, path,
+                               params, query, fragment))
+        netloc = bnetloc
+    if path[:1] == '/':
+        return urlunparse((scheme, netloc, path,
+                           params, query, fragment))
+    if not (path or params or query):
+        return urlunparse((scheme, netloc, bpath,
+                           bparams, bquery, fragment))
+    segments = bpath.split('/')[:-1] + path.split('/')
+    # XXX The stuff below is bogus in various ways...
+    if segments[-1] == '.':
+        segments[-1] = ''
+    while '.' in segments:
+        segments.remove('.')
+    while 1:
+        i = 1
+        n = len(segments) - 1
+        while i < n:
+            if (segments[i] == '..'
+                and segments[i-1] not in ('', '..')):
+                del segments[i-1:i+1]
+                break
+            i = i+1
+        else:
+            break
+    if segments == ['', '..']:
+        segments[-1] = ''
+    elif len(segments) >= 2 and segments[-1] == '..':
+        segments[-2:] = ['']
+    return urlunparse((scheme, netloc, '/'.join(segments),
+                       params, query, fragment))
+
+def urldefrag(url):
+    """Removes any existing fragment from URL.
+
+    Returns a tuple of the defragmented URL and the fragment.  If
+    the URL contained no fragments, the second element is the
+    empty string.
+    """
+    if '#' in url:
+        s, n, p, a, q, frag = urlparse(url)
+        defrag = urlunparse((s, n, p, a, q, ''))
+        return defrag, frag
+    else:
+        return url, ''
+
+
+test_input = """
+      http://a/b/c/d
+
+      g:h        = <URL:g:h>
+      http:g     = <URL:http://a/b/c/g>
+      http:      = <URL:http://a/b/c/d>
+      g          = <URL:http://a/b/c/g>
+      ./g        = <URL:http://a/b/c/g>
+      g/         = <URL:http://a/b/c/g/>
+      /g         = <URL:http://a/g>
+      //g        = <URL:http://g>
+      ?y         = <URL:http://a/b/c/d?y>
+      g?y        = <URL:http://a/b/c/g?y>
+      g?y/./x    = <URL:http://a/b/c/g?y/./x>
+      .          = <URL:http://a/b/c/>
+      ./         = <URL:http://a/b/c/>
+      ..         = <URL:http://a/b/>
+      ../        = <URL:http://a/b/>
+      ../g       = <URL:http://a/b/g>
+      ../..      = <URL:http://a/>
+      ../../g    = <URL:http://a/g>
+      ../../../g = <URL:http://a/../g>
+      ./../g     = <URL:http://a/b/g>
+      ./g/.      = <URL:http://a/b/c/g/>
+      /./g       = <URL:http://a/./g>
+      g/./h      = <URL:http://a/b/c/g/h>
+      g/../h     = <URL:http://a/b/c/h>
+      http:g     = <URL:http://a/b/c/g>
+      http:      = <URL:http://a/b/c/d>
+      http:?y         = <URL:http://a/b/c/d?y>
+      http:g?y        = <URL:http://a/b/c/g?y>
+      http:g?y/./x    = <URL:http://a/b/c/g?y/./x>
+"""
+
+def test():
+    import sys
+    base = ''
+    if sys.argv[1:]:
+        fn = sys.argv[1]
+        if fn == '-':
+            fp = sys.stdin
+        else:
+            fp = open(fn)
+    else:
+        try:
+            from cStringIO import StringIO
+        except ImportError:
+            from StringIO import StringIO
+        fp = StringIO(test_input)
+    while 1:
+        line = fp.readline()
+        if not line: break
+        words = line.split()
+        if not words:
+            continue
+        url = words[0]
+        parts = urlparse(url)
+        print '%-10s : %s' % (url, parts)
+        abs = urljoin(base, url)
+        if not base:
+            base = abs
+        wrapped = '<URL:%s>' % abs
+        print '%-10s = %s' % (url, wrapped)
+        if len(words) == 3 and words[1] == '=':
+            if wrapped != words[2]:
+                print 'EXPECTED', words[2], '!!!!!!!!!!'
+
+if __name__ == '__main__':
+    test()
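
As a quick illustration of what the bundled parser returns (not part of the diff): the netloc properties defined on BaseResult expose the pieces duplicity needs from a backend URL. Note that 'webdav'/'webdavs' are not in the module's default uses_netloc list, so this sketch uses 'sftp', which is; duplicity itself is expected to register its extra schemes before parsing. The URL and credentials below are made up.

# Illustrative only.
from duplicity import urlparse_2_5

u = urlparse_2_5.urlparse("sftp://user:secret@example.com:2222/backups/host1")
print u.scheme    # sftp
print u.username  # user
print u.password  # secret
print u.hostname  # example.com
print u.port      # 2222
print u.path      # /backups/host1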

=== modified file 'duplicity/util.py'
--- duplicity/util.py	2014-04-29 23:49:01 +0000
+++ duplicity/util.py	2014-06-14 13:58:30 +0000
@@ -23,8 +23,6 @@
 Miscellaneous utilities.
 """
 
-from future_builtins import map
-
 import errno
 import os
 import sys
@@ -88,7 +86,7 @@
     """
     try:
         return fn()
-    except Exception as e:
+    except Exception, e:
         if globals.ignore_errors:
             log.Warn(_("IGNORED_ERROR: Warning: ignoring error as requested: %s: %s")
                      % (e.__class__.__name__, uexc(e)))
@@ -139,7 +137,7 @@
     """
     try:
         fn(filename)
-    except OSError as ex:
+    except OSError, ex:
         if ex.errno == errno.ENOENT:
             pass
         else:
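
The util.py hunks revert the Python 2.6-only "except ... as e" spelling (and the future_builtins import) to forms that also run on older 2.x interpreters, presumably to keep pre-2.6 Pythons working on the 0.6 series. A standalone sketch of the same pattern as the patched ignore-missing helper, for illustration only; the function name and path are made up.

# Illustrative only; the comma-style except works on Python 2.4 and later.
import errno
import os

def ignore_missing_sketch(fn, filename):
    """Call fn(filename), swallowing only 'no such file' errors."""
    try:
        fn(filename)
    except OSError, ex:
        if ex.errno != errno.ENOENT:
            raise

ignore_missing_sketch(os.unlink, "/tmp/file-that-need-not-exist")  # made-up path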

=== modified file 'po/POTFILES.in'
--- po/POTFILES.in	2014-04-20 13:15:18 +0000
+++ po/POTFILES.in	2014-06-14 13:58:30 +0000
@@ -7,6 +7,7 @@
 duplicity/selection.py
 duplicity/globals.py
 duplicity/commandline.py
+duplicity/urlparse_2_5.py
 duplicity/dup_temp.py
 duplicity/backend.py
 duplicity/asyncscheduler.py
@@ -15,6 +16,7 @@
 duplicity/collections.py
 duplicity/log.py
 duplicity/robust.py
+duplicity/static.py
 duplicity/diffdir.py
 duplicity/lazy.py
 duplicity/backends/_cf_pyrax.py
@@ -49,6 +51,7 @@
 duplicity/filechunkio.py
 duplicity/dup_threading.py
 duplicity/path.py
+duplicity/pexpect.py
 duplicity/gpginterface.py
 duplicity/dup_time.py
 duplicity/gpg.py
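
POTFILES.in is simply the list of sources the translation template is built from; the hunks above add the newly bundled modules to it. A hypothetical sanity check (not part of the diff) that every listed entry exists in the tree:

# Hypothetical helper, not part of the proposed merge.
import os

def missing_potfiles(potfiles="po/POTFILES.in", root="."):
    """Return POTFILES.in entries that do not exist on disk."""
    missing = []
    for line in open(potfiles):
        line = line.strip()
        if line and not line.startswith("#") and \
                not os.path.exists(os.path.join(root, line)):
            missing.append(line)
    return missing

print missing_potfiles()  # expect [] in a complete checkout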

=== modified file 'po/duplicity.pot'
--- po/duplicity.pot	2014-05-11 11:50:12 +0000
+++ po/duplicity.pot	2014-06-14 13:58:30 +0000
@@ -8,7 +8,11 @@
 msgstr ""
 "Project-Id-Version: PACKAGE VERSION\n"
 "Report-Msgid-Bugs-To: Kenneth Loafman <kenneth@xxxxxxxxxxx>\n"
+<<<<<<< TREE
 "POT-Creation-Date: 2014-05-11 06:34-0500\n"
+=======
+"POT-Creation-Date: 2014-05-07 09:39-0500\n"
+>>>>>>> MERGE-SOURCE
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@xxxxxx>\n"
@@ -69,243 +73,243 @@
 "Continuing restart on file %s."
 msgstr ""
 
-#: ../bin/duplicity:297
+#: ../bin/duplicity:299
 #, python-format
 msgid "File %s was corrupted during upload."
 msgstr ""
 
-#: ../bin/duplicity:331
+#: ../bin/duplicity:333
 msgid ""
 "Restarting backup, but current encryption settings do not match original "
 "settings"
 msgstr ""
 
-#: ../bin/duplicity:354
+#: ../bin/duplicity:356
 #, python-format
 msgid "Restarting after volume %s, file %s, block %s"
 msgstr ""
 
-#: ../bin/duplicity:421
+#: ../bin/duplicity:423
 #, python-format
 msgid "Processed volume %d"
 msgstr ""
 
-#: ../bin/duplicity:570
+#: ../bin/duplicity:572
 msgid ""
 "Fatal Error: Unable to start incremental backup.  Old signatures not found "
 "and incremental specified"
 msgstr ""
 
-#: ../bin/duplicity:574
+#: ../bin/duplicity:576
 msgid "No signatures found, switching to full backup."
 msgstr ""
 
-#: ../bin/duplicity:588
+#: ../bin/duplicity:590
 msgid "Backup Statistics"
 msgstr ""
 
-#: ../bin/duplicity:693
+#: ../bin/duplicity:695
 #, python-format
 msgid "%s not found in archive, no files restored."
 msgstr ""
 
-#: ../bin/duplicity:697
+#: ../bin/duplicity:699
 msgid "No files found in archive - nothing restored."
 msgstr ""
 
-#: ../bin/duplicity:730
+#: ../bin/duplicity:732
 #, python-format
 msgid "Processed volume %d of %d"
 msgstr ""
 
-#: ../bin/duplicity:764
+#: ../bin/duplicity:766
 #, python-format
 msgid "Invalid data - %s hash mismatch for file:"
 msgstr ""
 
-#: ../bin/duplicity:766
+#: ../bin/duplicity:768
 #, python-format
 msgid "Calculated hash: %s"
 msgstr ""
 
-#: ../bin/duplicity:767
+#: ../bin/duplicity:769
 #, python-format
 msgid "Manifest hash: %s"
 msgstr ""
 
-#: ../bin/duplicity:805
+#: ../bin/duplicity:807
 #, python-format
 msgid "Volume was signed by key %s, not %s"
 msgstr ""
 
-#: ../bin/duplicity:835
+#: ../bin/duplicity:837
 #, python-format
 msgid "Verify complete: %s, %s."
 msgstr ""
 
-#: ../bin/duplicity:836
+#: ../bin/duplicity:838
 #, python-format
 msgid "%d file compared"
 msgid_plural "%d files compared"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:838
+#: ../bin/duplicity:840
 #, python-format
 msgid "%d difference found"
 msgid_plural "%d differences found"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:857
+#: ../bin/duplicity:859
 msgid "No extraneous files found, nothing deleted in cleanup."
 msgstr ""
 
-#: ../bin/duplicity:862
+#: ../bin/duplicity:864
 msgid "Deleting this file from backend:"
 msgid_plural "Deleting these files from backend:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:874
+#: ../bin/duplicity:876
 msgid "Found the following file to delete:"
 msgid_plural "Found the following files to delete:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:878
+#: ../bin/duplicity:880
 msgid "Run duplicity again with the --force option to actually delete."
 msgstr ""
 
-#: ../bin/duplicity:921
+#: ../bin/duplicity:923
 msgid "There are backup set(s) at time(s):"
 msgstr ""
 
-#: ../bin/duplicity:923
+#: ../bin/duplicity:925
 msgid "Which can't be deleted because newer sets depend on them."
 msgstr ""
 
-#: ../bin/duplicity:927
+#: ../bin/duplicity:929
 msgid ""
 "Current active backup chain is older than specified time.  However, it will "
 "not be deleted.  To remove all your backups, manually purge the repository."
 msgstr ""
 
-#: ../bin/duplicity:933
+#: ../bin/duplicity:935
 msgid "No old backup sets found, nothing deleted."
 msgstr ""
 
-#: ../bin/duplicity:936
+#: ../bin/duplicity:938
 msgid "Deleting backup chain at time:"
 msgid_plural "Deleting backup chains at times:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:947
+#: ../bin/duplicity:949
 #, python-format
 msgid "Deleting incremental signature chain %s"
 msgstr ""
 
-#: ../bin/duplicity:949
+#: ../bin/duplicity:951
 #, python-format
 msgid "Deleting incremental backup chain %s"
 msgstr ""
 
-#: ../bin/duplicity:952
+#: ../bin/duplicity:954
 #, python-format
 msgid "Deleting complete signature chain %s"
 msgstr ""
 
-#: ../bin/duplicity:954
+#: ../bin/duplicity:956
 #, python-format
 msgid "Deleting complete backup chain %s"
 msgstr ""
 
-#: ../bin/duplicity:960
+#: ../bin/duplicity:962
 msgid "Found old backup chain at the following time:"
 msgid_plural "Found old backup chains at the following times:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:964
+#: ../bin/duplicity:966
 msgid "Rerun command with --force option to actually delete."
 msgstr ""
 
-#: ../bin/duplicity:1041
+#: ../bin/duplicity:1043
 #, python-format
 msgid "Deleting local %s (not authoritative at backend)."
 msgstr ""
 
-#: ../bin/duplicity:1045
+#: ../bin/duplicity:1047
 #, python-format
 msgid "Unable to delete %s: %s"
 msgstr ""
 
-#: ../bin/duplicity:1073 ../duplicity/dup_temp.py:263
+#: ../bin/duplicity:1075 ../duplicity/dup_temp.py:263
 #, python-format
 msgid "Failed to read %s: %s"
 msgstr ""
 
-#: ../bin/duplicity:1087
+#: ../bin/duplicity:1089
 #, python-format
 msgid "Copying %s to local cache."
 msgstr ""
 
-#: ../bin/duplicity:1135
+#: ../bin/duplicity:1137
 msgid "Local and Remote metadata are synchronized, no sync needed."
 msgstr ""
 
-#: ../bin/duplicity:1140
+#: ../bin/duplicity:1142
 msgid "Synchronizing remote metadata to local cache..."
 msgstr ""
 
-#: ../bin/duplicity:1155
+#: ../bin/duplicity:1157
 msgid "Sync would copy the following from remote to local:"
 msgstr ""
 
-#: ../bin/duplicity:1158
+#: ../bin/duplicity:1160
 msgid "Sync would remove the following spurious local files:"
 msgstr ""
 
-#: ../bin/duplicity:1201
+#: ../bin/duplicity:1203
 msgid "Unable to get free space on temp."
 msgstr ""
 
-#: ../bin/duplicity:1209
+#: ../bin/duplicity:1211
 #, python-format
 msgid "Temp space has %d available, backup needs approx %d."
 msgstr ""
 
-#: ../bin/duplicity:1212
+#: ../bin/duplicity:1214
 #, python-format
 msgid "Temp has %d available, backup will use approx %d."
 msgstr ""
 
-#: ../bin/duplicity:1220
+#: ../bin/duplicity:1222
 msgid "Unable to get max open files."
 msgstr ""
 
-#: ../bin/duplicity:1224
+#: ../bin/duplicity:1226
 #, python-format
 msgid ""
 "Max open files of %s is too low, should be >= 1024.\n"
 "Use 'ulimit -n 1024' or higher to correct.\n"
 msgstr ""
 
-#: ../bin/duplicity:1273
+#: ../bin/duplicity:1275
 msgid ""
 "RESTART: The first volume failed to upload before termination.\n"
 "         Restart is impossible...starting backup from beginning."
 msgstr ""
 
-#: ../bin/duplicity:1279
+#: ../bin/duplicity:1281
 #, python-format
 msgid ""
 "RESTART: Volumes %d to %d failed to upload before termination.\n"
 "         Restarting backup at volume %d."
 msgstr ""
 
-#: ../bin/duplicity:1286
+#: ../bin/duplicity:1288
 #, python-format
 msgid ""
 "RESTART: Impossible backup state: manifest has %d vols, remote has %d vols.\n"
@@ -314,7 +318,7 @@
 "         backup then restart the backup from the beginning."
 msgstr ""
 
-#: ../bin/duplicity:1308
+#: ../bin/duplicity:1310
 msgid ""
 "\n"
 "PYTHONOPTIMIZE in the environment causes duplicity to fail to\n"
@@ -324,54 +328,54 @@
 "See https://bugs.launchpad.net/duplicity/+bug/931175\n";
 msgstr ""
 
-#: ../bin/duplicity:1399
+#: ../bin/duplicity:1401
 #, python-format
 msgid "Last %s backup left a partial set, restarting."
 msgstr ""
 
-#: ../bin/duplicity:1403
+#: ../bin/duplicity:1405
 #, python-format
 msgid "Cleaning up previous partial %s backup set, restarting."
 msgstr ""
 
-#: ../bin/duplicity:1414
+#: ../bin/duplicity:1416
 msgid "Last full backup date:"
 msgstr ""
 
-#: ../bin/duplicity:1416
+#: ../bin/duplicity:1418
 msgid "Last full backup date: none"
 msgstr ""
 
-#: ../bin/duplicity:1418
+#: ../bin/duplicity:1420
 msgid "Last full backup is too old, forcing full backup"
 msgstr ""
 
-#: ../bin/duplicity:1461
+#: ../bin/duplicity:1463
 msgid ""
 "When using symmetric encryption, the signing passphrase must equal the "
 "encryption passphrase."
 msgstr ""
 
-#: ../bin/duplicity:1514
+#: ../bin/duplicity:1516
 msgid "INT intercepted...exiting."
 msgstr ""
 
-#: ../bin/duplicity:1522
+#: ../bin/duplicity:1524
 #, python-format
 msgid "GPG error detail: %s"
 msgstr ""
 
-#: ../bin/duplicity:1532
+#: ../bin/duplicity:1534
 #, python-format
 msgid "User error detail: %s"
 msgstr ""
 
-#: ../bin/duplicity:1542
+#: ../bin/duplicity:1544
 #, python-format
 msgid "Backend error detail: %s"
 msgstr ""
 
-#: ../bin/rdiffdir:56 ../duplicity/commandline.py:233
+#: ../bin/rdiffdir:56 ../duplicity/commandline.py:238
 #, python-format
 msgid "Error opening file %s"
 msgstr ""
@@ -381,33 +385,33 @@
 msgid "File %s already exists, will not overwrite."
 msgstr ""
 
-#: ../duplicity/selection.py:121
+#: ../duplicity/selection.py:119
 #, python-format
 msgid "Skipping socket %s"
 msgstr ""
 
-#: ../duplicity/selection.py:125
+#: ../duplicity/selection.py:123
 #, python-format
 msgid "Error initializing file %s"
 msgstr ""
 
-#: ../duplicity/selection.py:129 ../duplicity/selection.py:150
+#: ../duplicity/selection.py:127 ../duplicity/selection.py:148
 #, python-format
 msgid "Error accessing possibly locked file %s"
 msgstr ""
 
-#: ../duplicity/selection.py:165
+#: ../duplicity/selection.py:163
 #, python-format
 msgid "Warning: base %s doesn't exist, continuing"
 msgstr ""
 
-#: ../duplicity/selection.py:168 ../duplicity/selection.py:186
-#: ../duplicity/selection.py:189
+#: ../duplicity/selection.py:166 ../duplicity/selection.py:184
+#: ../duplicity/selection.py:187
 #, python-format
 msgid "Selecting %s"
 msgstr ""
 
-#: ../duplicity/selection.py:270
+#: ../duplicity/selection.py:268
 #, python-format
 msgid ""
 "Fatal Error: The file specification\n"
@@ -418,14 +422,14 @@
 "pattern (such as '**') which matches the base directory."
 msgstr ""
 
-#: ../duplicity/selection.py:278
+#: ../duplicity/selection.py:276
 #, python-format
 msgid ""
 "Fatal Error while processing expression\n"
 "%s"
 msgstr ""
 
-#: ../duplicity/selection.py:288
+#: ../duplicity/selection.py:286
 #, python-format
 msgid ""
 "Last selection expression:\n"
@@ -435,49 +439,49 @@
 "probably isn't what you meant."
 msgstr ""
 
-#: ../duplicity/selection.py:313
+#: ../duplicity/selection.py:311
 #, python-format
 msgid "Reading filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:316
+#: ../duplicity/selection.py:314
 #, python-format
 msgid "Sorting filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:343
+#: ../duplicity/selection.py:341
 #, python-format
 msgid ""
 "Warning: file specification '%s' in filelist %s\n"
 "doesn't start with correct prefix %s.  Ignoring."
 msgstr ""
 
-#: ../duplicity/selection.py:347
+#: ../duplicity/selection.py:345
 msgid "Future prefix errors will not be logged."
 msgstr ""
 
-#: ../duplicity/selection.py:363
+#: ../duplicity/selection.py:361
 #, python-format
 msgid "Error closing filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:430
+#: ../duplicity/selection.py:428
 #, python-format
 msgid "Reading globbing filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:463
+#: ../duplicity/selection.py:461
 #, python-format
 msgid "Error compiling regular expression %s"
 msgstr ""
 
-#: ../duplicity/selection.py:479
+#: ../duplicity/selection.py:477
 msgid ""
 "Warning: exclude-device-files is not the first selector.\n"
 "This may not be what you intended"
 msgstr ""
 
-#: ../duplicity/commandline.py:70
+#: ../duplicity/commandline.py:68
 #, python-format
 msgid ""
 "Warning: Option %s is pending deprecation and will be removed in a future "
@@ -485,11 +489,16 @@
 "Use of default filenames is strongly suggested."
 msgstr ""
 
+#: ../duplicity/commandline.py:216
+#, python-format
+msgid "Unable to load gio backend: %s"
+msgstr ""
+
 #. Used in usage help to represent a Unix-style path name. Example:
 #. --archive-dir <path>
-#: ../duplicity/commandline.py:254 ../duplicity/commandline.py:264
-#: ../duplicity/commandline.py:281 ../duplicity/commandline.py:347
-#: ../duplicity/commandline.py:549 ../duplicity/commandline.py:765
+#: ../duplicity/commandline.py:259 ../duplicity/commandline.py:269
+#: ../duplicity/commandline.py:286 ../duplicity/commandline.py:352
+#: ../duplicity/commandline.py:554 ../duplicity/commandline.py:770
 msgid "path"
 msgstr ""
 
@@ -499,9 +508,9 @@
 #. --hidden-encrypt-key <gpg_key_id>
 #. Used in usage help to represent an ID for a GnuPG key. Example:
 #. --encrypt-key <gpg_key_id>
-#: ../duplicity/commandline.py:276 ../duplicity/commandline.py:283
-#: ../duplicity/commandline.py:369 ../duplicity/commandline.py:530
-#: ../duplicity/commandline.py:738
+#: ../duplicity/commandline.py:281 ../duplicity/commandline.py:288
+#: ../duplicity/commandline.py:372 ../duplicity/commandline.py:535
+#: ../duplicity/commandline.py:743
 msgid "gpg-key-id"
 msgstr ""
 
@@ -509,42 +518,42 @@
 #. matching one or more files, as described in the documentation.
 #. Example:
 #. --exclude <shell_pattern>
-#: ../duplicity/commandline.py:291 ../duplicity/commandline.py:395
-#: ../duplicity/commandline.py:788
+#: ../duplicity/commandline.py:296 ../duplicity/commandline.py:398
+#: ../duplicity/commandline.py:793
 msgid "shell_pattern"
 msgstr ""
 
 #. Used in usage help to represent the name of a file. Example:
 #. --log-file <filename>
-#: ../duplicity/commandline.py:297 ../duplicity/commandline.py:304
-#: ../duplicity/commandline.py:309 ../duplicity/commandline.py:397
-#: ../duplicity/commandline.py:402 ../duplicity/commandline.py:413
-#: ../duplicity/commandline.py:734
+#: ../duplicity/commandline.py:302 ../duplicity/commandline.py:309
+#: ../duplicity/commandline.py:314 ../duplicity/commandline.py:400
+#: ../duplicity/commandline.py:405 ../duplicity/commandline.py:416
+#: ../duplicity/commandline.py:739
 msgid "filename"
 msgstr ""
 
 #. Used in usage help to represent a regular expression (regexp).
-#: ../duplicity/commandline.py:316 ../duplicity/commandline.py:404
+#: ../duplicity/commandline.py:321 ../duplicity/commandline.py:407
 msgid "regular_expression"
 msgstr ""
 
 #. Used in usage help to represent a time spec for a previous
 #. point in time, as described in the documentation. Example:
 #. duplicity remove-older-than time [options] target_url
-#: ../duplicity/commandline.py:359 ../duplicity/commandline.py:472
-#: ../duplicity/commandline.py:820
+#: ../duplicity/commandline.py:364 ../duplicity/commandline.py:475
+#: ../duplicity/commandline.py:825
 msgid "time"
 msgstr ""
 
 #. Used in usage help. (Should be consistent with the "Options:"
 #. header.) Example:
 #. duplicity [full|incremental] [options] source_dir target_url
-#: ../duplicity/commandline.py:365 ../duplicity/commandline.py:475
-#: ../duplicity/commandline.py:541 ../duplicity/commandline.py:753
+#: ../duplicity/commandline.py:368 ../duplicity/commandline.py:478
+#: ../duplicity/commandline.py:546 ../duplicity/commandline.py:758
 msgid "options"
 msgstr ""
 
-#: ../duplicity/commandline.py:380
+#: ../duplicity/commandline.py:383
 #, python-format
 msgid ""
 "Running in 'ignore errors' mode due to %s; please re-consider if this was "
@@ -552,156 +561,156 @@
 msgstr ""
 
 #. Used in usage help to represent an imap mailbox
-#: ../duplicity/commandline.py:393
+#: ../duplicity/commandline.py:396
 msgid "imap_mailbox"
 msgstr ""
 
-#: ../duplicity/commandline.py:407
+#: ../duplicity/commandline.py:410
 msgid "file_descriptor"
 msgstr ""
 
 #. Used in usage help to represent a desired number of
 #. something. Example:
 #. --num-retries <number>
-#: ../duplicity/commandline.py:418 ../duplicity/commandline.py:440
-#: ../duplicity/commandline.py:452 ../duplicity/commandline.py:458
-#: ../duplicity/commandline.py:496 ../duplicity/commandline.py:501
-#: ../duplicity/commandline.py:505 ../duplicity/commandline.py:579
-#: ../duplicity/commandline.py:748
+#: ../duplicity/commandline.py:421 ../duplicity/commandline.py:443
+#: ../duplicity/commandline.py:455 ../duplicity/commandline.py:461
+#: ../duplicity/commandline.py:499 ../duplicity/commandline.py:504
+#: ../duplicity/commandline.py:508 ../duplicity/commandline.py:584
+#: ../duplicity/commandline.py:753
 msgid "number"
 msgstr ""
 
 #. Used in usage help (noun)
-#: ../duplicity/commandline.py:421
+#: ../duplicity/commandline.py:424
 msgid "backup name"
 msgstr ""
 
 #. noun
-#: ../duplicity/commandline.py:514 ../duplicity/commandline.py:517
-#: ../duplicity/commandline.py:719
+#: ../duplicity/commandline.py:519 ../duplicity/commandline.py:522
+#: ../duplicity/commandline.py:724
 msgid "command"
 msgstr ""
 
-#: ../duplicity/commandline.py:520
+#: ../duplicity/commandline.py:525
 msgid "pyrax|cloudfiles"
 msgstr ""
 
-#: ../duplicity/commandline.py:538
+#: ../duplicity/commandline.py:543
 msgid "paramiko|pexpect"
 msgstr ""
 
-#: ../duplicity/commandline.py:544
+#: ../duplicity/commandline.py:549
 msgid "pem formatted bundle of certificate authorities"
 msgstr ""
 
 #. Used in usage help. Example:
 #. --timeout <seconds>
-#: ../duplicity/commandline.py:554 ../duplicity/commandline.py:782
+#: ../duplicity/commandline.py:559 ../duplicity/commandline.py:787
 msgid "seconds"
 msgstr ""
 
 #. abbreviation for "character" (noun)
-#: ../duplicity/commandline.py:560 ../duplicity/commandline.py:716
+#: ../duplicity/commandline.py:565 ../duplicity/commandline.py:721
 msgid "char"
 msgstr ""
 
-#: ../duplicity/commandline.py:682
+#: ../duplicity/commandline.py:687
 #, python-format
 msgid "Using archive dir: %s"
 msgstr ""
 
-#: ../duplicity/commandline.py:683
+#: ../duplicity/commandline.py:688
 #, python-format
 msgid "Using backup name: %s"
 msgstr ""
 
-#: ../duplicity/commandline.py:690
+#: ../duplicity/commandline.py:695
 #, python-format
 msgid "Command line error: %s"
 msgstr ""
 
-#: ../duplicity/commandline.py:691
+#: ../duplicity/commandline.py:696
 msgid "Enter 'duplicity --help' for help screen."
 msgstr ""
 
 #. Used in usage help to represent a Unix-style path name. Example:
 #. rsync://user[:password]@other_host[:port]//absolute_path
-#: ../duplicity/commandline.py:704
+#: ../duplicity/commandline.py:709
 msgid "absolute_path"
 msgstr ""
 
 #. Used in usage help. Example:
 #. tahoe://alias/some_dir
-#: ../duplicity/commandline.py:708
+#: ../duplicity/commandline.py:713
 msgid "alias"
 msgstr ""
 
 #. Used in help to represent a "bucket name" for Amazon Web
 #. Services' Simple Storage Service (S3). Example:
 #. s3://other.host/bucket_name[/prefix]
-#: ../duplicity/commandline.py:713
+#: ../duplicity/commandline.py:718
 msgid "bucket_name"
 msgstr ""
 
 #. Used in usage help to represent the name of a container in
 #. Amazon Web Services' Cloudfront. Example:
 #. cf+http://container_name
-#: ../duplicity/commandline.py:724
+#: ../duplicity/commandline.py:729
 msgid "container_name"
 msgstr ""
 
 #. noun
-#: ../duplicity/commandline.py:727
+#: ../duplicity/commandline.py:732
 msgid "count"
 msgstr ""
 
 #. Used in usage help to represent the name of a file directory
-#: ../duplicity/commandline.py:730
+#: ../duplicity/commandline.py:735
 msgid "directory"
 msgstr ""
 
 #. Used in usage help, e.g. to represent the name of a code
 #. module. Example:
 #. rsync://user[:password]@other.host[:port]::/module/some_dir
-#: ../duplicity/commandline.py:743
+#: ../duplicity/commandline.py:748
 msgid "module"
 msgstr ""
 
 #. Used in usage help to represent an internet hostname. Example:
 #. ftp://user[:password]@other.host[:port]/some_dir
-#: ../duplicity/commandline.py:757
+#: ../duplicity/commandline.py:762
 msgid "other.host"
 msgstr ""
 
 #. Used in usage help. Example:
 #. ftp://user[:password]@other.host[:port]/some_dir
-#: ../duplicity/commandline.py:761
+#: ../duplicity/commandline.py:766
 msgid "password"
 msgstr ""
 
 #. Used in usage help to represent a TCP port number. Example:
 #. ftp://user[:password]@other.host[:port]/some_dir
-#: ../duplicity/commandline.py:769
+#: ../duplicity/commandline.py:774
 msgid "port"
 msgstr ""
 
 #. Used in usage help. This represents a string to be used as a
 #. prefix to names for backup files created by Duplicity. Example:
 #. s3://other.host/bucket_name[/prefix]
-#: ../duplicity/commandline.py:774
+#: ../duplicity/commandline.py:779
 msgid "prefix"
 msgstr ""
 
 #. Used in usage help to represent a Unix-style path name. Example:
 #. rsync://user[:password]@other.host[:port]/relative_path
-#: ../duplicity/commandline.py:778
+#: ../duplicity/commandline.py:783
 msgid "relative_path"
 msgstr ""
 
 #. Used in usage help to represent the name of a single file
 #. directory or a Unix-style path to a directory. Example:
 #. file:///some_dir
-#: ../duplicity/commandline.py:793
+#: ../duplicity/commandline.py:798
 msgid "some_dir"
 msgstr ""
 
@@ -709,14 +718,14 @@
 #. directory or a Unix-style path to a directory where files will be
 #. coming FROM. Example:
 #. duplicity [full|incremental] [options] source_dir target_url
-#: ../duplicity/commandline.py:799
+#: ../duplicity/commandline.py:804
 msgid "source_dir"
 msgstr ""
 
 #. Used in usage help to represent a URL files will be coming
 #. FROM. Example:
 #. duplicity [restore] [options] source_url target_dir
-#: ../duplicity/commandline.py:804
+#: ../duplicity/commandline.py:809
 msgid "source_url"
 msgstr ""
 
@@ -724,75 +733,75 @@
 #. directory or a Unix-style path to a directory. where files will be
 #. going TO. Example:
 #. duplicity [restore] [options] source_url target_dir
-#: ../duplicity/commandline.py:810
+#: ../duplicity/commandline.py:815
 msgid "target_dir"
 msgstr ""
 
 #. Used in usage help to represent a URL files will be going TO.
 #. Example:
 #. duplicity [full|incremental] [options] source_dir target_url
-#: ../duplicity/commandline.py:815
+#: ../duplicity/commandline.py:820
 msgid "target_url"
 msgstr ""
 
 #. Used in usage help to represent a user name (i.e. login).
 #. Example:
 #. ftp://user[:password]@other.host[:port]/some_dir
-#: ../duplicity/commandline.py:825
+#: ../duplicity/commandline.py:830
 msgid "user"
 msgstr ""
 
 #. Header in usage help
-#: ../duplicity/commandline.py:842
+#: ../duplicity/commandline.py:847
 msgid "Backends and their URL formats:"
 msgstr ""
 
 #. Header in usage help
-#: ../duplicity/commandline.py:867
+#: ../duplicity/commandline.py:872
 msgid "Commands:"
 msgstr ""
 
-#: ../duplicity/commandline.py:891
+#: ../duplicity/commandline.py:896
 #, python-format
 msgid "Specified archive directory '%s' does not exist, or is not a directory"
 msgstr ""
 
-#: ../duplicity/commandline.py:900
+#: ../duplicity/commandline.py:905
 #, python-format
 msgid ""
 "Sign key should be an 8 character hex string, like 'AA0E73D2'.\n"
 "Received '%s' instead."
 msgstr ""
 
-#: ../duplicity/commandline.py:960
+#: ../duplicity/commandline.py:965
 #, python-format
 msgid ""
 "Restore destination directory %s already exists.\n"
 "Will not overwrite."
 msgstr ""
 
-#: ../duplicity/commandline.py:965
+#: ../duplicity/commandline.py:970
 #, python-format
 msgid "Verify directory %s does not exist"
 msgstr ""
 
-#: ../duplicity/commandline.py:971
+#: ../duplicity/commandline.py:976
 #, python-format
 msgid "Backup source directory %s does not exist."
 msgstr ""
 
-#: ../duplicity/commandline.py:1000
+#: ../duplicity/commandline.py:1005
 #, python-format
 msgid "Command line warning: %s"
 msgstr ""
 
-#: ../duplicity/commandline.py:1000
+#: ../duplicity/commandline.py:1005
 msgid ""
 "Selection options --exclude/--include\n"
 "currently work only when backing up,not restoring."
 msgstr ""
 
-#: ../duplicity/commandline.py:1048
+#: ../duplicity/commandline.py:1053
 #, python-format
 msgid ""
 "Bad URL '%s'.\n"
@@ -800,40 +809,64 @@
 "\"file:///usr/local\".  See the man page for more information."
 msgstr ""
 
-#: ../duplicity/commandline.py:1073
+#: ../duplicity/commandline.py:1078
 msgid "Main action: "
 msgstr ""
 
-#: ../duplicity/backend.py:110
+#: ../duplicity/backend.py:87
 #, python-format
 msgid "Import of %s %s"
 msgstr ""
 
-#: ../duplicity/backend.py:212
+#: ../duplicity/backend.py:164
 #, python-format
 msgid "Could not initialize backend: %s"
 msgstr ""
 
+<<<<<<< TREE
 #: ../duplicity/backend.py:373
+=======
+#: ../duplicity/backend.py:319
+#, python-format
+msgid "Attempt %s failed: %s: %s"
+msgstr ""
+
+#: ../duplicity/backend.py:321 ../duplicity/backend.py:351
+#: ../duplicity/backend.py:358
+>>>>>>> MERGE-SOURCE
 #, python-format
 msgid "Backtrace of previous error: %s"
 msgstr ""
 
+<<<<<<< TREE
 #: ../duplicity/backend.py:388
+=======
+#: ../duplicity/backend.py:349
+#, python-format
+msgid "Attempt %s failed. %s: %s"
+msgstr ""
+
+#: ../duplicity/backend.py:360
+>>>>>>> MERGE-SOURCE
 #, python-format
 msgid "Giving up after %s attempts. %s: %s"
 msgstr ""
 
+<<<<<<< TREE
 #: ../duplicity/backend.py:392
 #, python-format
 msgid "Attempt %s failed. %s: %s"
 msgstr ""
 
 #: ../duplicity/backend.py:475
+=======
+#: ../duplicity/backend.py:545 ../duplicity/backend.py:569
+>>>>>>> MERGE-SOURCE
 #, python-format
 msgid "Reading results of '%s'"
 msgstr ""
 
+<<<<<<< TREE
 #: ../duplicity/backend.py:502
 #, python-format
 msgid "Writing %s"
@@ -843,6 +876,28 @@
 #, python-format
 msgid "File %s not found locally after get from backend"
 msgstr ""
+=======
+#: ../duplicity/backend.py:584
+#, python-format
+msgid "Running '%s' failed with code %d (attempt #%d)"
+msgid_plural "Running '%s' failed with code %d (attempt #%d)"
+msgstr[0] ""
+msgstr[1] ""
+
+#: ../duplicity/backend.py:588
+#, python-format
+msgid ""
+"Error is:\n"
+"%s"
+msgstr ""
+
+#: ../duplicity/backend.py:590
+#, python-format
+msgid "Giving up trying to execute '%s' after %d attempt"
+msgid_plural "Giving up trying to execute '%s' after %d attempts"
+msgstr[0] ""
+msgstr[1] ""
+>>>>>>> MERGE-SOURCE
 
 #: ../duplicity/asyncscheduler.py:66
 #, python-format
@@ -880,142 +935,142 @@
 msgid "task execution done (success: %s)"
 msgstr ""
 
-#: ../duplicity/patchdir.py:76 ../duplicity/patchdir.py:81
+#: ../duplicity/patchdir.py:74 ../duplicity/patchdir.py:79
 #, python-format
 msgid "Patching %s"
 msgstr ""
 
-#: ../duplicity/patchdir.py:510
+#: ../duplicity/patchdir.py:508
 #, python-format
 msgid "Error '%s' patching %s"
 msgstr ""
 
-#: ../duplicity/patchdir.py:582
+#: ../duplicity/patchdir.py:581
 #, python-format
 msgid "Writing %s of type %s"
 msgstr ""
 
-#: ../duplicity/collections.py:152 ../duplicity/collections.py:163
+#: ../duplicity/collections.py:150 ../duplicity/collections.py:161
 #, python-format
 msgid "BackupSet.delete: missing %s"
 msgstr ""
 
-#: ../duplicity/collections.py:188
+#: ../duplicity/collections.py:186
 msgid "Fatal Error: No manifests found for most recent backup"
 msgstr ""
 
-#: ../duplicity/collections.py:197
+#: ../duplicity/collections.py:195
 msgid ""
 "Fatal Error: Remote manifest does not match local one.  Either the remote "
 "backup set or the local archive directory has been corrupted."
 msgstr ""
 
-#: ../duplicity/collections.py:205
+#: ../duplicity/collections.py:203
 msgid "Fatal Error: Neither remote nor local manifest is readable."
 msgstr ""
 
-#: ../duplicity/collections.py:315
+#: ../duplicity/collections.py:314
 msgid "Preferring Backupset over previous one!"
 msgstr ""
 
-#: ../duplicity/collections.py:318
+#: ../duplicity/collections.py:317
 #, python-format
 msgid "Ignoring incremental Backupset (start_time: %s; needed: %s)"
 msgstr ""
 
-#: ../duplicity/collections.py:323
+#: ../duplicity/collections.py:322
 #, python-format
 msgid "Added incremental Backupset (start_time: %s / end_time: %s)"
 msgstr ""
 
+#: ../duplicity/collections.py:392
+msgid "Chain start time: "
+msgstr ""
+
 #: ../duplicity/collections.py:393
-msgid "Chain start time: "
+msgid "Chain end time: "
 msgstr ""
 
 #: ../duplicity/collections.py:394
-msgid "Chain end time: "
-msgstr ""
-
-#: ../duplicity/collections.py:395
 #, python-format
 msgid "Number of contained backup sets: %d"
 msgstr ""
 
-#: ../duplicity/collections.py:397
+#: ../duplicity/collections.py:396
 #, python-format
 msgid "Total number of contained volumes: %d"
 msgstr ""
 
-#: ../duplicity/collections.py:399
+#: ../duplicity/collections.py:398
 msgid "Type of backup set:"
 msgstr ""
 
-#: ../duplicity/collections.py:399
+#: ../duplicity/collections.py:398
 msgid "Time:"
 msgstr ""
 
-#: ../duplicity/collections.py:399
+#: ../duplicity/collections.py:398
 msgid "Num volumes:"
 msgstr ""
 
-#: ../duplicity/collections.py:403
+#: ../duplicity/collections.py:402
 msgid "Full"
 msgstr ""
 
-#: ../duplicity/collections.py:406
+#: ../duplicity/collections.py:405
 msgid "Incremental"
 msgstr ""
 
-#: ../duplicity/collections.py:466
+#: ../duplicity/collections.py:465
 msgid "local"
 msgstr ""
 
-#: ../duplicity/collections.py:468
+#: ../duplicity/collections.py:467
 msgid "remote"
 msgstr ""
 
-#: ../duplicity/collections.py:623
+#: ../duplicity/collections.py:622
 msgid "Collection Status"
 msgstr ""
 
-#: ../duplicity/collections.py:625
+#: ../duplicity/collections.py:624
 #, python-format
 msgid "Connecting with backend: %s"
 msgstr ""
 
-#: ../duplicity/collections.py:627
+#: ../duplicity/collections.py:626
 #, python-format
 msgid "Archive dir: %s"
 msgstr ""
 
-#: ../duplicity/collections.py:630
+#: ../duplicity/collections.py:629
 #, python-format
 msgid "Found %d secondary backup chain."
 msgid_plural "Found %d secondary backup chains."
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:635
+#: ../duplicity/collections.py:634
 #, python-format
 msgid "Secondary chain %d of %d:"
 msgstr ""
 
-#: ../duplicity/collections.py:641
+#: ../duplicity/collections.py:640
 msgid "Found primary backup chain with matching signature chain:"
 msgstr ""
 
-#: ../duplicity/collections.py:645
+#: ../duplicity/collections.py:644
 msgid "No backup chains with active signatures found"
 msgstr ""
 
-#: ../duplicity/collections.py:648
+#: ../duplicity/collections.py:647
 #, python-format
 msgid "Also found %d backup set not part of any chain,"
 msgid_plural "Also found %d backup sets not part of any chain,"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:652
+#: ../duplicity/collections.py:651
 #, python-format
 msgid "and %d incomplete backup set."
 msgid_plural "and %d incomplete backup sets."
@@ -1023,95 +1078,95 @@
 msgstr[1] ""
 
 #. "cleanup" is a hard-coded command, so do not translate it
-#: ../duplicity/collections.py:657
+#: ../duplicity/collections.py:656
 msgid "These may be deleted by running duplicity with the \"cleanup\" command."
 msgstr ""
 
-#: ../duplicity/collections.py:660
+#: ../duplicity/collections.py:659
 msgid "No orphaned or incomplete backup sets found."
 msgstr ""
 
-#: ../duplicity/collections.py:676
+#: ../duplicity/collections.py:675
 #, python-format
 msgid "%d file exists on backend"
 msgid_plural "%d files exist on backend"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:683
+#: ../duplicity/collections.py:682
 #, python-format
 msgid "%d file exists in cache"
 msgid_plural "%d files exist in cache"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:735
+#: ../duplicity/collections.py:734
 msgid "Warning, discarding last backup set, because of missing signature file."
 msgstr ""
 
-#: ../duplicity/collections.py:758
+#: ../duplicity/collections.py:757
 msgid "Warning, found the following local orphaned signature file:"
 msgid_plural "Warning, found the following local orphaned signature files:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:767
+#: ../duplicity/collections.py:766
 msgid "Warning, found the following remote orphaned signature file:"
 msgid_plural "Warning, found the following remote orphaned signature files:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:776
+#: ../duplicity/collections.py:775
 msgid "Warning, found signatures but no corresponding backup files"
 msgstr ""
 
-#: ../duplicity/collections.py:780
+#: ../duplicity/collections.py:779
 msgid ""
 "Warning, found incomplete backup sets, probably left from aborted session"
 msgstr ""
 
-#: ../duplicity/collections.py:784
+#: ../duplicity/collections.py:783
 msgid "Warning, found the following orphaned backup file:"
 msgid_plural "Warning, found the following orphaned backup files:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:801
+#: ../duplicity/collections.py:800
 #, python-format
 msgid "Extracting backup chains from list of files: %s"
 msgstr ""
 
-#: ../duplicity/collections.py:811
+#: ../duplicity/collections.py:810
 #, python-format
 msgid "File %s is part of known set"
 msgstr ""
 
-#: ../duplicity/collections.py:814
+#: ../duplicity/collections.py:813
 #, python-format
 msgid "File %s is not part of a known set; creating new set"
 msgstr ""
 
-#: ../duplicity/collections.py:819
+#: ../duplicity/collections.py:818
 #, python-format
 msgid "Ignoring file (rejected by backup set) '%s'"
 msgstr ""
 
-#: ../duplicity/collections.py:833
+#: ../duplicity/collections.py:831
 #, python-format
 msgid "Found backup chain %s"
 msgstr ""
 
-#: ../duplicity/collections.py:838
+#: ../duplicity/collections.py:836
 #, python-format
 msgid "Added set %s to pre-existing chain %s"
 msgstr ""
 
-#: ../duplicity/collections.py:842
+#: ../duplicity/collections.py:840
 #, python-format
 msgid "Found orphaned set %s"
 msgstr ""
 
-#: ../duplicity/collections.py:993
+#: ../duplicity/collections.py:992
 #, python-format
 msgid ""
 "No signature chain for the requested time.  Using oldest available chain, "
@@ -1123,59 +1178,59 @@
 msgid "Error listing directory %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:105 ../duplicity/diffdir.py:395
+#: ../duplicity/diffdir.py:103 ../duplicity/diffdir.py:394
 #, python-format
 msgid "Error %s getting delta for %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:119
+#: ../duplicity/diffdir.py:117
 #, python-format
 msgid "Getting delta of %s and %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:164
+#: ../duplicity/diffdir.py:162
 #, python-format
 msgid "A %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:171
+#: ../duplicity/diffdir.py:169
 #, python-format
 msgid "M %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:193
+#: ../duplicity/diffdir.py:191
 #, python-format
 msgid "Comparing %s and %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:201
+#: ../duplicity/diffdir.py:199
 #, python-format
 msgid "D %s"
 msgstr ""
 
-#: ../duplicity/lazy.py:334
+#: ../duplicity/lazy.py:325
 #, python-format
 msgid "Warning: oldindex %s >= newindex %s"
 msgstr ""
 
-#: ../duplicity/lazy.py:409
+#: ../duplicity/lazy.py:400
 #, python-format
 msgid "Error '%s' processing %s"
 msgstr ""
 
-#: ../duplicity/lazy.py:419
+#: ../duplicity/lazy.py:410
 #, python-format
 msgid "Skipping %s because of previous error"
 msgstr ""
 
-#: ../duplicity/backends/sshbackend.py:26
+#: ../duplicity/backends/sshbackend.py:25
 #, python-format
 msgid ""
 "Warning: Option %s is supported by ssh pexpect backend only and will be "
 "ignored."
 msgstr ""
 
-#: ../duplicity/backends/sshbackend.py:34
+#: ../duplicity/backends/sshbackend.py:33
 #, python-format
 msgid ""
 "Warning: Selected ssh backend '%s' is neither 'paramiko nor 'pexpect'. Will "
@@ -1187,7 +1242,12 @@
 msgid "Connection failed, please check your password: %s"
 msgstr ""
 
-#: ../duplicity/manifest.py:89
+#: ../duplicity/backends/giobackend.py:130
+#, python-format
+msgid "Writing %s"
+msgstr ""
+
+#: ../duplicity/manifest.py:87
 #, python-format
 msgid ""
 "Fatal Error: Backup source host has changed.\n"
@@ -1195,7 +1255,7 @@
 "Previous hostname: %s"
 msgstr ""
 
-#: ../duplicity/manifest.py:96
+#: ../duplicity/manifest.py:94
 #, python-format
 msgid ""
 "Fatal Error: Backup source directory has changed.\n"
@@ -1203,7 +1263,7 @@
 "Previous directory: %s"
 msgstr ""
 
-#: ../duplicity/manifest.py:105
+#: ../duplicity/manifest.py:103
 msgid ""
 "Aborting because you may have accidentally tried to backup two different "
 "data sets to the same remote location, or using the same archive directory.  "
@@ -1211,107 +1271,107 @@
 "seeing this message"
 msgstr ""
 
-#: ../duplicity/manifest.py:211
+#: ../duplicity/manifest.py:209
 msgid "Manifests not equal because different volume numbers"
 msgstr ""
 
-#: ../duplicity/manifest.py:216
+#: ../duplicity/manifest.py:214
 msgid "Manifests not equal because volume lists differ"
 msgstr ""
 
-#: ../duplicity/manifest.py:221
+#: ../duplicity/manifest.py:219
 msgid "Manifests not equal because hosts or directories differ"
 msgstr ""
 
-#: ../duplicity/manifest.py:368
+#: ../duplicity/manifest.py:366
 msgid "Warning, found extra Volume identifier"
 msgstr ""
 
-#: ../duplicity/manifest.py:394
+#: ../duplicity/manifest.py:392
 msgid "Other is not VolumeInfo"
 msgstr ""
 
-#: ../duplicity/manifest.py:397
+#: ../duplicity/manifest.py:395
 msgid "Volume numbers don't match"
 msgstr ""
 
-#: ../duplicity/manifest.py:400
+#: ../duplicity/manifest.py:398
 msgid "start_indicies don't match"
 msgstr ""
 
-#: ../duplicity/manifest.py:403
+#: ../duplicity/manifest.py:401
 msgid "end_index don't match"
 msgstr ""
 
-#: ../duplicity/manifest.py:410
+#: ../duplicity/manifest.py:408
 msgid "Hashes don't match"
 msgstr ""
 
-#: ../duplicity/path.py:224 ../duplicity/path.py:283
+#: ../duplicity/path.py:222 ../duplicity/path.py:281
 #, python-format
 msgid "Warning: %s has negative mtime, treating as 0."
 msgstr ""
 
-#: ../duplicity/path.py:348
+#: ../duplicity/path.py:346
 msgid "Difference found:"
 msgstr ""
 
-#: ../duplicity/path.py:354
+#: ../duplicity/path.py:352
 #, python-format
 msgid "New file %s"
 msgstr ""
 
-#: ../duplicity/path.py:357
+#: ../duplicity/path.py:355
 #, python-format
 msgid "File %s is missing"
 msgstr ""
 
-#: ../duplicity/path.py:360
+#: ../duplicity/path.py:358
 #, python-format
 msgid "File %%s has type %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:366 ../duplicity/path.py:392
+#: ../duplicity/path.py:364 ../duplicity/path.py:390
 #, python-format
 msgid "File %%s has permissions %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:371
+#: ../duplicity/path.py:369
 #, python-format
 msgid "File %%s has mtime %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:379
+#: ../duplicity/path.py:377
 #, python-format
 msgid "Data for file %s is different"
 msgstr ""
 
-#: ../duplicity/path.py:387
+#: ../duplicity/path.py:385
 #, python-format
 msgid "Symlink %%s points to %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:396
+#: ../duplicity/path.py:394
 #, python-format
 msgid "Device file %%s has numbers %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:556
+#: ../duplicity/path.py:554
 #, python-format
 msgid "Making directory %s"
 msgstr ""
 
-#: ../duplicity/path.py:566
+#: ../duplicity/path.py:564
 #, python-format
 msgid "Deleting %s"
 msgstr ""
 
-#: ../duplicity/path.py:575
+#: ../duplicity/path.py:573
 #, python-format
 msgid "Touching %s"
 msgstr ""
 
-#: ../duplicity/path.py:582
+#: ../duplicity/path.py:580
 #, python-format
 msgid "Deleting tree %s"
 msgstr ""
@@ -1325,7 +1385,7 @@
 msgid "GPG process %d terminated before wait()"
 msgstr ""
 
-#: ../duplicity/dup_time.py:49
+#: ../duplicity/dup_time.py:47
 #, python-format
 msgid ""
 "Bad interval string \"%s\"\n"
@@ -1335,7 +1395,7 @@
 "page for more information."
 msgstr ""
 
-#: ../duplicity/dup_time.py:55
+#: ../duplicity/dup_time.py:53
 #, python-format
 msgid ""
 "Bad time string \"%s\"\n"
@@ -1388,12 +1448,12 @@
 msgid "Cleanup of temporary directory %s failed - this is probably a bug."
 msgstr ""
 
-#: ../duplicity/util.py:93
+#: ../duplicity/util.py:91
 #, python-format
 msgid "IGNORED_ERROR: Warning: ignoring error as requested: %s: %s"
 msgstr ""
 
-#: ../duplicity/util.py:150
+#: ../duplicity/util.py:148
 #, python-format
 msgid "Releasing lockfile %s"
 msgstr ""

=== modified file 'setup.py'
--- setup.py	2014-05-11 11:50:12 +0000
+++ setup.py	2014-06-14 13:58:30 +0000
@@ -28,8 +28,8 @@
 
 version_string = "$version"
 
-if sys.version_info[:2] < (2, 6):
-    print "Sorry, duplicity requires version 2.6 or later of python"
+if sys.version_info[:2] < (2,4):
+    print "Sorry, duplicity requires version 2.4 or later of python"
     sys.exit(1)
 
 incdir_list = libdir_list = None
@@ -53,6 +53,8 @@
                 'README',
                 'README-REPO',
                 'README-LOG',
+                'tarfile-LICENSE',
+                'tarfile-CHANGES',
                 'CHANGELOG']),
               ]
 
@@ -79,7 +81,7 @@
 
         # make symlinks for test data
         if build_cmd.build_lib != top_dir:
-            for path in ['testfiles.tar.gz', 'gnupg']:
+            for path in ['testfiles.tar.gz', 'testtar.tar', 'gnupg']:
                 src = os.path.join(top_dir, 'testing', path)
                 target = os.path.join(build_cmd.build_lib, 'testing', path)
                 try:
@@ -145,7 +147,7 @@
                                libraries=["rsync"])],
       scripts = ['bin/rdiffdir', 'bin/duplicity'],
       data_files = data_files,
-      tests_require = ['lockfile', 'mock', 'pexpect'],
+      tests_require = ['lockfile', 'mock'],
       test_suite = 'testing',
       cmdclass={'test': TestCommand,
                 'install': InstallCommand,

=== added file 'tarfile-CHANGES'
--- tarfile-CHANGES	1970-01-01 00:00:00 +0000
+++ tarfile-CHANGES	2014-06-14 13:58:30 +0000
@@ -0,0 +1,3 @@
+tarfile.py is a copy of python2.7's tarfile.py.
+
+No changes besides 2.4 compatibility have been made.

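The note in tarfile-CHANGES above means this branch ships its own copy of the stdlib tarfile module instead of relying on whatever the interpreter provides. How that copy is selected is not part of the visible diff; a minimal sketch of the usual vendoring pattern, assuming the bundled copy lives at duplicity/tarfile.py (an assumption, not shown here), would be:

    # Sketch only: the module path "duplicity.tarfile" is an assumption,
    # not something this diff shows.  Prefer the interpreter's own tarfile
    # when it is recent enough, otherwise fall back to the bundled copy.
    import sys

    if sys.version_info[:2] >= (2, 7):
        import tarfile
    else:
        from duplicity import tarfile

Per the tarfile-CHANGES note, both branches expose the same tarfile API, so calling code does not need to know which copy it got.
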
=== added file 'tarfile-LICENSE'
--- tarfile-LICENSE	1970-01-01 00:00:00 +0000
+++ tarfile-LICENSE	2014-06-14 13:58:30 +0000
@@ -0,0 +1,92 @@
+irdu-backup uses tarfile, written by Lars Gustäbel.  The following
+notice was included in the tarfile distribution:
+
+-----------------------------------------------------------------
+      tarfile    - python module for accessing TAR archives
+
+                   Lars Gustäbel <lars@xxxxxxxxxxxx>
+-----------------------------------------------------------------
+
+
+Description
+-----------
+
+The tarfile module provides a set of functions for accessing  TAR
+format archives. Because  it is written  in pure Python,  it does
+not require any platform specific functions. GZIP  compressed TAR
+archives are seamlessly supported.
+
+
+Requirements
+------------
+
+tarfile needs at least Python version 2.2.
+(For a tarfile for Python 1.5.2 take a look on the webpage.)
+
+
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+IMPORTANT NOTE (*NIX only)
+--------------------------
+
+The addition of character and block devices is enabled by a C
+extension module (_tarfile.c), because Python does not yet
+provide the major() and minor() macros.
+Currently Linux and FreeBSD are implemented. If your OS is not
+supported, then please send me a patch.
+
+!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+
+
+Download
+--------
+
+You can download the newest version at URL:
+http://www.gustaebel.de/lars/tarfile/
+
+
+Installation
+------------
+
+1. extract the tarfile-x.x.x.tar.gz archive to a temporary folder
+2. type "python setup.py install"
+
+
+Contact
+-------
+
+Suggestions, comments, bug reports and patches to:
+lars@xxxxxxxxxxxx
+
+
+License
+-------
+
+Copyright (C) 2002 Lars Gustäbel <lars@xxxxxxxxxxxx>
+All rights reserved.
+
+Permission  is  hereby granted,  free  of charge,  to  any person
+obtaining a  copy of  this software  and associated documentation
+files  (the  "Software"),  to   deal  in  the  Software   without
+restriction,  including  without limitation  the  rights to  use,
+copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies  of  the  Software,  and to  permit  persons  to  whom the
+Software  is  furnished  to  do  so,  subject  to  the  following
+conditions:
+
+The above copyright  notice and this  permission notice shall  be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS  IS", WITHOUT WARRANTY OF ANY  KIND,
+EXPRESS OR IMPLIED, INCLUDING  BUT NOT LIMITED TO  THE WARRANTIES
+OF  MERCHANTABILITY,  FITNESS   FOR  A  PARTICULAR   PURPOSE  AND
+NONINFRINGEMENT.  IN  NO  EVENT SHALL  THE  AUTHORS  OR COPYRIGHT
+HOLDERS  BE LIABLE  FOR ANY  CLAIM, DAMAGES  OR OTHER  LIABILITY,
+WHETHER  IN AN  ACTION OF  CONTRACT, TORT  OR OTHERWISE,  ARISING
+FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+OTHER DEALINGS IN THE SOFTWARE.
+
+
+README Version
+--------------
+
+$Id: tarfile-LICENSE,v 1.1 2002/10/29 01:49:46 bescoto Exp $

=== modified file 'testing/__init__.py'
--- testing/__init__.py	2014-04-22 15:33:00 +0000
+++ testing/__init__.py	2014-06-14 13:58:30 +0000
@@ -24,23 +24,18 @@
 import unittest
 
 from duplicity import backend
-from duplicity import globals
 from duplicity import log
 
 _testing_dir = os.path.dirname(os.path.abspath(__file__))
 _top_dir = os.path.dirname(_testing_dir)
 _overrides_dir = os.path.join(_testing_dir, 'overrides')
-_bin_dir = os.path.join(_testing_dir, 'overrides', 'bin')
 
 # Adjust python path for duplicity and override modules
-sys.path = [_overrides_dir, _top_dir, _bin_dir] + sys.path
+sys.path = [_overrides_dir, _top_dir] + sys.path
 
 # Also set PYTHONPATH for any subprocesses
 os.environ['PYTHONPATH'] = _overrides_dir + ":" + _top_dir + ":" + os.environ.get('PYTHONPATH', '')
 
-# And PATH for any subprocesses
-os.environ['PATH'] = _bin_dir + ":" + os.environ.get('PATH', '')
-
 # Now set some variables that help standardize test behavior
 os.environ['LANG'] = ''
 os.environ['GNUPGHOME'] = os.path.join(_testing_dir, 'gnupg')
@@ -49,7 +44,6 @@
 os.environ['TZ'] = 'US/Central'
 time.tzset()
 
-
 class DuplicityTestCase(unittest.TestCase):
 
     sign_key = '56538CCF'
@@ -72,6 +66,7 @@
         os.chdir(_testing_dir)
 
     def tearDown(self):
+        from duplicity import globals
         for key in self.savedEnviron:
             self._update_env(key, self.savedEnviron[key])
         for key in self.savedGlobals:
@@ -96,7 +91,17 @@
         self._update_env(key, value)
 
     def set_global(self, key, value):
+        from duplicity import globals
         assert hasattr(globals, key)
         if key not in self.savedGlobals:
             self.savedGlobals[key] = getattr(globals, key)
         setattr(globals, key, value)
+
+# Automatically add all submodules into this namespace.  Helps python2.4
+# unittest work.
+if sys.version_info < (2, 5,):
+    for module in os.listdir(_top_dir):
+        if module == '__init__.py' or module[-3:] != '.py':
+            continue
+        __import__(module[:-3], locals(), globals())
+    del module

=== modified file 'testing/functional/__init__.py'
--- testing/functional/__init__.py	2014-04-26 12:54:37 +0000
+++ testing/functional/__init__.py	2014-06-14 13:58:30 +0000
@@ -18,15 +18,13 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-from future_builtins import map
-
 import os
-import pexpect
 import time
 import unittest
 
 from duplicity import backend
-from .. import DuplicityTestCase
+from duplicity import pexpect
+from testing import DuplicityTestCase
 
 
 class CmdError(Exception):
@@ -83,15 +81,13 @@
             child.expect('passphrase.*:')
             child.sendline(passphrase)
         child.wait()
-
         return_val = child.exitstatus
-        #output = child.read()
-        #print "Ran duplicity command: ", cmdline, "\n with return_val: ", return_val, "\n and output:\n", output
 
+        #print "Ran duplicity command: ", cmdline, "\n with return_val: ", child.exitstatus
         if fail:
-            self.assertEqual(30, return_val)
+            self.assertEqual(30, child.exitstatus)
         elif return_val:
-            raise CmdError(return_val)
+            raise CmdError(child.exitstatus)
 
     def backup(self, type, input_dir, options=[], **kwargs):
         """Run duplicity backup to default directory"""

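The import changes in this file, and the matching ones in the test modules that follow, replace the dotted relative form with absolute package paths because the "from .. import X" syntax only exists from Python 2.5 on (PEP 328) and is a SyntaxError on 2.4; the dropped "from future_builtins import map" likewise needs Python 2.6 or later. A short illustration, assuming the repository root (which contains the testing package) is on sys.path:

    # On Python 2.4 the explicit relative form fails at compile time:
    #   from .. import DuplicityTestCase     # needs Python >= 2.5 (PEP 328)
    # The absolute form used by this branch works on 2.4 and later, provided
    # the directory containing the "testing" package is importable:
    from testing import DuplicityTestCase
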
=== modified file 'testing/functional/test_badupload.py'
--- testing/functional/test_badupload.py	2014-04-21 19:21:45 +0000
+++ testing/functional/test_badupload.py	2014-06-14 13:58:30 +0000
@@ -22,7 +22,7 @@
 
 import unittest
 
-from . import CmdError, FunctionalTestCase
+from testing.functional import CmdError, FunctionalTestCase
 
 
 class BadUploadTest(FunctionalTestCase):
@@ -36,8 +36,8 @@
         try:
             self.backup("full", "testfiles/dir1", options=["--skip-volume=1"])
             self.fail()
-        except CmdError as e:
-            self.assertEqual(e.exit_status, 44, str(e))
+        except CmdError, e:
+            self.assertEqual(e.exit_status, 44)
 
 if __name__ == "__main__":
     unittest.main()

=== modified file 'testing/functional/test_cleanup.py'
--- testing/functional/test_cleanup.py	2014-04-20 05:58:47 +0000
+++ testing/functional/test_cleanup.py	2014-06-14 13:58:30 +0000
@@ -21,7 +21,7 @@
 
 import unittest
 
-from . import FunctionalTestCase
+from testing.functional import FunctionalTestCase
 
 
 class CleanupTest(FunctionalTestCase):

=== modified file 'testing/functional/test_final.py'
--- testing/functional/test_final.py	2014-04-20 05:58:47 +0000
+++ testing/functional/test_final.py	2014-06-14 13:58:30 +0000
@@ -23,7 +23,7 @@
 import unittest
 
 from duplicity import path
-from . import CmdError, FunctionalTestCase
+from testing.functional import CmdError, FunctionalTestCase
 
 
 class FinalTest(FunctionalTestCase):

=== modified file 'testing/functional/test_log.py'
--- testing/functional/test_log.py	2014-04-20 05:58:47 +0000
+++ testing/functional/test_log.py	2014-06-14 13:58:30 +0000
@@ -21,7 +21,7 @@
 import unittest
 import os
 
-from . import FunctionalTestCase
+from testing.functional import FunctionalTestCase
 
 
 class LogTest(FunctionalTestCase):

=== modified file 'testing/functional/test_rdiffdir.py'
--- testing/functional/test_rdiffdir.py	2014-04-20 05:58:47 +0000
+++ testing/functional/test_rdiffdir.py	2014-06-14 13:58:30 +0000
@@ -22,7 +22,7 @@
 import unittest, os
 
 from duplicity import path
-from . import FunctionalTestCase
+from testing.functional import FunctionalTestCase
 
 
 class RdiffdirTest(FunctionalTestCase):

=== modified file 'testing/functional/test_restart.py'
--- testing/functional/test_restart.py	2014-04-20 05:58:47 +0000
+++ testing/functional/test_restart.py	2014-06-14 13:58:30 +0000
@@ -24,7 +24,7 @@
 import subprocess
 import unittest
 
-from . import FunctionalTestCase
+from testing.functional import FunctionalTestCase
 
 
 class RestartTest(FunctionalTestCase):
@@ -325,7 +325,7 @@
         self.backup("full", "testfiles/blocktartest")
         # Create an exact clone of the snapshot folder in the sigtar already.
         # Permissions and mtime must match.
-        os.mkdir("testfiles/snapshot", 0o755)
+        os.mkdir("testfiles/snapshot", 0755)
         os.utime("testfiles/snapshot", (1030384548, 1030384548))
         # Adjust the sigtar.gz file to have a bogus second snapshot/ entry
         # at the beginning.

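The 0o755 to 0755 change in test_restart.py above is the same 2.4-compatibility story: the 0o prefix for octal literals only appeared in Python 2.6, while the bare leading-zero form is accepted by every Python 2 release (though not by Python 3). Purely as an illustration, not what the branch does, the mode could also be written without either octal spelling:

    # Equivalent to 0755 / 0o755 without octal-literal syntax at all:
    import os
    os.mkdir("testfiles/snapshot", 493)        # 493 decimal == 0755 octal
    # or computed at run time:
    # os.mkdir("testfiles/snapshot", int("755", 8))
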
=== modified file 'testing/gnupg/trustdb.gpg'
Binary files testing/gnupg/trustdb.gpg	2014-04-17 22:26:39 +0000 and testing/gnupg/trustdb.gpg	2014-06-14 13:58:30 +0000 differ
=== modified file 'testing/manual/backendtest' (properties changed: +x to -x)
--- testing/manual/backendtest	2014-05-11 11:50:12 +0000
+++ testing/manual/backendtest	2014-06-14 13:58:30 +0000
@@ -1,4 +1,7 @@
+<<<<<<< TREE
 #!/usr/bin/env python2
+=======
+>>>>>>> MERGE-SOURCE
 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
 #
 # Copyright 2002 Ben Escoto <ben@xxxxxxxxxxx>
@@ -20,214 +23,297 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-import os
-import sys
-import unittest
+import config
+import sys, unittest, os
+sys.path.insert(0, "../")
 
-_top_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..')
-sys.path.insert(0, _top_dir)
+import duplicity.backend
 try:
-    from testing.manual import config
-except ImportError:
-    # It's OK to not have copied config.py.tmpl over yet, if user is just
-    # calling us directly to test a specific backend.  If they aren't, we'll
-    # fail later when config.blah is used.
-    pass
-from testing.unit.test_backend_instance import BackendInstanceBase
-import duplicity.backend
-
-# undo the overrides support that our testing framework adds
-sys.path = [x for x in sys.path if '/overrides' not in x]
-os.environ['PATH'] = ':'.join([x for x in os.environ['PATH'].split(':')
-                               if '/overrides' not in x])
-os.environ['PYTHONPATH'] = ':'.join([x for x in os.environ['PYTHONPATH'].split(':')
-                                     if '/overrides' not in x])
-
-
-class ManualBackendBase(BackendInstanceBase):
-
-    url_string = None
-    password = None
-
-    def setUp(self):
-        super(ManualBackendBase, self).setUp()
-        self.set_global('num_retries', 1)
-        self.setBackendInfo()
-        if self.password is not None:
-            self.set_environ("FTP_PASSWORD", self.password)
-        if self.url_string is not None:
-            self.backend = duplicity.backend.get_backend_object(self.url_string)
-
-        # Clear out backend first
-        if self.backend is not None:
-            if hasattr(self.backend, '_delete_list'):
-                self.backend._delete_list(self.backend._list())
-            else:
-                for x in self.backend._list():
-                    self.backend._delete(x)
-
-    def setBackendInfo(self):
-        pass
-
-
-class sshParamikoTest(ManualBackendBase):
-    def setBackendInfo(self):
-        from duplicity.backends import _ssh_paramiko
-        duplicity.backend._backends['ssh'] = _ssh_paramiko.SSHParamikoBackend
-        self.set_global('use_scp', False)
-        self.url_string = config.ssh_url
-        self.password = config.ssh_password
-
-
-class sshParamikoScpTest(ManualBackendBase):
-    def setBackendInfo(self):
-        from duplicity.backends import _ssh_paramiko
-        duplicity.backend._backends['ssh'] = _ssh_paramiko.SSHParamikoBackend
-        self.set_global('use_scp', True)
-        self.url_string = config.ssh_url
-        self.password = config.ssh_password
-
-
-class sshPexpectTest(ManualBackendBase):
-    def setBackendInfo(self):
-        from duplicity.backends import _ssh_pexpect
-        duplicity.backend._backends['ssh'] = _ssh_pexpect.SSHPExpectBackend
-        self.set_global('use_scp', False)
-        self.url_string = config.ssh_url
-        self.password = config.ssh_password
-
-
-class sshPexpectScpTest(ManualBackendBase):
-    def setBackendInfo(self):
-        from duplicity.backends import _ssh_pexpect
-        duplicity.backend._backends['ssh'] = _ssh_pexpect.SSHPExpectBackend
-        self.set_global('use_scp', True)
-        self.url_string = config.ssh_url
-        self.password = config.ssh_password
-
-
-class ftpTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.ftp_url
-        self.password = config.ftp_password
-
-
-class ftpsTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.ftp_url.replace('ftp://', 'ftps://') if config.ftp_url else None
-        self.password = config.ftp_password
-
-
-class gsTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.gs_url
-        self.set_environ("GS_ACCESS_KEY_ID", config.gs_access_key)
-        self.set_environ("GS_SECRET_ACCESS_KEY", config.gs_secret_key)
-
-
-class s3SingleTest(ManualBackendBase):
-    def setBackendInfo(self):
-        from duplicity.backends import _boto_single
-        duplicity.backend._backends['s3+http'] = _boto_single.BotoBackend
-        self.set_global('s3_use_new_style', True)
-        self.set_environ("AWS_ACCESS_KEY_ID", config.s3_access_key)
-        self.set_environ("AWS_SECRET_ACCESS_KEY", config.s3_secret_key)
-        self.url_string = config.s3_url
-
-
-class s3MultiTest(ManualBackendBase):
-    def setBackendInfo(self):
-        from duplicity.backends import _boto_multi
-        duplicity.backend._backends['s3+http'] = _boto_multi.BotoBackend
-        self.set_global('s3_use_new_style', True)
-        self.set_environ("AWS_ACCESS_KEY_ID", config.s3_access_key)
-        self.set_environ("AWS_SECRET_ACCESS_KEY", config.s3_secret_key)
-        self.url_string = config.s3_url
-
-
-class cfCloudfilesTest(ManualBackendBase):
-    def setBackendInfo(self):
-        from duplicity.backends import _cf_cloudfiles
-        duplicity.backend._backends['cf+http'] = _cf_cloudfiles.CloudFilesBackend
-        self.set_environ("CLOUDFILES_USERNAME", config.cf_username)
-        self.set_environ("CLOUDFILES_APIKEY", config.cf_api_key)
-        self.url_string = config.cf_url
-
-
-class cfPyraxTest(ManualBackendBase):
-    def setBackendInfo(self):
-        from duplicity.backends import _cf_pyrax
-        duplicity.backend._backends['cf+http'] = _cf_pyrax.PyraxBackend
-        self.set_environ("CLOUDFILES_USERNAME", config.cf_username)
-        self.set_environ("CLOUDFILES_APIKEY", config.cf_api_key)
-        self.url_string = config.cf_url
-
-
-class swiftTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.swift_url
-        self.set_environ("SWIFT_USERNAME", config.swift_username)
-        self.set_environ("SWIFT_PASSWORD", config.swift_password)
-        self.set_environ("SWIFT_TENANTNAME", config.swift_tenant)
-        # Assumes you're just using the same storage as your cloudfiles config above
-        self.set_environ("SWIFT_AUTHURL", 'https://identity.api.rackspacecloud.com/v2.0/')
-        self.set_environ("SWIFT_AUTHVERSION", '2')
-
-
-class megaTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.mega_url
-        self.password = config.mega_password
-
-
-class webdavTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.webdav_url
-        self.password = config.webdav_password
-
-
-class webdavsTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.webdavs_url
-        self.password = config.webdavs_password
-        self.set_global('ssl_no_check_certificate', True)
-
-
-class gdocsTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.gdocs_url
-        self.password = config.gdocs_password
-
-
-class dpbxTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.dpbx_url
-
-
-class imapTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = config.imap_url
-        self.set_environ("IMAP_PASSWORD", config.imap_password)
-        self.set_global('imap_mailbox', 'deja-dup-testing')
-
-
-class gioSSHTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = 'gio+' + config.ssh_url if config.ssh_url else None
-        self.password = config.ssh_password
-
-
-class gioFTPTest(ManualBackendBase):
-    def setBackendInfo(self):
-        self.url_string = 'gio+' + config.ftp_url if config.ftp_url else None
-        self.password = config.ftp_password
-
+    import duplicity.backends.giobackend
+    gio_available = True
+except Exception:
+    gio_available = False
+from duplicity.errors import * #@UnusedWildImport
+from duplicity import path, file_naming, dup_time, globals, gpg
+
+config.setup()
+
+class UnivTest:
+    """Contains methods that help test any backend"""
+    def del_tmp(self):
+        """Clean the remote test location, then recreate testfiles/output"""
+        config.set_environ("FTP_PASSWORD", self.password)
+        backend = duplicity.backend.get_backend(self.url_string)
+        backend.delete(backend.list())
+        backend.close()
+        # delete and recreate the local testfiles/output directory
+        assert not os.system("rm -rf testfiles/output")
+        assert not os.system("mkdir testfiles/output")
+
+    def test_basic(self):
+        """Test basic backend operations"""
+        if not self.url_string:
+            print "No URL for test %s...skipping... " % self.my_test_id,
+            return 0
+        config.set_environ("FTP_PASSWORD", self.password)
+        self.del_tmp()
+        self.try_basic(duplicity.backend.get_backend(self.url_string))
+
+    def test_fileobj_ops(self):
+        """Test fileobj operations"""
+        if not self.url_string:
+            print "No URL for test %s...skipping... " % self.my_test_id,
+            return 0
+        config.set_environ("FTP_PASSWORD", self.password)
+        self.try_fileobj_ops(duplicity.backend.get_backend(self.url_string))
+
+    def try_basic(self, backend):
+        """Try basic operations with given backend.
+
+        Requires that the backend be empty initially and that all
+        operations are permitted.
+
+        """
+        def cmp_list(l):
+            """Assert that backend.list is same as l"""
+            blist = backend.list()
+            blist.sort()
+            l.sort()
+            assert blist == l, \
+                   ("Got list: %s  Wanted: %s\n" % (repr(blist), repr(l)))
+
+        # Identify test that's running
+        print self.my_test_id, "... ",
+
+        assert not os.system("rm -rf testfiles/backend_tmp")
+        assert not os.system("mkdir testfiles/backend_tmp")
+
+        regpath = path.Path("testfiles/various_file_types/regular_file")
+        normal_file = "testfile"
+        colonfile = ("file%swith.%scolons_-and%s%setc" %
+                     ((globals.time_separator,) * 4))
+        tmpregpath = path.Path("testfiles/backend_tmp/regfile")
+
+        # Test list and put
+        cmp_list([])
+        backend.put(regpath, normal_file)
+        cmp_list([normal_file])
+        backend.put(regpath, colonfile)
+        cmp_list([normal_file, colonfile])
+
+        # Test get
+        regfilebuf = regpath.open("rb").read()
+        backend.get(colonfile, tmpregpath)
+        backendbuf = tmpregpath.open("rb").read()
+        assert backendbuf == regfilebuf
+
+        # Test delete
+        backend.delete([colonfile, normal_file])
+        cmp_list([])
+
+    def try_fileobj_filename(self, backend, filename):
+        """Use get_fileobj_write and get_fileobj_read on filename around"""
+        fout = backend.get_fileobj_write(filename)
+        fout.write("hello, world!")
+        fout.close()
+        assert filename in backend.list()
+
+        fin = backend.get_fileobj_read(filename)
+        buf = fin.read()
+        fin.close()
+        assert buf == "hello, world!", buf
+
+        backend.delete([filename])
+
+    def try_fileobj_ops(self, backend):
+        """Test above try_fileobj_filename with a few filenames"""
+        # Must set dup_time strings because they are used by file_naming
+        dup_time.setcurtime(2000)
+        dup_time.setprevtime(1000)
+        # Also set profile for encryption
+        globals.gpg_profile = gpg.GPGProfile(passphrase = "foobar")
+
+        filename1 = file_naming.get('full', manifest = 1, gzipped = 1)
+        self.try_fileobj_filename(backend, filename1)
+
+        filename2 = file_naming.get('new-sig', encrypted = 1)
+        self.try_fileobj_filename(backend, filename2)
+
+
+class LocalTest(unittest.TestCase, UnivTest):
+    """ Test the Local backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "local"
+    url_string = config.file_url
+    password = config.file_password
+
+
+class scpTest(unittest.TestCase, UnivTest):
+    """ Test the SSH backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "ssh/scp"
+    url_string = config.ssh_url
+    password = config.ssh_password
+
+
+class ftpTest(unittest.TestCase, UnivTest):
+    """ Test the ftp backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "ftp"
+    url_string = config.ftp_url
+    password = config.ftp_password
+
+
+class ftpsTest(unittest.TestCase, UnivTest):
+    """ Test the ftp backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "ftps"
+    url_string = config.ftp_url
+    password = config.ftp_password
+
+
+class gsModuleTest(unittest.TestCase, UnivTest):
+    """ Test the gs module backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "gs/boto"
+    url_string = config.gs_url
+    password = None
+
+
+class rsyncAbsPathTest(unittest.TestCase, UnivTest):
+    """ Test the rsync abs path backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "rsync_abspath"
+    url_string = config.rsync_abspath_url
+    password = config.rsync_password
+
+
+class rsyncRelPathTest(unittest.TestCase, UnivTest):
+    """ Test the rsync relative path backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "rsync_relpath"
+    url_string = config.rsync_relpath_url
+    password = config.rsync_password
+
+
+class rsyncModuleTest(unittest.TestCase, UnivTest):
+    """ Test the rsync module backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "rsync_module"
+    url_string = config.rsync_module_url
+    password = config.rsync_password
+
+
+class s3ModuleTest(unittest.TestCase, UnivTest):
+    """ Test the s3 module backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "s3/boto"
+    url_string = config.s3_url
+    password = None
+
+
+class webdavModuleTest(unittest.TestCase, UnivTest):
+    """ Test the webdav module backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "webdav"
+    url_string = config.webdav_url
+    password = config.webdav_password
+
+
+class webdavsModuleTest(unittest.TestCase, UnivTest):
+    """ Test the webdavs module backend """
+    def setUp(self):
+        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+    def tearDown(self):
+        assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+    my_test_id = "webdavs"
+    url_string = config.webdavs_url
+    password = config.webdavs_password
+
+
+if gio_available:
+    class GIOTest(UnivTest):
+        """ Generic gio module backend class """
+        def setUp(self):
+            duplicity.backend.force_backend(duplicity.backends.giobackend.GIOBackend)
+            assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
+
+        def tearDown(self):
+            duplicity.backend.force_backend(None)
+            assert not os.system("rm -rf testfiles tempdir temp2.tar")
+
+
+    class gioFileModuleTest(GIOTest, unittest.TestCase):
+        """ Test the gio file module backend """
+        my_test_id = "gio/file"
+        url_string = config.file_url
+        password = config.file_password
+
+
+    class gioSSHModuleTest(GIOTest, unittest.TestCase):
+        """ Test the gio ssh module backend """
+        my_test_id = "gio/ssh"
+        url_string = config.ssh_url
+        password = config.ssh_password
+
+
+    class gioFTPModuleTest(GIOTest, unittest.TestCase):
+        """ Test the gio ftp module backend """
+        my_test_id = "gio/ftp"
+        url_string = config.ftp_url
+        password = config.ftp_password
 
 if __name__ == "__main__":
-    defaultTest = None
-    if len(sys. argv) > 1:
-        class manualTest(ManualBackendBase):
-            def setBackendInfo(self):
-                self.url_string = sys.argv[1]
-        defaultTest = 'manualTest'
-    unittest.main(argv=[sys.argv[0]], defaultTest=defaultTest)
+    unittest.main()

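For orientation, the calls exercised by try_basic() above (put, list, get, delete, close) are the plain duplicity backend API, and the per-backend test classes only vary the my_test_id/url_string/password attributes consumed by UnivTest. A minimal stand-alone sketch of the same round trip against the local file backend, assuming the "file:///tmp/testdup" target from the config template below and mirroring the initialization done in config.setup(); this is an illustration, not part of the diff:

import os
from duplicity import backend, log, path

log.setup()
log.setverbosity(log.WARNING)
backend.import_backends()                        # register the built-in backends

os.system("mkdir -p /tmp/testdup")               # placeholder local target
b = backend.get_backend("file:///tmp/testdup")

src = path.Path("/etc/hostname")                 # any small regular file will do
b.put(src, "testfile")                           # upload under a remote name
assert "testfile" in b.list()                    # the listing should now show it
b.get("testfile", path.Path("/tmp/testfile.copy"))  # fetch it back
b.delete(["testfile"])                           # and clean up again
b.close()

Any of the *_url values from the template can stand in for the file URL; the UnivTest classes do exactly this, plus the FTP_PASSWORD handling for backends that need a password.
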
=== modified file 'testing/manual/config.py.tmpl'
--- testing/manual/config.py.tmpl	2014-04-28 02:49:39 +0000
+++ testing/manual/config.py.tmpl	2014-06-14 13:58:30 +0000
@@ -19,8 +19,48 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+import sys, os
+testing = os.path.dirname(sys.argv[0])
+newpath = os.path.abspath(os.path.join(testing, "../../."))
+sys.path.insert(0, newpath)
+
+import gettext
+gettext.install('duplicity', codeset='utf8')
+
+from duplicity import globals
+from duplicity import log
+from duplicity import backend
+from duplicity.backends import localbackend
+
+# config for duplicity unit tests
+
+# verbosity, default is log.WARNING
+verbosity = log.WARNING
+
+# to test GPG and friends
+# these must be without passwords
+encrypt_key1 = ""
+encrypt_key2 = ""
+
+# password required on this one
+sign_key = ""
+sign_passphrase = ""
+
 # URLs for testing
-# NOTE: if the ***_url is None, the test is skipped
+# NOTE: if the ***_url is None or "" the test
+# is skipped and is noted in the test results.
+
+file_url = "file:///tmp/testdup"
+file_password = None
+
+# To set up rsyncd for testing:
+# /etc/rsyncd.conf contains
+# [testdup]
+# path = /tmp/testdup
+# comment = Test area for duplicity
+# read only = false
+#
+# NOTE: chmod 777 /tmp/testdup
 
 ftp_url = None
 ftp_password = None
@@ -41,23 +81,6 @@
 s3_access_key = None
 s3_secret_key = None
 
-cf_url = None
-cf_username = None
-cf_api_key = None
-
-swift_url = None
-swift_tenant = None
-swift_username = None
-swift_password = None
-
-dpbx_url = None
-
-imap_url = None
-imap_password = None
-
-mega_url = None
-mega_password = None
-
 webdav_url = None
 webdav_password = None
 
@@ -66,3 +89,48 @@
 
 gdocs_url = None
 gdocs_password = None
+
+def setup():
+    """ setup for unit tests """
+    # The following enables remote debugging in Eclipse with Pydev.
+    # Adjust the path to your location and version of Eclipse and Pydev.
+    # Leave these lines commented out to run normally; if uncommented, this
+    # process will hang at pydevd.settrace() until the remote debugger attaches.
+#    pysrc = "/opt/Aptana Studio 2/plugins/org.python.pydev.debug_2.1.0.2011052613/pysrc/"
+#    sys.path.append(pysrc)
+#    import pydevd #@UnresolvedImport
+#    pydevd.settrace()
+    # end remote debugger startup
+
+    log.setup()
+    log.setverbosity(verbosity)
+    globals.print_statistics = 0
+
+    globals.num_retries = 2
+
+    backend.import_backends()
+
+    set_environ("FTP_PASSWORD", None)
+    set_environ("PASSPHRASE", None)
+    if gs_access_key:
+        set_environ("GS_ACCESS_KEY_ID", gs_access_key)
+        set_environ("GS_SECRET_ACCESS_KEY", gs_secret_key)
+    else:
+        set_environ("GS_ACCESS_KEY_ID", None)
+        set_environ("GS_SECRET_ACCESS_KEY", None)
+    if s3_access_key:
+        set_environ("AWS_ACCESS_KEY_ID", s3_access_key)
+        set_environ("AWS_SECRET_ACCESS_KEY", s3_secret_key)
+    else:
+        set_environ("AWS_ACCESS_KEY_ID", None)
+        set_environ("AWS_SECRET_ACCESS_KEY", None)
+
+
+def set_environ(varname, value):
+    if value is not None:
+        os.environ[varname] = value
+    else:
+        try:
+            del os.environ[varname]
+        except Exception:
+            pass

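The template is presumably copied to config.py next to the manual tests, with real URLs filled in only for the backends that should actually be exercised; anything left at None or "" is reported as skipped, as the note near the top of the file says. A sketch of a filled-in excerpt, with placeholder host and credential values that are not part of this merge:

# testing/manual/config.py (excerpt), illustrative values only
file_url = "file:///tmp/testdup"
file_password = None

# exercise the ftp/ftps tests against a locally reachable server
ftp_url = "ftp://testuser@localhost/testdup"     # placeholder account and host
ftp_password = "testpass"                        # placeholder

# everything left at None keeps its test skipped
webdav_url = None
webdav_password = None
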
=== renamed file 'testing/overrides/bin/hsi' => 'testing/overrides/bin/hsi.THIS'
=== renamed file 'testing/overrides/bin/lftp' => 'testing/overrides/bin/lftp.THIS'
=== renamed file 'testing/overrides/bin/ncftpget' => 'testing/overrides/bin/ncftpget.THIS'
=== renamed file 'testing/overrides/bin/ncftpls' => 'testing/overrides