
duplicity-team team mailing list archive

[Merge] lp:~mterry/duplicity/backend-unification into lp:duplicity

 

Michael Terry has proposed merging lp:~mterry/duplicity/backend-unification into lp:duplicity.

Requested reviews:
  duplicity-team (duplicity-team)

For more details, see:
https://code.launchpad.net/~mterry/duplicity/backend-unification/+merge/216764

Reorganize and simplify backend code.  Specifically:

- Formalized the expected API between backends and duplicity.  See the new file duplicity/backends/README for the instructions I've given backend authors.

- Added tests for our backend wrapper class as well as tests for individual backends.  For several backends where an external command does all the heavy lifting (hsi, tahoe, ftp), I've added small mock commands so that we can test them locally.  This doesn't truly test our integration with those commands, but it at least lets us test the backend glue code itself.
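
The mock-command idea can be sketched like this (the script contents and the `run_command` helper are illustrative stand-ins, not the actual test code in the branch):

```python
import os
import subprocess
import tempfile

# Write a tiny stand-in for an external command (e.g. hsi or tahoe) so the
# glue code that shells out can be exercised without the real tool installed.
fake_cmd = tempfile.NamedTemporaryFile(mode='w', suffix='.sh', delete=False)
fake_cmd.write('echo "listing: file1 file2"\nexit 0\n')
fake_cmd.close()

def run_command(commandline):
    # Minimal version of the subprocess glue a command-based backend relies
    # on: returns (exit code, stdout, stderr).
    p = subprocess.Popen(commandline, shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    return p.returncode, stdout.decode(), stderr.decode()

code, out, err = run_command('sh ' + fake_cmd.name)
os.unlink(fake_cmd.name)
print(code, out.strip())  # → 0 listing: file1 file2
```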

- Removed a lot of duplicate and unused code which backends were using (or not using).  This branch drops 700 lines of code (~20%) in duplicity/backends!

- Simplified expectations of backends.  Our wrapper code now does all the retrying and all the exception handling.  Backends can 'fire and forget', trusting our wrapper to give the user a reasonable error message.  Obviously, backends can still add more detail and produce nicer error messages, but they don't *have* to.
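
A stripped-down sketch of that wrapper-owned retry contract (names like `put`/`_put` mirror the branch, but the retry count, waits, and error reporting here are simplified assumptions, not the shipped code):

```python
import time

num_retries = 3  # stand-in for globals.num_retries

def retry(operation, fatal=True):
    # Decorator factory: the wrapper, not the backend, owns retries and the
    # final error report.  'operation' is used for error messages in the
    # real code; it is unused in this sketch.
    def outer(fn):
        def inner(self, *args):
            for n in range(1, num_retries + 1):
                try:
                    return fn(self, *args)
                except Exception as e:
                    if n == num_retries:
                        if fatal:
                            raise SystemExit("Giving up after %d attempts: %s" % (n, e))
                        return None
                    time.sleep(0)  # real code waits 30s/90s between attempts
        return inner
    return outer

class Wrapper(object):
    def __init__(self, backend):
        self.backend = backend

    @retry('put', fatal=False)
    def put(self, path):
        # The backend just raises on failure; no retry logic of its own.
        return self.backend._put(path)

class FlakyBackend(object):
    def __init__(self):
        self.calls = 0
    def _put(self, path):
        self.calls += 1
        if self.calls < 2:
            raise IOError("transient failure")
        return "stored %s" % path

w = Wrapper(FlakyBackend())
print(w.put("vol1.difftar"))  # → stored vol1.difftar
```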

- Separated the backend classes from our wrapper class, so there is no possibility of namespace collision.  All our API methods use a single leading underscore; anything else (zero or two underscores) is reserved for the backend class's own use.
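
The single-underscore convention can be illustrated with a toy wrapper and backend (hypothetical classes, not the branch's code):

```python
class ExampleBackend(object):
    def _list(self):                 # single underscore: API method, called
        return self.__fetch_names()  # by the wrapper

    def __fetch_names(self):         # double underscore: backend-private and
        return ['vol1', 'vol2']      # name-mangled, so it cannot collide

class Wrapper(object):
    def __init__(self, backend):
        self.backend = backend

    def list(self):
        # The wrapper only ever looks for single-underscore methods.
        if hasattr(self.backend, '_list'):
            return self.backend._list()
        raise NotImplementedError()

print(Wrapper(ExampleBackend()).list())  # → ['vol1', 'vol2']
```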

- Added the concept of a 'backend prefix', which is used by the par2 and gio backends to provide generic support for 'scheme+' in URLs -- like par2+ or gio+.  I've since marked the '--gio' flag as deprecated, in favor of 'gio+'.  Now you can even nest such backends, like par2+gio+file://blah/blah.
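
The prefix mechanism can be sketched roughly like this (a simplified model of the registries in duplicity/backend.py; the real code parses full URLs and handles errors):

```python
_backends = {}
_backend_prefixes = {}

def register_backend(scheme, factory):
    _backends[scheme] = factory

def register_backend_prefix(prefix, factory):
    _backend_prefixes[prefix] = factory

def get_backend_object(url):
    for prefix, factory in _backend_prefixes.items():
        if url.startswith(prefix + '+'):
            # Hand the inner URL to the meta backend; it may recurse,
            # which is what allows nesting like par2+gio+file://...
            return factory(url[len(prefix) + 1:])
    scheme = url.split('://', 1)[0]
    return _backends[scheme](url)

class FileBackend(object):
    def __init__(self, url):
        self.url = url

class Par2Backend(object):
    def __init__(self, inner_url):
        self.inner = get_backend_object(inner_url)

register_backend('file', FileBackend)
register_backend_prefix('par2', Par2Backend)

b = get_backend_object('par2+file://backup/dir')
print(type(b.inner).__name__)  # → FileBackend
```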

- The switch that controls which cloudfiles backend is used had a typo.  I fixed it, but I'm not sure I should have: if we haven't had complaints, maybe we can just drop the old backend.

- I manually tested all the backends we have (except hsi and tahoe -- but those are simple wrappers around commands, and I did test those via mocks as described above).  I also added a bunch more manual backend tests to ./testing/manual/backendtest.py, which can now be run to test all the targets you have configured in config.py, or be passed a URL that it will use for testing (useful for backend authors).
-- 
https://code.launchpad.net/~mterry/duplicity/backend-unification/+merge/216764
Your team duplicity-team is requested to review the proposed merge of lp:~mterry/duplicity/backend-unification into lp:duplicity.
=== modified file 'bin/duplicity'
--- bin/duplicity	2014-04-19 19:54:54 +0000
+++ bin/duplicity	2014-04-28 02:49:55 +0000
@@ -289,8 +289,6 @@
 
     def validate_block(orig_size, dest_filename):
         info = backend.query_info([dest_filename])[dest_filename]
-        if 'size' not in info:
-            return # backend didn't know how to query size
         size = info['size']
         if size is None:
             return # error querying file

=== modified file 'duplicity/backend.py'
--- duplicity/backend.py	2014-04-25 23:53:46 +0000
+++ duplicity/backend.py	2014-04-28 02:49:55 +0000
@@ -24,6 +24,7 @@
 intended to be used by the backends themselves.
 """
 
+import errno
 import os
 import sys
 import socket
@@ -31,6 +32,7 @@
 import re
 import getpass
 import gettext
+import types
 import urllib
 import urlparse
 
@@ -38,11 +40,14 @@
 from duplicity import file_naming
 from duplicity import globals
 from duplicity import log
+from duplicity import path
 from duplicity import progress
+from duplicity import util
 
 from duplicity.util import exception_traceback
 
-from duplicity.errors import BackendException, FatalBackendError
+from duplicity.errors import BackendException
+from duplicity.errors import FatalBackendException
 from duplicity.errors import TemporaryLoadException
 from duplicity.errors import ConflictingScheme
 from duplicity.errors import InvalidBackendURL
@@ -54,8 +59,8 @@
 # todo: this should really NOT be done here
 socket.setdefaulttimeout(globals.timeout)
 
-_forced_backend = None
 _backends = {}
+_backend_prefixes = {}
 
 # These URL schemes have a backend with a notion of an RFC "network location".
 # The 'file' and 's3+http' schemes should not be in this list.
@@ -69,7 +74,6 @@
 uses_netloc = ['ftp',
                'ftps',
                'hsi',
-               'rsync',
                's3',
                'scp', 'ssh', 'sftp',
                'webdav', 'webdavs',
@@ -96,8 +100,6 @@
         if fn.endswith("backend.py"):
             fn = fn[:-3]
             imp = "duplicity.backends.%s" % (fn,)
-            # ignore gio as it is explicitly loaded in commandline.parse_cmdline_options()
-            if fn == "giobackend": continue
             try:
                 __import__(imp)
                 res = "Succeeded"
@@ -110,14 +112,6 @@
             continue
 
 
-def force_backend(backend):
-    """
-    Forces the use of a particular backend, regardless of schema
-    """
-    global _forced_backend
-    _forced_backend = backend
-
-
 def register_backend(scheme, backend_factory):
     """
     Register a given backend factory responsible for URL:s with the
@@ -144,6 +138,32 @@
     _backends[scheme] = backend_factory
 
 
+def register_backend_prefix(scheme, backend_factory):
+    """
+    Register a given backend factory responsible for URL:s with the
+    given scheme prefix.
+
+    The backend must be a callable which, when called with a URL as
+    the single parameter, returns an object implementing the backend
+    protocol (i.e., a subclass of Backend).
+
+    Typically the callable will be the Backend subclass itself.
+
+    This function is not thread-safe and is intended to be called
+    during module importation or start-up.
+    """
+    global _backend_prefixes
+
+    assert callable(backend_factory), "backend factory must be callable"
+
+    if scheme in _backend_prefixes:
+        raise ConflictingScheme("the prefix %s already has a backend "
+                                "associated with it"
+                                "" % (scheme,))
+
+    _backend_prefixes[scheme] = backend_factory
+
+
 def is_backend_url(url_string):
     """
     @return Whether the given string looks like a backend URL.
@@ -157,9 +177,9 @@
         return False
 
 
-def get_backend(url_string):
+def get_backend_object(url_string):
     """
-    Instantiate a backend suitable for the given URL, or return None
+    Find the right backend class instance for the given URL, or return None
     if the given string looks like a local path rather than a URL.
 
     Raise InvalidBackendURL if the URL is not a valid URL.
@@ -167,22 +187,44 @@
     if not is_backend_url(url_string):
         return None
 
+    global _backends, _backend_prefixes
+
     pu = ParsedUrl(url_string)
-
-    # Implicit local path
     assert pu.scheme, "should be a backend url according to is_backend_url"
 
-    global _backends, _forced_backend
-
-    if _forced_backend:
-        return _forced_backend(pu)
-    elif not pu.scheme in _backends:
-        raise UnsupportedBackendScheme(url_string)
-    else:
-        try:
-            return _backends[pu.scheme](pu)
-        except ImportError:
-            raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1]))
+    factory = None
+
+    for prefix in _backend_prefixes:
+        if url_string.startswith(prefix + '+'):
+            factory = _backend_prefixes[prefix]
+            pu = ParsedUrl(url_string[len(prefix) + 1:])
+            break
+
+    if factory is None:
+        if not pu.scheme in _backends:
+            raise UnsupportedBackendScheme(url_string)
+        else:
+            factory = _backends[pu.scheme]
+
+    try:
+        return factory(pu)
+    except ImportError:
+        raise BackendException(_("Could not initialize backend: %s") % str(sys.exc_info()[1]))
+
+
+def get_backend(url_string):
+    """
+    Instantiate a backend suitable for the given URL, or return None
+    if the given string looks like a local path rather than a URL.
+
+    Raise InvalidBackendURL if the URL is not a valid URL.
+    """
+    if globals.use_gio:
+        url_string = 'gio+' + url_string
+    obj = get_backend_object(url_string)
+    if obj:
+        obj = BackendWrapper(obj)
+    return obj
 
 
 class ParsedUrl:
@@ -296,165 +338,74 @@
     # Replace the full network location with the stripped copy.
     return parsed_url.geturl().replace(parsed_url.netloc, straight_netloc, 1)
 
-
-# Decorator for backend operation functions to simplify writing one that
-# retries.  Make sure to add a keyword argument 'raise_errors' to your function
-# and if it is true, raise an exception on an error.  If false, fatal-log it.
-def retry(fn):
-    def iterate(*args):
-        for n in range(1, globals.num_retries):
-            try:
-                kwargs = {"raise_errors" : True}
-                return fn(*args, **kwargs)
-            except Exception as e:
-                log.Warn(_("Attempt %s failed: %s: %s")
-                         % (n, e.__class__.__name__, str(e)))
-                log.Debug(_("Backtrace of previous error: %s")
-                          % exception_traceback())
-                if isinstance(e, TemporaryLoadException):
-                    time.sleep(30) # wait longer before trying again
-                else:
-                    time.sleep(10) # wait a bit before trying again
-        # Now try one last time, but fatal-log instead of raising errors
-        kwargs = {"raise_errors" : False}
-        return fn(*args, **kwargs)
-    return iterate
-
-# same as above, a bit dumber and always dies fatally if last trial fails
-# hence no need for the raise_errors var ;), we really catch everything here
-# as we don't know what the underlying code comes up with and we really *do*
-# want to retry globals.num_retries times under all circumstances
-def retry_fatal(fn):
-    def _retry_fatal(self, *args):
-        try:
-            n = 0
-            for n in range(1, globals.num_retries):
+def _get_code_from_exception(backend, operation, e):
+    if isinstance(e, BackendException) and e.code != log.ErrorCode.backend_error:
+        return e.code
+    elif hasattr(backend, '_error_code'):
+        return backend._error_code(operation, e) or log.ErrorCode.backend_error
+    elif hasattr(e, 'errno'):
+        # A few backends return such errors (local, paramiko, etc)
+        if e.errno == errno.EACCES:
+            return log.ErrorCode.backend_permission_denied
+        elif e.errno == errno.ENOENT:
+            return log.ErrorCode.backend_not_found
+        elif e.errno == errno.ENOSPC:
+            return log.ErrorCode.backend_no_space
+    return log.ErrorCode.backend_error
+
+def retry(operation, fatal=True):
+    # Decorators with arguments introduce a new level of indirection.  So we
+    # have to return a decorator function (which itself returns a function!)
+    def outer_retry(fn):
+        def inner_retry(self, *args):
+            for n in range(1, globals.num_retries + 1):
                 try:
-                    self.retry_count = n
                     return fn(self, *args)
-                except FatalBackendError as e:
+                except FatalBackendException as e:
                     # die on fatal errors
                     raise e
                 except Exception as e:
                     # retry on anything else
-                    log.Warn(_("Attempt %s failed. %s: %s")
-                             % (n, e.__class__.__name__, str(e)))
                     log.Debug(_("Backtrace of previous error: %s")
                               % exception_traceback())
-                    time.sleep(10) # wait a bit before trying again
-        # final trial, die on exception
-            self.retry_count = n+1
-            return fn(self, *args)
-        except Exception as e:
-            log.Debug(_("Backtrace of previous error: %s")
-                        % exception_traceback())
-            log.FatalError(_("Giving up after %s attempts. %s: %s")
-                         % (self.retry_count, e.__class__.__name__, str(e)),
-                          log.ErrorCode.backend_error)
-        self.retry_count = 0
-
-    return _retry_fatal
+                    at_end = n == globals.num_retries
+                    code = _get_code_from_exception(self.backend, operation, e)
+                    if code == log.ErrorCode.backend_not_found:
+                        # If we tried to do something, but the file just isn't there,
+                        # no need to retry.
+                        at_end = True
+                    if at_end and fatal:
+                        def make_filename(f):
+                            if isinstance(f, path.ROPath):
+                                return util.escape(f.name)
+                            else:
+                                return util.escape(f)
+                        extra = ' '.join([operation] + [make_filename(x) for x in args if x])
+                        log.FatalError(_("Giving up after %s attempts. %s: %s")
+                                       % (n, e.__class__.__name__,
+                                          str(e)), code=code, extra=extra)
+                    else:
+                        log.Warn(_("Attempt %s failed. %s: %s")
+                                 % (n, e.__class__.__name__, str(e)))
+                    if not at_end:
+                        if isinstance(e, TemporaryLoadException):
+                            time.sleep(90) # wait longer before trying again
+                        else:
+                            time.sleep(30) # wait a bit before trying again
+                        if hasattr(self.backend, '_retry_cleanup'):
+                            self.backend._retry_cleanup()
+
+        return inner_retry
+    return outer_retry
+
 
 class Backend(object):
     """
-    Represents a generic duplicity backend, capable of storing and
-    retrieving files.
-
-    Concrete sub-classes are expected to implement:
-
-      - put
-      - get
-      - list
-      - delete
-      - close (if needed)
-
-    Optional:
-
-      - move
+    See README in backends directory for information on how to write a backend.
     """
-    
     def __init__(self, parsed_url):
         self.parsed_url = parsed_url
 
-    def put(self, source_path, remote_filename = None):
-        """
-        Transfer source_path (Path object) to remote_filename (string)
-
-        If remote_filename is None, get the filename from the last
-        path component of pathname.
-        """
-        raise NotImplementedError()
-
-    def move(self, source_path, remote_filename = None):
-        """
-        Move source_path (Path object) to remote_filename (string)
-
-        Same as put(), but unlinks source_path in the process.  This allows the
-        local backend to do this more efficiently using rename.
-        """
-        self.put(source_path, remote_filename)
-        source_path.delete()
-
-    def get(self, remote_filename, local_path):
-        """Retrieve remote_filename and place in local_path"""
-        raise NotImplementedError()
-
-    def list(self):
-        """
-        Return list of filenames (byte strings) present in backend
-        """
-        def tobytes(filename):
-            "Convert a (maybe unicode) filename to bytes"
-            if isinstance(filename, unicode):
-                # There shouldn't be any encoding errors for files we care
-                # about, since duplicity filenames are ascii.  But user files
-                # may be in the same directory.  So just replace characters.
-                return filename.encode(sys.getfilesystemencoding(), 'replace')
-            else:
-                return filename
-
-        if hasattr(self, '_list'):
-            # Make sure that duplicity internals only ever see byte strings
-            # for filenames, no matter what the backend thinks it is talking.
-            return [tobytes(x) for x in self._list()]
-        else:
-            raise NotImplementedError()
-
-    def delete(self, filename_list):
-        """
-        Delete each filename in filename_list, in order if possible.
-        """
-        raise NotImplementedError()
-
-    # Should never cause FatalError.
-    # Returns a dictionary of dictionaries.  The outer dictionary maps
-    # filenames to metadata dictionaries.  Supported metadata are:
-    #
-    # 'size': if >= 0, size of file
-    #         if -1, file is not found
-    #         if None, error querying file
-    #
-    # Returned dictionary is guaranteed to contain a metadata dictionary for
-    # each filename, but not all metadata are guaranteed to be present.
-    def query_info(self, filename_list, raise_errors=True):
-        """
-        Return metadata about each filename in filename_list
-        """
-        info = {}
-        if hasattr(self, '_query_list_info'):
-            info = self._query_list_info(filename_list)
-        elif hasattr(self, '_query_file_info'):
-            for filename in filename_list:
-                info[filename] = self._query_file_info(filename)
-
-        # Fill out any missing entries (may happen if backend has no support
-        # or its query_list support is lazy)
-        for filename in filename_list:
-            if filename not in info:
-                info[filename] = {}
-
-        return info
-
     """ use getpass by default, inherited backends may overwrite this behaviour """
     use_getpass = True
 
@@ -493,27 +444,7 @@
         else:
             return commandline
 
-    """
-    DEPRECATED:
-    run_command(_persist) - legacy wrappers for subprocess_popen(_persist)
-    """
-    def run_command(self, commandline):
-        return self.subprocess_popen(commandline)
-    def run_command_persist(self, commandline):
-        return self.subprocess_popen_persist(commandline)
-
-    """
-    DEPRECATED:
-    popen(_persist) - legacy wrappers for subprocess_popen(_persist)
-    """
-    def popen(self, commandline):
-        result, stdout, stderr = self.subprocess_popen(commandline)
-        return stdout
-    def popen_persist(self, commandline):
-        result, stdout, stderr = self.subprocess_popen_persist(commandline)
-        return stdout
-
-    def _subprocess_popen(self, commandline):
+    def __subprocess_popen(self, commandline):
         """
         For internal use.
         Execute the given command line, interpreted as a shell command.
@@ -525,6 +456,10 @@
 
         return p.returncode, stdout, stderr
 
+    """ a dictionary for breaking exceptions, syntax is
+        { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """
+    popen_breaks = {}
+
     def subprocess_popen(self, commandline):
         """
         Execute the given command line with error check.
@@ -534,54 +469,179 @@
         """
         private = self.munge_password(commandline)
         log.Info(_("Reading results of '%s'") % private)
-        result, stdout, stderr = self._subprocess_popen(commandline)
+        result, stdout, stderr = self.__subprocess_popen(commandline)
         if result != 0:
-            raise BackendException("Error running '%s'" % private)
-        return result, stdout, stderr
-
-    """ a dictionary for persist breaking exceptions, syntax is
-        { 'command' : [ code1, code2 ], ... } see ftpbackend for an example """
-    popen_persist_breaks = {}
-
-    def subprocess_popen_persist(self, commandline):
-        """
-        Execute the given command line with error check.
-        Retries globals.num_retries times with 30s delay.
-        Returns int Exitcode, string StdOut, string StdErr
-
-        Raise a BackendException on failure.
-        """
-        private = self.munge_password(commandline)
-
-        for n in range(1, globals.num_retries+1):
-            # sleep before retry
-            if n > 1:
-                time.sleep(30)
-            log.Info(_("Reading results of '%s'") % private)
-            result, stdout, stderr = self._subprocess_popen(commandline)
-            if result == 0:
-                return result, stdout, stderr
-
             try:
                 m = re.search("^\s*([\S]+)", commandline)
                 cmd = m.group(1)
-                ignores = self.popen_persist_breaks[ cmd ]
+                ignores = self.popen_breaks[ cmd ]
                 ignores.index(result)
                 """ ignore a predefined set of error codes """
                 return 0, '', ''
             except (KeyError, ValueError):
-                pass
-
-            log.Warn(ngettext("Running '%s' failed with code %d (attempt #%d)",
-                              "Running '%s' failed with code %d (attempt #%d)", n) %
-                               (private, result, n))
-            if stdout or stderr:
-                    log.Warn(_("Error is:\n%s") % stderr + (stderr and stdout and "\n") + stdout)
-
-        log.Warn(ngettext("Giving up trying to execute '%s' after %d attempt",
-                          "Giving up trying to execute '%s' after %d attempts",
-                          globals.num_retries) % (private, globals.num_retries))
-        raise BackendException("Error running '%s'" % private)
+                raise BackendException("Error running '%s': returned %d, with output:\n%s" %
+                                       (private, result, stdout + '\n' + stderr))
+        return result, stdout, stderr
+
+
+class BackendWrapper(object):
+    """
+    Represents a generic duplicity backend, capable of storing and
+    retrieving files.
+    """
+    
+    def __init__(self, backend):
+        self.backend = backend
+
+    def __do_put(self, source_path, remote_filename):
+        if hasattr(self.backend, '_put'):
+            log.Info(_("Writing %s") % remote_filename)
+            self.backend._put(source_path, remote_filename)
+        else:
+            raise NotImplementedError()
+
+    @retry('put', fatal=True)
+    def put(self, source_path, remote_filename=None):
+        """
+        Transfer source_path (Path object) to remote_filename (string)
+
+        If remote_filename is None, get the filename from the last
+        path component of pathname.
+        """
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+        self.__do_put(source_path, remote_filename)
+
+    @retry('move', fatal=True)
+    def move(self, source_path, remote_filename=None):
+        """
+        Move source_path (Path object) to remote_filename (string)
+
+        Same as put(), but unlinks source_path in the process.  This allows the
+        local backend to do this more efficiently using rename.
+        """
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+        if hasattr(self.backend, '_move'):
+            if self.backend._move(source_path, remote_filename) is not False:
+                source_path.setdata()
+                return
+        self.__do_put(source_path, remote_filename)
+        source_path.delete()
+
+    @retry('get', fatal=True)
+    def get(self, remote_filename, local_path):
+        """Retrieve remote_filename and place in local_path"""
+        if hasattr(self.backend, '_get'):
+            self.backend._get(remote_filename, local_path)
+            if not local_path.exists():
+                raise BackendException(_("File %s not found locally after get "
+                                         "from backend") % util.ufn(local_path.name))
+            local_path.setdata()
+        else:
+            raise NotImplementedError()
+
+    @retry('list', fatal=True)
+    def list(self):
+        """
+        Return list of filenames (byte strings) present in backend
+        """
+        def tobytes(filename):
+            "Convert a (maybe unicode) filename to bytes"
+            if isinstance(filename, unicode):
+                # There shouldn't be any encoding errors for files we care
+                # about, since duplicity filenames are ascii.  But user files
+                # may be in the same directory.  So just replace characters.
+                return filename.encode(sys.getfilesystemencoding(), 'replace')
+            else:
+                return filename
+
+        if hasattr(self.backend, '_list'):
+            # Make sure that duplicity internals only ever see byte strings
+            # for filenames, no matter what the backend thinks it is talking.
+            return [tobytes(x) for x in self.backend._list()]
+        else:
+            raise NotImplementedError()
+
+    def delete(self, filename_list):
+        """
+        Delete each filename in filename_list, in order if possible.
+        """
+        assert type(filename_list) is not types.StringType
+        if hasattr(self.backend, '_delete_list'):
+            self._do_delete_list(filename_list)
+        elif hasattr(self.backend, '_delete'):
+            for filename in filename_list:
+                self._do_delete(filename)
+        else:
+            raise NotImplementedError()
+
+    @retry('delete', fatal=False)
+    def _do_delete_list(self, filename_list):
+        self.backend._delete_list(filename_list)
+
+    @retry('delete', fatal=False)
+    def _do_delete(self, filename):
+        self.backend._delete(filename)
+
+    # Should never cause FatalError.
+    # Returns a dictionary of dictionaries.  The outer dictionary maps
+    # filenames to metadata dictionaries.  Supported metadata are:
+    #
+    # 'size': if >= 0, size of file
+    #         if -1, file is not found
+    #         if None, error querying file
+    #
+    # Returned dictionary is guaranteed to contain a metadata dictionary for
+    # each filename, and all metadata are guaranteed to be present.
+    def query_info(self, filename_list):
+        """
+        Return metadata about each filename in filename_list
+        """
+        info = {}
+        if hasattr(self.backend, '_query_list'):
+            info = self._do_query_list(filename_list)
+            if info is None:
+                info = {}
+        elif hasattr(self.backend, '_query'):
+            for filename in filename_list:
+                info[filename] = self._do_query(filename)
+
+        # Fill out any missing entries (may happen if backend has no support
+        # or its query_list support is lazy)
+        for filename in filename_list:
+            if filename not in info or info[filename] is None:
+                info[filename] = {}
+            for metadata in ['size']:
+                info[filename].setdefault(metadata, None)
+
+        return info
+
+    @retry('query', fatal=False)
+    def _do_query_list(self, filename_list):
+        info = self.backend._query_list(filename_list)
+        if info is None:
+            info = {}
+        return info
+
+    @retry('query', fatal=False)
+    def _do_query(self, filename):
+        try:
+            return self.backend._query(filename)
+        except Exception as e:
+            code = _get_code_from_exception(self.backend, 'query', e)
+            if code == log.ErrorCode.backend_not_found:
+                return {'size': -1}
+            else:
+                raise e
+
+    def close(self):
+        """
+        Close the backend, releasing any resources held and
+        invalidating any file objects obtained from the backend.
+        """
+        if hasattr(self.backend, '_close'):
+            self.backend._close()
 
     def get_fileobj_read(self, filename, parseresults = None):
         """
@@ -598,37 +658,6 @@
         tdp.setdata()
         return tdp.filtered_open_with_delete("rb")
 
-    def get_fileobj_write(self, filename,
-                          parseresults = None,
-                          sizelist = None):
-        """
-        Return fileobj opened for writing, which will cause the file
-        to be written to the backend on close().
-
-        The file will be encoded as specified in parseresults (or as
-        read from the filename), and stored in a temp file until it
-        can be copied over and deleted.
-
-        If sizelist is not None, it should be set to an empty list.
-        The number of bytes will be inserted into the list.
-        """
-        if not parseresults:
-            parseresults = file_naming.parse(filename)
-            assert parseresults, u"Filename %s not correctly parsed" % util.ufn(filename)
-        tdp = dup_temp.new_tempduppath(parseresults)
-
-        def close_file_hook():
-            """This is called when returned fileobj is closed"""
-            self.put(tdp, filename)
-            if sizelist is not None:
-                tdp.setdata()
-                sizelist.append(tdp.getsize())
-            tdp.delete()
-
-        fh = dup_temp.FileobjHooked(tdp.filtered_open("wb"))
-        fh.addhook(close_file_hook)
-        return fh
-
     def get_data(self, filename, parseresults = None):
         """
         Retrieve a file from backend, process it, return contents.
@@ -637,18 +666,3 @@
         buf = fin.read()
         assert not fin.close()
         return buf
-
-    def put_data(self, buffer, filename, parseresults = None):
-        """
-        Put buffer into filename on backend after processing.
-        """
-        fout = self.get_fileobj_write(filename, parseresults)
-        fout.write(buffer)
-        assert not fout.close()
-
-    def close(self):
-        """
-        Close the backend, releasing any resources held and
-        invalidating any file objects obtained from the backend.
-        """
-        pass

=== added file 'duplicity/backends/README'
--- duplicity/backends/README	1970-01-01 00:00:00 +0000
+++ duplicity/backends/README	2014-04-28 02:49:55 +0000
@@ -0,0 +1,79 @@
+= How to write a backend, in five easy steps! =
+
+There are five main methods you want to implement:
+
+__init__  - Initial setup
+_get
+ - Get one file
+ - Retried if an exception is thrown
+_put
+ - Upload one file
+ - Retried if an exception is thrown
+_list
+ - List all files in the backend
+ - Return a list of filenames
+ - Retried if an exception is thrown
+_delete
+ - Delete one file
+ - Retried if an exception is thrown
+
+There are other methods you may optionally implement:
+
+_delete_list
+ - Delete list of files
+ - This is used in preference to _delete if defined
+ - Must gracefully handle individual file errors itself
+ - Retried if an exception is thrown
+_query
+ - Query metadata of one file
+ - Return a dict with a 'size' key, and a file size value (-1 for not found)
+ - Retried if an exception is thrown
+_query_list
+ - Query metadata of a list of files
+ - Return a dict of filenames mapping to a dict with a 'size' key,
+   and a file size value (-1 for not found)
+ - This is used in preference to _query if defined
+ - Must gracefully handle individual file errors itself
+ - Retried if an exception is thrown
+_retry_cleanup
+ - If the backend wants to do any bookkeeping or connection resetting in between
+   retries, do it here.
+_error_code
+ - Passed an exception thrown by your backend, return a log.ErrorCode that
+   corresponds to that exception
+_move
+ - If your backend can move a local file into place more efficiently than a
+   put-then-delete, implement this.  If it's not implemented or returns False,
+   _put will be called instead (and duplicity will delete the source file after).
+ - Retried if an exception is thrown
+_close
+ - If your backend needs to clean up after itself, do that here.
+
+== Subclassing ==
+
+Always subclass from duplicity.backend.Backend
+
+== Registering ==
+
+You can register your class as a single backend like so:
+
+duplicity.backend.register_backend("foo", FooBackend)
+
+This will allow a URL like so: foo://hostname/path
+
+Or you can register your class as a meta backend like so:
+
+duplicity.backend.register_backend_prefix("bar", BarBackend)
+
+This will allow a URL like bar+foo://hostname/path.  Your class will be
+passed the inner URL, which it can either interpret as it likes or use to
+create an inner backend instance with duplicity.backend.get_backend_object(url).
+
+== Naming ==
+
+Any method that duplicity calls will start with one underscore.  Please use
+zero or two underscores in your method names to avoid conflicts.
+
+== Testing ==
+
+Use "./testing/manual/backendtest.py foo://hostname/path" to test your new
+backend.  It will load your backend from your current branch.
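Putting the README's five steps together, a minimal backend might look like the sketch below. It stores files in a local directory purely for illustration; a real backend would subclass duplicity.backend.Backend and register itself, but the sketch uses a plain class so it stands alone. The name DirBackend and all details are illustrative, not part of the branch.

```python
import os
import shutil

# Sketch of a backend following the five-step API described in the README.
# In duplicity proper this would subclass duplicity.backend.Backend and be
# registered via duplicity.backend.register_backend("dir", DirBackend).
class DirBackend:
    def __init__(self, base_path):
        # Initial setup: remember (and create) the target directory.
        self.base = base_path
        os.makedirs(self.base, exist_ok=True)

    def _put(self, source_path, remote_filename):
        # Upload one file; the wrapper retries us if we raise.
        shutil.copyfile(source_path, os.path.join(self.base, remote_filename))

    def _get(self, remote_filename, local_path):
        # Get one file; again, just raise on failure and let the wrapper retry.
        shutil.copyfile(os.path.join(self.base, remote_filename), local_path)

    def _list(self):
        # Return a list of filenames in the backend.
        return os.listdir(self.base)

    def _delete(self, filename):
        # Delete one file.
        os.remove(os.path.join(self.base, filename))

    def _query(self, filename):
        # Optional: report the file size, -1 for a missing file.
        try:
            return {'size': os.path.getsize(os.path.join(self.base, filename))}
        except OSError:
            return {'size': -1}
```

Note there is no retry loop and no exception handling anywhere: per the branch description, the wrapper class owns both.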

=== modified file 'duplicity/backends/_boto_multi.py'
--- duplicity/backends/_boto_multi.py	2014-04-17 21:54:04 +0000
+++ duplicity/backends/_boto_multi.py	2014-04-28 02:49:55 +0000
@@ -98,8 +98,8 @@
 
         self._pool = multiprocessing.Pool(processes=number_of_procs)
 
-    def close(self):
-        BotoSingleBackend.close(self)
+    def _close(self):
+        BotoSingleBackend._close(self)
         log.Debug("Closing pool")
         self._pool.terminate()
         self._pool.join()

=== modified file 'duplicity/backends/_boto_single.py'
--- duplicity/backends/_boto_single.py	2014-04-25 23:20:12 +0000
+++ duplicity/backends/_boto_single.py	2014-04-28 02:49:55 +0000
@@ -25,9 +25,7 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
-from duplicity.util import exception_traceback
-from duplicity.backend import retry
+from duplicity.errors import FatalBackendException, BackendException
 from duplicity import progress
 
 BOTO_MIN_VERSION = "2.1.1"
@@ -163,7 +161,7 @@
         self.resetConnection()
         self._listed_keys = {}
 
-    def close(self):
+    def _close(self):
         del self._listed_keys
         self._listed_keys = {}
         self.bucket = None
@@ -185,137 +183,69 @@
         self.conn = get_connection(self.scheme, self.parsed_url, self.storage_uri)
         self.bucket = self.conn.lookup(self.bucket_name)
 
-    def put(self, source_path, remote_filename=None):
+    def _retry_cleanup(self):
+        self.resetConnection()
+
+    def _put(self, source_path, remote_filename):
         from boto.s3.connection import Location
         if globals.s3_european_buckets:
             if not globals.s3_use_new_style:
-                log.FatalError("European bucket creation was requested, but not new-style "
-                               "bucket addressing (--s3-use-new-style)",
-                               log.ErrorCode.s3_bucket_not_style)
-        #Network glitch may prevent first few attempts of creating/looking up a bucket
-        for n in range(1, globals.num_retries+1):
-            if self.bucket:
-                break
-            if n > 1:
-                time.sleep(30)
-                self.resetConnection()
+                raise FatalBackendException("European bucket creation was requested, but not new-style "
+                                            "bucket addressing (--s3-use-new-style)",
+                                            code=log.ErrorCode.s3_bucket_not_style)
+
+        if self.bucket is None:
             try:
-                try:
-                    self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)
-                except Exception as e:
-                    if "NoSuchBucket" in str(e):
-                        if globals.s3_european_buckets:
-                            self.bucket = self.conn.create_bucket(self.bucket_name,
-                                                                  location=Location.EU)
-                        else:
-                            self.bucket = self.conn.create_bucket(self.bucket_name)
+                self.bucket = self.conn.get_bucket(self.bucket_name, validate=True)
+            except Exception as e:
+                if "NoSuchBucket" in str(e):
+                    if globals.s3_european_buckets:
+                        self.bucket = self.conn.create_bucket(self.bucket_name,
+                                                              location=Location.EU)
                     else:
-                        raise e
-            except Exception as e:
-                log.Warn("Failed to create bucket (attempt #%d) '%s' failed (reason: %s: %s)"
-                         "" % (n, self.bucket_name,
-                               e.__class__.__name__,
-                               str(e)))
+                        self.bucket = self.conn.create_bucket(self.bucket_name)
+                else:
+                    raise
 
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
         key = self.bucket.new_key(self.key_prefix + remote_filename)
 
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
-                time.sleep(10)
-
-            if globals.s3_use_rrs:
-                storage_class = 'REDUCED_REDUNDANCY'
-            else:
-                storage_class = 'STANDARD'
-            log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
-            try:
-                if globals.s3_use_sse:
-                    headers = {
-                    'Content-Type': 'application/octet-stream',
-                    'x-amz-storage-class': storage_class,
-                    'x-amz-server-side-encryption': 'AES256'
-                }
-                else:
-                    headers = {
-                    'Content-Type': 'application/octet-stream',
-                    'x-amz-storage-class': storage_class
-                }
-                
-                upload_start = time.time()
-                self.upload(source_path.name, key, headers)
-                upload_end = time.time()
-                total_s = abs(upload_end-upload_start) or 1  # prevent a zero value!
-                rough_upload_speed = os.path.getsize(source_path.name)/total_s
-                self.resetConnection()
-                log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
-                return
-            except Exception as e:
-                log.Warn("Upload '%s/%s' failed (attempt #%d, reason: %s: %s)"
-                         "" % (self.straight_url,
-                               remote_filename,
-                               n,
-                               e.__class__.__name__,
-                               str(e)))
-                log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
-                self.resetConnection()
-        log.Warn("Giving up trying to upload %s/%s after %d attempts" %
-                 (self.straight_url, remote_filename, globals.num_retries))
-        raise BackendException("Error uploading %s/%s" % (self.straight_url, remote_filename))
-
-    def get(self, remote_filename, local_path):
+        if globals.s3_use_rrs:
+            storage_class = 'REDUCED_REDUNDANCY'
+        else:
+            storage_class = 'STANDARD'
+        log.Info("Uploading %s/%s to %s Storage" % (self.straight_url, remote_filename, storage_class))
+        if globals.s3_use_sse:
+            headers = {
+            'Content-Type': 'application/octet-stream',
+            'x-amz-storage-class': storage_class,
+            'x-amz-server-side-encryption': 'AES256'
+        }
+        else:
+            headers = {
+            'Content-Type': 'application/octet-stream',
+            'x-amz-storage-class': storage_class
+        }
+        
+        upload_start = time.time()
+        self.upload(source_path.name, key, headers)
+        upload_end = time.time()
+        total_s = abs(upload_end-upload_start) or 1  # prevent a zero value!
+        rough_upload_speed = os.path.getsize(source_path.name)/total_s
+        log.Debug("Uploaded %s/%s to %s Storage at roughly %f bytes/second" % (self.straight_url, remote_filename, storage_class, rough_upload_speed))
+
+    def _get(self, remote_filename, local_path):
         key_name = self.key_prefix + remote_filename
         self.pre_process_download(remote_filename, wait=True)
         key = self._listed_keys[key_name]
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
-                time.sleep(10)
-            log.Info("Downloading %s/%s" % (self.straight_url, remote_filename))
-            try:
-                self.resetConnection()
-                key.get_contents_to_filename(local_path.name)
-                local_path.setdata()
-                return
-            except Exception as e:
-                log.Warn("Download %s/%s failed (attempt #%d, reason: %s: %s)"
-                         "" % (self.straight_url,
-                               remote_filename,
-                               n,
-                               e.__class__.__name__,
-                               str(e)), 1)
-                log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
-
-        log.Warn("Giving up trying to download %s/%s after %d attempts" %
-                (self.straight_url, remote_filename, globals.num_retries))
-        raise BackendException("Error downloading %s/%s" % (self.straight_url, remote_filename))
+        self.resetConnection()
+        key.get_contents_to_filename(local_path.name)
 
     def _list(self):
         if not self.bucket:
             raise BackendException("No connection to backend")
-
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(30)
-                self.resetConnection()
-            log.Info("Listing %s" % self.straight_url)
-            try:
-                return self._list_filenames_in_bucket()
-            except Exception as e:
-                log.Warn("List %s failed (attempt #%d, reason: %s: %s)"
-                         "" % (self.straight_url,
-                               n,
-                               e.__class__.__name__,
-                               str(e)), 1)
-                log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
-        log.Warn("Giving up trying to list %s after %d attempts" %
-                (self.straight_url, globals.num_retries))
-        raise BackendException("Error listng %s" % self.straight_url)
-
-    def _list_filenames_in_bucket(self):
+        return self.list_filenames_in_bucket()
+
+    def list_filenames_in_bucket(self):
         # We add a 'd' to the prefix to make sure it is not null (for boto) and
         # to optimize the listing of our filenames, which always begin with 'd'.
         # This will cause a failure in the regression tests as below:
@@ -336,76 +266,37 @@
                 pass
         return filename_list
 
-    def delete(self, filename_list):
-        for filename in filename_list:
-            self.bucket.delete_key(self.key_prefix + filename)
-            log.Debug("Deleted %s/%s" % (self.straight_url, filename))
+    def _delete(self, filename):
+        self.bucket.delete_key(self.key_prefix + filename)
 
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        try:
-            key = self.bucket.lookup(self.key_prefix + filename)
-            if key is None:
-                return {'size': -1}
-            return {'size': key.size}
-        except Exception as e:
-            log.Warn("Query %s/%s failed: %s"
-                     "" % (self.straight_url,
-                           filename,
-                           str(e)))
-            self.resetConnection()
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
+    def _query(self, filename):
+        key = self.bucket.lookup(self.key_prefix + filename)
+        if key is None:
+            return {'size': -1}
+        return {'size': key.size}
 
     def upload(self, filename, key, headers):
-            key.set_contents_from_filename(filename, headers,
-                                           cb=progress.report_transfer,
-                                           num_cb=(max(2, 8 * globals.volsize / (1024 * 1024)))
-                                           )  # Max num of callbacks = 8 times x megabyte
-            key.close()
+        key.set_contents_from_filename(filename, headers,
+                                       cb=progress.report_transfer,
+                                       num_cb=(max(2, 8 * globals.volsize / (1024 * 1024)))
+                                       )  # Max num of callbacks = 8 times x megabyte
+        key.close()
 
-    def pre_process_download(self, files_to_download, wait=False):
+    def pre_process_download(self, remote_filename, wait=False):
         # Used primarily to move files in Glacier to S3
-        if isinstance(files_to_download, (bytes, str, unicode)):
-            files_to_download = [files_to_download]
+        key_name = self.key_prefix + remote_filename
+        if not self._listed_keys.get(key_name, False):
+            self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
+        key = self._listed_keys[key_name]
 
-        for remote_filename in files_to_download:
-            success = False
-            for n in range(1, globals.num_retries+1):
-                if n > 1:
-                    # sleep before retry (new connection to a **hopeful** new host, so no need to wait so long)
-                    time.sleep(10)
+        if key.storage_class == "GLACIER":
+            # We need to move the file out of glacier
+            if not self.bucket.get_key(key.key).ongoing_restore:
+                log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
+                key.restore(days=1)  # Shouldn't need this again after 1 day
+            if wait:
+                log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
+                while self.bucket.get_key(key.key).ongoing_restore:
+                    time.sleep(60)
                     self.resetConnection()
-                try:
-                    key_name = self.key_prefix + remote_filename
-                    if not self._listed_keys.get(key_name, False):
-                        self._listed_keys[key_name] = list(self.bucket.list(key_name))[0]
-                    key = self._listed_keys[key_name]
-
-                    if key.storage_class == "GLACIER":
-                        # We need to move the file out of glacier
-                        if not self.bucket.get_key(key.key).ongoing_restore:
-                            log.Info("File %s is in Glacier storage, restoring to S3" % remote_filename)
-                            key.restore(days=1)  # Shouldn't need this again after 1 day
-                        if wait:
-                            log.Info("Waiting for file %s to restore from Glacier" % remote_filename)
-                            while self.bucket.get_key(key.key).ongoing_restore:
-                                time.sleep(60)
-                                self.resetConnection()
-                            log.Info("File %s was successfully restored from Glacier" % remote_filename)
-                    success = True
-                    break
-                except Exception as e:
-                    log.Warn("Restoration from Glacier for file %s/%s failed (attempt #%d, reason: %s: %s)"
-                             "" % (self.straight_url,
-                                   remote_filename,
-                                   n,
-                                   e.__class__.__name__,
-                                   str(e)), 1)
-                    log.Debug("Backtrace of previous error: %s" % (exception_traceback(),))
-            if not success:
-                log.Warn("Giving up trying to restore %s/%s after %d attempts" %
-                        (self.straight_url, remote_filename, globals.num_retries))
-                raise BackendException("Error restoring %s/%s from Glacier to S3" % (self.straight_url, remote_filename))
+                log.Info("File %s was successfully restored from Glacier" % remote_filename)
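The deletions above all remove the same hand-rolled `for n in range(1, globals.num_retries+1)` loop, because the wrapper class now owns retrying: it calls the backend's underscore method, invokes _retry_cleanup between attempts, and only surfaces the final failure. A rough, hypothetical sketch of that loop (names and sleep policy are illustrative, not the actual code in duplicity/backend.py):

```python
import time

# Hypothetical sketch of the retry loop the wrapper provides, letting
# backends like _boto_single "fire and forget" instead of looping themselves.
def with_retries(backend, method, args, num_retries=5, sleep=0):
    for attempt in range(1, num_retries + 1):
        try:
            return getattr(backend, method)(*args)
        except Exception:
            if attempt == num_retries:
                raise  # give up: let the caller report the last error
            # Give the backend a chance to reset connections, then try again.
            if hasattr(backend, '_retry_cleanup'):
                backend._retry_cleanup()
            time.sleep(sleep)
```

This is why the new `_put`/`_get` bodies above can shrink to a few lines: any exception simply propagates and the loop drives `resetConnection()` via `_retry_cleanup`.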

=== modified file 'duplicity/backends/_cf_cloudfiles.py'
--- duplicity/backends/_cf_cloudfiles.py	2014-04-17 22:03:10 +0000
+++ duplicity/backends/_cf_cloudfiles.py	2014-04-28 02:49:55 +0000
@@ -19,14 +19,10 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import time
 
 import duplicity.backend
-from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
-from duplicity.util import exception_traceback
-from duplicity.backend import retry
+from duplicity.errors import BackendException
 
 class CloudFilesBackend(duplicity.backend.Backend):
     """
@@ -69,124 +65,37 @@
                            log.ErrorCode.connection_failed)
         self.container = conn.create_container(container)
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        for n in range(1, globals.num_retries+1):
-            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
-            try:
-                sobject = self.container.create_object(remote_filename)
-                sobject.load_from_filename(source_path.name)
-                return
-            except self.resp_exc as error:
-                log.Warn("Upload of '%s' failed (attempt %d): CloudFiles returned: %s %s"
-                         % (remote_filename, n, error.status, error.reason))
-            except Exception as e:
-                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
-                        % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up uploading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error uploading '%s'" % remote_filename)
-
-    def get(self, remote_filename, local_path):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
-            try:
-                sobject = self.container.create_object(remote_filename)
-                f = open(local_path.name, 'w')
-                for chunk in sobject.stream():
-                    f.write(chunk)
-                local_path.setdata()
-                return
-            except self.resp_exc as resperr:
-                log.Warn("Download of '%s' failed (attempt %s): CloudFiles returned: %s %s"
-                         % (remote_filename, n, resperr.status, resperr.reason))
-            except Exception as e:
-                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up downloading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error downloading '%s/%s'"
-                               % (self.container, remote_filename))
+    def _error_code(self, operation, e):
+        from cloudfiles.errors import NoSuchObject
+        if isinstance(e, NoSuchObject):
+            return log.ErrorCode.backend_not_found
+        elif isinstance(e, self.resp_exc):
+            if e.status == 404:
+                return log.ErrorCode.backend_not_found
+
+    def _put(self, source_path, remote_filename):
+        sobject = self.container.create_object(remote_filename)
+        sobject.load_from_filename(source_path.name)
+
+    def _get(self, remote_filename, local_path):
+        sobject = self.container.create_object(remote_filename)
+        with open(local_path.name, 'wb') as f:
+            for chunk in sobject.stream():
+                f.write(chunk)
 
     def _list(self):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Listing '%s'" % (self.container))
-            try:
-                # Cloud Files will return a max of 10,000 objects.  We have
-                # to make multiple requests to get them all.
-                objs = self.container.list_objects()
-                keys = objs
-                while len(objs) == 10000:
-                    objs = self.container.list_objects(marker=keys[-1])
-                    keys += objs
-                return keys
-            except self.resp_exc as resperr:
-                log.Warn("Listing of '%s' failed (attempt %s): CloudFiles returned: %s %s"
-                         % (self.container, n, resperr.status, resperr.reason))
-            except Exception as e:
-                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
-                         % (self.container, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up listing of '%s' after %s attempts"
-                 % (self.container, globals.num_retries))
-        raise BackendException("Error listing '%s'"
-                               % (self.container))
-
-    def delete_one(self, remote_filename):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
-            try:
-                self.container.delete_object(remote_filename)
-                return
-            except self.resp_exc as resperr:
-                if n > 1 and resperr.status == 404:
-                    # We failed on a timeout, but delete succeeded on the server
-                    log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename )
-                    return
-                log.Warn("Delete of '%s' failed (attempt %s): CloudFiles returned: %s %s"
-                         % (remote_filename, n, resperr.status, resperr.reason))
-            except Exception as e:
-                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up deleting '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error deleting '%s/%s'"
-                               % (self.container, remote_filename))
-
-    def delete(self, filename_list):
-        for file in filename_list:
-            self.delete_one(file)
-            log.Debug("Deleted '%s/%s'" % (self.container, file))
-
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        from cloudfiles.errors import NoSuchObject
-        try:
-            sobject = self.container.get_object(filename)
-            return {'size': sobject.size}
-        except NoSuchObject:
-            return {'size': -1}
-        except Exception as e:
-            log.Warn("Error querying '%s/%s': %s"
-                     "" % (self.container,
-                           filename,
-                           str(e)))
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
-
-duplicity.backend.register_backend("cf+http", CloudFilesBackend)
+        # Cloud Files will return a max of 10,000 objects.  We have
+        # to make multiple requests to get them all.
+        objs = self.container.list_objects()
+        keys = objs
+        while len(objs) == 10000:
+            objs = self.container.list_objects(marker=keys[-1])
+            keys += objs
+        return keys
+
+    def _delete(self, filename):
+        self.container.delete_object(filename)
+
+    def _query(self, filename):
+        sobject = self.container.get_object(filename)
+        return {'size': sobject.size}
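The new _error_code hook above replaces per-call exception classification: the backend maps an exception to a log.ErrorCode, and the wrapper falls back to a generic code when the hook is absent or returns None. A hypothetical sketch of that dispatch (the ErrorCode values and function name here are stand-ins, not duplicity's actual ones):

```python
# Illustrative stand-in for log.ErrorCode; real values live in duplicity.log.
class ErrorCode:
    backend_error = 50
    backend_not_found = 51

# Hypothetical sketch of how a wrapper can consult an optional _error_code
# hook on the backend, defaulting to a generic backend_error.
def code_for(backend, operation, exc):
    code = None
    if hasattr(backend, '_error_code'):
        code = backend._error_code(operation, exc)
    return code if code is not None else ErrorCode.backend_error
```

With this split, a backend like CloudFilesBackend only has to recognize its own library's exceptions (e.g. a 404 becoming backend_not_found) and never has to build user-facing error messages itself.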

=== modified file 'duplicity/backends/_cf_pyrax.py'
--- duplicity/backends/_cf_pyrax.py	2014-04-17 22:03:10 +0000
+++ duplicity/backends/_cf_pyrax.py	2014-04-28 02:49:55 +0000
@@ -19,14 +19,11 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import time
 
 import duplicity.backend
-from duplicity import globals
 from duplicity import log
-from duplicity.errors import *  # @UnusedWildImport
-from duplicity.util import exception_traceback
-from duplicity.backend import retry
+from duplicity.errors import BackendException
+
 
 class PyraxBackend(duplicity.backend.Backend):
     """
@@ -69,126 +66,39 @@
 
         self.client_exc = pyrax.exceptions.ClientException
         self.nso_exc = pyrax.exceptions.NoSuchObject
-        self.cloudfiles = pyrax.cloudfiles
         self.container = pyrax.cloudfiles.create_container(container)
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
-            try:
-                self.container.upload_file(source_path.name, remote_filename)
-                return
-            except self.client_exc as error:
-                log.Warn("Upload of '%s' failed (attempt %d): pyrax returned: %s %s"
-                         % (remote_filename, n, error.__class__.__name__, error.message))
-            except Exception as e:
-                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
-                        % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up uploading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error uploading '%s'" % remote_filename)
-
-    def get(self, remote_filename, local_path):
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
-            try:
-                sobject = self.container.get_object(remote_filename)
-                f = open(local_path.name, 'w')
-                f.write(sobject.get())
-                local_path.setdata()
-                return
-            except self.nso_exc:
-                return
-            except self.client_exc as resperr:
-                log.Warn("Download of '%s' failed (attempt %s): pyrax returned: %s %s"
-                         % (remote_filename, n, resperr.__class__.__name__, resperr.message))
-            except Exception as e:
-                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up downloading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error downloading '%s/%s'"
-                               % (self.container, remote_filename))
+    def _error_code(self, operation, e):
+        if isinstance(e, self.nso_exc):
+            return log.ErrorCode.backend_not_found
+        elif isinstance(e, self.client_exc):
+            if e.status == 404:
+                return log.ErrorCode.backend_not_found
+        elif hasattr(e, 'http_status'):
+            if e.http_status == 404:
+                return log.ErrorCode.backend_not_found
+
+    def _put(self, source_path, remote_filename):
+        self.container.upload_file(source_path.name, remote_filename)
+
+    def _get(self, remote_filename, local_path):
+        sobject = self.container.get_object(remote_filename)
+        with open(local_path.name, 'wb') as f:
+            f.write(sobject.get())
 
     def _list(self):
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Listing '%s'" % (self.container))
-            try:
-                # Cloud Files will return a max of 10,000 objects.  We have
-                # to make multiple requests to get them all.
-                objs = self.container.get_object_names()
-                keys = objs
-                while len(objs) == 10000:
-                    objs = self.container.get_object_names(marker = keys[-1])
-                    keys += objs
-                return keys
-            except self.client_exc as resperr:
-                log.Warn("Listing of '%s' failed (attempt %s): pyrax returned: %s %s"
-                         % (self.container, n, resperr.__class__.__name__, resperr.message))
-            except Exception as e:
-                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
-                         % (self.container, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up listing of '%s' after %s attempts"
-                 % (self.container, globals.num_retries))
-        raise BackendException("Error listing '%s'"
-                               % (self.container))
-
-    def delete_one(self, remote_filename):
-        for n in range(1, globals.num_retries + 1):
-            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
-            try:
-                self.container.delete_object(remote_filename)
-                return
-            except self.client_exc as resperr:
-                if n > 1 and resperr.status == 404:
-                    # We failed on a timeout, but delete succeeded on the server
-                    log.Warn("Delete of '%s' missing after retry - must have succeded earler" % remote_filename)
-                    return
-                log.Warn("Delete of '%s' failed (attempt %s): pyrax returned: %s %s"
-                         % (remote_filename, n, resperr.__class__.__name__, resperr.message))
-            except Exception as e:
-                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up deleting '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error deleting '%s/%s'"
-                               % (self.container, remote_filename))
-
-    def delete(self, filename_list):
-        for file_ in filename_list:
-            self.delete_one(file_)
-            log.Debug("Deleted '%s/%s'" % (self.container, file_))
-
-    @retry
-    def _query_file_info(self, filename, raise_errors = False):
-        try:
-            sobject = self.container.get_object(filename)
-            return {'size': sobject.total_bytes}
-        except self.nso_exc:
-            return {'size': -1}
-        except Exception as e:
-            log.Warn("Error querying '%s/%s': %s"
-                     "" % (self.container,
-                           filename,
-                           str(e)))
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
-
-duplicity.backend.register_backend("cf+http", PyraxBackend)
+        # Cloud Files will return a max of 10,000 objects.  We have
+        # to make multiple requests to get them all.
+        objs = self.container.get_object_names()
+        keys = objs
+        while len(objs) == 10000:
+            objs = self.container.get_object_names(marker = keys[-1])
+            keys += objs
+        return keys
+
+    def _delete(self, filename):
+        self.container.delete_object(filename)
+
+    def _query(self, filename):
+        sobject = self.container.get_object(filename)
+        return {'size': sobject.total_bytes}
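The new `_list` above works around the Cloud Files 10,000-object response cap by passing the last name seen as a `marker` for the next request. A minimal sketch of that marker-based pagination pattern, with a hypothetical `fetch` callable standing in for `container.get_object_names` (not the pyrax API; page size shrunk for illustration):

```python
PAGE_SIZE = 3  # Cloud Files uses 10,000; kept small here for illustration

def list_all(fetch):
    # fetch(marker) returns up to PAGE_SIZE names after 'marker';
    # a full page means there may be more, so keep paging.
    objs = fetch(None)
    keys = list(objs)
    while len(objs) == PAGE_SIZE:
        objs = fetch(keys[-1])
        keys += objs
    return keys

def fake_fetch(marker):
    # Stand-in server with five objects, returned in pages of PAGE_SIZE.
    names = ['a', 'b', 'c', 'd', 'e']
    start = 0 if marker is None else names.index(marker) + 1
    return names[start:start + PAGE_SIZE]
```

Note the loop only stops on a short page; if the object count is an exact multiple of the page size, one extra (empty) request is made, which matches the behavior of the patch above.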

=== modified file 'duplicity/backends/_ssh_paramiko.py'
--- duplicity/backends/_ssh_paramiko.py	2014-04-17 20:50:57 +0000
+++ duplicity/backends/_ssh_paramiko.py	2014-04-28 02:49:55 +0000
@@ -28,7 +28,6 @@
 import os
 import errno
 import sys
-import time
 import getpass
 import logging
 from binascii import hexlify
@@ -36,7 +35,7 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import *
+from duplicity.errors import BackendException
 
 read_blocksize=65635            # for doing scp retrievals, where we need to read ourselves
 
@@ -232,7 +231,6 @@
             except Exception as e:
                 raise BackendException("sftp negotiation failed: %s" % e)
 
-
             # move to the appropriate directory, possibly after creating it and its parents
             dirs = self.remote_dir.split(os.sep)
             if len(dirs) > 0:
@@ -257,157 +255,91 @@
                     except Exception as e:
                         raise BackendException("sftp chdir to %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
 
-    def put(self, source_path, remote_filename = None):
-        """transfers a single file to the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the remote directory or file name
-        contain single quotes."""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-        
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            try:
-                if (globals.use_scp):
-                    f=file(source_path.name,'rb')
-                    try:
-                        chan=self.client.get_transport().open_session()
-                        chan.settimeout(globals.timeout)
-                        chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
-                    except Exception as e:
-                        raise BackendException("scp execution failed: %s" % e)
-                    # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
-                    # if there's a problem: 0x1 or 0x02 and some error text
-                    response=chan.recv(1)
-                    if (response!="\0"):
-                        raise BackendException("scp remote error: %s" % chan.recv(-1))
-                    fstat=os.stat(source_path.name)
-                    chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
-                    response=chan.recv(1)
-                    if (response!="\0"):
-                        raise BackendException("scp remote error: %s" % chan.recv(-1))
-                    chan.sendall(f.read()+'\0')
-                    f.close()
-                    response=chan.recv(1)
-                    if (response!="\0"):
-                        raise BackendException("scp remote error: %s" % chan.recv(-1))
-                    chan.close()
-                    return
-                else:
-                    try:
-                        self.sftp.put(source_path.name,remote_filename)
-                        return
-                    except Exception as e:
-                        raise BackendException("sftp put of %s (as %s) failed: %s" % (source_path.name,remote_filename,e))
-            except Exception as e:
-                log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
-        raise BackendException("Giving up trying to upload '%s' after %d attempts" % (remote_filename,n))
-
-
-    def get(self, remote_filename, local_path):
-        """retrieves a single file from the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the remote directory or file names
-        contain single quotes."""
-        
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            try:
-                if (globals.use_scp):
-                    try:
-                        chan=self.client.get_transport().open_session()
-                        chan.settimeout(globals.timeout)
-                        chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
-                    except Exception as e:
-                        raise BackendException("scp execution failed: %s" % e)
-
-                    chan.send('\0')     # overall ready indicator
-                    msg=chan.recv(-1)
-                    m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
-                    if (m==None or m.group(3)!=remote_filename):
-                        raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
-                    chan.recv(1)        # dispose of the newline trailing the C message
-
-                    size=int(m.group(2))
-                    togo=size
-                    f=file(local_path.name,'wb')
-                    chan.send('\0')     # ready for data
-                    try:
-                        while togo>0:
-                            if togo>read_blocksize:
-                                blocksize = read_blocksize
-                            else:
-                                blocksize = togo
-                            buff=chan.recv(blocksize)
-                            f.write(buff)
-                            togo-=len(buff)
-                    except Exception as e:
-                        raise BackendException("scp get %s failed: %s" % (remote_filename,e))
-
-                    msg=chan.recv(1)    # check the final status
-                    if msg!='\0':
-                        raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
-                    f.close()
-                    chan.send('\0')     # send final done indicator
-                    chan.close()
-                    return
-                else:
-                    try:
-                        self.sftp.get(remote_filename,local_path.name)
-                        return
-                    except Exception as e:
-                        raise BackendException("sftp get of %s (to %s) failed: %s" % (remote_filename,local_path.name,e))
-                local_path.setdata()
-            except Exception as e:
-                log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
-        raise BackendException("Giving up trying to download '%s' after %d attempts" % (remote_filename,n))
+    def _put(self, source_path, remote_filename):
+        if globals.use_scp:
+            f=file(source_path.name,'rb')
+            try:
+                chan=self.client.get_transport().open_session()
+                chan.settimeout(globals.timeout)
+                chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
+            except Exception as e:
+                raise BackendException("scp execution failed: %s" % e)
+            # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
+            # if there's a problem: 0x1 or 0x02 and some error text
+            response=chan.recv(1)
+            if (response!="\0"):
+                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            fstat=os.stat(source_path.name)
+            chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
+            response=chan.recv(1)
+            if (response!="\0"):
+                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            chan.sendall(f.read()+'\0')
+            f.close()
+            response=chan.recv(1)
+            if (response!="\0"):
+                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            chan.close()
+        else:
+            self.sftp.put(source_path.name,remote_filename)
+
+    def _get(self, remote_filename, local_path):
+        if globals.use_scp:
+            try:
+                chan=self.client.get_transport().open_session()
+                chan.settimeout(globals.timeout)
+                chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
+            except Exception as e:
+                raise BackendException("scp execution failed: %s" % e)
+
+            chan.send('\0')     # overall ready indicator
+            msg=chan.recv(-1)
+            m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
+            if (m==None or m.group(3)!=remote_filename):
+                raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
+            chan.recv(1)        # dispose of the newline trailing the C message
+
+            size=int(m.group(2))
+            togo=size
+            f=file(local_path.name,'wb')
+            chan.send('\0')     # ready for data
+            try:
+                while togo>0:
+                    if togo>read_blocksize:
+                        blocksize = read_blocksize
+                    else:
+                        blocksize = togo
+                    buff=chan.recv(blocksize)
+                    f.write(buff)
+                    togo-=len(buff)
+            except Exception as e:
+                raise BackendException("scp get %s failed: %s" % (remote_filename,e))
+
+            msg=chan.recv(1)    # check the final status
+            if msg!='\0':
+                raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
+            f.close()
+            chan.send('\0')     # send final done indicator
+            chan.close()
+        else:
+            self.sftp.get(remote_filename,local_path.name)
 
     def _list(self):
-        """lists the contents of the one-and-only duplicity dir on the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the directory name
-        contains single quotes."""
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            try:
-                if (globals.use_scp):
-                    output=self.runremote("ls -1 '%s'" % self.remote_dir,False,"scp dir listing ")
-                    return output.splitlines()
-                else:
-                    try:
-                        return self.sftp.listdir()
-                    except Exception as e:
-                        raise BackendException("sftp listing of %s failed: %s" % (self.sftp.getcwd(),e))
-            except Exception as e:
-                log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
-        raise BackendException("Giving up trying to list '%s' after %d attempts" % (self.remote_dir,n))
-
-    def delete(self, filename_list):
-        """deletes all files in the list on the remote side. In scp mode unavoidable quoting issues
-        will cause failures if filenames containing single quotes are encountered."""
-        for fn in filename_list:
-            # Try to delete each file several times before giving up completely.
-            for n in range(1, globals.num_retries+1):
-                try:
-                    if (globals.use_scp):
-                        self.runremote("rm '%s/%s'" % (self.remote_dir,fn),False,"scp rm ")
-                    else:
-                        try:
-                            self.sftp.remove(fn)
-                        except Exception as e:
-                            raise BackendException("sftp rm %s failed: %s" % (fn,e))
-
-                    # If we get here, we deleted this file successfully. Move on to the next one.
-                    break
-                except Exception as e:
-                    if n == globals.num_retries:
-                        log.FatalError(str(e), log.ErrorCode.backend_error)
-                    else:
-                        log.Warn("%s (Try %d of %d) Will retry in %d seconds." % (e,n,globals.num_retries,self.retry_delay))
-                        time.sleep(self.retry_delay)
+        # In scp mode unavoidable quoting issues will make this fail if the
+        # directory name contains single quotes.
+        if globals.use_scp:
+            output = self.runremote("ls -1 '%s'" % self.remote_dir, False, "scp dir listing ")
+            return output.splitlines()
+        else:
+            return self.sftp.listdir()
+
+    def _delete(self, filename):
+        # In scp mode unavoidable quoting issues will cause failures if
+        # filenames containing single quotes are encountered.
+        if globals.use_scp:
+            self.runremote("rm '%s/%s'" % (self.remote_dir, filename), False, "scp rm ")
+        else:
+            self.sftp.remove(filename)
 
     def runremote(self,cmd,ignoreexitcode=False,errorprefix=""):
         """small convenience function that opens a shell channel, runs remote command and returns
@@ -438,7 +370,3 @@
             raise BackendException("could not load '%s', maybe corrupt?" % (file))
         
         return sshconfig.lookup(host)
-
-duplicity.backend.register_backend("sftp", SSHParamikoBackend)
-duplicity.backend.register_backend("scp", SSHParamikoBackend)
-duplicity.backend.register_backend("ssh", SSHParamikoBackend)
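The scp branch of `_put` above speaks the scp sink-mode protocol by hand: after each step the remote side acks with a `\0` byte, and the file itself is announced with a `C<mode> <size> <name>` header. A sketch of just that header formatting, as a standalone helper (the helper name is illustrative, not part of duplicity; in `_put` the mode and size come from `os.stat`):

```python
def scp_create_header(st_mode, size, remote_name):
    # scp 'C' (create) message: C<4-octal-digit mode> <size> <filename>,
    # newline-terminated. oct() may render as '0o100644' or '0100644'
    # depending on Python version; the last four digits are the mode.
    return 'C%s %d %s\n' % (oct(st_mode)[-4:], size, remote_name)
```

After sending this header, the sender waits for a `\0` ack, streams exactly `size` bytes, and terminates with a final `\0` -- the same three-ack exchange the patch checks with `chan.recv(1)`.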

=== modified file 'duplicity/backends/_ssh_pexpect.py'
--- duplicity/backends/_ssh_pexpect.py	2014-04-25 23:20:12 +0000
+++ duplicity/backends/_ssh_pexpect.py	2014-04-28 02:49:55 +0000
@@ -24,18 +24,20 @@
 # have the same syntax.  Also these strings will be executed by the
 # shell, so shouldn't have strange characters in them.
 
+from future_builtins import map
+
 import re
 import string
-import time
 import os
 
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import BackendException
 
 class SSHPExpectBackend(duplicity.backend.Backend):
-    """This backend copies files using scp.  List not supported"""
+    """This backend copies files using scp.  List not supported.  Filenames
+       should not need any quoting or this will break."""
     def __init__(self, parsed_url):
         """scpBackend initializer"""
         duplicity.backend.Backend.__init__(self, parsed_url)
@@ -76,74 +78,67 @@
     def run_scp_command(self, commandline):
         """ Run an scp command, responding to password prompts """
         import pexpect
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            log.Info("Running '%s' (attempt #%d)" % (commandline, n))
-            child = pexpect.spawn(commandline, timeout = None)
-            if globals.ssh_askpass:
-                state = "authorizing"
-            else:
-                state = "copying"
-            while 1:
-                if state == "authorizing":
-                    match = child.expect([pexpect.EOF,
-                                          "(?i)timeout, server not responding",
-                                          "(?i)pass(word|phrase .*):",
-                                          "(?i)permission denied",
-                                          "authenticity"])
-                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
-                    if match == 0:
-                        log.Warn("Failed to authenticate")
-                        break
-                    elif match == 1:
-                        log.Warn("Timeout waiting to authenticate")
-                        break
-                    elif match == 2:
-                        child.sendline(self.password)
-                        state = "copying"
-                    elif match == 3:
-                        log.Warn("Invalid SSH password")
-                        break
-                    elif match == 4:
-                        log.Warn("Remote host authentication failed (missing known_hosts entry?)")
-                        break
-                elif state == "copying":
-                    match = child.expect([pexpect.EOF,
-                                          "(?i)timeout, server not responding",
-                                          "stalled",
-                                          "authenticity",
-                                          "ETA"])
-                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
-                    if match == 0:
-                        break
-                    elif match == 1:
-                        log.Warn("Timeout waiting for response")
-                        break
-                    elif match == 2:
-                        state = "stalled"
-                    elif match == 3:
-                        log.Warn("Remote host authentication failed (missing known_hosts entry?)")
-                        break
-                elif state == "stalled":
-                    match = child.expect([pexpect.EOF,
-                                          "(?i)timeout, server not responding",
-                                          "ETA"])
-                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
-                    if match == 0:
-                        break
-                    elif match == 1:
-                        log.Warn("Stalled for too long, aborted copy")
-                        break
-                    elif match == 2:
-                        state = "copying"
-            child.close(force = True)
-            if child.exitstatus == 0:
-                return
-            log.Warn("Running '%s' failed (attempt #%d)" % (commandline, n))
-        log.Warn("Giving up trying to execute '%s' after %d attempts" % (commandline, globals.num_retries))
-        raise BackendException("Error running '%s'" % commandline)
+        log.Info("Running '%s'" % commandline)
+        child = pexpect.spawn(commandline, timeout = None)
+        if globals.ssh_askpass:
+            state = "authorizing"
+        else:
+            state = "copying"
+        while 1:
+            if state == "authorizing":
+                match = child.expect([pexpect.EOF,
+                                      "(?i)timeout, server not responding",
+                                      "(?i)pass(word|phrase .*):",
+                                      "(?i)permission denied",
+                                      "authenticity"])
+                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                if match == 0:
+                    log.Warn("Failed to authenticate")
+                    break
+                elif match == 1:
+                    log.Warn("Timeout waiting to authenticate")
+                    break
+                elif match == 2:
+                    child.sendline(self.password)
+                    state = "copying"
+                elif match == 3:
+                    log.Warn("Invalid SSH password")
+                    break
+                elif match == 4:
+                    log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                    break
+            elif state == "copying":
+                match = child.expect([pexpect.EOF,
+                                      "(?i)timeout, server not responding",
+                                      "stalled",
+                                      "authenticity",
+                                      "ETA"])
+                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                if match == 0:
+                    break
+                elif match == 1:
+                    log.Warn("Timeout waiting for response")
+                    break
+                elif match == 2:
+                    state = "stalled"
+                elif match == 3:
+                    log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                    break
+            elif state == "stalled":
+                match = child.expect([pexpect.EOF,
+                                      "(?i)timeout, server not responding",
+                                      "ETA"])
+                log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                if match == 0:
+                    break
+                elif match == 1:
+                    log.Warn("Stalled for too long, aborted copy")
+                    break
+                elif match == 2:
+                    state = "copying"
+        child.close(force = True)
+        if child.exitstatus != 0:
+            raise BackendException("Error running '%s'" % commandline)
 
     def run_sftp_command(self, commandline, commands):
         """ Run an sftp command, responding to password prompts, passing commands from list """
@@ -160,76 +155,69 @@
                      "Couldn't delete file",
                      "open(.*): Failure"]
         max_response_len = max([len(p) for p in responses[1:]])
-        for n in range(1, globals.num_retries+1):
-            if n > 1:
-                # sleep before retry
-                time.sleep(self.retry_delay)
-            log.Info("Running '%s' (attempt #%d)" % (commandline, n))
-            child = pexpect.spawn(commandline, timeout = None, maxread=maxread)
-            cmdloc = 0
-            passprompt = 0
-            while 1:
-                msg = ""
-                match = child.expect(responses,
-                                     searchwindowsize=maxread+max_response_len)
-                log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
-                if match == 0:
-                    break
-                elif match == 1:
-                    msg = "Timeout waiting for response"
-                    break
-                if match == 2:
-                    if cmdloc < len(commands):
-                        command = commands[cmdloc]
-                        log.Info("sftp command: '%s'" % (command,))
-                        child.sendline(command)
-                        cmdloc += 1
-                    else:
-                        command = 'quit'
-                        child.sendline(command)
-                        res = child.before
-                elif match == 3:
-                    passprompt += 1
-                    child.sendline(self.password)
-                    if (passprompt>1):
-                        raise BackendException("Invalid SSH password.")
-                elif match == 4:
-                    if not child.before.strip().startswith("mkdir"):
-                        msg = "Permission denied"
-                        break
-                elif match == 5:
-                    msg = "Host key authenticity could not be verified (missing known_hosts entry?)"
-                    break
-                elif match == 6:
-                    if not child.before.strip().startswith("rm"):
-                        msg = "Remote file or directory does not exist in command='%s'" % (commandline,)
-                        break
-                elif match == 7:
-                    if not child.before.strip().startswith("Removing"):
-                        msg = "Could not delete file in command='%s'" % (commandline,)
-                        break;
-                elif match == 8:
+        log.Info("Running '%s'" % (commandline))
+        child = pexpect.spawn(commandline, timeout = None, maxread=maxread)
+        cmdloc = 0
+        passprompt = 0
+        while 1:
+            msg = ""
+            match = child.expect(responses,
+                                 searchwindowsize=maxread+max_response_len)
+            log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
+            if match == 0:
+                break
+            elif match == 1:
+                msg = "Timeout waiting for response"
+                break
+            if match == 2:
+                if cmdloc < len(commands):
+                    command = commands[cmdloc]
+                    log.Info("sftp command: '%s'" % (command,))
+                    child.sendline(command)
+                    cmdloc += 1
+                else:
+                    command = 'quit'
+                    child.sendline(command)
+                    res = child.before
+            elif match == 3:
+                passprompt += 1
+                child.sendline(self.password)
+                if (passprompt>1):
+                    raise BackendException("Invalid SSH password.")
+            elif match == 4:
+                if not child.before.strip().startswith("mkdir"):
+                    msg = "Permission denied"
+                    break
+            elif match == 5:
+                msg = "Host key authenticity could not be verified (missing known_hosts entry?)"
+                break
+            elif match == 6:
+                if not child.before.strip().startswith("rm"):
+                    msg = "Remote file or directory does not exist in command='%s'" % (commandline,)
+                    break
+            elif match == 7:
+                if not child.before.strip().startswith("Removing"):
                     msg = "Could not delete file in command='%s'" % (commandline,)
-                    break
-                elif match == 9:
-                    msg = "Could not open file in command='%s'" % (commandline,)
-                    break
-            child.close(force = True)
-            if child.exitstatus == 0:
-                return res
-            log.Warn("Running '%s' with commands:\n %s\n failed (attempt #%d): %s" % (commandline, "\n ".join(commands), n, msg))
-        raise BackendException("Giving up trying to execute '%s' with commands:\n %s\n after %d attempts" % (commandline, "\n ".join(commands), globals.num_retries))
+                    break
+            elif match == 8:
+                msg = "Could not delete file in command='%s'" % (commandline,)
+                break
+            elif match == 9:
+                msg = "Could not open file in command='%s'" % (commandline,)
+                break
+        child.close(force = True)
+        if child.exitstatus == 0:
+            return res
+        else:
+            raise BackendException("Error running '%s': %s" % (commandline, msg))
 
-    def put(self, source_path, remote_filename = None):
+    def _put(self, source_path, remote_filename):
         if globals.use_scp:
-            self.put_scp(source_path, remote_filename = remote_filename)
+            self.put_scp(source_path, remote_filename)
         else:
-            self.put_sftp(source_path, remote_filename = remote_filename)
+            self.put_sftp(source_path, remote_filename)
 
-    def put_sftp(self, source_path, remote_filename = None):
-        """Use sftp to copy source_dir/filename to remote computer"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def put_sftp(self, source_path, remote_filename):
         commands = ["put \"%s\" \"%s.%s.part\"" %
                     (source_path.name, self.remote_prefix, remote_filename),
                     "rename \"%s.%s.part\" \"%s%s\"" %
@@ -239,53 +227,36 @@
                                      self.host_string))
         self.run_sftp_command(commandline, commands)
 
-    def put_scp(self, source_path, remote_filename = None):
-        """Use scp to copy source_dir/filename to remote computer"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def put_scp(self, source_path, remote_filename):
         commandline = "%s %s %s %s:%s%s" % \
             (self.scp_command, globals.ssh_options, source_path.name, self.host_string,
              self.remote_prefix, remote_filename)
         self.run_scp_command(commandline)
 
-    def get(self, remote_filename, local_path):
+    def _get(self, remote_filename, local_path):
         if globals.use_scp:
             self.get_scp(remote_filename, local_path)
         else:
             self.get_sftp(remote_filename, local_path)
 
     def get_sftp(self, remote_filename, local_path):
-        """Use sftp to get a remote file"""
         commands = ["get \"%s%s\" \"%s\"" %
                     (self.remote_prefix, remote_filename, local_path.name)]
         commandline = ("%s %s %s" % (self.sftp_command,
                                      globals.ssh_options,
                                      self.host_string))
         self.run_sftp_command(commandline, commands)
-        local_path.setdata()
-        if not local_path.exists():
-            raise BackendException("File %s not found locally after get "
-                                   "from backend" % local_path.name)
 
     def get_scp(self, remote_filename, local_path):
-        """Use scp to get a remote file"""
         commandline = "%s %s %s:%s%s %s" % \
             (self.scp_command, globals.ssh_options, self.host_string, self.remote_prefix,
              remote_filename, local_path.name)
         self.run_scp_command(commandline)
-        local_path.setdata()
-        if not local_path.exists():
-            raise BackendException("File %s not found locally after get "
-                                   "from backend" % local_path.name)
 
     def _list(self):
-        """
-        List files available for scp
-
-        Note that this command can get confused when dealing with
-        files with newlines in them, as the embedded newlines cannot
-        be distinguished from the file boundaries.
-        """
+        # Note that this command can get confused when dealing with
+        # files with newlines in them, as the embedded newlines cannot
+        # be distinguished from the file boundaries.
         dirs = self.remote_dir.split(os.sep)
         if len(dirs) > 0:
             if not dirs[0] :
@@ -304,16 +275,8 @@
 
         return [x for x in map(string.strip, l) if x]
 
-    def delete(self, filename_list):
-        """
-        Runs sftp rm to delete files.  Files must not require quoting.
-        """
+    def _delete(self, filename):
         commands = ["cd \"%s\"" % (self.remote_dir,)]
-        for fn in filename_list:
-            commands.append("rm \"%s\"" % fn)
+        commands.append("rm \"%s\"" % filename)
         commandline = ("%s %s %s" % (self.sftp_command, globals.ssh_options, self.host_string))
         self.run_sftp_command(commandline, commands)
-
-duplicity.backend.register_backend("ssh", SSHPExpectBackend)
-duplicity.backend.register_backend("scp", SSHPExpectBackend)
-duplicity.backend.register_backend("sftp", SSHPExpectBackend)
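`run_scp_command` above is a small state machine driven by pexpect match indices: it moves between "authorizing", "copying", and "stalled", and breaks out of the loop on EOF or a timeout. A minimal sketch of that transition logic as a table (illustrative only; the real code interleaves logging and password handling):

```python
# Maps (state, match index) to the next state; anything not listed
# (EOF = 0, timeout = 1, auth failures) terminates the loop, here None.
TRANSITIONS = {
    ('authorizing', 2): 'copying',  # password prompt answered
    ('copying', 2): 'stalled',      # "stalled" seen in scp output
    ('stalled', 2): 'copying',      # progress ("ETA") resumed
}

def next_state(state, match):
    return TRANSITIONS.get((state, match))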

=== modified file 'duplicity/backends/botobackend.py'
--- duplicity/backends/botobackend.py	2014-04-17 21:54:04 +0000
+++ duplicity/backends/botobackend.py	2014-04-28 02:49:55 +0000
@@ -22,14 +22,12 @@
 
 import duplicity.backend
 from duplicity import globals
-from ._boto_multi import BotoBackend as BotoMultiUploadBackend
-from ._boto_single import BotoBackend as BotoSingleUploadBackend
 
 if globals.s3_use_multiprocessing:
-    duplicity.backend.register_backend("gs", BotoMultiUploadBackend)
-    duplicity.backend.register_backend("s3", BotoMultiUploadBackend)
-    duplicity.backend.register_backend("s3+http", BotoMultiUploadBackend)
+    from ._boto_multi import BotoBackend
 else:
-    duplicity.backend.register_backend("gs", BotoSingleUploadBackend)
-    duplicity.backend.register_backend("s3", BotoSingleUploadBackend)
-    duplicity.backend.register_backend("s3+http", BotoSingleUploadBackend)
+    from ._boto_single import BotoBackend
+
+duplicity.backend.register_backend("gs", BotoBackend)
+duplicity.backend.register_backend("s3", BotoBackend)
+duplicity.backend.register_backend("s3+http", BotoBackend)

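The botobackend hunk shows the simplified registration pattern: choose the implementation once at import time, then register a single class name. A self-contained sketch of the same pattern with stand-in names (`use_multiprocessing`, the two classes, and `register_backend` here are illustrative, not duplicity's API):

```python
# Pick the implementation once, up front, instead of registering each
# variant under every scheme in both branches of the conditional.
use_multiprocessing = False  # stands in for globals.s3_use_multiprocessing

class SingleUploadBackend(object):
    mode = 'single'

class MultiUploadBackend(object):
    mode = 'multi'

if use_multiprocessing:
    Backend = MultiUploadBackend
else:
    Backend = SingleUploadBackend

registry = {}
def register_backend(scheme, cls):
    registry[scheme] = cls

# One registration loop serves all schemes, whichever class was chosen.
for scheme in ('gs', 's3', 's3+http'):
    register_backend(scheme, Backend)

print(registry['s3'].mode)  # single
```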
=== modified file 'duplicity/backends/cfbackend.py'
--- duplicity/backends/cfbackend.py	2014-04-17 21:54:04 +0000
+++ duplicity/backends/cfbackend.py	2014-04-28 02:49:55 +0000
@@ -18,10 +18,13 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+import duplicity.backend
 from duplicity import globals
 
 if (globals.cf_backend and
     globals.cf_backend.lower().strip() == 'pyrax'):
-    from . import _cf_pyrax
+    from ._cf_pyrax import PyraxBackend as CFBackend
 else:
-    from . import _cf_cloudfiles
+    from ._cf_cloudfiles import CloudFilesBackend as CFBackend
+
+duplicity.backend.register_backend("cf+http", CFBackend)

=== modified file 'duplicity/backends/dpbxbackend.py'
--- duplicity/backends/dpbxbackend.py	2014-04-17 21:49:37 +0000
+++ duplicity/backends/dpbxbackend.py	2014-04-28 02:49:55 +0000
@@ -32,14 +32,10 @@
 from functools import reduce
 
 import traceback, StringIO
-from exceptions import Exception
 
 import duplicity.backend
-from duplicity import globals
 from duplicity import log
-from duplicity.errors import *
-from duplicity import tempdir
-from duplicity.backend import retry_fatal
+from duplicity.errors import BackendException
 
 
 # This application key is registered in my name (jno at pisem dot net).
@@ -76,14 +72,14 @@
         def wrapper(self, *args):
             from dropbox import rest
             if login_required and not self.sess.is_linked():
-              log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
+              raise BackendException("dpbx Cannot login: check your credentials")
               return
 
             try:
                 return f(self, *args)
             except TypeError as e:
                 log_exception(e)
-                log.FatalError('dpbx type error "%s"' % (e,), log.ErrorCode.backend_code_error)
+                raise BackendException('dpbx type error "%s"' % (e,))
             except rest.ErrorResponse as e:
                 msg = e.user_error_msg or str(e)
                 log.Error('dpbx error: %s' % (msg,), log.ErrorCode.backend_command_error)
@@ -165,25 +161,22 @@
           if not self.sess.is_linked(): # stil not logged in
             log.FatalError("dpbx Cannot login: check your credentials",log.ErrorCode.dpbx_nologin)
 
-    @retry_fatal
+    def _error_code(self, operation, e):
+        from dropbox import rest
+        if isinstance(e, rest.ErrorResponse):
+            if e.status == 404:
+                return log.ErrorCode.backend_not_found
+
     @command()
-    def put(self, source_path, remote_filename = None):
-        """Transfer source_path to remote_filename"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
+    def _put(self, source_path, remote_filename):
         remote_dir  = urllib.unquote(self.parsed_url.path.lstrip('/'))
         remote_path = os.path.join(remote_dir, remote_filename).rstrip()
-
         from_file = open(source_path.name, "rb")
-
         resp = self.api_client.put_file(remote_path, from_file)
         log.Debug( 'dpbx,put(%s,%s): %s'%(source_path.name, remote_path, resp))
 
-    @retry_fatal
     @command()
-    def get(self, remote_filename, local_path):
-        """Get remote filename, saving it to local_path"""
+    def _get(self, remote_filename, local_path):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
 
         to_file = open( local_path.name, 'wb' )
@@ -196,10 +189,8 @@
 
         local_path.setdata()
 
-    @retry_fatal
     @command()
-    def _list(self,none=None):
-        """List files in directory"""
+    def _list(self):
         # Do a long listing to avoid connection reset
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
         resp = self.api_client.metadata(remote_dir)
@@ -214,21 +205,15 @@
                 l.append(name.encode(encoding))
         return l
 
-    @retry_fatal
     @command()
-    def delete(self, filename_list):
-        """Delete files in filename_list"""
-        if not filename_list :
-          log.Debug('dpbx.delete(): no op')
-          return
+    def _delete(self, filename):
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
-        for filename in filename_list:
-          remote_name = os.path.join( remote_dir, filename )
-          resp = self.api_client.file_delete( remote_name )
-          log.Debug('dpbx.delete(%s): %s'%(remote_name,resp))
+        remote_name = os.path.join( remote_dir, filename )
+        resp = self.api_client.file_delete( remote_name )
+        log.Debug('dpbx.delete(%s): %s'%(remote_name,resp))
 
     @command()
-    def close(self):
+    def _close(self):
       """close backend session? no! just "flush" the data"""
       info = self.api_client.account_info()
       log.Debug('dpbx.close():')

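The new `_error_code()` hook in dpbxbackend lets a backend translate library-specific exceptions into duplicity error codes, with the wrapper supplying a generic fallback when the hook returns None. A hedged sketch of how such a hook could be consumed; the error-code constants and the `classify()` helper are invented for illustration:

```python
# Stand-ins for log.ErrorCode values; the real ones live in duplicity.log.
BACKEND_ERROR = 'backend_error'
BACKEND_NOT_FOUND = 'backend_not_found'

class NotFoundError(Exception):
    """Illustrative stand-in for a library error like rest.ErrorResponse."""
    status = 404

class ExampleBackend(object):
    def _error_code(self, operation, e):
        # Map a library-specific exception to a duplicity error code,
        # or return None to let the wrapper pick the generic code.
        if isinstance(e, NotFoundError) and e.status == 404:
            return BACKEND_NOT_FOUND
        return None

def classify(backend, operation, e):
    # Roughly what the wrapper does: ask the backend, fall back to generic.
    return backend._error_code(operation, e) or BACKEND_ERROR

backend = ExampleBackend()
print(classify(backend, 'get', NotFoundError()))  # backend_not_found
print(classify(backend, 'get', RuntimeError()))   # backend_error
```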
=== modified file 'duplicity/backends/ftpbackend.py'
--- duplicity/backends/ftpbackend.py	2014-04-25 23:20:12 +0000
+++ duplicity/backends/ftpbackend.py	2014-04-28 02:49:55 +0000
@@ -25,7 +25,6 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
 from duplicity import tempdir
 
 class FTPBackend(duplicity.backend.Backend):
@@ -65,7 +64,7 @@
         # This squelches the "file not found" result from ncftpls when
         # the ftp backend looks for a collection that does not exist.
         # version 3.2.2 has error code 5, 1280 is some legacy value
-        self.popen_persist_breaks[ 'ncftpls' ] = [ 5, 1280 ]
+        self.popen_breaks[ 'ncftpls' ] = [ 5, 1280 ]
 
         # Use an explicit directory name.
         if self.url_string[-1] != '/':
@@ -88,36 +87,28 @@
         if parsed_url.port != None and parsed_url.port != 21:
             self.flags += " -P '%s'" % (parsed_url.port)
 
-    def put(self, source_path, remote_filename = None):
-        """Transfer source_path to remote_filename"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip()
         commandline = "ncftpput %s -m -V -C '%s' '%s'" % \
             (self.flags, source_path.name, remote_path)
-        self.run_command_persist(commandline)
+        self.subprocess_popen(commandline)
 
-    def get(self, remote_filename, local_path):
-        """Get remote filename, saving it to local_path"""
+    def _get(self, remote_filename, local_path):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
         commandline = "ncftpget %s -V -C '%s' '%s' '%s'" % \
             (self.flags, self.parsed_url.hostname, remote_path.lstrip('/'), local_path.name)
-        self.run_command_persist(commandline)
-        local_path.setdata()
+        self.subprocess_popen(commandline)
 
     def _list(self):
-        """List files in directory"""
         # Do a long listing to avoid connection reset
         commandline = "ncftpls %s -l '%s'" % (self.flags, self.url_string)
-        l = self.popen_persist(commandline).split('\n')
+        _, l, _ = self.subprocess_popen(commandline)
         # Look for our files as the last element of a long list line
-        return [x.split()[-1] for x in l if x and not x.startswith("total ")]
+        return [x.split()[-1] for x in l.split('\n') if x and not x.startswith("total ")]
 
-    def delete(self, filename_list):
-        """Delete files in filename_list"""
-        for filename in filename_list:
-            commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \
-                (self.flags, filename, self.url_string)
-            self.popen_persist(commandline)
+    def _delete(self, filename):
+        commandline = "ncftpls %s -l -X 'DELE %s' '%s'" % \
+            (self.flags, filename, self.url_string)
+        self.subprocess_popen(commandline)
 
 duplicity.backend.register_backend("ftp", FTPBackend)

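The ftpbackend hunks replace `popen_persist()`, which returned raw output, with `subprocess_popen()`, which (per this branch) returns a `(status, stdout, stderr)` tuple, so `_list()` now unpacks the tuple before parsing. A runnable sketch of that parsing with a faked command output (the sample listing is made up):

```python
def fake_subprocess_popen(commandline):
    # Stand-in for self.subprocess_popen(); returns (status, stdout, stderr).
    stdout = ("total 2\n"
              "-rw-r--r-- 1 u g 10 Apr 28 00:00 dup.vol1.difftar.gpg\n"
              "-rw-r--r-- 1 u g 10 Apr 28 00:00 dup.manifest.gpg\n")
    return 0, stdout, ''

def list_files():
    _, l, _ = fake_subprocess_popen("ncftpls -l 'ftp://host/dir'")
    # Our file name is the last column of each long-listing line;
    # skip blanks and the "total N" summary line, as _list() does.
    return [x.split()[-1] for x in l.split('\n')
            if x and not x.startswith("total ")]

print(list_files())  # ['dup.vol1.difftar.gpg', 'dup.manifest.gpg']
```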
=== modified file 'duplicity/backends/ftpsbackend.py'
--- duplicity/backends/ftpsbackend.py	2014-04-25 23:20:12 +0000
+++ duplicity/backends/ftpsbackend.py	2014-04-28 02:49:55 +0000
@@ -28,7 +28,6 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import *
 from duplicity import tempdir
 
 class FTPSBackend(duplicity.backend.Backend):
@@ -85,42 +84,29 @@
             os.write(self.tempfile, "user %s %s\n" % (self.parsed_url.username, self.password))
         os.close(self.tempfile)
 
-        self.flags = "-f %s" % self.tempname
-
-    def put(self, source_path, remote_filename = None):
-        """Transfer source_path to remote_filename"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path.lstrip('/')), remote_filename).rstrip()
         commandline = "lftp -c 'source %s;put \'%s\' -o \'%s\''" % \
             (self.tempname, source_path.name, remote_path)
-        l = self.run_command_persist(commandline)
+        self.subprocess_popen(commandline)
 
-    def get(self, remote_filename, local_path):
-        """Get remote filename, saving it to local_path"""
+    def _get(self, remote_filename, local_path):
         remote_path = os.path.join(urllib.unquote(self.parsed_url.path), remote_filename).rstrip()
         commandline = "lftp -c 'source %s;get %s -o %s'" % \
             (self.tempname, remote_path.lstrip('/'), local_path.name)
-        self.run_command_persist(commandline)
-        local_path.setdata()
+        self.subprocess_popen(commandline)
 
     def _list(self):
-        """List files in directory"""
         # Do a long listing to avoid connection reset
         remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
         commandline = "lftp -c 'source %s;ls \'%s\''" % (self.tempname, remote_dir)
-        l = self.popen_persist(commandline).split('\n')
+        _, l, _ = self.subprocess_popen(commandline)
         # Look for our files as the last element of a long list line
-        return [x.split()[-1] for x in l if x]
+        return [x.split()[-1] for x in l.split('\n') if x]
 
-    def delete(self, filename_list):
-        """Delete files in filename_list"""
-        filelist = ""
-        for filename in filename_list:
-            filelist += "\'%s\' " % filename
-        if filelist.rstrip():
-            remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
-            commandline = "lftp -c 'source %s;cd \'%s\';rm %s'" % (self.tempname, remote_dir, filelist.rstrip())
-            self.popen_persist(commandline)
+    def _delete(self, filename):
+        remote_dir = urllib.unquote(self.parsed_url.path.lstrip('/')).rstrip()
+        commandline = "lftp -c 'source %s;cd \'%s\';rm \'%s\''" % (self.tempname, remote_dir, filename)
+        self.subprocess_popen(commandline)
 
 duplicity.backend.register_backend("ftps", FTPSBackend)

=== modified file 'duplicity/backends/gdocsbackend.py'
--- duplicity/backends/gdocsbackend.py	2014-04-17 20:50:57 +0000
+++ duplicity/backends/gdocsbackend.py	2014-04-28 02:49:55 +0000
@@ -23,9 +23,7 @@
 import urllib
 
 import duplicity.backend
-from duplicity.backend import retry
-from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import BackendException
 
 
 class GDocsBackend(duplicity.backend.Backend):
@@ -53,14 +51,14 @@
         self.client = gdata.docs.client.DocsClient(source='duplicity $version')
         self.client.ssl = True
         self.client.http_client.debug = False
-        self.__authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password())
+        self._authorize(parsed_url.username + '@' + parsed_url.hostname, self.get_password())
 
         # Fetch destination folder entry (and crete hierarchy if required).
         folder_names = string.split(parsed_url.path[1:], '/')
         parent_folder = None
         parent_folder_id = GDocsBackend.ROOT_FOLDER_ID
         for folder_name in folder_names:
-            entries = self.__fetch_entries(parent_folder_id, 'folder', folder_name)
+            entries = self._fetch_entries(parent_folder_id, 'folder', folder_name)
             if entries is not None:
                 if len(entries) == 1:
                     parent_folder = entries[0]
@@ -77,106 +75,54 @@
                 raise BackendException("Error while fetching destination folder '%s'." % folder_name)
         self.folder = parent_folder
 
-    @retry
-    def put(self, source_path, remote_filename=None, raise_errors=False):
-        """Transfer source_path to remote_filename"""
-        # Default remote file name.
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        # Upload!
-        try:
-            # If remote file already exists in destination folder, remove it.
-            entries = self.__fetch_entries(self.folder.resource_id.text,
-                                           GDocsBackend.BACKUP_DOCUMENT_TYPE,
-                                           remote_filename)
-            for entry in entries:
-                self.client.delete(entry.get_edit_link().href + '?delete=true', force=True)
-
-            # Set uploader instance. Note that resumable uploads are required in order to
-            # enable uploads for all file types.
-            # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html)
-            file = source_path.open()
-            uploader = gdata.client.ResumableUploader(
-              self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name),
-              chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE,
-              desired_class=gdata.docs.data.Resource)
-            if uploader:
-                # Chunked upload.
-                entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename))
-                uri = self.folder.get_resumable_create_media_link().href + '?convert=false'
-                entry = uploader.UploadFile(uri, entry=entry)
-                if not entry:
-                    self.__handle_error("Failed to upload file '%s' to remote folder '%s'"
-                                        % (source_path.get_filename(), self.folder.title.text), raise_errors)
-            else:
-                self.__handle_error("Failed to initialize upload of file '%s' to remote folder '%s'"
-                         % (source_path.get_filename(), self.folder.title.text), raise_errors)
-            assert not file.close()
-        except Exception as e:
-            self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s"
-                                % (source_path.get_filename(), self.folder.title.text, str(e)), raise_errors)
-
-    @retry
-    def get(self, remote_filename, local_path, raise_errors=False):
-        """Get remote filename, saving it to local_path"""
-        try:
-            entries = self.__fetch_entries(self.folder.resource_id.text,
-                                           GDocsBackend.BACKUP_DOCUMENT_TYPE,
-                                           remote_filename)
-            if len(entries) == 1:
-                entry = entries[0]
-                self.client.DownloadResource(entry, local_path.name)
-                local_path.setdata()
-                return
-            else:
-                self.__handle_error("Failed to find file '%s' in remote folder '%s'"
-                                    % (remote_filename, self.folder.title.text), raise_errors)
-        except Exception as e:
-            self.__handle_error("Failed to download file '%s' in remote folder '%s': %s"
-                                 % (remote_filename, self.folder.title.text, str(e)), raise_errors)
-
-    @retry
-    def _list(self, raise_errors=False):
-        """List files in folder"""
-        try:
-            entries = self.__fetch_entries(self.folder.resource_id.text,
-                                           GDocsBackend.BACKUP_DOCUMENT_TYPE)
-            return [entry.title.text for entry in entries]
-        except Exception as e:
-            self.__handle_error("Failed to fetch list of files in remote folder '%s': %s"
-                                % (self.folder.title.text, str(e)), raise_errors)
-
-    @retry
-    def delete(self, filename_list, raise_errors=False):
-        """Delete files in filename_list"""
-        for filename in filename_list:
-            try:
-                entries = self.__fetch_entries(self.folder.resource_id.text,
-                                               GDocsBackend.BACKUP_DOCUMENT_TYPE,
-                                               filename)
-                if len(entries) > 0:
-                    success = True
-                    for entry in entries:
-                        if not self.client.delete(entry.get_edit_link().href + '?delete=true', force=True):
-                            success = False
-                    if not success:
-                        self.__handle_error("Failed to remove file '%s' in remote folder '%s'"
-                                            % (filename, self.folder.title.text), raise_errors)
-                else:
-                    log.Warn("Failed to fetch file '%s' in remote folder '%s'"
-                             % (filename, self.folder.title.text))
-            except Exception as e:
-                self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s"
-                                    % (filename, self.folder.title.text, str(e)), raise_errors)
-
-    def __handle_error(self, message, raise_errors=True):
-        if raise_errors:
-            raise BackendException(message)
-        else:
-            log.FatalError(message, log.ErrorCode.backend_error)
-
-    def __authorize(self, email, password, captcha_token=None, captcha_response=None):
+    def _put(self, source_path, remote_filename):
+        self._delete(remote_filename)
+
+        # Set uploader instance. Note that resumable uploads are required in order to
+        # enable uploads for all file types.
+        # (see http://googleappsdeveloper.blogspot.com/2011/05/upload-all-file-types-to-any-google.html)
+        file = source_path.open()
+        uploader = gdata.client.ResumableUploader(
+          self.client, file, GDocsBackend.BACKUP_DOCUMENT_TYPE, os.path.getsize(file.name),
+          chunk_size=gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE,
+          desired_class=gdata.docs.data.Resource)
+        if uploader:
+            # Chunked upload.
+            entry = gdata.docs.data.Resource(title=atom.data.Title(text=remote_filename))
+            uri = self.folder.get_resumable_create_media_link().href + '?convert=false'
+            entry = uploader.UploadFile(uri, entry=entry)
+            if not entry:
+                raise BackendException("Failed to upload file '%s' to remote folder '%s'"
+                                       % (source_path.get_filename(), self.folder.title.text))
+        else:
+            raise BackendException("Failed to initialize upload of file '%s' to remote folder '%s'"
+                                   % (source_path.get_filename(), self.folder.title.text))
+        assert not file.close()
+
+    def _get(self, remote_filename, local_path):
+        entries = self._fetch_entries(self.folder.resource_id.text,
+                                      GDocsBackend.BACKUP_DOCUMENT_TYPE,
+                                      remote_filename)
+        if len(entries) == 1:
+            entry = entries[0]
+            self.client.DownloadResource(entry, local_path.name)
+        else:
+            raise BackendException("Failed to find file '%s' in remote folder '%s'"
+                                   % (remote_filename, self.folder.title.text))
+
+    def _list(self):
+        entries = self._fetch_entries(self.folder.resource_id.text,
+                                      GDocsBackend.BACKUP_DOCUMENT_TYPE)
+        return [entry.title.text for entry in entries]
+
+    def _delete(self, filename):
+        entries = self._fetch_entries(self.folder.resource_id.text,
+                                      GDocsBackend.BACKUP_DOCUMENT_TYPE,
+                                      filename)
+        for entry in entries:
+            self.client.delete(entry.get_edit_link().href + '?delete=true', force=True)
+
+    def _authorize(self, email, password, captcha_token=None, captcha_response=None):
         try:
             self.client.client_login(email,
                                      password,
@@ -189,17 +135,15 @@
             answer = None
             while not answer:
                 answer = raw_input('Answer to the challenge? ')
-            self.__authorize(email, password, challenge.captcha_token, answer)
+            self._authorize(email, password, challenge.captcha_token, answer)
         except gdata.client.BadAuthentication:
-            self.__handle_error('Invalid user credentials given. Be aware that accounts '
-                                'that use 2-step verification require creating an application specific '
-                                'access code for using this Duplicity backend. Follow the instrucction in '
-                                'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '
-                                'and create your application-specific password to run duplicity backups.')
-        except Exception as e:
-            self.__handle_error('Error while authenticating client: %s.' % str(e))
+            raise BackendException('Invalid user credentials given. Be aware that accounts '
+                                   'that use 2-step verification require creating an application specific '
+                                   'access code for using this Duplicity backend. Follow the instruction in '
+                                   'http://www.google.com/support/accounts/bin/static.py?page=guide.cs&guide=1056283&topic=1056286 '
+                                   'and create your application-specific password to run duplicity backups.')
 
-    def __fetch_entries(self, folder_id, type, title=None):
+    def _fetch_entries(self, folder_id, type, title=None):
         # Build URI.
         uri = '/feeds/default/private/full/%s/contents' % folder_id
         if type == 'folder':
@@ -211,34 +155,31 @@
         if title:
             uri += '&title=' + urllib.quote(title) + '&title-exact=true'
 
-        try:
-            # Fetch entries.
-            entries = self.client.get_all_resources(uri=uri)
-
-            # When filtering by entry title, API is returning (don't know why) documents in other
-            # folders (apart from folder_id) matching the title, so some extra filtering is required.
-            if title:
-                result = []
-                for entry in entries:
-                    resource_type = entry.get_resource_type()
-                    if (not type) \
-                       or (type == 'folder' and resource_type == 'folder') \
-                       or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'):
-
-                        if folder_id != GDocsBackend.ROOT_FOLDER_ID:
-                            for link in entry.in_collections():
-                                folder_entry = self.client.get_entry(link.href, None, None,
-                                                                     desired_class=gdata.docs.data.Resource)
-                                if folder_entry and (folder_entry.resource_id.text == folder_id):
-                                    result.append(entry)
-                        elif len(entry.in_collections()) == 0:
-                            result.append(entry)
-            else:
-                result = entries
-
-            # Done!
-            return result
-        except Exception as e:
-            self.__handle_error('Error while fetching remote entries: %s.' % str(e))
+        # Fetch entries.
+        entries = self.client.get_all_resources(uri=uri)
+
+        # When filtering by entry title, the API returns (for reasons unknown) documents in
+        # other folders (apart from folder_id) that match the title, so extra filtering is required.
+        if title:
+            result = []
+            for entry in entries:
+                resource_type = entry.get_resource_type()
+                if (not type) \
+                   or (type == 'folder' and resource_type == 'folder') \
+                   or (type == GDocsBackend.BACKUP_DOCUMENT_TYPE and resource_type != 'folder'):
+
+                    if folder_id != GDocsBackend.ROOT_FOLDER_ID:
+                        for link in entry.in_collections():
+                            folder_entry = self.client.get_entry(link.href, None, None,
+                                                                 desired_class=gdata.docs.data.Resource)
+                            if folder_entry and (folder_entry.resource_id.text == folder_id):
+                                result.append(entry)
+                    elif len(entry.in_collections()) == 0:
+                        result.append(entry)
+        else:
+            result = entries
+
+        # Done!
+        return result
 
 duplicity.backend.register_backend('gdocs', GDocsBackend)

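The gdocsbackend rewrite drops the per-method `@retry` decorators and the `raise_errors` plumbing because retrying now lives in the wrapper class. A minimal sketch of such a wrapper-side retry loop; the retry count and helper names are illustrative, not duplicity's actual implementation:

```python
import time

def with_retries(fn, *args, **kwargs):
    # Roughly what the wrapper does for every backend operation: retry on
    # any exception, and surface the final failure to the user.
    retries = 3
    for attempt in range(1, retries + 1):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == retries:
                raise  # last attempt: let the real error reach the user
            time.sleep(0)  # duplicity would back off between attempts here

calls = []
def flaky_put():
    # Simulates a backend method that fails twice, then succeeds.
    calls.append(1)
    if len(calls) < 3:
        raise IOError("transient failure")
    return 'ok'

print(with_retries(flaky_put))  # ok
```

With this loop in the wrapper, a backend method can simply raise on failure and still behave well under transient errors.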
=== modified file 'duplicity/backends/giobackend.py'
--- duplicity/backends/giobackend.py	2014-04-17 20:50:57 +0000
+++ duplicity/backends/giobackend.py	2014-04-28 02:49:55 +0000
@@ -19,18 +19,12 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import types
 import subprocess
 import atexit
 import signal
-from gi.repository import Gio #@UnresolvedImport
-from gi.repository import GLib #@UnresolvedImport
 
 import duplicity.backend
-from duplicity.backend import retry
 from duplicity import log
-from duplicity import util
-from duplicity.errors import * #@UnusedWildImport
 
 def ensure_dbus():
     # GIO requires a dbus session bus which can start the gvfs daemons
@@ -46,36 +40,39 @@
                     atexit.register(os.kill, int(parts[1]), signal.SIGTERM)
                 os.environ[parts[0]] = parts[1]
 
-class DupMountOperation(Gio.MountOperation):
-    """A simple MountOperation that grabs the password from the environment
-       or the user.
-    """
-    def __init__(self, backend):
-        Gio.MountOperation.__init__(self)
-        self.backend = backend
-        self.connect('ask-password', self.ask_password_cb)
-        self.connect('ask-question', self.ask_question_cb)
-
-    def ask_password_cb(self, *args, **kwargs):
-        self.set_password(self.backend.get_password())
-        self.reply(Gio.MountOperationResult.HANDLED)
-
-    def ask_question_cb(self, *args, **kwargs):
-        # Obviously just always answering with the first choice is a naive
-        # approach.  But there's no easy way to allow for answering questions
-        # in duplicity's typical run-from-cron mode with environment variables.
-        # And only a couple gvfs backends ask questions: 'sftp' does about
-        # new hosts and 'afc' does if the device is locked.  0 should be a
-        # safe choice.
-        self.set_choice(0)
-        self.reply(Gio.MountOperationResult.HANDLED)
-
 class GIOBackend(duplicity.backend.Backend):
     """Use this backend when saving to a GIO URL.
        This is a bit of a meta-backend, in that it can handle multiple schemas.
        URLs look like schema://user@server/path.
     """
     def __init__(self, parsed_url):
+        from gi.repository import Gio #@UnresolvedImport
+        from gi.repository import GLib #@UnresolvedImport
+
+        class DupMountOperation(Gio.MountOperation):
+            """A simple MountOperation that grabs the password from the environment
+               or the user.
+            """
+            def __init__(self, backend):
+                Gio.MountOperation.__init__(self)
+                self.backend = backend
+                self.connect('ask-password', self.ask_password_cb)
+                self.connect('ask-question', self.ask_question_cb)
+
+            def ask_password_cb(self, *args, **kwargs):
+                self.set_password(self.backend.get_password())
+                self.reply(Gio.MountOperationResult.HANDLED)
+
+            def ask_question_cb(self, *args, **kwargs):
+                # Obviously just always answering with the first choice is a naive
+                # approach.  But there's no easy way to allow for answering questions
+                # in duplicity's typical run-from-cron mode with environment variables.
+                # And only a couple gvfs backends ask questions: 'sftp' does about
+                # new hosts and 'afc' does if the device is locked.  0 should be a
+                # safe choice.
+                self.set_choice(0)
+                self.reply(Gio.MountOperationResult.HANDLED)
+
         duplicity.backend.Backend.__init__(self, parsed_url)
 
         ensure_dbus()
@@ -86,8 +83,8 @@
         op = DupMountOperation(self)
         loop = GLib.MainLoop()
         self.remote_file.mount_enclosing_volume(Gio.MountMountFlags.NONE,
-                                                op, None, self.done_with_mount,
-                                                loop)
+                                                op, None,
+                                                self.__done_with_mount, loop)
         loop.run() # halt program until we're done mounting
 
         # Now make the directory if it doesn't exist
@@ -97,7 +94,9 @@
             if e.code != Gio.IOErrorEnum.EXISTS:
                 raise
 
-    def done_with_mount(self, fileobj, result, loop):
+    def __done_with_mount(self, fileobj, result, loop):
+        from gi.repository import Gio #@UnresolvedImport
+        from gi.repository import GLib #@UnresolvedImport
         try:
             fileobj.mount_enclosing_volume_finish(result)
         except GLib.GError as e:
@@ -107,97 +106,63 @@
                                % str(e), log.ErrorCode.connection_failed)
         loop.quit()
 
-    def handle_error(self, raise_error, e, op, file1=None, file2=None):
-        if raise_error:
-            raise e
-        code = log.ErrorCode.backend_error
+    def __copy_progress(self, *args, **kwargs):
+        pass
+
+    def __copy_file(self, source, target):
+        from gi.repository import Gio #@UnresolvedImport
+        source.copy(target,
+                    Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS,
+                    None, self.__copy_progress, None)
+
+    def _error_code(self, operation, e):
+        from gi.repository import Gio #@UnresolvedImport
+        from gi.repository import GLib #@UnresolvedImport
         if isinstance(e, GLib.GError):
-            if e.code == Gio.IOErrorEnum.PERMISSION_DENIED:
-                code = log.ErrorCode.backend_permission_denied
+            if e.code == Gio.IOErrorEnum.FAILED and operation == 'delete':
+                # Sometimes delete returns a generic FAILED code for a missing
+                # file (notably the FTP backend does this)
+                return log.ErrorCode.backend_not_found
+            elif e.code == Gio.IOErrorEnum.PERMISSION_DENIED:
+                return log.ErrorCode.backend_permission_denied
             elif e.code == Gio.IOErrorEnum.NOT_FOUND:
-                code = log.ErrorCode.backend_not_found
+                return log.ErrorCode.backend_not_found
             elif e.code == Gio.IOErrorEnum.NO_SPACE:
-                code = log.ErrorCode.backend_no_space
-        extra = ' '.join([util.escape(x) for x in [file1, file2] if x])
-        extra = ' '.join([op, extra])
-        log.FatalError(str(e), code, extra)
-
-    def copy_progress(self, *args, **kwargs):
-        pass
-
-    @retry
-    def copy_file(self, op, source, target, raise_errors=False):
-        log.Info(_("Writing %s") % target.get_parse_name())
-        try:
-            source.copy(target,
-                        Gio.FileCopyFlags.OVERWRITE | Gio.FileCopyFlags.NOFOLLOW_SYMLINKS,
-                        None, self.copy_progress, None)
-        except Exception as e:
-            self.handle_error(raise_errors, e, op, source.get_parse_name(),
-                              target.get_parse_name())
-
-    def put(self, source_path, remote_filename = None):
-        """Copy file to remote"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+                return log.ErrorCode.backend_no_space
+
+    def _put(self, source_path, remote_filename):
+        from gi.repository import Gio #@UnresolvedImport
         source_file = Gio.File.new_for_path(source_path.name)
         target_file = self.remote_file.get_child(remote_filename)
-        self.copy_file('put', source_file, target_file)
+        self.__copy_file(source_file, target_file)
 
-    def get(self, filename, local_path):
-        """Get file and put in local_path (Path object)"""
+    def _get(self, filename, local_path):
+        from gi.repository import Gio #@UnresolvedImport
         source_file = self.remote_file.get_child(filename)
         target_file = Gio.File.new_for_path(local_path.name)
-        self.copy_file('get', source_file, target_file)
-        local_path.setdata()
+        self.__copy_file(source_file, target_file)
 
-    @retry
-    def _list(self, raise_errors=False):
-        """List files in that directory"""
+    def _list(self):
+        from gi.repository import Gio #@UnresolvedImport
         files = []
-        try:
-            enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME,
-                                                       Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
-                                                       None)
+        enum = self.remote_file.enumerate_children(Gio.FILE_ATTRIBUTE_STANDARD_NAME,
+                                                   Gio.FileQueryInfoFlags.NOFOLLOW_SYMLINKS,
+                                                   None)
+        info = enum.next_file(None)
+        while info:
+            files.append(info.get_name())
             info = enum.next_file(None)
-            while info:
-                files.append(info.get_name())
-                info = enum.next_file(None)
-        except Exception as e:
-            self.handle_error(raise_errors, e, 'list',
-                              self.remote_file.get_parse_name())
         return files
 
-    @retry
-    def delete(self, filename_list, raise_errors=False):
-        """Delete all files in filename list"""
-        assert type(filename_list) is not types.StringType
-        for filename in filename_list:
-            target_file = self.remote_file.get_child(filename)
-            try:
-                target_file.delete(None)
-            except Exception as e:
-                if isinstance(e, GLib.GError):
-                    if e.code == Gio.IOErrorEnum.NOT_FOUND:
-                        continue
-                self.handle_error(raise_errors, e, 'delete',
-                                  target_file.get_parse_name())
-                return
-
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        """Query attributes on filename"""
-        target_file = self.remote_file.get_child(filename)
-        attrs = Gio.FILE_ATTRIBUTE_STANDARD_SIZE
-        try:
-            info = target_file.query_info(attrs, Gio.FileQueryInfoFlags.NONE,
-                                          None)
-            return {'size': info.get_size()}
-        except Exception as e:
-            if isinstance(e, GLib.GError):
-                if e.code == Gio.IOErrorEnum.NOT_FOUND:
-                    return {'size': -1} # early exit, no need to retry
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
+    def _delete(self, filename):
+        target_file = self.remote_file.get_child(filename)
+        target_file.delete(None)
+
+    def _query(self, filename):
+        from gi.repository import Gio #@UnresolvedImport
+        target_file = self.remote_file.get_child(filename)
+        info = target_file.query_info(Gio.FILE_ATTRIBUTE_STANDARD_SIZE,
+                                      Gio.FileQueryInfoFlags.NONE, None)
+        return {'size': info.get_size()}
+
+duplicity.backend.register_backend_prefix('gio', GIOBackend)
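The gio hunks above all converge on the single-underscore API from the merge description: backends implement `_put`/`_get`/`_list`/`_delete`/`_query` and simply let exceptions escape, trusting the wrapper to retry and report. A minimal standalone sketch of a conforming backend — the `DictBackend` name and in-memory storage are invented for illustration, and the real `duplicity.backend.Backend` base class is stubbed so the example runs on its own:

```python
class Backend(object):
    """Stand-in for duplicity.backend.Backend, just enough to run the sketch."""
    def __init__(self, parsed_url):
        self.parsed_url = parsed_url


class DictBackend(Backend):
    """Hypothetical backend keeping 'remote' files in a dict.

    Note there is no retry logic and no exception handling here: per the
    new API, a failed _delete or _get may simply raise, and the wrapper
    class turns that into retries and user-facing error messages.
    """
    def __init__(self, parsed_url):
        Backend.__init__(self, parsed_url)
        self.files = {}

    def _put(self, source_path, remote_filename):
        # source_path is a plain path string in this sketch (duplicity
        # passes a Path object)
        with open(source_path, 'rb') as f:
            self.files[remote_filename] = f.read()

    def _get(self, remote_filename, local_path):
        with open(local_path, 'wb') as f:
            f.write(self.files[remote_filename])

    def _list(self):
        return list(self.files)

    def _delete(self, filename):
        del self.files[filename]  # KeyError escapes; the wrapper handles it

    def _query(self, filename):
        # Same convention as the gio/local backends above: -1 means missing
        data = self.files.get(filename)
        return {'size': -1 if data is None else len(data)}
```

Everything optional (`_move`, `_close`, `_retry_cleanup`, `_error_code`) can simply be omitted; the wrapper probes for those attributes, as the par2 backend's `hasattr` loop later in this diff shows.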

=== modified file 'duplicity/backends/hsibackend.py'
--- duplicity/backends/hsibackend.py	2014-04-25 23:20:12 +0000
+++ duplicity/backends/hsibackend.py	2014-04-28 02:49:55 +0000
@@ -20,9 +20,7 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-
 import duplicity.backend
-from duplicity.errors import * #@UnusedWildImport
 
 hsi_command = "hsi"
 class HSIBackend(duplicity.backend.Backend):
@@ -35,35 +33,23 @@
         else:
             self.remote_prefix = ""
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         commandline = '%s "put %s : %s%s"' % (hsi_command,source_path.name,self.remote_prefix,remote_filename)
-        try:
-            self.run_command(commandline)
-        except Exception:
-            print commandline
+        self.subprocess_popen(commandline)
 
-    def get(self, remote_filename, local_path):
+    def _get(self, remote_filename, local_path):
         commandline = '%s "get %s : %s%s"' % (hsi_command, local_path.name, self.remote_prefix, remote_filename)
-        self.run_command(commandline)
-        local_path.setdata()
-        if not local_path.exists():
-            raise BackendException("File %s not found" % local_path.name)
+        self.subprocess_popen(commandline)
 
-    def list(self):
+    def _list(self):
         commandline = '%s "ls -l %s"' % (hsi_command, self.remote_dir)
         l = os.popen3(commandline)[2].readlines()[3:]
         for i in range(0,len(l)):
             l[i] = l[i].split()[-1]
         return [x for x in l if x]
 
-    def delete(self, filename_list):
-        assert len(filename_list) > 0
-        for fn in filename_list:
-            commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, fn)
-            self.run_command(commandline)
+    def _delete(self, filename):
+        commandline = '%s "rm %s%s"' % (hsi_command, self.remote_prefix, filename)
+        self.subprocess_popen(commandline)
 
 duplicity.backend.register_backend("hsi", HSIBackend)
-
-
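The `_list` method kept above still shells out to `hsi` and scrapes its long-format listing: drop the first three header lines, then keep the last whitespace-separated column of each remaining line. A standalone sketch of that parse — the sample output below is invented for illustration, not captured from a real `hsi` session:

```python
# Fake hsi "ls -l" output: three header/banner lines, then one file per line.
sample = """\
Username: user  UID: 1234
hsi banner line
hsi banner line
-rw-------   1 user  grp   1024 Apr 28 2014 duplicity-full.manifest.gpg
-rw-------   1 user  grp  52428 Apr 28 2014 duplicity-full.vol1.difftar.gpg
"""


def parse_hsi_listing(output):
    # Mirrors HSIBackend._list: skip the 3-line header ([3:]), take the
    # last column of each line, and drop any empty results.
    lines = output.splitlines()[3:]
    names = [line.split()[-1] for line in lines if line.split()]
    return names
```

This is the kind of glue the merge description's mock commands exercise: the fake `hsi` only has to emit output in this shape for the parsing path to be testable locally.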

=== modified file 'duplicity/backends/imapbackend.py'
--- duplicity/backends/imapbackend.py	2014-04-17 22:03:10 +0000
+++ duplicity/backends/imapbackend.py	2014-04-28 02:49:55 +0000
@@ -44,7 +44,7 @@
                   (self.__class__.__name__, parsed_url.scheme, parsed_url.hostname, parsed_url.username))
 
         #  Store url for reconnection on error
-        self._url = parsed_url
+        self.url = parsed_url
 
         #  Set the username
         if ( parsed_url.username is None ):
@@ -61,12 +61,12 @@
         else:
             password = parsed_url.password
 
-        self._username = username
-        self._password = password
-        self._resetConnection()
+        self.username = username
+        self.password = password
+        self.resetConnection()
 
-    def _resetConnection(self):
-        parsed_url = self._url
+    def resetConnection(self):
+        parsed_url = self.url
         try:
             imap_server = os.environ['IMAP_SERVER']
         except KeyError:
@@ -74,32 +74,32 @@
 
         #  Try to close the connection cleanly
         try:
-            self._conn.close()
+            self.conn.close()
         except Exception:
             pass
 
         if (parsed_url.scheme == "imap"):
             cl = imaplib.IMAP4
-            self._conn = cl(imap_server, 143)
+            self.conn = cl(imap_server, 143)
         elif (parsed_url.scheme == "imaps"):
             cl = imaplib.IMAP4_SSL
-            self._conn = cl(imap_server, 993)
+            self.conn = cl(imap_server, 993)
 
         log.Debug("Type of imap class: %s" % (cl.__name__))
         self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
 
         #  Login
         if (not(globals.imap_full_address)):
-            self._conn.login(self._username, self._password)
-            self._conn.select(globals.imap_mailbox)
+            self.conn.login(self.username, self.password)
+            self.conn.select(globals.imap_mailbox)
             log.Info("IMAP connected")
         else:
-            self._conn.login(self._username + "@" + parsed_url.hostname, self._password)
-            self._conn.select(globals.imap_mailbox)
+            self.conn.login(self.username + "@" + parsed_url.hostname, self.password)
+            self.conn.select(globals.imap_mailbox)
             log.Info("IMAP connected")
 
 
-    def _prepareBody(self,f,rname):
+    def prepareBody(self,f,rname):
         mp = email.MIMEMultipart.MIMEMultipart()
 
         # I am going to use the remote_dir as the From address so that
@@ -117,9 +117,7 @@
 
         return mp.as_string()
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         f=source_path.open("rb")
         allowedTimeout = globals.timeout
         if (allowedTimeout == 0):
@@ -127,12 +125,12 @@
             allowedTimeout = 2880
         while allowedTimeout > 0:
             try:
-                self._conn.select(remote_filename)
-                body=self._prepareBody(f,remote_filename)
+                self.conn.select(remote_filename)
+                body=self.prepareBody(f,remote_filename)
                 # If we don't select the IMAP folder before
                 # append, the message goes into the INBOX.
-                self._conn.select(globals.imap_mailbox)
-                self._conn.append(globals.imap_mailbox, None, None, body)
+                self.conn.select(globals.imap_mailbox)
+                self.conn.append(globals.imap_mailbox, None, None, body)
                 break
             except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                 allowedTimeout -= 1
@@ -140,7 +138,7 @@
                 time.sleep(30)
                 while allowedTimeout > 0:
                     try:
-                        self._resetConnection()
+                        self.resetConnection()
                         break
                     except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                         allowedTimeout -= 1
@@ -149,15 +147,15 @@
 
         log.Info("IMAP mail with '%s' subject stored" % remote_filename)
 
-    def get(self, remote_filename, local_path):
+    def _get(self, remote_filename, local_path):
         allowedTimeout = globals.timeout
         if (allowedTimeout == 0):
             # Allow a total timeout of 1 day
             allowedTimeout = 2880
         while allowedTimeout > 0:
             try:
-                self._conn.select(globals.imap_mailbox)
-                (result,list) = self._conn.search(None, 'Subject', remote_filename)
+                self.conn.select(globals.imap_mailbox)
+                (result,list) = self.conn.search(None, 'Subject', remote_filename)
                 if result != "OK":
                     raise Exception(list[0])
 
@@ -165,7 +163,7 @@
                 if list[0] == '':
                     raise Exception("no mail with subject %s")
 
-                (result,list) = self._conn.fetch(list[0],"(RFC822)")
+                (result,list) = self.conn.fetch(list[0],"(RFC822)")
 
                 if result != "OK":
                     raise Exception(list[0])
@@ -185,7 +183,7 @@
                 time.sleep(30)
                 while allowedTimeout > 0:
                     try:
-                        self._resetConnection()
+                        self.resetConnection()
                         break
                     except (imaplib.IMAP4.abort, socket.error, socket.sslerror):
                         allowedTimeout -= 1
@@ -199,7 +197,7 @@
 
     def _list(self):
         ret = []
-        (result,list) = self._conn.select(globals.imap_mailbox)
+        (result,list) = self.conn.select(globals.imap_mailbox)
         if result != "OK":
             raise BackendException(list[0])
 
@@ -207,14 +205,14 @@
         # address
 
         # Search returns an error if you haven't selected an IMAP folder.
-        (result,list) = self._conn.search(None, 'FROM', self.remote_dir)
+        (result,list) = self.conn.search(None, 'FROM', self.remote_dir)
         if result!="OK":
             raise Exception(list[0])
         if list[0]=='':
             return ret
         nums=list[0].split(" ")
         set="%s:%s"%(nums[0],nums[-1])
-        (result,list) = self._conn.fetch(set,"(BODY[HEADER])")
+        (result,list) = self.conn.fetch(set,"(BODY[HEADER])")
         if result!="OK":
             raise Exception(list[0])
 
@@ -232,34 +230,32 @@
                     log.Info("IMAP LIST: %s %s" % (subj,header_from))
         return ret
 
-    def _imapf(self,fun,*args):
+    def imapf(self,fun,*args):
         (ret,list)=fun(*args)
         if ret != "OK":
             raise Exception(list[0])
         return list
 
-    def _delete_single_mail(self,i):
-        self._imapf(self._conn.store,i,"+FLAGS",'\\DELETED')
-
-    def _expunge(self):
-        list=self._imapf(self._conn.expunge)
-
-    def delete(self, filename_list):
-        assert len(filename_list) > 0
+    def delete_single_mail(self,i):
+        self.imapf(self.conn.store,i,"+FLAGS",'\\DELETED')
+
+    def expunge(self):
+        list=self.imapf(self.conn.expunge)
+
+    def _delete_list(self, filename_list):
         for filename in filename_list:
-            list = self._imapf(self._conn.search,None,"(SUBJECT %s)"%filename)
+            list = self.imapf(self.conn.search,None,"(SUBJECT %s)"%filename)
             list = list[0].split()
-            if len(list)==0 or list[0]=="":raise Exception("no such mail with subject '%s'"%filename)
-            self._delete_single_mail(list[0])
-            log.Notice("marked %s to be deleted" % filename)
-        self._expunge()
-        log.Notice("IMAP expunged %s files" % len(list))
+            if len(list) > 0 and list[0] != "":
+                self.delete_single_mail(list[0])
+                log.Notice("marked %s to be deleted" % filename)
+        self.expunge()
+        log.Notice("IMAP expunged %s files" % len(filename_list))
 
-    def close(self):
-        self._conn.select(globals.imap_mailbox)
-        self._conn.close()
-        self._conn.logout()
+    def _close(self):
+        self.conn.select(globals.imap_mailbox)
+        self.conn.close()
+        self.conn.logout()
 
 duplicity.backend.register_backend("imap", ImapBackend);
 duplicity.backend.register_backend("imaps", ImapBackend);
-
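The imap backend's `_put` and `_get` above share one pattern that survives this refactor: attempt the operation, and on a transport error sleep, `resetConnection()`, and try again until a time budget is exhausted. A distilled sketch of that loop — the `with_reconnect` helper and its parameters are illustrative, not duplicity's API:

```python
import time


def with_reconnect(operation, reset, attempts=3, delay=0):
    """Run operation(); on IOError, sleep, call reset(), and retry.

    The last failure is re-raised so the caller (in duplicity's case,
    the backend wrapper) still sees the error.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except IOError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
            reset()  # analogous to ImapBackend.resetConnection()
```

In the real backend the exception tuple is `(imaplib.IMAP4.abort, socket.error, socket.sslerror)` and the budget is `globals.timeout` slots of 30 seconds; the shape of the loop is the same.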

=== modified file 'duplicity/backends/localbackend.py'
--- duplicity/backends/localbackend.py	2014-04-17 20:50:57 +0000
+++ duplicity/backends/localbackend.py	2014-04-28 02:49:55 +0000
@@ -20,14 +20,11 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import types
-import errno
 
 import duplicity.backend
 from duplicity import log
 from duplicity import path
-from duplicity import util
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import BackendException
 
 
 class LocalBackend(duplicity.backend.Backend):
@@ -43,90 +40,37 @@
         if not parsed_url.path.startswith('//'):
             raise BackendException("Bad file:// path syntax.")
         self.remote_pathdir = path.Path(parsed_url.path[2:])
-
-    def handle_error(self, e, op, file1 = None, file2 = None):
-        code = log.ErrorCode.backend_error
-        if hasattr(e, 'errno'):
-            if e.errno == errno.EACCES:
-                code = log.ErrorCode.backend_permission_denied
-            elif e.errno == errno.ENOENT:
-                code = log.ErrorCode.backend_not_found
-            elif e.errno == errno.ENOSPC:
-                code = log.ErrorCode.backend_no_space
-        extra = ' '.join([util.escape(x) for x in [file1, file2] if x])
-        extra = ' '.join([op, extra])
-        if op != 'delete' and op != 'query':
-            log.FatalError(str(e), code, extra)
-        else:
-            log.Warn(str(e), code, extra)
-
-    def move(self, source_path, remote_filename = None):
-        self.put(source_path, remote_filename, rename_instead = True)
-
-    def put(self, source_path, remote_filename = None, rename_instead = False):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-        target_path = self.remote_pathdir.append(remote_filename)
-        log.Info("Writing %s" % target_path.name)
-        """Try renaming first (if allowed to), copying if doesn't work"""
-        if rename_instead:
-            try:
-                source_path.rename(target_path)
-            except OSError:
-                pass
-            except Exception as e:
-                self.handle_error(e, 'put', source_path.name, target_path.name)
-            else:
-                return
-        try:
-            target_path.writefileobj(source_path.open("rb"))
-        except Exception as e:
-            self.handle_error(e, 'put', source_path.name, target_path.name)
-
-        """If we get here, renaming failed previously"""
-        if rename_instead:
-            """We need to simulate its behaviour"""
-            source_path.delete()
-
-    def get(self, filename, local_path):
-        """Get file and put in local_path (Path object)"""
+        try:
+            os.makedirs(self.remote_pathdir.base)
+        except Exception:
+            pass
+
+    def _move(self, source_path, remote_filename):
+        target_path = self.remote_pathdir.append(remote_filename)
+        try:
+            source_path.rename(target_path)
+            return True
+        except OSError:
+            return False
+
+    def _put(self, source_path, remote_filename):
+        target_path = self.remote_pathdir.append(remote_filename)
+        target_path.writefileobj(source_path.open("rb"))
+
+    def _get(self, filename, local_path):
         source_path = self.remote_pathdir.append(filename)
-        try:
-            local_path.writefileobj(source_path.open("rb"))
-        except Exception as e:
-            self.handle_error(e, 'get', source_path.name, local_path.name)
+        local_path.writefileobj(source_path.open("rb"))
 
     def _list(self):
-        """List files in that directory"""
-        try:
-                os.makedirs(self.remote_pathdir.base)
-        except Exception:
-                pass
-        try:
-            return self.remote_pathdir.listdir()
-        except Exception as e:
-            self.handle_error(e, 'list', self.remote_pathdir.name)
-
-    def delete(self, filename_list):
-        """Delete all files in filename list"""
-        assert type(filename_list) is not types.StringType
-        for filename in filename_list:
-            try:
-                self.remote_pathdir.append(filename).delete()
-            except Exception as e:
-                self.handle_error(e, 'delete', self.remote_pathdir.append(filename).name)
-
-    def _query_file_info(self, filename):
-        """Query attributes on filename"""
-        try:
-            target_file = self.remote_pathdir.append(filename)
-            if not os.path.exists(target_file.name):
-                return {'size': -1}
-            target_file.setdata()
-            size = target_file.getsize()
-            return {'size': size}
-        except Exception as e:
-            self.handle_error(e, 'query', target_file.name)
-            return {'size': None}
+        return self.remote_pathdir.listdir()
+
+    def _delete(self, filename):
+        self.remote_pathdir.append(filename).delete()
+
+    def _query(self, filename):
+        target_file = self.remote_pathdir.append(filename)
+        target_file.setdata()
+        size = target_file.getsize() if target_file.exists() else -1
+        return {'size': size}
 
 duplicity.backend.register_backend("file", LocalBackend)
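The rewritten `_query` above encodes "file does not exist" as `{'size': -1}` rather than raising, the same convention the gio backend uses. A standalone sketch of that convention with plain `os.stat` in place of duplicity's `Path` objects (the `query_size` name is invented here):

```python
import os


def query_size(path):
    """Return {'size': n} for an existing file, {'size': -1} if missing.

    Mirrors LocalBackend._query: missing files are a normal, expected
    answer for a query, so they map to a sentinel instead of an error.
    """
    try:
        return {'size': os.stat(path).st_size}
    except OSError:
        return {'size': -1}
```

Keeping -1 as the sentinel (rather than `None` or an exception) lets callers compare sizes uniformly without special-casing absent files.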

=== modified file 'duplicity/backends/megabackend.py'
--- duplicity/backends/megabackend.py	2014-04-17 20:50:57 +0000
+++ duplicity/backends/megabackend.py	2014-04-28 02:49:55 +0000
@@ -22,9 +22,8 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import duplicity.backend
-from duplicity.backend import retry
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import BackendException
 
 
 class MegaBackend(duplicity.backend.Backend):
@@ -63,113 +62,64 @@
 
         self.folder = parent_folder
 
-    @retry
-    def put(self, source_path, remote_filename=None, raise_errors=False):
-        """Transfer source_path to remote_filename"""
-        # Default remote file name.
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        try:
-            # If remote file already exists in destination folder, remove it.
-            files = self.client.get_files()
-            entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
-
-            for entry in entries:
-                self.client.delete(entry)
-
-            self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename)
-
-        except Exception as e:
-            self.__handle_error("Failed to upload file '%s' to remote folder '%s': %s"
-                                % (source_path.get_canonical(), self.__get_node_name(self.folder), str(e)), raise_errors)
-
-    @retry
-    def get(self, remote_filename, local_path, raise_errors=False):
-        """Get remote filename, saving it to local_path"""
-        try:
-            files = self.client.get_files()
-            entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
-
-            if len(entries):
-                # get first matching remote file
-                entry = entries.keys()[0]
-                self.client.download((entry, entries[entry]), dest_filename=local_path.name)
-                local_path.setdata()
-                return
-            else:
-                self.__handle_error("Failed to find file '%s' in remote folder '%s'"
-                                    % (remote_filename, self.__get_node_name(self.folder)), raise_errors)
-        except Exception as e:
-            self.__handle_error("Failed to download file '%s' in remote folder '%s': %s"
-                                 % (remote_filename, self.__get_node_name(self.folder), str(e)), raise_errors)
-
-    @retry
-    def _list(self, raise_errors=False):
-        """List files in folder"""
-        try:
-            entries = self.client.get_files_in_node(self.folder)
-            return [ self.client.get_name_from_file({entry:entries[entry]}) for entry in entries]
-        except Exception as e:
-            self.__handle_error("Failed to fetch list of files in remote folder '%s': %s"
-                                % (self.__get_node_name(self.folder), str(e)), raise_errors)
-
-    @retry
-    def delete(self, filename_list, raise_errors=False):
-        """Delete files in filename_list"""
-        files = self.client.get_files()
-        for filename in filename_list:
-            entries = self.__filter_entries(files, self.folder, filename)
-            try:
-                if len(entries) > 0:
-                    for entry in entries:
-                        if self.client.destroy(entry):
-                            self.__handle_error("Failed to remove file '%s' in remote folder '%s'"
-                                % (filename, self.__get_node_name(self.folder)), raise_errors)
-                else:
-                    log.Warn("Failed to fetch file '%s' in remote folder '%s'"
-                             % (filename, self.__get_node_name(self.folder)))
-            except Exception as e:
-                self.__handle_error("Failed to remove file '%s' in remote folder '%s': %s"
-                                    % (filename, self.__get_node_name(self.folder), str(e)), raise_errors)
+    def _put(self, source_path, remote_filename):
+        try:
+            self._delete(remote_filename)
+        except Exception:
+            pass
+        self.client.upload(source_path.get_canonical(), self.folder, dest_filename=remote_filename)
+
+    def _get(self, remote_filename, local_path):
+        files = self.client.get_files()
+        entries = self.__filter_entries(files, self.folder, remote_filename, 'file')
+        if len(entries):
+            # get first matching remote file
+            entry = entries.keys()[0]
+            self.client.download((entry, entries[entry]), dest_filename=local_path.name)
+        else:
+            raise BackendException("Failed to find file '%s' in remote folder '%s'"
+                                   % (remote_filename, self.__get_node_name(self.folder)),
+                                   code=log.ErrorCode.backend_not_found)
+
+    def _list(self):
+        entries = self.client.get_files_in_node(self.folder)
+        return [self.client.get_name_from_file({entry:entries[entry]}) for entry in entries]
+
+    def _delete(self, filename):
+        files = self.client.get_files()
+        entries = self.__filter_entries(files, self.folder, filename, 'file')
+        if len(entries):
+            self.client.destroy(entries.keys()[0])
+        else:
+            raise BackendException("Failed to find file '%s' in remote folder '%s'"
+                                   % (filename, self.__get_node_name(self.folder)),
+                                   code=log.ErrorCode.backend_not_found)
 
     def __get_node_name(self, handle):
         """get node name from public handle"""
         files = self.client.get_files()
         return self.client.get_name_from_file({handle:files[handle]})
-        
-    def __handle_error(self, message, raise_errors=True):
-        if raise_errors:
-            raise BackendException(message)
-        else:
-            log.FatalError(message, log.ErrorCode.backend_error)
 
     def __authorize(self, email, password):
-        try:
-            self.client.login(email, password)
-        except Exception as e:
-            self.__handle_error('Error while authenticating client: %s.' % str(e))
+        self.client.login(email, password)
 
     def __filter_entries(self, entries, parent_id=None, title=None, type=None):
         result = {}
         type_map = { 'folder': 1, 'file': 0 }
 
-        try:
-            for k, v in entries.items():
-                try:
-                    if parent_id != None:
-                        assert(v['p'] == parent_id)
-                    if title != None:
-                        assert(v['a']['n'] == title)
-                    if type != None:
-                        assert(v['t'] == type_map[type])
-                except AssertionError:
-                    continue
-
-                result.update({k:v})
-
-            return result
-        except Exception as e:
-            self.__handle_error('Error while fetching remote entries: %s.' % str(e))
+        for k, v in entries.items():
+            try:
+                if parent_id != None:
+                    assert(v['p'] == parent_id)
+                if title != None:
+                    assert(v['a']['n'] == title)
+                if type != None:
+                    assert(v['t'] == type_map[type])
+            except AssertionError:
+                continue
+
+            result.update({k:v})
+
+        return result
 
 duplicity.backend.register_backend('mega', MegaBackend)
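`__filter_entries` above uses `assert`/`except AssertionError` as control flow to match mega.py node dicts (`'p'` parent handle, `'a'['n']` name, `'t'` type). The same predicate written with plain conditionals, as a standalone sketch on the same dict shape (the `filter_entries` name and sample handles are illustrative):

```python
def filter_entries(entries, parent_id=None, title=None, node_type=None):
    """Return the subset of mega.py node dicts matching all given criteria.

    entries maps node handle -> node dict; 't' is 0 for files, 1 for folders.
    """
    type_map = {'folder': 1, 'file': 0}

    def matches(v):
        if parent_id is not None and v['p'] != parent_id:
            return False
        if title is not None and v['a']['n'] != title:
            return False
        if node_type is not None and v['t'] != type_map[node_type]:
            return False
        return True

    return {k: v for k, v in entries.items() if matches(v)}
```

Behavior is identical to the assert-based version, but a failed match is an ordinary `False` rather than a caught exception, which also means a genuinely malformed entry (missing key) raises instead of being silently skipped.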

=== renamed file 'duplicity/backends/~par2wrapperbackend.py' => 'duplicity/backends/par2backend.py'
--- duplicity/backends/~par2wrapperbackend.py	2014-04-17 19:53:30 +0000
+++ duplicity/backends/par2backend.py	2014-04-28 02:49:55 +0000
@@ -16,14 +16,16 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+from future_builtins import filter
+
 import os
 import re
 from duplicity import backend
-from duplicity.errors import UnsupportedBackendScheme, BackendException
+from duplicity.errors import BackendException
 from duplicity import log
 from duplicity import globals
 
-class Par2WrapperBackend(backend.Backend):
+class Par2Backend(backend.Backend):
     """This backend wrap around other backends and create Par2 recovery files
     before the file and the Par2 files are transfered with the wrapped backend.
     
@@ -37,13 +39,15 @@
         except AttributeError:
             self.redundancy = 10
 
-        try:
-            url_string = self.parsed_url.url_string.lstrip('par2+')
-            self.wrapped_backend = backend.get_backend(url_string)
-        except:
-            raise UnsupportedBackendScheme(self.parsed_url.url_string)
-
-    def put(self, source_path, remote_filename = None):
+        self.wrapped_backend = backend.get_backend_object(parsed_url.url_string)
+
+        for attr in ['_get', '_put', '_list', '_delete', '_delete_list',
+                     '_query', '_query_list', '_retry_cleanup', '_error_code',
+                     '_move', '_close']:
+            if hasattr(self.wrapped_backend, attr):
+                setattr(self, attr, getattr(self, attr[1:]))
+
+    def transfer(self, method, source_path, remote_filename):
         """create Par2 files and transfer the given file and the Par2 files
         with the wrapped backend.
         
@@ -52,13 +56,14 @@
         the soure_path with remote_filename into this. 
         """
         import pexpect
-        if remote_filename is None:
-            remote_filename = source_path.get_filename()
 
         par2temp = source_path.get_temp_in_same_dir()
         par2temp.mkdir()
         source_symlink = par2temp.append(remote_filename)
-        os.symlink(source_path.get_canonical(), source_symlink.get_canonical())
+        source_target = source_path.get_canonical()
+        if not os.path.isabs(source_target):
+            source_target = os.path.join(os.getcwd(), source_target)
+        os.symlink(source_target, source_symlink.get_canonical())
         source_symlink.setdata()
 
         log.Info("Create Par2 recovery files")
@@ -70,16 +75,17 @@
             for file in par2temp.listdir():
                 files_to_transfer.append(par2temp.append(file))
 
-        ret = self.wrapped_backend.put(source_path, remote_filename)
+        method(source_path, remote_filename)
         for file in files_to_transfer:
-            self.wrapped_backend.put(file, file.get_filename())
+            method(file, file.get_filename())
 
         par2temp.deltree()
-        return ret
-
-    def move(self, source_path, remote_filename = None):
-        self.put(source_path, remote_filename)
-        source_path.delete()
+
+    def put(self, local, remote):
+        self.transfer(self.wrapped_backend._put, local, remote)
+
+    def move(self, local, remote):
+        self.transfer(self.wrapped_backend._move, local, remote)
 
     def get(self, remote_filename, local_path):
         """transfer remote_filename and the related .par2 file into
@@ -94,22 +100,23 @@
         par2temp.mkdir()
         local_path_temp = par2temp.append(remote_filename)
 
-        ret = self.wrapped_backend.get(remote_filename, local_path_temp)
+        self.wrapped_backend._get(remote_filename, local_path_temp)
 
         try:
             par2file = par2temp.append(remote_filename + '.par2')
-            self.wrapped_backend.get(par2file.get_filename(), par2file)
+            self.wrapped_backend._get(par2file.get_filename(), par2file)
 
             par2verify = 'par2 v -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical())
             out, returncode = pexpect.run(par2verify, -1, True)
 
             if returncode:
                 log.Warn("File is corrupt. Try to repair %s" % remote_filename)
-                par2volumes = self.list(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename))
+                par2volumes = filter(re.compile(r'%s\.vol[\d+]*\.par2' % remote_filename).match,
+                                     self.wrapped_backend._list())
 
                 for filename in par2volumes:
                     file = par2temp.append(filename)
-                    self.wrapped_backend.get(filename, file)
+                    self.wrapped_backend._get(filename, file)
 
                 par2repair = 'par2 r -q -q %s %s' % (par2file.get_canonical(), local_path_temp.get_canonical())
                 out, returncode = pexpect.run(par2repair, -1, True)
@@ -124,25 +131,23 @@
         finally:
             local_path_temp.rename(local_path)
             par2temp.deltree()
-        return ret
 
-    def list(self, filter = re.compile(r'(?!.*\.par2$)')):
-        """default filter all files that ends with ".par"
-        filter can be a re.compile instance or False for all remote files
+    def delete(self, filename):
+        """delete given filename and its .par2 files
         """
-        list = self.wrapped_backend.list()
-        if not filter:
-            return list
-        filtered_list = []
-        for item in list:
-            if filter.match(item):
-                filtered_list.append(item)
-        return filtered_list
-
-    def delete(self, filename_list):
+        self.wrapped_backend._delete(filename)
+
+        remote_list = self.list()
+        c = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
+        for remote_filename in remote_list:
+            if c.match(remote_filename):
+                self.wrapped_backend._delete(remote_filename)
+
+    def delete_list(self, filename_list):
         """delete given filename_list and all .par2 files that belong to them
         """
-        remote_list = self.list(False)
+        remote_list = self.list()
 
         for filename in filename_list[:]:
             c =  re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % filename)
@@ -150,46 +155,25 @@
                 if c.match(remote_filename):
                     filename_list.append(remote_filename)
 
-        return self.wrapped_backend.delete(filename_list)
-
-    """just return the output of coresponding wrapped backend
-    for all other functions
-    """
-    def query_info(self, filename_list, raise_errors=True):
-        return self.wrapped_backend.query_info(filename_list, raise_errors)
-
-    def get_password(self):
-        return self.wrapped_backend.get_password()
-
-    def munge_password(self, commandline):
-        return self.wrapped_backend.munge_password(commandline)
-
-    def run_command(self, commandline):
-        return self.wrapped_backend.run_command(commandline)
-    def run_command_persist(self, commandline):
-        return self.wrapped_backend.run_command_persist(commandline)
-
-    def popen(self, commandline):
-        return self.wrapped_backend.popen(commandline)
-    def popen_persist(self, commandline):
-        return self.wrapped_backend.popen_persist(commandline)
-
-    def _subprocess_popen(self, commandline):
-        return self.wrapped_backend._subprocess_popen(commandline)
-
-    def subprocess_popen(self, commandline):
-        return self.wrapped_backend.subprocess_popen(commandline)
-
-    def subprocess_popen_persist(self, commandline):
-        return self.wrapped_backend.subprocess_popen_persist(commandline)
+        return self.wrapped_backend._delete_list(filename_list)
+
+    def list(self):
+        return self.wrapped_backend._list()
+
+    def retry_cleanup(self):
+        self.wrapped_backend._retry_cleanup()
+
+    def error_code(self, operation, e):
+        return self.wrapped_backend._error_code(operation, e)
+
+    def query(self, filename):
+        return self.wrapped_backend._query(filename)
+
+    def query_list(self, filename_list):
+        return self.wrapped_backend._query_list(filename_list)
 
     def close(self):
-        return self.wrapped_backend.close()
-
-"""register this backend with leading "par2+" for all already known backends
-
-files must be sorted in duplicity.backend.import_backends to catch
-all supported backends
-"""
-for item in backend._backends.keys():
-    backend.register_backend('par2+' + item, Par2WrapperBackend)
+        self.wrapped_backend._close()
+
+backend.register_backend_prefix('par2', Par2Backend)
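The rewritten delete paths above rely on a single regex to find a file's Par2 companions. A standalone sketch of that matching (hypothetical helper name; `re.escape` is added here as a hardening step the diff itself does not take):

```python
import re

def par2_companions(filename, remote_list):
    # Same pattern the delete methods use: the base .par2 plus any
    # recovery volumes.  The character class [\d+]* is deliberate --
    # par2 names volumes like 'foo.vol00+01.par2', so '+' must match too.
    pattern = re.compile(r'%s(?:\.vol[\d+]*)?\.par2' % re.escape(filename))
    return [name for name in remote_list if pattern.match(name)]
```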

=== modified file 'duplicity/backends/rsyncbackend.py'
--- duplicity/backends/rsyncbackend.py	2014-04-25 23:20:12 +0000
+++ duplicity/backends/rsyncbackend.py	2014-04-28 02:49:55 +0000
@@ -23,7 +23,7 @@
 import tempfile
 
 import duplicity.backend
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import InvalidBackendURL
 from duplicity import globals, tempdir, util
 
 class RsyncBackend(duplicity.backend.Backend):
@@ -58,12 +58,13 @@
             if port:
                 port = " --port=%s" % port
         else:
+            host_string = host + ":" if host else ""
             if parsed_url.path.startswith("//"):
                 # it's an absolute path
-                self.url_string = "%s:/%s" % (host, parsed_url.path.lstrip('/'))
+                self.url_string = "%s/%s" % (host_string, parsed_url.path.lstrip('/'))
             else:
                 # it's a relative path
-                self.url_string = "%s:%s" % (host, parsed_url.path.lstrip('/'))
+                self.url_string = "%s%s" % (host_string, parsed_url.path.lstrip('/'))
             if parsed_url.port:
                 port = " -p %s" % parsed_url.port
         # add trailing slash if missing
@@ -105,29 +106,17 @@
         raise InvalidBackendURL("Could not determine rsync path: %s"
                                     "" % self.munge_password( url ) )
 
-    def run_command(self, commandline):
-        result, stdout, stderr = self.subprocess_popen_persist(commandline)
-        return result, stdout
-
-    def put(self, source_path, remote_filename = None):
-        """Use rsync to copy source_dir/filename to remote computer"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         remote_path = os.path.join(self.url_string, remote_filename)
         commandline = "%s %s %s" % (self.cmd, source_path.name, remote_path)
-        self.run_command(commandline)
+        self.subprocess_popen(commandline)
 
-    def get(self, remote_filename, local_path):
-        """Use rsync to get a remote file"""
+    def _get(self, remote_filename, local_path):
         remote_path = os.path.join (self.url_string, remote_filename)
         commandline = "%s %s %s" % (self.cmd, remote_path, local_path.name)
-        self.run_command(commandline)
-        local_path.setdata()
-        if not local_path.exists():
-            raise BackendException("File %s not found" % local_path.name)
+        self.subprocess_popen(commandline)
 
-    def list(self):
-        """List files"""
+    def _list(self):
         def split (str):
             line = str.split ()
             if len (line) > 4 and line[4] != '.':
@@ -135,20 +124,17 @@
             else:
                 return None
         commandline = "%s %s" % (self.cmd, self.url_string)
-        result, stdout = self.run_command(commandline)
+        result, stdout, stderr = self.subprocess_popen(commandline)
         return [x for x in map (split, stdout.split('\n')) if x]
 
-    def delete(self, filename_list):
-        """Delete files."""
+    def _delete_list(self, filename_list):
         delete_list = filename_list
         dont_delete_list = []
-        for file in self.list ():
+        for file in self._list ():
             if file in delete_list:
                 delete_list.remove (file)
             else:
                 dont_delete_list.append (file)
-        if len (delete_list) > 0:
-            raise BackendException("Files %s not found" % str (delete_list))
 
         dir = tempfile.mkdtemp()
         exclude, exclude_name = tempdir.default().mkstemp_file()
@@ -162,7 +148,7 @@
         exclude.close()
         commandline = ("%s --recursive --delete --exclude-from=%s %s/ %s" %
                                    (self.cmd, exclude_name, dir, self.url_string))
-        self.run_command(commandline)
+        self.subprocess_popen(commandline)
         for file in to_delete:
             util.ignore_missing(os.unlink, file)
         os.rmdir (dir)
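The `host_string` change above exists so that a local rsync URL (empty host) no longer gains a stray leading colon. A minimal sketch of the resulting url_string logic, as a hypothetical standalone function:

```python
def rsync_url_string(host, path):
    # An empty host (local rsync) must not produce a leading ':',
    # which is what the host_string guard in the diff fixes.
    host_string = host + ":" if host else ""
    if path.startswith("//"):
        # absolute path on the remote side
        url = "%s/%s" % (host_string, path.lstrip('/'))
    else:
        # relative path (rsync resolves it against the remote home)
        url = "%s%s" % (host_string, path.lstrip('/'))
    # add trailing slash if missing
    if not url.endswith('/'):
        url += '/'
    return url
```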

=== modified file 'duplicity/backends/sshbackend.py'
--- duplicity/backends/sshbackend.py	2014-04-17 21:54:04 +0000
+++ duplicity/backends/sshbackend.py	2014-04-28 02:49:55 +0000
@@ -18,6 +18,7 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+import duplicity.backend
 from duplicity import globals, log
 
 def warn_option(option, optionvar):
@@ -26,11 +27,15 @@
 
 if (globals.ssh_backend and
     globals.ssh_backend.lower().strip() == 'pexpect'):
-    from . import _ssh_pexpect
+    from ._ssh_pexpect import SSHPExpectBackend as SSHBackend
 else:
     # take user by the hand to prevent typo driven bug reports
     if globals.ssh_backend.lower().strip() != 'paramiko':
         log.Warn(_("Warning: Selected ssh backend '%s' is neither 'paramiko nor 'pexpect'. Will use default paramiko instead.") % globals.ssh_backend)
     warn_option("--scp-command", globals.scp_command)
     warn_option("--sftp-command", globals.sftp_command)
-    from . import _ssh_paramiko
+    from ._ssh_paramiko import SSHParamikoBackend as SSHBackend
+
+duplicity.backend.register_backend("sftp", SSHBackend)
+duplicity.backend.register_backend("scp", SSHBackend)
+duplicity.backend.register_backend("ssh", SSHBackend)

=== modified file 'duplicity/backends/swiftbackend.py'
--- duplicity/backends/swiftbackend.py	2014-04-17 22:03:10 +0000
+++ duplicity/backends/swiftbackend.py	2014-04-28 02:49:55 +0000
@@ -19,14 +19,11 @@
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
 import os
-import time
 
 import duplicity.backend
-from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
-from duplicity.util import exception_traceback
-from duplicity.backend import retry
+from duplicity.errors import BackendException
+
 
 class SwiftBackend(duplicity.backend.Backend):
     """
@@ -82,121 +79,30 @@
                            % (e.__class__.__name__, str(e)),
                            log.ErrorCode.connection_failed)
 
-    def put(self, source_path, remote_filename = None):
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-
-        for n in range(1, globals.num_retries+1):
-            log.Info("Uploading '%s/%s' " % (self.container, remote_filename))
-            try:
-                self.conn.put_object(self.container,
-                                     remote_filename, 
-                                     file(source_path.name))
-                return
-            except self.resp_exc as error:
-                log.Warn("Upload of '%s' failed (attempt %d): Swift server returned: %s %s"
-                         % (remote_filename, n, error.http_status, error.message))
-            except Exception as e:
-                log.Warn("Upload of '%s' failed (attempt %s): %s: %s"
-                        % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up uploading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error uploading '%s'" % remote_filename)
-
-    def get(self, remote_filename, local_path):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Downloading '%s/%s'" % (self.container, remote_filename))
-            try:
-                headers, body = self.conn.get_object(self.container,
-                                                     remote_filename)
-                f = open(local_path.name, 'w')
-                for chunk in body:
-                    f.write(chunk)
-                local_path.setdata()
-                return
-            except self.resp_exc as resperr:
-                log.Warn("Download of '%s' failed (attempt %s): Swift server returned: %s %s"
-                         % (remote_filename, n, resperr.http_status, resperr.message))
-            except Exception as e:
-                log.Warn("Download of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up downloading '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error downloading '%s/%s'"
-                               % (self.container, remote_filename))
+    def _error_code(self, operation, e):
+        if isinstance(e, self.resp_exc):
+            if e.http_status == 404:
+                return log.ErrorCode.backend_not_found
+
+    def _put(self, source_path, remote_filename):
+        self.conn.put_object(self.container, remote_filename,
+                             file(source_path.name))
+
+    def _get(self, remote_filename, local_path):
+        headers, body = self.conn.get_object(self.container, remote_filename)
+        with open(local_path.name, 'wb') as f:
+            for chunk in body:
+                f.write(chunk)
 
     def _list(self):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Listing '%s'" % (self.container))
-            try:
-                # Cloud Files will return a max of 10,000 objects.  We have
-                # to make multiple requests to get them all.
-                headers, objs = self.conn.get_container(self.container)
-                return [ o['name'] for o in objs ]
-            except self.resp_exc as resperr:
-                log.Warn("Listing of '%s' failed (attempt %s): Swift server returned: %s %s"
-                         % (self.container, n, resperr.http_status, resperr.message))
-            except Exception as e:
-                log.Warn("Listing of '%s' failed (attempt %s): %s: %s"
-                         % (self.container, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up listing of '%s' after %s attempts"
-                 % (self.container, globals.num_retries))
-        raise BackendException("Error listing '%s'"
-                               % (self.container))
-
-    def delete_one(self, remote_filename):
-        for n in range(1, globals.num_retries+1):
-            log.Info("Deleting '%s/%s'" % (self.container, remote_filename))
-            try:
-                self.conn.delete_object(self.container, remote_filename)
-                return
-            except self.resp_exc as resperr:
-                if n > 1 and resperr.http_status == 404:
-                    # We failed on a timeout, but delete succeeded on the server
-                    log.Warn("Delete of '%s' missing after retry - must have succeded earlier" % remote_filename )
-                    return
-                log.Warn("Delete of '%s' failed (attempt %s): Swift server returned: %s %s"
-                         % (remote_filename, n, resperr.http_status, resperr.message))
-            except Exception as e:
-                log.Warn("Delete of '%s' failed (attempt %s): %s: %s"
-                         % (remote_filename, n, e.__class__.__name__, str(e)))
-                log.Debug("Backtrace of previous error: %s"
-                          % exception_traceback())
-            time.sleep(30)
-        log.Warn("Giving up deleting '%s' after %s attempts"
-                 % (remote_filename, globals.num_retries))
-        raise BackendException("Error deleting '%s/%s'"
-                               % (self.container, remote_filename))
-
-    def delete(self, filename_list):
-        for file in filename_list:
-            self.delete_one(file)
-            log.Debug("Deleted '%s/%s'" % (self.container, file))
-
-    @retry
-    def _query_file_info(self, filename, raise_errors=False):
-        try:
-            sobject = self.conn.head_object(self.container, filename)
-            return {'size': int(sobject['content-length'])}
-        except self.resp_exc:
-            return {'size': -1}
-        except Exception as e:
-            log.Warn("Error querying '%s/%s': %s"
-                     "" % (self.container,
-                           filename,
-                           str(e)))
-            if raise_errors:
-                raise e
-            else:
-                return {'size': None}
+        headers, objs = self.conn.get_container(self.container)
+        return [o['name'] for o in objs]
+
+    def _delete(self, filename):
+        self.conn.delete_object(self.container, filename)
+
+    def _query(self, filename):
+        sobject = self.conn.head_object(self.container, filename)
+        return {'size': int(sobject['content-length'])}
 
 duplicity.backend.register_backend("swift", SwiftBackend)

=== modified file 'duplicity/backends/tahoebackend.py'
--- duplicity/backends/tahoebackend.py	2013-12-27 06:39:00 +0000
+++ duplicity/backends/tahoebackend.py	2014-04-28 02:49:55 +0000
@@ -20,9 +20,8 @@
 
 import duplicity.backend
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
+from duplicity.errors import BackendException
 
-from commands import getstatusoutput
 
 class TAHOEBackend(duplicity.backend.Backend):
     """
@@ -36,10 +35,8 @@
 
         self.alias = url[0]
 
-        if len(url) > 2:
+        if len(url) > 1:
             self.directory = "/".join(url[1:])
-        elif len(url) == 2:
-            self.directory = url[1]
         else:
             self.directory = ""
 
@@ -59,28 +56,20 @@
 
     def run(self, *args):
         cmd = " ".join(args)
-        log.Debug("tahoe execute: %s" % cmd)
-        (status, output) = getstatusoutput(cmd)
-
-        if status != 0:
-            raise BackendException("Error running %s" % cmd)
-        else:
-            return output
-
-    def put(self, source_path, remote_filename=None):
+        _, output, _ = self.subprocess_popen(cmd)
+        return output
+
+    def _put(self, source_path, remote_filename):
         self.run("tahoe", "cp", source_path.name, self.get_remote_path(remote_filename))
 
-    def get(self, remote_filename, local_path):
+    def _get(self, remote_filename, local_path):
         self.run("tahoe", "cp", self.get_remote_path(remote_filename), local_path.name)
-        local_path.setdata()
 
     def _list(self):
-        log.Debug("tahoe: List")
-        return self.run("tahoe", "ls", self.get_remote_path()).split('\n')
+        output = self.run("tahoe", "ls", self.get_remote_path())
+        return output.split('\n') if output else []
 
-    def delete(self, filename_list):
-        log.Debug("tahoe: delete(%s)" % filename_list)
-        for filename in filename_list:
-            self.run("tahoe", "rm", self.get_remote_path(filename))
+    def _delete(self, filename):
+        self.run("tahoe", "rm", self.get_remote_path(filename))
 
 duplicity.backend.register_backend("tahoe", TAHOEBackend)

=== modified file 'duplicity/backends/webdavbackend.py'
--- duplicity/backends/webdavbackend.py	2014-04-25 15:03:00 +0000
+++ duplicity/backends/webdavbackend.py	2014-04-28 02:49:55 +0000
@@ -32,8 +32,7 @@
 import duplicity.backend
 from duplicity import globals
 from duplicity import log
-from duplicity.errors import * #@UnusedWildImport
-from duplicity.backend import retry_fatal
+from duplicity.errors import BackendException, FatalBackendException
 
 class CustomMethodRequest(urllib2.Request):
     """
@@ -54,7 +53,7 @@
                 global socket, ssl
                 import socket, ssl
             except ImportError:
-                raise FatalBackendError("Missing socket or ssl libraries.")
+                raise FatalBackendException("Missing socket or ssl libraries.")
 
             httplib.HTTPSConnection.__init__(self, *args, **kwargs)
 
@@ -71,21 +70,21 @@
                         break
             # still no cacert file, inform user
             if not self.cacert_file:
-                raise FatalBackendError("""For certificate verification a cacert database file is needed in one of these locations: %s
+                raise FatalBackendException("""For certificate verification a cacert database file is needed in one of these locations: %s
 Hints:
   Consult the man page, chapter 'SSL Certificate Verification'.
   Consider using the options --ssl-cacert-file, --ssl-no-check-certificate .""" % ", ".join(cacert_candidates) )
             # check if file is accessible (libssl errors are not very detailed)
             if not os.access(self.cacert_file, os.R_OK):
-                raise FatalBackendError("Cacert database file '%s' is not readable." % cacert_file)
+                raise FatalBackendException("Cacert database file '%s' is not readable." % cacert_file)
 
         def connect(self):
             # create new socket
             sock = socket.create_connection((self.host, self.port),
                                             self.timeout)
-            if self._tunnel_host:
+            if self._tunnel_host:  # inherited from httplib; name must stay
                 self.sock = sock
-                self._tunnel()
+                self._tunnel()
 
             # wrap the socket in ssl using verification
             self.sock = ssl.wrap_socket(sock,
@@ -126,7 +125,7 @@
 
         self.username = parsed_url.username
         self.password = self.get_password()
-        self.directory = self._sanitize_path(parsed_url.path)
+        self.directory = self.sanitize_path(parsed_url.path)
 
         log.Info("Using WebDAV protocol %s" % (globals.webdav_proto,))
         log.Info("Using WebDAV host %s port %s" % (parsed_url.hostname, parsed_url.port))
@@ -134,30 +133,33 @@
 
         self.conn = None
 
-    def _sanitize_path(self,path):
+    def sanitize_path(self,path):
         if path:
             foldpath = re.compile('/+')
             return foldpath.sub('/', path + '/' )
         else:
             return '/'
 
-    def _getText(self,nodelist):
+    def getText(self,nodelist):
         rc = ""
         for node in nodelist:
             if node.nodeType == node.TEXT_NODE:
                 rc = rc + node.data
         return rc
 
-    def _connect(self, forced=False):
+    def _retry_cleanup(self):
+        self.connect(forced=True)
+
+    def connect(self, forced=False):
         """
         Connect or re-connect to the server, updates self.conn
         # reconnect on errors as a precaution, there are errors e.g.
         # "[Errno 32] Broken pipe" or SSl errors that render the connection unusable
         """
-        if self.retry_count<=1 and self.conn \
+        if not forced and self.conn \
             and self.conn.host == self.parsed_url.hostname: return
 
-        log.Info("WebDAV create connection on '%s' (retry %s) " % (self.parsed_url.hostname,self.retry_count) )
+        log.Info("WebDAV create connection on '%s'" % (self.parsed_url.hostname))
         if self.conn: self.conn.close()
         # http schemes needed for redirect urls from servers
         if self.parsed_url.scheme in ['webdav','http']:
@@ -168,9 +170,9 @@
             else:
                 self.conn = VerifiedHTTPSConnection(self.parsed_url.hostname, self.parsed_url.port)
         else:
-            raise FatalBackendError("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme))
+            raise FatalBackendException("WebDAV Unknown URI scheme: %s" % (self.parsed_url.scheme))
 
-    def close(self):
+    def _close(self):
         self.conn.close()
 
     def request(self, method, path, data=None, redirected=0):
@@ -178,7 +180,7 @@
         Wraps the connection.request method to retry once if authentication is
         required
         """
-        self._connect()
+        self.connect()
 
         quoted_path = urllib.quote(path,"/:~")
 
@@ -197,12 +199,12 @@
             if redirect_url:
                 log.Notice("WebDAV redirect to: %s " % urllib.unquote(redirect_url) )
                 if redirected > 10:
-                    raise FatalBackendError("WebDAV redirected 10 times. Giving up.")
+                    raise FatalBackendException("WebDAV redirected 10 times. Giving up.")
                 self.parsed_url = duplicity.backend.ParsedUrl(redirect_url)
-                self.directory = self._sanitize_path(self.parsed_url.path)
+                self.directory = self.sanitize_path(self.parsed_url.path)
                 return self.request(method,self.directory,data,redirected+1)
             else:
-                raise FatalBackendError("WebDAV missing location header in redirect response.")
+                raise FatalBackendException("WebDAV missing location header in redirect response.")
         elif response.status == 401:
             response.close()
             self.headers['Authorization'] = self.get_authorization(response, quoted_path)
@@ -261,10 +263,7 @@
         auth_string = self.digest_auth_handler.get_authorization(dummy_req, self.digest_challenge)
         return 'Digest %s' % auth_string
 
-    @retry_fatal
     def _list(self):
-        """List files in directory"""
-        log.Info("Listing directory %s on WebDAV server" % (self.directory,))
         response = None
         try:
             self.headers['Depth'] = "1"
@@ -289,7 +288,7 @@
             dom = xml.dom.minidom.parseString(document)
             result = []
             for href in dom.getElementsByTagName('d:href') + dom.getElementsByTagName('D:href'):
-                filename = self.__taste_href(href)
+                filename = self.taste_href(href)
                 if filename:
                     result.append(filename)
             return result
@@ -308,7 +307,7 @@
         for i in range(1,len(dirs)):
             d="/".join(dirs[0:i+1])+"/"
 
-            self.close() # or we get previous request's data or exception
+            self._close() # or we get previous request's data or exception
             self.headers['Depth'] = "1"
             response = self.request("PROPFIND", d)
             del self.headers['Depth']
@@ -317,21 +316,21 @@
 
             if response.status == 404:
                 log.Info("Creating missing directory %s" % d)
-                self.close() # or we get previous request's data or exception
+                self._close() # or we get previous request's data or exception
 
                 res = self.request("MKCOL", d)
                 if res.status != 201:
                     raise BackendException("WebDAV MKCOL %s failed: %s %s" % (d,res.status,res.reason))
-                self.close()
+                self._close()
 
-    def __taste_href(self, href):
+    def taste_href(self, href):
         """
         Internal helper to taste the given href node and, if
         it is a duplicity file, collect it as a result file.
 
         @return: A matching filename, or None if the href did not match.
         """
-        raw_filename = self._getText(href.childNodes).strip()
+        raw_filename = self.getText(href.childNodes).strip()
         parsed_url = urlparse.urlparse(urllib.unquote(raw_filename))
         filename = parsed_url.path
         log.Debug("webdav path decoding and translation: "
@@ -361,11 +360,8 @@
         else:
             return None
 
-    @retry_fatal
-    def get(self, remote_filename, local_path):
-        """Get remote filename, saving it to local_path"""
+    def _get(self, remote_filename, local_path):
         url = self.directory + remote_filename
-        log.Info("Retrieving %s from WebDAV server" % (url ,))
         response = None
         try:
             target_file = local_path.open("wb")
@@ -376,7 +372,6 @@
                 #import hashlib
                 #log.Info("WebDAV GOT %s bytes with md5=%s" % (len(data),hashlib.md5(data).hexdigest()) )
                 assert not target_file.close()
-                local_path.setdata()
                 response.close()
             else:
                 status = response.status
@@ -388,13 +383,8 @@
         finally:
             if response: response.close()
 
-    @retry_fatal
-    def put(self, source_path, remote_filename = None):
-        """Transfer source_path to remote_filename"""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
+    def _put(self, source_path, remote_filename):
         url = self.directory + remote_filename
-        log.Info("Saving %s on WebDAV server" % (url ,))
         response = None
         try:
             source_file = source_path.open("rb")
@@ -412,27 +402,23 @@
         finally:
             if response: response.close()
 
-    @retry_fatal
-    def delete(self, filename_list):
-        """Delete files in filename_list"""
-        for filename in filename_list:
-            url = self.directory + filename
-            log.Info("Deleting %s from WebDAV server" % (url ,))
-            response = None
-            try:
-                response = self.request("DELETE", url)
-                if response.status in [200, 204]:
-                    response.read()
-                    response.close()
-                else:
-                    status = response.status
-                    reason = response.reason
-                    response.close()
-                    raise BackendException("Bad status code %s reason %s." % (status,reason))
-            except Exception as e:
-                raise e
-            finally:
-                if response: response.close()
+    def _delete(self, filename):
+        url = self.directory + filename
+        response = None
+        try:
+            response = self.request("DELETE", url)
+            if response.status in [200, 204]:
+                response.read()
+                response.close()
+            else:
+                status = response.status
+                reason = response.reason
+                response.close()
+                raise BackendException("Bad status code %s reason %s." % (status,reason))
+        except Exception as e:
+            raise e
+        finally:
+            if response: response.close()
 
 duplicity.backend.register_backend("webdav", WebDAVBackend)
 duplicity.backend.register_backend("webdavs", WebDAVBackend)

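The WebDAV hunk above shows the pattern this branch applies everywhere: the retry decorator, the `remote_filename` defaulting, and the per-file loop all move out of the backend, leaving bare single-underscore methods that the wrapper drives. A minimal sketch of a backend under that contract, using an in-memory dict as a stand-in for real remote storage (the class and its storage are purely illustrative; the method names follow the API described in this proposal):

```python
class BackendException(Exception):
    """Stand-in for duplicity's BackendException."""
    pass


class MemoryBackend(object):
    """Toy backend: keeps 'remote' files in a dict.

    Methods 'fire and forget' -- any exception simply propagates,
    and the wrapper class is expected to retry and produce a
    reasonable error message on the backend's behalf.
    """

    def __init__(self):
        self._files = {}

    def _put(self, source_path, remote_filename):
        # remote_filename is always provided by the wrapper now;
        # no need to default it from source_path.
        with open(source_path, 'rb') as f:
            self._files[remote_filename] = f.read()

    def _get(self, remote_filename, local_path):
        with open(local_path, 'wb') as f:
            f.write(self._files[remote_filename])

    def _list(self):
        return list(self._files.keys())

    def _delete(self, filename):
        # Note: one filename, not a list -- the wrapper loops.
        if filename not in self._files:
            raise BackendException("%s not found" % filename)
        del self._files[filename]
```

Compare this with the old `delete(self, filename_list)` above: the loop, logging, and retry bookkeeping are gone from the backend entirely.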
=== modified file 'duplicity/commandline.py'
--- duplicity/commandline.py	2014-04-25 23:20:12 +0000
+++ duplicity/commandline.py	2014-04-28 02:49:55 +0000
@@ -210,13 +210,6 @@
     global select_opts, select_files, full_backup
     global list_current, collection_status, cleanup, remove_time, verify
 
-    def use_gio(*args):
-        try:
-            import duplicity.backends.giobackend
-            backend.force_backend(duplicity.backends.giobackend.GIOBackend)
-        except ImportError:
-            log.FatalError(_("Unable to load gio backend: %s") % str(sys.exc_info()[1]), log.ErrorCode.gio_not_available)
-
     def set_log_fd(fd):
         if fd < 1:
             raise optparse.OptionValueError("log-fd must be greater than zero.")
@@ -365,7 +358,9 @@
     # the time specified
     parser.add_option("--full-if-older-than", type = "time", dest = "full_force_time", metavar = _("time"))
 
-    parser.add_option("--gio", action = "callback", callback = use_gio)
+    parser.add_option("--gio", action = "callback", dest = "use_gio",
+                      callback = lambda o, s, v, p: (setattr(p.values, o.dest, True),
+                                                     old_fn_deprecation(s)))
 
     parser.add_option("--gpg-options", action = "extend", metavar = _("options"))
 
@@ -521,8 +516,8 @@
     # sftp command to use (ssh pexpect backend)
     parser.add_option("--sftp-command", metavar = _("command"))
 
-    # sftp command to use (ssh pexpect backend)
-    parser.add_option("--cf-command", metavar = _("command"))
+    # allow the user to switch cloudfiles backend
+    parser.add_option("--cf-backend", metavar = _("pyrax|cloudfiles"))
 
     # If set, use short (< 30 char) filenames for all the remote files.
     parser.add_option("--short-filenames", action = "callback",

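The `--gio` change above replaces a side-effecting callback (force-loading the gio backend at parse time) with one that merely records a flag and emits a deprecation notice. A standalone sketch of that optparse callback pattern, where `warn_deprecated` stands in for duplicity's `old_fn_deprecation`:

```python
import optparse


def warn_deprecated(opt_string):
    # Stand-in for duplicity's old_fn_deprecation helper.
    print("Warning: option %s is deprecated" % opt_string)


parser = optparse.OptionParser()
# The callback receives (option, opt_str, value, parser); it sets the
# flag on parser.values and warns, instead of acting immediately.
parser.add_option("--gio", action="callback", dest="use_gio",
                  callback=lambda o, s, v, p: (setattr(p.values, o.dest, True),
                                               warn_deprecated(s)))

opts, args = parser.parse_args(["--gio"])
print(opts.use_gio)  # True
```

Deferring the decision to a boolean lets later code choose between the `--gio` flag and the new `gio+` URL prefix in one place.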
=== modified file 'duplicity/errors.py'
--- duplicity/errors.py	2013-01-10 19:04:39 +0000
+++ duplicity/errors.py	2014-04-28 02:49:55 +0000
@@ -23,6 +23,8 @@
 Error/exception classes that do not fit naturally anywhere else.
 """
 
+from duplicity import log
+
 class DuplicityError(Exception):
     pass
 
@@ -68,9 +70,11 @@
     """
     Raised to indicate a backend specific problem.
     """
-    pass
+    def __init__(self, msg, code=log.ErrorCode.backend_error):
+        super(BackendException, self).__init__(msg)
+        self.code = code
 
-class FatalBackendError(DuplicityError):
+class FatalBackendException(BackendException):
     """
     Raised to indicate a backend failed fatally.
     """

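The errors.py hunk gives `BackendException` an error code and makes the fatal variant (renamed from `FatalBackendError` to `FatalBackendException`) a subclass of it. A sketch of why the subclassing matters for the new wrapper-side retry logic; the numeric code and the retry snippet are illustrative, not duplicity's actual wrapper:

```python
BACKEND_ERROR = 50  # placeholder for log.ErrorCode.backend_error


class DuplicityError(Exception):
    pass


class BackendException(DuplicityError):
    """Backend problem; carries a log error code for reporting."""

    def __init__(self, msg, code=BACKEND_ERROR):
        super(BackendException, self).__init__(msg)
        self.code = code


class FatalBackendException(BackendException):
    """A backend failure that should not be retried."""
    pass
```

Because the fatal variant is now a subclass, a retry wrapper can catch `FatalBackendException` first and re-raise, then treat any remaining `BackendException` as retryable, while still reading `.code` off either one when reporting.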
=== modified file 'duplicity/globals.py'
--- duplicity/globals.py	2014-04-17 21:49:37 +0000
+++ duplicity/globals.py	2014-04-28 02:49:55 +0000
@@ -284,3 +284,6 @@
 
 # Level of Redundancy in % for Par2 files
 par2_redundancy = 10
+
+# Whether to enable gio backend
+use_gio = False

=== modified file 'po/duplicity.pot'
--- po/duplicity.pot	2014-04-25 15:03:00 +0000
+++ po/duplicity.pot	2014-04-28 02:49:55 +0000
@@ -8,7 +8,7 @@
 msgstr ""
 "Project-Id-Version: PACKAGE VERSION\n"
 "Report-Msgid-Bugs-To: Kenneth Loafman <kenneth@xxxxxxxxxxx>\n"
-"POT-Creation-Date: 2014-04-21 11:04-0500\n"
+"POT-Creation-Date: 2014-04-27 22:39-0400\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@xxxxxx>\n"
@@ -69,243 +69,243 @@
 "Continuing restart on file %s."
 msgstr ""
 
-#: ../bin/duplicity:299
+#: ../bin/duplicity:297
 #, python-format
 msgid "File %s was corrupted during upload."
 msgstr ""
 
-#: ../bin/duplicity:333
+#: ../bin/duplicity:331
 msgid ""
 "Restarting backup, but current encryption settings do not match original "
 "settings"
 msgstr ""
 
-#: ../bin/duplicity:356
+#: ../bin/duplicity:354
 #, python-format
 msgid "Restarting after volume %s, file %s, block %s"
 msgstr ""
 
-#: ../bin/duplicity:423
+#: ../bin/duplicity:421
 #, python-format
 msgid "Processed volume %d"
 msgstr ""
 
-#: ../bin/duplicity:572
+#: ../bin/duplicity:570
 msgid ""
 "Fatal Error: Unable to start incremental backup.  Old signatures not found "
 "and incremental specified"
 msgstr ""
 
-#: ../bin/duplicity:576
+#: ../bin/duplicity:574
 msgid "No signatures found, switching to full backup."
 msgstr ""
 
-#: ../bin/duplicity:590
+#: ../bin/duplicity:588
 msgid "Backup Statistics"
 msgstr ""
 
-#: ../bin/duplicity:695
+#: ../bin/duplicity:693
 #, python-format
 msgid "%s not found in archive, no files restored."
 msgstr ""
 
-#: ../bin/duplicity:699
+#: ../bin/duplicity:697
 msgid "No files found in archive - nothing restored."
 msgstr ""
 
-#: ../bin/duplicity:732
+#: ../bin/duplicity:730
 #, python-format
 msgid "Processed volume %d of %d"
 msgstr ""
 
+#: ../bin/duplicity:764
+#, python-format
+msgid "Invalid data - %s hash mismatch for file:"
+msgstr ""
+
 #: ../bin/duplicity:766
 #, python-format
-msgid "Invalid data - %s hash mismatch for file:"
-msgstr ""
-
-#: ../bin/duplicity:768
-#, python-format
 msgid "Calculated hash: %s"
 msgstr ""
 
-#: ../bin/duplicity:769
+#: ../bin/duplicity:767
 #, python-format
 msgid "Manifest hash: %s"
 msgstr ""
 
-#: ../bin/duplicity:807
+#: ../bin/duplicity:805
 #, python-format
 msgid "Volume was signed by key %s, not %s"
 msgstr ""
 
-#: ../bin/duplicity:837
+#: ../bin/duplicity:835
 #, python-format
 msgid "Verify complete: %s, %s."
 msgstr ""
 
-#: ../bin/duplicity:838
+#: ../bin/duplicity:836
 #, python-format
 msgid "%d file compared"
 msgid_plural "%d files compared"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:840
+#: ../bin/duplicity:838
 #, python-format
 msgid "%d difference found"
 msgid_plural "%d differences found"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:859
+#: ../bin/duplicity:857
 msgid "No extraneous files found, nothing deleted in cleanup."
 msgstr ""
 
-#: ../bin/duplicity:864
+#: ../bin/duplicity:862
 msgid "Deleting this file from backend:"
 msgid_plural "Deleting these files from backend:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:876
+#: ../bin/duplicity:874
 msgid "Found the following file to delete:"
 msgid_plural "Found the following files to delete:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:880
+#: ../bin/duplicity:878
 msgid "Run duplicity again with the --force option to actually delete."
 msgstr ""
 
+#: ../bin/duplicity:921
+msgid "There are backup set(s) at time(s):"
+msgstr ""
+
 #: ../bin/duplicity:923
-msgid "There are backup set(s) at time(s):"
-msgstr ""
-
-#: ../bin/duplicity:925
 msgid "Which can't be deleted because newer sets depend on them."
 msgstr ""
 
-#: ../bin/duplicity:929
+#: ../bin/duplicity:927
 msgid ""
 "Current active backup chain is older than specified time.  However, it will "
 "not be deleted.  To remove all your backups, manually purge the repository."
 msgstr ""
 
-#: ../bin/duplicity:935
+#: ../bin/duplicity:933
 msgid "No old backup sets found, nothing deleted."
 msgstr ""
 
-#: ../bin/duplicity:938
+#: ../bin/duplicity:936
 msgid "Deleting backup chain at time:"
 msgid_plural "Deleting backup chains at times:"
 msgstr[0] ""
 msgstr[1] ""
 
+#: ../bin/duplicity:947
+#, python-format
+msgid "Deleting incremental signature chain %s"
+msgstr ""
+
 #: ../bin/duplicity:949
 #, python-format
-msgid "Deleting incremental signature chain %s"
-msgstr ""
-
-#: ../bin/duplicity:951
-#, python-format
 msgid "Deleting incremental backup chain %s"
 msgstr ""
 
+#: ../bin/duplicity:952
+#, python-format
+msgid "Deleting complete signature chain %s"
+msgstr ""
+
 #: ../bin/duplicity:954
 #, python-format
-msgid "Deleting complete signature chain %s"
-msgstr ""
-
-#: ../bin/duplicity:956
-#, python-format
 msgid "Deleting complete backup chain %s"
 msgstr ""
 
-#: ../bin/duplicity:962
+#: ../bin/duplicity:960
 msgid "Found old backup chain at the following time:"
 msgid_plural "Found old backup chains at the following times:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../bin/duplicity:966
+#: ../bin/duplicity:964
 msgid "Rerun command with --force option to actually delete."
 msgstr ""
 
-#: ../bin/duplicity:1043
+#: ../bin/duplicity:1041
 #, python-format
 msgid "Deleting local %s (not authoritative at backend)."
 msgstr ""
 
-#: ../bin/duplicity:1047
+#: ../bin/duplicity:1045
 #, python-format
 msgid "Unable to delete %s: %s"
 msgstr ""
 
-#: ../bin/duplicity:1075 ../duplicity/dup_temp.py:263
+#: ../bin/duplicity:1073 ../duplicity/dup_temp.py:263
 #, python-format
 msgid "Failed to read %s: %s"
 msgstr ""
 
-#: ../bin/duplicity:1089
+#: ../bin/duplicity:1087
 #, python-format
 msgid "Copying %s to local cache."
 msgstr ""
 
-#: ../bin/duplicity:1137
+#: ../bin/duplicity:1135
 msgid "Local and Remote metadata are synchronized, no sync needed."
 msgstr ""
 
-#: ../bin/duplicity:1142
+#: ../bin/duplicity:1140
 msgid "Synchronizing remote metadata to local cache..."
 msgstr ""
 
-#: ../bin/duplicity:1157
+#: ../bin/duplicity:1155
 msgid "Sync would copy the following from remote to local:"
 msgstr ""
 
-#: ../bin/duplicity:1160
+#: ../bin/duplicity:1158
 msgid "Sync would remove the following spurious local files:"
 msgstr ""
 
-#: ../bin/duplicity:1203
+#: ../bin/duplicity:1201
 msgid "Unable to get free space on temp."
 msgstr ""
 
-#: ../bin/duplicity:1211
+#: ../bin/duplicity:1209
 #, python-format
 msgid "Temp space has %d available, backup needs approx %d."
 msgstr ""
 
-#: ../bin/duplicity:1214
+#: ../bin/duplicity:1212
 #, python-format
 msgid "Temp has %d available, backup will use approx %d."
 msgstr ""
 
-#: ../bin/duplicity:1222
+#: ../bin/duplicity:1220
 msgid "Unable to get max open files."
 msgstr ""
 
-#: ../bin/duplicity:1226
+#: ../bin/duplicity:1224
 #, python-format
 msgid ""
 "Max open files of %s is too low, should be >= 1024.\n"
 "Use 'ulimit -n 1024' or higher to correct.\n"
 msgstr ""
 
-#: ../bin/duplicity:1275
+#: ../bin/duplicity:1273
 msgid ""
 "RESTART: The first volume failed to upload before termination.\n"
 "         Restart is impossible...starting backup from beginning."
 msgstr ""
 
-#: ../bin/duplicity:1281
+#: ../bin/duplicity:1279
 #, python-format
 msgid ""
 "RESTART: Volumes %d to %d failed to upload before termination.\n"
 "         Restarting backup at volume %d."
 msgstr ""
 
-#: ../bin/duplicity:1288
+#: ../bin/duplicity:1286
 #, python-format
 msgid ""
 "RESTART: Impossible backup state: manifest has %d vols, remote has %d vols.\n"
@@ -314,7 +314,7 @@
 "         backup then restart the backup from the beginning."
 msgstr ""
 
-#: ../bin/duplicity:1310
+#: ../bin/duplicity:1308
 msgid ""
 "\n"
 "PYTHONOPTIMIZE in the environment causes duplicity to fail to\n"
@@ -324,54 +324,54 @@
 "See https://bugs.launchpad.net/duplicity/+bug/931175\n"
 msgstr ""
 
-#: ../bin/duplicity:1401
+#: ../bin/duplicity:1399
 #, python-format
 msgid "Last %s backup left a partial set, restarting."
 msgstr ""
 
-#: ../bin/duplicity:1405
+#: ../bin/duplicity:1403
 #, python-format
 msgid "Cleaning up previous partial %s backup set, restarting."
 msgstr ""
 
+#: ../bin/duplicity:1414
+msgid "Last full backup date:"
+msgstr ""
+
 #: ../bin/duplicity:1416
-msgid "Last full backup date:"
+msgid "Last full backup date: none"
 msgstr ""
 
 #: ../bin/duplicity:1418
-msgid "Last full backup date: none"
-msgstr ""
-
-#: ../bin/duplicity:1420
 msgid "Last full backup is too old, forcing full backup"
 msgstr ""
 
-#: ../bin/duplicity:1463
+#: ../bin/duplicity:1461
 msgid ""
 "When using symmetric encryption, the signing passphrase must equal the "
 "encryption passphrase."
 msgstr ""
 
-#: ../bin/duplicity:1516
+#: ../bin/duplicity:1514
 msgid "INT intercepted...exiting."
 msgstr ""
 
-#: ../bin/duplicity:1524
+#: ../bin/duplicity:1522
 #, python-format
 msgid "GPG error detail: %s"
 msgstr ""
 
-#: ../bin/duplicity:1534
+#: ../bin/duplicity:1532
 #, python-format
 msgid "User error detail: %s"
 msgstr ""
 
-#: ../bin/duplicity:1544
+#: ../bin/duplicity:1542
 #, python-format
 msgid "Backend error detail: %s"
 msgstr ""
 
-#: ../bin/rdiffdir:56 ../duplicity/commandline.py:238
+#: ../bin/rdiffdir:56 ../duplicity/commandline.py:233
 #, python-format
 msgid "Error opening file %s"
 msgstr ""
@@ -381,33 +381,33 @@
 msgid "File %s already exists, will not overwrite."
 msgstr ""
 
-#: ../duplicity/selection.py:119
+#: ../duplicity/selection.py:121
 #, python-format
 msgid "Skipping socket %s"
 msgstr ""
 
-#: ../duplicity/selection.py:123
+#: ../duplicity/selection.py:125
 #, python-format
 msgid "Error initializing file %s"
 msgstr ""
 
-#: ../duplicity/selection.py:127 ../duplicity/selection.py:148
+#: ../duplicity/selection.py:129 ../duplicity/selection.py:150
 #, python-format
 msgid "Error accessing possibly locked file %s"
 msgstr ""
 
-#: ../duplicity/selection.py:163
+#: ../duplicity/selection.py:165
 #, python-format
 msgid "Warning: base %s doesn't exist, continuing"
 msgstr ""
 
-#: ../duplicity/selection.py:166 ../duplicity/selection.py:184
-#: ../duplicity/selection.py:187
+#: ../duplicity/selection.py:168 ../duplicity/selection.py:186
+#: ../duplicity/selection.py:189
 #, python-format
 msgid "Selecting %s"
 msgstr ""
 
-#: ../duplicity/selection.py:268
+#: ../duplicity/selection.py:270
 #, python-format
 msgid ""
 "Fatal Error: The file specification\n"
@@ -418,14 +418,14 @@
 "pattern (such as '**') which matches the base directory."
 msgstr ""
 
-#: ../duplicity/selection.py:276
+#: ../duplicity/selection.py:278
 #, python-format
 msgid ""
 "Fatal Error while processing expression\n"
 "%s"
 msgstr ""
 
-#: ../duplicity/selection.py:286
+#: ../duplicity/selection.py:288
 #, python-format
 msgid ""
 "Last selection expression:\n"
@@ -435,49 +435,49 @@
 "probably isn't what you meant."
 msgstr ""
 
-#: ../duplicity/selection.py:311
+#: ../duplicity/selection.py:313
 #, python-format
 msgid "Reading filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:314
+#: ../duplicity/selection.py:316
 #, python-format
 msgid "Sorting filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:341
+#: ../duplicity/selection.py:343
 #, python-format
 msgid ""
 "Warning: file specification '%s' in filelist %s\n"
 "doesn't start with correct prefix %s.  Ignoring."
 msgstr ""
 
-#: ../duplicity/selection.py:345
+#: ../duplicity/selection.py:347
 msgid "Future prefix errors will not be logged."
 msgstr ""
 
-#: ../duplicity/selection.py:361
+#: ../duplicity/selection.py:363
 #, python-format
 msgid "Error closing filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:428
+#: ../duplicity/selection.py:430
 #, python-format
 msgid "Reading globbing filelist %s"
 msgstr ""
 
-#: ../duplicity/selection.py:461
+#: ../duplicity/selection.py:463
 #, python-format
 msgid "Error compiling regular expression %s"
 msgstr ""
 
-#: ../duplicity/selection.py:477
+#: ../duplicity/selection.py:479
 msgid ""
 "Warning: exclude-device-files is not the first selector.\n"
 "This may not be what you intended"
 msgstr ""
 
-#: ../duplicity/commandline.py:68
+#: ../duplicity/commandline.py:70
 #, python-format
 msgid ""
 "Warning: Option %s is pending deprecation and will be removed in a future "
@@ -485,16 +485,11 @@
 "Use of default filenames is strongly suggested."
 msgstr ""
 
-#: ../duplicity/commandline.py:216
-#, python-format
-msgid "Unable to load gio backend: %s"
-msgstr ""
-
 #. Used in usage help to represent a Unix-style path name. Example:
 #. --archive-dir <path>
-#: ../duplicity/commandline.py:259 ../duplicity/commandline.py:269
-#: ../duplicity/commandline.py:286 ../duplicity/commandline.py:352
-#: ../duplicity/commandline.py:552 ../duplicity/commandline.py:768
+#: ../duplicity/commandline.py:254 ../duplicity/commandline.py:264
+#: ../duplicity/commandline.py:281 ../duplicity/commandline.py:347
+#: ../duplicity/commandline.py:549 ../duplicity/commandline.py:765
 msgid "path"
 msgstr ""
 
@@ -504,9 +499,9 @@
 #. --hidden-encrypt-key <gpg_key_id>
 #. Used in usage help to represent an ID for a GnuPG key. Example:
 #. --encrypt-key <gpg_key_id>
-#: ../duplicity/commandline.py:281 ../duplicity/commandline.py:288
-#: ../duplicity/commandline.py:372 ../duplicity/commandline.py:533
-#: ../duplicity/commandline.py:741
+#: ../duplicity/commandline.py:276 ../duplicity/commandline.py:283
+#: ../duplicity/commandline.py:369 ../duplicity/commandline.py:530
+#: ../duplicity/commandline.py:738
 msgid "gpg-key-id"
 msgstr ""
 
@@ -514,42 +509,42 @@
 #. matching one or more files, as described in the documentation.
 #. Example:
 #. --exclude <shell_pattern>
-#: ../duplicity/commandline.py:296 ../duplicity/commandline.py:398
-#: ../duplicity/commandline.py:791
+#: ../duplicity/commandline.py:291 ../duplicity/commandline.py:395
+#: ../duplicity/commandline.py:788
 msgid "shell_pattern"
 msgstr ""
 
 #. Used in usage help to represent the name of a file. Example:
 #. --log-file <filename>
-#: ../duplicity/commandline.py:302 ../duplicity/commandline.py:309
-#: ../duplicity/commandline.py:314 ../duplicity/commandline.py:400
-#: ../duplicity/commandline.py:405 ../duplicity/commandline.py:416
-#: ../duplicity/commandline.py:737
+#: ../duplicity/commandline.py:297 ../duplicity/commandline.py:304
+#: ../duplicity/commandline.py:309 ../duplicity/commandline.py:397
+#: ../duplicity/commandline.py:402 ../duplicity/commandline.py:413
+#: ../duplicity/commandline.py:734
 msgid "filename"
 msgstr ""
 
 #. Used in usage help to represent a regular expression (regexp).
-#: ../duplicity/commandline.py:321 ../duplicity/commandline.py:407
+#: ../duplicity/commandline.py:316 ../duplicity/commandline.py:404
 msgid "regular_expression"
 msgstr ""
 
 #. Used in usage help to represent a time spec for a previous
 #. point in time, as described in the documentation. Example:
 #. duplicity remove-older-than time [options] target_url
-#: ../duplicity/commandline.py:364 ../duplicity/commandline.py:475
-#: ../duplicity/commandline.py:823
+#: ../duplicity/commandline.py:359 ../duplicity/commandline.py:472
+#: ../duplicity/commandline.py:820
 msgid "time"
 msgstr ""
 
 #. Used in usage help. (Should be consistent with the "Options:"
 #. header.) Example:
 #. duplicity [full|incremental] [options] source_dir target_url
-#: ../duplicity/commandline.py:368 ../duplicity/commandline.py:478
-#: ../duplicity/commandline.py:544 ../duplicity/commandline.py:756
+#: ../duplicity/commandline.py:365 ../duplicity/commandline.py:475
+#: ../duplicity/commandline.py:541 ../duplicity/commandline.py:753
 msgid "options"
 msgstr ""
 
-#: ../duplicity/commandline.py:383
+#: ../duplicity/commandline.py:380
 #, python-format
 msgid ""
 "Running in 'ignore errors' mode due to %s; please re-consider if this was "
@@ -557,152 +552,156 @@
 msgstr ""
 
 #. Used in usage help to represent an imap mailbox
-#: ../duplicity/commandline.py:396
+#: ../duplicity/commandline.py:393
 msgid "imap_mailbox"
 msgstr ""
 
-#: ../duplicity/commandline.py:410
+#: ../duplicity/commandline.py:407
 msgid "file_descriptor"
 msgstr ""
 
 #. Used in usage help to represent a desired number of
 #. something. Example:
 #. --num-retries <number>
-#: ../duplicity/commandline.py:421 ../duplicity/commandline.py:443
-#: ../duplicity/commandline.py:455 ../duplicity/commandline.py:461
-#: ../duplicity/commandline.py:499 ../duplicity/commandline.py:504
-#: ../duplicity/commandline.py:508 ../duplicity/commandline.py:582
-#: ../duplicity/commandline.py:751
+#: ../duplicity/commandline.py:418 ../duplicity/commandline.py:440
+#: ../duplicity/commandline.py:452 ../duplicity/commandline.py:458
+#: ../duplicity/commandline.py:496 ../duplicity/commandline.py:501
+#: ../duplicity/commandline.py:505 ../duplicity/commandline.py:579
+#: ../duplicity/commandline.py:748
 msgid "number"
 msgstr ""
 
 #. Used in usage help (noun)
-#: ../duplicity/commandline.py:424
+#: ../duplicity/commandline.py:421
 msgid "backup name"
 msgstr ""
 
 #. noun
-#: ../duplicity/commandline.py:517 ../duplicity/commandline.py:520
-#: ../duplicity/commandline.py:523 ../duplicity/commandline.py:722
+#: ../duplicity/commandline.py:514 ../duplicity/commandline.py:517
+#: ../duplicity/commandline.py:719
 msgid "command"
 msgstr ""
 
-#: ../duplicity/commandline.py:541
+#: ../duplicity/commandline.py:520
+msgid "pyrax|cloudfiles"
+msgstr ""
+
+#: ../duplicity/commandline.py:538
 msgid "paramiko|pexpect"
 msgstr ""
 
-#: ../duplicity/commandline.py:547
+#: ../duplicity/commandline.py:544
 msgid "pem formatted bundle of certificate authorities"
 msgstr ""
 
 #. Used in usage help. Example:
 #. --timeout <seconds>
-#: ../duplicity/commandline.py:557 ../duplicity/commandline.py:785
+#: ../duplicity/commandline.py:554 ../duplicity/commandline.py:782
 msgid "seconds"
 msgstr ""
 
 #. abbreviation for "character" (noun)
-#: ../duplicity/commandline.py:563 ../duplicity/commandline.py:719
+#: ../duplicity/commandline.py:560 ../duplicity/commandline.py:716
 msgid "char"
 msgstr ""
 
-#: ../duplicity/commandline.py:685
+#: ../duplicity/commandline.py:682
 #, python-format
 msgid "Using archive dir: %s"
 msgstr ""
 
-#: ../duplicity/commandline.py:686
+#: ../duplicity/commandline.py:683
 #, python-format
 msgid "Using backup name: %s"
 msgstr ""
 
-#: ../duplicity/commandline.py:693
+#: ../duplicity/commandline.py:690
 #, python-format
 msgid "Command line error: %s"
 msgstr ""
 
-#: ../duplicity/commandline.py:694
+#: ../duplicity/commandline.py:691
 msgid "Enter 'duplicity --help' for help screen."
 msgstr ""
 
 #. Used in usage help to represent a Unix-style path name. Example:
 #. rsync://user[:password]@other_host[:port]//absolute_path
-#: ../duplicity/commandline.py:707
+#: ../duplicity/commandline.py:704
 msgid "absolute_path"
 msgstr ""
 
 #. Used in usage help. Example:
 #. tahoe://alias/some_dir
-#: ../duplicity/commandline.py:711
+#: ../duplicity/commandline.py:708
 msgid "alias"
 msgstr ""
 
 #. Used in help to represent a "bucket name" for Amazon Web
 #. Services' Simple Storage Service (S3). Example:
 #. s3://other.host/bucket_name[/prefix]
-#: ../duplicity/commandline.py:716
+#: ../duplicity/commandline.py:713
 msgid "bucket_name"
 msgstr ""
 
 #. Used in usage help to represent the name of a container in
 #. Amazon Web Services' Cloudfront. Example:
 #. cf+http://container_name
+#: ../duplicity/commandline.py:724
+msgid "container_name"
+msgstr ""
+
+#. noun
 #: ../duplicity/commandline.py:727
-msgid "container_name"
-msgstr ""
-
-#. noun
-#: ../duplicity/commandline.py:730
 msgid "count"
 msgstr ""
 
 #. Used in usage help to represent the name of a file directory
-#: ../duplicity/commandline.py:733
+#: ../duplicity/commandline.py:730
 msgid "directory"
 msgstr ""
 
 #. Used in usage help, e.g. to represent the name of a code
 #. module. Example:
 #. rsync://user[:password]@other.host[:port]::/module/some_dir
-#: ../duplicity/commandline.py:746
+#: ../duplicity/commandline.py:743
 msgid "module"
 msgstr ""
 
 #. Used in usage help to represent an internet hostname. Example:
 #. ftp://user[:password]@other.host[:port]/some_dir
-#: ../duplicity/commandline.py:760
+#: ../duplicity/commandline.py:757
 msgid "other.host"
 msgstr ""
 
 #. Used in usage help. Example:
 #. ftp://user[:password]@other.host[:port]/some_dir
-#: ../duplicity/commandline.py:764
+#: ../duplicity/commandline.py:761
 msgid "password"
 msgstr ""
 
 #. Used in usage help to represent a TCP port number. Example:
 #. ftp://user[:password]@other.host[:port]/some_dir
-#: ../duplicity/commandline.py:772
+#: ../duplicity/commandline.py:769
 msgid "port"
 msgstr ""
 
 #. Used in usage help. This represents a string to be used as a
 #. prefix to names for backup files created by Duplicity. Example:
 #. s3://other.host/bucket_name[/prefix]
-#: ../duplicity/commandline.py:777
+#: ../duplicity/commandline.py:774
 msgid "prefix"
 msgstr ""
 
 #. Used in usage help to represent a Unix-style path name. Example:
 #. rsync://user[:password]@other.host[:port]/relative_path
-#: ../duplicity/commandline.py:781
+#: ../duplicity/commandline.py:778
 msgid "relative_path"
 msgstr ""
 
 #. Used in usage help to represent the name of a single file
 #. directory or a Unix-style path to a directory. Example:
 #. file:///some_dir
-#: ../duplicity/commandline.py:796
+#: ../duplicity/commandline.py:793
 msgid "some_dir"
 msgstr ""
 
@@ -710,14 +709,14 @@
 #. directory or a Unix-style path to a directory where files will be
 #. coming FROM. Example:
 #. duplicity [full|incremental] [options] source_dir target_url
-#: ../duplicity/commandline.py:802
+#: ../duplicity/commandline.py:799
 msgid "source_dir"
 msgstr ""
 
 #. Used in usage help to represent a URL files will be coming
 #. FROM. Example:
 #. duplicity [restore] [options] source_url target_dir
-#: ../duplicity/commandline.py:807
+#: ../duplicity/commandline.py:804
 msgid "source_url"
 msgstr ""
 
@@ -725,75 +724,75 @@
 #. directory or a Unix-style path to a directory. where files will be
 #. going TO. Example:
 #. duplicity [restore] [options] source_url target_dir
-#: ../duplicity/commandline.py:813
+#: ../duplicity/commandline.py:810
 msgid "target_dir"
 msgstr ""
 
 #. Used in usage help to represent a URL files will be going TO.
 #. Example:
 #. duplicity [full|incremental] [options] source_dir target_url
-#: ../duplicity/commandline.py:818
+#: ../duplicity/commandline.py:815
 msgid "target_url"
 msgstr ""
 
 #. Used in usage help to represent a user name (i.e. login).
 #. Example:
 #. ftp://user[:password]@other.host[:port]/some_dir
-#: ../duplicity/commandline.py:828
+#: ../duplicity/commandline.py:825
 msgid "user"
 msgstr ""
 
 #. Header in usage help
-#: ../duplicity/commandline.py:845
+#: ../duplicity/commandline.py:842
 msgid "Backends and their URL formats:"
 msgstr ""
 
 #. Header in usage help
-#: ../duplicity/commandline.py:870
+#: ../duplicity/commandline.py:867
 msgid "Commands:"
 msgstr ""
 
-#: ../duplicity/commandline.py:894
+#: ../duplicity/commandline.py:891
 #, python-format
 msgid "Specified archive directory '%s' does not exist, or is not a directory"
 msgstr ""
 
-#: ../duplicity/commandline.py:903
+#: ../duplicity/commandline.py:900
 #, python-format
 msgid ""
 "Sign key should be an 8 character hex string, like 'AA0E73D2'.\n"
 "Received '%s' instead."
 msgstr ""
 
-#: ../duplicity/commandline.py:963
+#: ../duplicity/commandline.py:960
 #, python-format
 msgid ""
 "Restore destination directory %s already exists.\n"
 "Will not overwrite."
 msgstr ""
 
-#: ../duplicity/commandline.py:968
+#: ../duplicity/commandline.py:965
 #, python-format
 msgid "Verify directory %s does not exist"
 msgstr ""
 
-#: ../duplicity/commandline.py:974
+#: ../duplicity/commandline.py:971
 #, python-format
 msgid "Backup source directory %s does not exist."
 msgstr ""
 
-#: ../duplicity/commandline.py:1003
+#: ../duplicity/commandline.py:1000
 #, python-format
 msgid "Command line warning: %s"
 msgstr ""
 
-#: ../duplicity/commandline.py:1003
+#: ../duplicity/commandline.py:1000
 msgid ""
 "Selection options --exclude/--include\n"
 "currently work only when backing up,not restoring."
 msgstr ""
 
-#: ../duplicity/commandline.py:1051
+#: ../duplicity/commandline.py:1048
 #, python-format
 msgid ""
 "Bad URL '%s'.\n"
@@ -801,66 +800,49 @@
 "\"file:///usr/local\".  See the man page for more information."
 msgstr ""
 
-#: ../duplicity/commandline.py:1076
+#: ../duplicity/commandline.py:1073
 msgid "Main action: "
 msgstr ""
 
-#: ../duplicity/backend.py:108
+#: ../duplicity/backend.py:110
 #, python-format
 msgid "Import of %s %s"
 msgstr ""
 
-#: ../duplicity/backend.py:185
+#: ../duplicity/backend.py:212
 #, python-format
 msgid "Could not initialize backend: %s"
 msgstr ""
 
-#: ../duplicity/backend.py:310
-#, python-format
-msgid "Attempt %s failed: %s: %s"
-msgstr ""
-
-#: ../duplicity/backend.py:312 ../duplicity/backend.py:342
-#: ../duplicity/backend.py:349
+#: ../duplicity/backend.py:369
 #, python-format
 msgid "Backtrace of previous error: %s"
 msgstr ""
 
-#: ../duplicity/backend.py:340
+#: ../duplicity/backend.py:384
+#, python-format
+msgid "Giving up after %s attempts. %s: %s"
+msgstr ""
+
+#: ../duplicity/backend.py:388
 #, python-format
 msgid "Attempt %s failed. %s: %s"
 msgstr ""
 
-#: ../duplicity/backend.py:351
-#, python-format
-msgid "Giving up after %s attempts. %s: %s"
-msgstr ""
-
-#: ../duplicity/backend.py:536 ../duplicity/backend.py:560
+#: ../duplicity/backend.py:471
 #, python-format
 msgid "Reading results of '%s'"
 msgstr ""
 
-#: ../duplicity/backend.py:575
-#, python-format
-msgid "Running '%s' failed with code %d (attempt #%d)"
-msgid_plural "Running '%s' failed with code %d (attempt #%d)"
-msgstr[0] ""
-msgstr[1] ""
-
-#: ../duplicity/backend.py:579
-#, python-format
-msgid ""
-"Error is:\n"
-"%s"
-msgstr ""
-
-#: ../duplicity/backend.py:581
-#, python-format
-msgid "Giving up trying to execute '%s' after %d attempt"
-msgid_plural "Giving up trying to execute '%s' after %d attempts"
-msgstr[0] ""
-msgstr[1] ""
+#: ../duplicity/backend.py:498
+#, python-format
+msgid "Writing %s"
+msgstr ""
+
+#: ../duplicity/backend.py:538
+#, python-format
+msgid "File %s not found locally after get from backend"
+msgstr ""
 
 #: ../duplicity/asyncscheduler.py:66
 #, python-format
@@ -898,142 +880,142 @@
 msgid "task execution done (success: %s)"
 msgstr ""
 
-#: ../duplicity/patchdir.py:74 ../duplicity/patchdir.py:79
+#: ../duplicity/patchdir.py:76 ../duplicity/patchdir.py:81
 #, python-format
 msgid "Patching %s"
 msgstr ""
 
-#: ../duplicity/patchdir.py:508
+#: ../duplicity/patchdir.py:510
 #, python-format
 msgid "Error '%s' patching %s"
 msgstr ""
 
-#: ../duplicity/patchdir.py:581
+#: ../duplicity/patchdir.py:582
 #, python-format
 msgid "Writing %s of type %s"
 msgstr ""
 
-#: ../duplicity/collections.py:150 ../duplicity/collections.py:161
+#: ../duplicity/collections.py:152 ../duplicity/collections.py:163
 #, python-format
 msgid "BackupSet.delete: missing %s"
 msgstr ""
 
-#: ../duplicity/collections.py:186
+#: ../duplicity/collections.py:188
 msgid "Fatal Error: No manifests found for most recent backup"
 msgstr ""
 
-#: ../duplicity/collections.py:195
+#: ../duplicity/collections.py:197
 msgid ""
 "Fatal Error: Remote manifest does not match local one.  Either the remote "
 "backup set or the local archive directory has been corrupted."
 msgstr ""
 
-#: ../duplicity/collections.py:203
+#: ../duplicity/collections.py:205
 msgid "Fatal Error: Neither remote nor local manifest is readable."
 msgstr ""
 
-#: ../duplicity/collections.py:314
+#: ../duplicity/collections.py:315
 msgid "Preferring Backupset over previous one!"
 msgstr ""
 
-#: ../duplicity/collections.py:317
+#: ../duplicity/collections.py:318
 #, python-format
 msgid "Ignoring incremental Backupset (start_time: %s; needed: %s)"
 msgstr ""
 
-#: ../duplicity/collections.py:322
+#: ../duplicity/collections.py:323
 #, python-format
 msgid "Added incremental Backupset (start_time: %s / end_time: %s)"
 msgstr ""
 
-#: ../duplicity/collections.py:392
+#: ../duplicity/collections.py:393
 msgid "Chain start time: "
 msgstr ""
 
-#: ../duplicity/collections.py:393
+#: ../duplicity/collections.py:394
 msgid "Chain end time: "
 msgstr ""
 
-#: ../duplicity/collections.py:394
+#: ../duplicity/collections.py:395
 #, python-format
 msgid "Number of contained backup sets: %d"
 msgstr ""
 
-#: ../duplicity/collections.py:396
+#: ../duplicity/collections.py:397
 #, python-format
 msgid "Total number of contained volumes: %d"
 msgstr ""
 
-#: ../duplicity/collections.py:398
+#: ../duplicity/collections.py:399
 msgid "Type of backup set:"
 msgstr ""
 
-#: ../duplicity/collections.py:398
+#: ../duplicity/collections.py:399
 msgid "Time:"
 msgstr ""
 
-#: ../duplicity/collections.py:398
+#: ../duplicity/collections.py:399
 msgid "Num volumes:"
 msgstr ""
 
-#: ../duplicity/collections.py:402
+#: ../duplicity/collections.py:403
 msgid "Full"
 msgstr ""
 
-#: ../duplicity/collections.py:405
+#: ../duplicity/collections.py:406
 msgid "Incremental"
 msgstr ""
 
-#: ../duplicity/collections.py:465
+#: ../duplicity/collections.py:466
 msgid "local"
 msgstr ""
 
-#: ../duplicity/collections.py:467
+#: ../duplicity/collections.py:468
 msgid "remote"
 msgstr ""
 
-#: ../duplicity/collections.py:622
+#: ../duplicity/collections.py:623
 msgid "Collection Status"
 msgstr ""
 
-#: ../duplicity/collections.py:624
+#: ../duplicity/collections.py:625
 #, python-format
 msgid "Connecting with backend: %s"
 msgstr ""
 
-#: ../duplicity/collections.py:626
+#: ../duplicity/collections.py:627
 #, python-format
 msgid "Archive dir: %s"
 msgstr ""
 
-#: ../duplicity/collections.py:629
+#: ../duplicity/collections.py:630
 #, python-format
 msgid "Found %d secondary backup chain."
 msgid_plural "Found %d secondary backup chains."
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:634
+#: ../duplicity/collections.py:635
 #, python-format
 msgid "Secondary chain %d of %d:"
 msgstr ""
 
-#: ../duplicity/collections.py:640
+#: ../duplicity/collections.py:641
 msgid "Found primary backup chain with matching signature chain:"
 msgstr ""
 
-#: ../duplicity/collections.py:644
+#: ../duplicity/collections.py:645
 msgid "No backup chains with active signatures found"
 msgstr ""
 
-#: ../duplicity/collections.py:647
+#: ../duplicity/collections.py:648
 #, python-format
 msgid "Also found %d backup set not part of any chain,"
 msgid_plural "Also found %d backup sets not part of any chain,"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:651
+#: ../duplicity/collections.py:652
 #, python-format
 msgid "and %d incomplete backup set."
 msgid_plural "and %d incomplete backup sets."
@@ -1041,95 +1023,95 @@
 msgstr[1] ""
 
 #. "cleanup" is a hard-coded command, so do not translate it
-#: ../duplicity/collections.py:656
+#: ../duplicity/collections.py:657
 msgid "These may be deleted by running duplicity with the \"cleanup\" command."
 msgstr ""
 
-#: ../duplicity/collections.py:659
+#: ../duplicity/collections.py:660
 msgid "No orphaned or incomplete backup sets found."
 msgstr ""
 
-#: ../duplicity/collections.py:675
+#: ../duplicity/collections.py:676
 #, python-format
 msgid "%d file exists on backend"
 msgid_plural "%d files exist on backend"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:682
+#: ../duplicity/collections.py:683
 #, python-format
 msgid "%d file exists in cache"
 msgid_plural "%d files exist in cache"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:734
+#: ../duplicity/collections.py:735
 msgid "Warning, discarding last backup set, because of missing signature file."
 msgstr ""
 
-#: ../duplicity/collections.py:757
+#: ../duplicity/collections.py:758
 msgid "Warning, found the following local orphaned signature file:"
 msgid_plural "Warning, found the following local orphaned signature files:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:766
+#: ../duplicity/collections.py:767
 msgid "Warning, found the following remote orphaned signature file:"
 msgid_plural "Warning, found the following remote orphaned signature files:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:775
+#: ../duplicity/collections.py:776
 msgid "Warning, found signatures but no corresponding backup files"
 msgstr ""
 
-#: ../duplicity/collections.py:779
+#: ../duplicity/collections.py:780
 msgid ""
 "Warning, found incomplete backup sets, probably left from aborted session"
 msgstr ""
 
-#: ../duplicity/collections.py:783
+#: ../duplicity/collections.py:784
 msgid "Warning, found the following orphaned backup file:"
 msgid_plural "Warning, found the following orphaned backup files:"
 msgstr[0] ""
 msgstr[1] ""
 
-#: ../duplicity/collections.py:800
+#: ../duplicity/collections.py:801
 #, python-format
 msgid "Extracting backup chains from list of files: %s"
 msgstr ""
 
-#: ../duplicity/collections.py:810
+#: ../duplicity/collections.py:811
 #, python-format
 msgid "File %s is part of known set"
 msgstr ""
 
-#: ../duplicity/collections.py:813
+#: ../duplicity/collections.py:814
 #, python-format
 msgid "File %s is not part of a known set; creating new set"
 msgstr ""
 
-#: ../duplicity/collections.py:818
+#: ../duplicity/collections.py:819
 #, python-format
 msgid "Ignoring file (rejected by backup set) '%s'"
 msgstr ""
 
-#: ../duplicity/collections.py:831
+#: ../duplicity/collections.py:833
 #, python-format
 msgid "Found backup chain %s"
 msgstr ""
 
-#: ../duplicity/collections.py:836
+#: ../duplicity/collections.py:838
 #, python-format
 msgid "Added set %s to pre-existing chain %s"
 msgstr ""
 
-#: ../duplicity/collections.py:840
+#: ../duplicity/collections.py:842
 #, python-format
 msgid "Found orphaned set %s"
 msgstr ""
 
-#: ../duplicity/collections.py:992
+#: ../duplicity/collections.py:993
 #, python-format
 msgid ""
 "No signature chain for the requested time.  Using oldest available chain, "
@@ -1141,32 +1123,32 @@
 msgid "Error listing directory %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:103 ../duplicity/diffdir.py:394
+#: ../duplicity/diffdir.py:105 ../duplicity/diffdir.py:395
 #, python-format
 msgid "Error %s getting delta for %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:117
+#: ../duplicity/diffdir.py:119
 #, python-format
 msgid "Getting delta of %s and %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:162
+#: ../duplicity/diffdir.py:164
 #, python-format
 msgid "A %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:169
+#: ../duplicity/diffdir.py:171
 #, python-format
 msgid "M %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:191
+#: ../duplicity/diffdir.py:193
 #, python-format
 msgid "Comparing %s and %s"
 msgstr ""
 
-#: ../duplicity/diffdir.py:199
+#: ../duplicity/diffdir.py:201
 #, python-format
 msgid "D %s"
 msgstr ""
@@ -1186,31 +1168,26 @@
 msgid "Skipping %s because of previous error"
 msgstr ""
 
-#: ../duplicity/backends/sshbackend.py:25
+#: ../duplicity/backends/sshbackend.py:26
 #, python-format
 msgid ""
 "Warning: Option %s is supported by ssh pexpect backend only and will be "
 "ignored."
 msgstr ""
 
-#: ../duplicity/backends/sshbackend.py:33
+#: ../duplicity/backends/sshbackend.py:34
 #, python-format
 msgid ""
 "Warning: Selected ssh backend '%s' is neither 'paramiko nor 'pexpect'. Will "
 "use default paramiko instead."
 msgstr ""
 
-#: ../duplicity/backends/giobackend.py:106
+#: ../duplicity/backends/giobackend.py:105
 #, python-format
 msgid "Connection failed, please check your password: %s"
 msgstr ""
 
-#: ../duplicity/backends/giobackend.py:130
-#, python-format
-msgid "Writing %s"
-msgstr ""
-
-#: ../duplicity/manifest.py:87
+#: ../duplicity/manifest.py:89
 #, python-format
 msgid ""
 "Fatal Error: Backup source host has changed.\n"
@@ -1218,7 +1195,7 @@
 "Previous hostname: %s"
 msgstr ""
 
-#: ../duplicity/manifest.py:94
+#: ../duplicity/manifest.py:96
 #, python-format
 msgid ""
 "Fatal Error: Backup source directory has changed.\n"
@@ -1226,7 +1203,7 @@
 "Previous directory: %s"
 msgstr ""
 
-#: ../duplicity/manifest.py:103
+#: ../duplicity/manifest.py:105
 msgid ""
 "Aborting because you may have accidentally tried to backup two different "
 "data sets to the same remote location, or using the same archive directory.  "
@@ -1234,107 +1211,107 @@
 "seeing this message"
 msgstr ""
 
-#: ../duplicity/manifest.py:209
+#: ../duplicity/manifest.py:211
 msgid "Manifests not equal because different volume numbers"
 msgstr ""
 
-#: ../duplicity/manifest.py:214
+#: ../duplicity/manifest.py:216
 msgid "Manifests not equal because volume lists differ"
 msgstr ""
 
-#: ../duplicity/manifest.py:219
+#: ../duplicity/manifest.py:221
 msgid "Manifests not equal because hosts or directories differ"
 msgstr ""
 
-#: ../duplicity/manifest.py:366
+#: ../duplicity/manifest.py:368
 msgid "Warning, found extra Volume identifier"
 msgstr ""
 
-#: ../duplicity/manifest.py:392
+#: ../duplicity/manifest.py:394
 msgid "Other is not VolumeInfo"
 msgstr ""
 
-#: ../duplicity/manifest.py:395
+#: ../duplicity/manifest.py:397
 msgid "Volume numbers don't match"
 msgstr ""
 
-#: ../duplicity/manifest.py:398
+#: ../duplicity/manifest.py:400
 msgid "start_indicies don't match"
 msgstr ""
 
-#: ../duplicity/manifest.py:401
+#: ../duplicity/manifest.py:403
 msgid "end_index don't match"
 msgstr ""
 
-#: ../duplicity/manifest.py:408
+#: ../duplicity/manifest.py:410
 msgid "Hashes don't match"
 msgstr ""
 
-#: ../duplicity/path.py:222 ../duplicity/path.py:281
+#: ../duplicity/path.py:224 ../duplicity/path.py:283
 #, python-format
 msgid "Warning: %s has negative mtime, treating as 0."
 msgstr ""
 
-#: ../duplicity/path.py:346
+#: ../duplicity/path.py:348
 msgid "Difference found:"
 msgstr ""
 
-#: ../duplicity/path.py:352
+#: ../duplicity/path.py:354
 #, python-format
 msgid "New file %s"
 msgstr ""
 
-#: ../duplicity/path.py:355
+#: ../duplicity/path.py:357
 #, python-format
 msgid "File %s is missing"
 msgstr ""
 
-#: ../duplicity/path.py:358
+#: ../duplicity/path.py:360
 #, python-format
 msgid "File %%s has type %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:364 ../duplicity/path.py:390
+#: ../duplicity/path.py:366 ../duplicity/path.py:392
 #, python-format
 msgid "File %%s has permissions %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:369
+#: ../duplicity/path.py:371
 #, python-format
 msgid "File %%s has mtime %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:377
+#: ../duplicity/path.py:379
 #, python-format
 msgid "Data for file %s is different"
 msgstr ""
 
-#: ../duplicity/path.py:385
+#: ../duplicity/path.py:387
 #, python-format
 msgid "Symlink %%s points to %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:394
+#: ../duplicity/path.py:396
 #, python-format
 msgid "Device file %%s has numbers %s, expected %s"
 msgstr ""
 
-#: ../duplicity/path.py:554
+#: ../duplicity/path.py:556
 #, python-format
 msgid "Making directory %s"
 msgstr ""
 
-#: ../duplicity/path.py:564
+#: ../duplicity/path.py:566
 #, python-format
 msgid "Deleting %s"
 msgstr ""
 
-#: ../duplicity/path.py:573
+#: ../duplicity/path.py:575
 #, python-format
 msgid "Touching %s"
 msgstr ""
 
-#: ../duplicity/path.py:580
+#: ../duplicity/path.py:582
 #, python-format
 msgid "Deleting tree %s"
 msgstr ""
@@ -1348,7 +1325,7 @@
 msgid "GPG process %d terminated before wait()"
 msgstr ""
 
-#: ../duplicity/dup_time.py:47
+#: ../duplicity/dup_time.py:49
 #, python-format
 msgid ""
 "Bad interval string \"%s\"\n"
@@ -1358,7 +1335,7 @@
 "page for more information."
 msgstr ""
 
-#: ../duplicity/dup_time.py:53
+#: ../duplicity/dup_time.py:55
 #, python-format
 msgid ""
 "Bad time string \"%s\"\n"
@@ -1411,12 +1388,12 @@
 msgid "Cleanup of temporary directory %s failed - this is probably a bug."
 msgstr ""
 
-#: ../duplicity/util.py:85
+#: ../duplicity/util.py:87
 #, python-format
 msgid "IGNORED_ERROR: Warning: ignoring error as requested: %s: %s"
 msgstr ""
 
-#: ../duplicity/util.py:142
+#: ../duplicity/util.py:144
 #, python-format
 msgid "Releasing lockfile %s"
 msgstr ""

=== modified file 'testing/__init__.py'
--- testing/__init__.py	2014-04-20 14:48:58 +0000
+++ testing/__init__.py	2014-04-28 02:49:55 +0000
@@ -30,13 +30,17 @@
 _testing_dir = os.path.dirname(os.path.abspath(__file__))
 _top_dir = os.path.dirname(_testing_dir)
 _overrides_dir = os.path.join(_testing_dir, 'overrides')
+_bin_dir = os.path.join(_testing_dir, 'overrides', 'bin')
 
 # Adjust python path for duplicity and override modules
-sys.path = [_overrides_dir, _top_dir] + sys.path
+sys.path = [_overrides_dir, _top_dir, _bin_dir] + sys.path
 
 # Also set PYTHONPATH for any subprocesses
 os.environ['PYTHONPATH'] = _overrides_dir + ":" + _top_dir + ":" + os.environ.get('PYTHONPATH', '')
 
+# And PATH for any subprocesses
+os.environ['PATH'] = _bin_dir + ":" + os.environ.get('PATH', '')
+
 # Now set some variables that help standardize test behavior
 os.environ['LANG'] = ''
 os.environ['GNUPGHOME'] = os.path.join(_testing_dir, 'gnupg')

=== modified file 'testing/functional/__init__.py'
--- testing/functional/__init__.py	2014-04-25 23:53:46 +0000
+++ testing/functional/__init__.py	2014-04-28 02:49:55 +0000
@@ -83,13 +83,15 @@
             child.expect('passphrase.*:')
             child.sendline(passphrase)
         child.wait()
+
         return_val = child.exitstatus
+        #output = child.read()
+        #print "Ran duplicity command: ", cmdline, "\n with return_val: ", return_val, "\n and output:\n", output
 
-        #print "Ran duplicity command: ", cmdline, "\n with return_val: ", child.exitstatus
         if fail:
-            self.assertEqual(30, child.exitstatus)
+            self.assertEqual(30, return_val)
         elif return_val:
-            raise CmdError(child.exitstatus)
+            raise CmdError(return_val)
 
     def backup(self, type, input_dir, options=[], **kwargs):
         """Run duplicity backup to default directory"""

=== modified file 'testing/functional/test_badupload.py'
--- testing/functional/test_badupload.py	2014-04-20 05:58:47 +0000
+++ testing/functional/test_badupload.py	2014-04-28 02:49:55 +0000
@@ -37,7 +37,7 @@
             self.backup("full", "testfiles/dir1", options=["--skip-volume=1"])
             self.fail()
         except CmdError as e:
-            self.assertEqual(e.exit_status, 44)
+            self.assertEqual(e.exit_status, 44, str(e))
 
 if __name__ == "__main__":
     unittest.main()

=== modified file 'testing/manual/backendtest.py' (properties changed: -x to +x)
--- testing/manual/backendtest.py	2013-07-19 22:38:23 +0000
+++ testing/manual/backendtest.py	2014-04-28 02:49:55 +0000
@@ -1,3 +1,4 @@
+#!/usr/bin/env python
 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
 #
 # Copyright 2002 Ben Escoto <ben@xxxxxxxxxxx>
@@ -19,297 +20,214 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-import config
-import sys, unittest, os
-sys.path.insert(0, "../")
+import os
+import sys
+import unittest
 
+_top_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..')
+sys.path.insert(0, _top_dir)
+try:
+    from testing.manual import config
+except ImportError:
+    # It's OK to not have copied config.py.tmpl over yet, if user is just
+    # calling us directly to test a specific backend.  If they aren't, we'll
+    # fail later when config.blah is used.
+    pass
+from testing.unit.test_backend_instance import BackendInstanceBase
 import duplicity.backend
-try:
-    import duplicity.backends.giobackend
-    gio_available = True
-except Exception:
-    gio_available = False
-from duplicity.errors import * #@UnusedWildImport
-from duplicity import path, file_naming, dup_time, globals, gpg
-
-config.setup()
-
-class UnivTest:
-    """Contains methods that help test any backend"""
-    def del_tmp(self):
-        """Remove all files from test directory"""
-        config.set_environ("FTP_PASSWORD", self.password)
-        backend = duplicity.backend.get_backend(self.url_string)
-        backend.delete(backend.list())
-        backend.close()
-        """Delete and create testfiles/output"""
-        assert not os.system("rm -rf testfiles/output")
-        assert not os.system("mkdir testfiles/output")
-
-    def test_basic(self):
-        """Test basic backend operations"""
-        if not self.url_string:
-            print "No URL for test %s...skipping... " % self.my_test_id,
-            return 0
-        config.set_environ("FTP_PASSWORD", self.password)
-        self.del_tmp()
-        self.try_basic(duplicity.backend.get_backend(self.url_string))
-
-    def test_fileobj_ops(self):
-        """Test fileobj operations"""
-        if not self.url_string:
-            print "No URL for test %s...skipping... " % self.my_test_id,
-            return 0
-        config.set_environ("FTP_PASSWORD", self.password)
-        self.try_fileobj_ops(duplicity.backend.get_backend(self.url_string))
-
-    def try_basic(self, backend):
-        """Try basic operations with given backend.
-
-        Requires backend be empty at first, and all operations are
-        allowed.
-
-        """
-        def cmp_list(l):
-            """Assert that backend.list is same as l"""
-            blist = backend.list()
-            blist.sort()
-            l.sort()
-            assert blist == l, \
-                   ("Got list: %s  Wanted: %s\n" % (repr(blist), repr(l)))
-
-        # Identify test that's running
-        print self.my_test_id, "... ",
-
-        assert not os.system("rm -rf testfiles/backend_tmp")
-        assert not os.system("mkdir testfiles/backend_tmp")
-
-        regpath = path.Path("testfiles/various_file_types/regular_file")
-        normal_file = "testfile"
-        colonfile = ("file%swith.%scolons_-and%s%setc" %
-                     ((globals.time_separator,) * 4))
-        tmpregpath = path.Path("testfiles/backend_tmp/regfile")
-
-        # Test list and put
-        cmp_list([])
-        backend.put(regpath, normal_file)
-        cmp_list([normal_file])
-        backend.put(regpath, colonfile)
-        cmp_list([normal_file, colonfile])
-
-        # Test get
-        regfilebuf = regpath.open("rb").read()
-        backend.get(colonfile, tmpregpath)
-        backendbuf = tmpregpath.open("rb").read()
-        assert backendbuf == regfilebuf
-
-        # Test delete
-        backend.delete([colonfile, normal_file])
-        cmp_list([])
-
-    def try_fileobj_filename(self, backend, filename):
-        """Use get_fileobj_write and get_fileobj_read on filename around"""
-        fout = backend.get_fileobj_write(filename)
-        fout.write("hello, world!")
-        fout.close()
-        assert filename in backend.list()
-
-        fin = backend.get_fileobj_read(filename)
-        buf = fin.read()
-        fin.close()
-        assert buf == "hello, world!", buf
-
-        backend.delete ([filename])
-
-    def try_fileobj_ops(self, backend):
-        """Test above try_fileobj_filename with a few filenames"""
-        # Must set dup_time strings because they are used by file_naming
-        dup_time.setcurtime(2000)
-        dup_time.setprevtime(1000)
-        # Also set profile for encryption
-        globals.gpg_profile = gpg.GPGProfile(passphrase = "foobar")
-
-        filename1 = file_naming.get('full', manifest = 1, gzipped = 1)
-        self.try_fileobj_filename(backend, filename1)
-
-        filename2 = file_naming.get('new-sig', encrypted = 1)
-        self.try_fileobj_filename(backend, filename2)
-
-
-class LocalTest(unittest.TestCase, UnivTest):
-    """ Test the Local backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "local"
-    url_string = config.file_url
-    password = config.file_password
-
-
-class scpTest(unittest.TestCase, UnivTest):
-    """ Test the SSH backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "ssh/scp"
-    url_string = config.ssh_url
-    password = config.ssh_password
-
-
-class ftpTest(unittest.TestCase, UnivTest):
-    """ Test the ftp backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "ftp"
-    url_string = config.ftp_url
-    password = config.ftp_password
-
-
-class ftpsTest(unittest.TestCase, UnivTest):
-    """ Test the ftp backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "ftps"
-    url_string = config.ftp_url
-    password = config.ftp_password
-
-
-class gsModuleTest(unittest.TestCase, UnivTest):
-    """ Test the gs module backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "gs/boto"
-    url_string = config.gs_url
-    password = None
-
-
-class rsyncAbsPathTest(unittest.TestCase, UnivTest):
-    """ Test the rsync abs path backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "rsync_abspath"
-    url_string = config.rsync_abspath_url
-    password = config.rsync_password
-
-
-class rsyncRelPathTest(unittest.TestCase, UnivTest):
-    """ Test the rsync relative path backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "rsync_relpath"
-    url_string = config.rsync_relpath_url
-    password = config.rsync_password
-
-
-class rsyncModuleTest(unittest.TestCase, UnivTest):
-    """ Test the rsync module backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "rsync_module"
-    url_string = config.rsync_module_url
-    password = config.rsync_password
-
-
-class s3ModuleTest(unittest.TestCase, UnivTest):
-    """ Test the s3 module backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "s3/boto"
-    url_string = config.s3_url
-    password = None
-
-
-class webdavModuleTest(unittest.TestCase, UnivTest):
-    """ Test the webdav module backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "webdav"
-    url_string = config.webdav_url
-    password = config.webdav_password
-
-
-class webdavsModuleTest(unittest.TestCase, UnivTest):
-    """ Test the webdavs module backend """
-    def setUp(self):
-        assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-    def tearDown(self):
-        assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-    my_test_id = "webdavs"
-    url_string = config.webdavs_url
-    password = config.webdavs_password
-
-
-#if gio_available:
-    class GIOTest(UnivTest):
-        """ Generic gio module backend class """
-        def setUp(self):
-            duplicity.backend.force_backend(duplicity.backends.giobackend.GIOBackend)
-            assert not os.system("tar xzf testfiles.tar.gz > /dev/null 2>&1")
-
-        def tearDown(self):
-            duplicity.backend.force_backend(None)
-            assert not os.system("rm -rf testfiles tempdir temp2.tar")
-
-
-    class gioFileModuleTest(GIOTest, unittest.TestCase):
-        """ Test the gio file module backend """
-        my_test_id = "gio/file"
-        url_string = config.file_url
-        password = config.file_password
-
-
-    class gioSSHModuleTest(GIOTest, unittest.TestCase):
-        """ Test the gio ssh module backend """
-        my_test_id = "gio/ssh"
-        url_string = config.ssh_url
-        password = config.ssh_password
-
-
-    class gioFTPModuleTest(GIOTest, unittest.TestCase):
-        """ Test the gio ftp module backend """
-        my_test_id = "gio/ftp"
-        url_string = config.ftp_url
-        password = config.ftp_password
+
+# undo the overrides support that our testing framework adds
+sys.path = [x for x in sys.path if '/overrides' not in x]
+os.environ['PATH'] = ':'.join([x for x in os.environ['PATH'].split(':')
+                               if '/overrides' not in x])
+os.environ['PYTHONPATH'] = ':'.join([x for x in os.environ['PYTHONPATH'].split(':')
+                                     if '/overrides' not in x])
+
+
+class ManualBackendBase(BackendInstanceBase):
+
+    url_string = None
+    password = None
+
+    def setUp(self):
+        super(ManualBackendBase, self).setUp()
+        self.set_global('num_retries', 1)
+        self.setBackendInfo()
+        if self.password is not None:
+            self.set_environ("FTP_PASSWORD", self.password)
+        if self.url_string is not None:
+            self.backend = duplicity.backend.get_backend_object(self.url_string)
+
+        # Clear out backend first
+        if self.backend is not None:
+            if hasattr(self.backend, '_delete_list'):
+                self.backend._delete_list(self.backend._list())
+            else:
+                for x in self.backend._list():
+                    self.backend._delete(x)
+
+    def setBackendInfo(self):
+        pass
+
+
+class sshParamikoTest(ManualBackendBase):
+    def setBackendInfo(self):
+        from duplicity.backends import _ssh_paramiko
+        duplicity.backend._backends['ssh'] = _ssh_paramiko.SSHParamikoBackend
+        self.set_global('use_scp', False)
+        self.url_string = config.ssh_url
+        self.password = config.ssh_password
+
+
+class sshParamikoScpTest(ManualBackendBase):
+    def setBackendInfo(self):
+        from duplicity.backends import _ssh_paramiko
+        duplicity.backend._backends['ssh'] = _ssh_paramiko.SSHParamikoBackend
+        self.set_global('use_scp', True)
+        self.url_string = config.ssh_url
+        self.password = config.ssh_password
+
+
+class sshPexpectTest(ManualBackendBase):
+    def setBackendInfo(self):
+        from duplicity.backends import _ssh_pexpect
+        duplicity.backend._backends['ssh'] = _ssh_pexpect.SSHPExpectBackend
+        self.set_global('use_scp', False)
+        self.url_string = config.ssh_url
+        self.password = config.ssh_password
+
+
+class sshPexpectScpTest(ManualBackendBase):
+    def setBackendInfo(self):
+        from duplicity.backends import _ssh_pexpect
+        duplicity.backend._backends['ssh'] = _ssh_pexpect.SSHPExpectBackend
+        self.set_global('use_scp', True)
+        self.url_string = config.ssh_url
+        self.password = config.ssh_password
+
+
+class ftpTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.ftp_url
+        self.password = config.ftp_password
+
+
+class ftpsTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.ftp_url.replace('ftp://', 'ftps://') if config.ftp_url else None
+        self.password = config.ftp_password
+
+
+class gsTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.gs_url
+        self.set_environ("GS_ACCESS_KEY_ID", config.gs_access_key)
+        self.set_environ("GS_SECRET_ACCESS_KEY", config.gs_secret_key)
+
+
+class s3SingleTest(ManualBackendBase):
+    def setBackendInfo(self):
+        from duplicity.backends import _boto_single
+        duplicity.backend._backends['s3+http'] = _boto_single.BotoBackend
+        self.set_global('s3_use_new_style', True)
+        self.set_environ("AWS_ACCESS_KEY_ID", config.s3_access_key)
+        self.set_environ("AWS_SECRET_ACCESS_KEY", config.s3_secret_key)
+        self.url_string = config.s3_url
+
+
+class s3MultiTest(ManualBackendBase):
+    def setBackendInfo(self):
+        from duplicity.backends import _boto_multi
+        duplicity.backend._backends['s3+http'] = _boto_multi.BotoBackend
+        self.set_global('s3_use_new_style', True)
+        self.set_environ("AWS_ACCESS_KEY_ID", config.s3_access_key)
+        self.set_environ("AWS_SECRET_ACCESS_KEY", config.s3_secret_key)
+        self.url_string = config.s3_url
+
+
+class cfCloudfilesTest(ManualBackendBase):
+    def setBackendInfo(self):
+        from duplicity.backends import _cf_cloudfiles
+        duplicity.backend._backends['cf+http'] = _cf_cloudfiles.CloudFilesBackend
+        self.set_environ("CLOUDFILES_USERNAME", config.cf_username)
+        self.set_environ("CLOUDFILES_APIKEY", config.cf_api_key)
+        self.url_string = config.cf_url
+
+
+class cfPyraxTest(ManualBackendBase):
+    def setBackendInfo(self):
+        from duplicity.backends import _cf_pyrax
+        duplicity.backend._backends['cf+http'] = _cf_pyrax.PyraxBackend
+        self.set_environ("CLOUDFILES_USERNAME", config.cf_username)
+        self.set_environ("CLOUDFILES_APIKEY", config.cf_api_key)
+        self.url_string = config.cf_url
+
+
+class swiftTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.swift_url
+        self.set_environ("SWIFT_USERNAME", config.swift_username)
+        self.set_environ("SWIFT_PASSWORD", config.swift_password)
+        self.set_environ("SWIFT_TENANTNAME", config.swift_tenant)
+        # Assumes you're just using the same storage as your cloudfiles config above
+        self.set_environ("SWIFT_AUTHURL", 'https://identity.api.rackspacecloud.com/v2.0/')
+        self.set_environ("SWIFT_AUTHVERSION", '2')
+
+
+class megaTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.mega_url
+        self.password = config.mega_password
+
+
+class webdavTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.webdav_url
+        self.password = config.webdav_password
+
+
+class webdavsTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.webdavs_url
+        self.password = config.webdavs_password
+        self.set_global('ssl_no_check_certificate', True)
+
+
+class gdocsTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.gdocs_url
+        self.password = config.gdocs_password
+
+
+class dpbxTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.dpbx_url
+
+
+class imapTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = config.imap_url
+        self.set_environ("IMAP_PASSWORD", config.imap_password)
+        self.set_global('imap_mailbox', 'deja-dup-testing')
+
+
+class gioSSHTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = 'gio+' + config.ssh_url if config.ssh_url else None
+        self.password = config.ssh_password
+
+
+class gioFTPTest(ManualBackendBase):
+    def setBackendInfo(self):
+        self.url_string = 'gio+' + config.ftp_url if config.ftp_url else None
+        self.password = config.ftp_password
+
 
 if __name__ == "__main__":
-    unittest.main()
+    defaultTest = None
+    if len(sys.argv) > 1:
+        class manualTest(ManualBackendBase):
+            def setBackendInfo(self):
+                self.url_string = sys.argv[1]
+        defaultTest = 'manualTest'
+    unittest.main(argv=[sys.argv[0]], defaultTest=defaultTest)

=== modified file 'testing/manual/config.py.tmpl'
--- testing/manual/config.py.tmpl	2013-07-26 21:10:20 +0000
+++ testing/manual/config.py.tmpl	2014-04-28 02:49:55 +0000
@@ -19,48 +19,8 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-import sys, os
-testing = os.path.dirname(sys.argv[0])
-newpath = os.path.abspath(os.path.join(testing, "../../."))
-sys.path.insert(0, newpath)
-
-import gettext
-gettext.install('duplicity', codeset='utf8')
-
-from duplicity import globals
-from duplicity import log
-from duplicity import backend
-from duplicity.backends import localbackend
-
-# config for duplicity unit tests
-
-# verbosity, default is log.WARNING
-verbosity = log.WARNING
-
-# to test GPG and friends
-# these must be without passwords
-encrypt_key1 = ""
-encrypt_key2 = ""
-
-# password required on this one
-sign_key = ""
-sign_passphrase = ""
-
 # URLs for testing
-# NOTE: if the ***_url is None or "" the test
-# is skipped and is noted in the test results.
-
-file_url = "file:///tmp/testdup"
-file_password = None
-
-# To set up rsyncd for test:
-# /etc/rsyncd.conf contains
-# [testdup]
-# path = /tmp/testdup
-# comment = Test area for duplicity
-# read only = false
-#
-# NOTE: chmod 777 /tmp/testdup
+# NOTE: if the ***_url is None, the test is skipped
 
 ftp_url = None
 ftp_password = None
@@ -81,6 +41,23 @@
 s3_access_key = None
 s3_secret_key = None
 
+cf_url = None
+cf_username = None
+cf_api_key = None
+
+swift_url = None
+swift_tenant = None
+swift_username = None
+swift_password = None
+
+dpbx_url = None
+
+imap_url = None
+imap_password = None
+
+mega_url = None
+mega_password = None
+
 webdav_url = None
 webdav_password = None
 
@@ -89,48 +66,3 @@
 
 gdocs_url = None
 gdocs_password = None
-
-def setup():
-    """ setup for unit tests """
-    # The following is for starting remote debugging in Eclipse with Pydev.
-    # Adjust the path to your location and version of Eclipse and Pydev.  Comment out
-    # to run normally, or this process will hang at pydevd.settrace() waiting for the
-    # remote debugger to start.
-#    pysrc = "/opt/Aptana Studio 2/plugins/org.python.pydev.debug_2.1.0.2011052613/pysrc/"
-#    sys.path.append(pysrc)
-#    import pydevd #@UnresolvedImport
-#    pydevd.settrace()
-    # end remote debugger startup
-
-    log.setup()
-    log.setverbosity(verbosity)
-    globals.print_statistics = 0
-
-    globals.num_retries = 2
-
-    backend.import_backends()
-
-    set_environ("FTP_PASSWORD", None)
-    set_environ("PASSPHRASE", None)
-    if gs_access_key:
-        set_environ("GS_ACCESS_KEY_ID", gs_access_key)
-        set_environ("GS_SECRET_ACCESS_KEY", gs_secret_key)
-    else:
-        set_environ("GS_ACCESS_KEY_ID", None)
-        set_environ("GS_SECRET_ACCESS_KEY", None)
-    if s3_access_key:
-        set_environ("AWS_ACCESS_KEY_ID", s3_access_key)
-        set_environ("AWS_SECRET_ACCESS_KEY", s3_secret_key)
-    else:
-        set_environ("AWS_ACCESS_KEY_ID", None)
-        set_environ("AWS_SECRET_ACCESS_KEY", None)
-
-
-def set_environ(varname, value):
-    if value is not None:
-        os.environ[varname] = value
-    else:
-        try:
-            del os.environ[varname]
-        except Exception:
-            pass
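For reference, a filled-in copy of the simplified template might look like the sketch below. All values are hypothetical placeholders (not real credentials or hosts); per the NOTE above, any `*_url` left as `None` simply skips the corresponding test.

```python
# Hypothetical example values for the simplified config.py template.
# A *_url of None means the matching backend test is skipped.
ftp_url = "ftp://user@ftp.example.com/testdup"
ftp_password = "secret"

imap_url = "imap://user@mail.example.com"
imap_password = "secret"

# Backends without credentials stay disabled:
mega_url = None
mega_password = None
```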

=== added directory 'testing/overrides/bin'
=== added file 'testing/overrides/bin/hsi'
--- testing/overrides/bin/hsi	1970-01-01 00:00:00 +0000
+++ testing/overrides/bin/hsi	2014-04-28 02:49:55 +0000
@@ -0,0 +1,16 @@
+#!/usr/bin/env python
+
+import subprocess
+import sys
+
+cmd = sys.argv[1].split()
+if cmd[0] == 'get':
+    cmd = ['cp', cmd[3], cmd[1]]
+elif cmd[0] == 'put':
+    cmd = ['cp', cmd[1], cmd[3]]
+elif cmd[0] == 'ls':
+    # output some expected header lines
+    print >> sys.stderr, 'one'
+    print >> sys.stderr, 'two'
+
+sys.exit(subprocess.call(' '.join(cmd) + ' 1>&2', shell=True))
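The override above turns hsi's single-string commands into plain `cp` calls. A standalone sketch of that translation (Python 3 here, mirroring the mock's parsing; the command strings are illustrative):

```python
def translate_hsi(command):
    """Map an hsi-style command string to an equivalent local command.

    Mirrors the mock override: 'get LOCAL : REMOTE' copies remote to
    local, 'put LOCAL : REMOTE' copies local to remote; anything else
    is passed through unchanged.
    """
    cmd = command.split()
    if cmd[0] == 'get':
        return ['cp', cmd[3], cmd[1]]   # copy remote -> local
    if cmd[0] == 'put':
        return ['cp', cmd[1], cmd[3]]   # copy local -> remote
    return cmd

print(translate_hsi('get local.file : remote.file'))
# ['cp', 'remote.file', 'local.file']
```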

=== added file 'testing/overrides/bin/lftp'
--- testing/overrides/bin/lftp	1970-01-01 00:00:00 +0000
+++ testing/overrides/bin/lftp	2014-04-28 02:49:55 +0000
@@ -0,0 +1,23 @@
+#!/usr/bin/env python
+
+import subprocess
+import sys
+
+if sys.argv[1] == '--version':
+    print 'blah | blah 1.0'
+    sys.exit(0)
+
+command = ""
+for cmd in [x.split() for x in sys.argv[2].split(';')]:
+    if cmd[0] == 'source':
+        continue
+
+    if cmd[0] == 'get':
+        cmd = ['cp', cmd[1], cmd[3]]
+    elif cmd[0] == 'put':
+        cmd = ['cp', cmd[1], cmd[3]]
+
+    command += ' '.join(cmd) + ';'
+
+sys.exit(subprocess.call('/bin/sh -c "%s"' % command, shell=True))

=== added file 'testing/overrides/bin/ncftpget'
--- testing/overrides/bin/ncftpget	1970-01-01 00:00:00 +0000
+++ testing/overrides/bin/ncftpget	2014-04-28 02:49:55 +0000
@@ -0,0 +1,6 @@
+#!/usr/bin/env python
+
+import subprocess
+import sys
+
+sys.exit(subprocess.call(['cp', sys.argv[-2], sys.argv[-1]]))

=== added file 'testing/overrides/bin/ncftpls'
--- testing/overrides/bin/ncftpls	1970-01-01 00:00:00 +0000
+++ testing/overrides/bin/ncftpls	2014-04-28 02:49:55 +0000
@@ -0,0 +1,19 @@
+#!/usr/bin/env python
+
+from duplicity.backend import ParsedUrl
+import os
+import subprocess
+import sys
+
+if sys.argv[1] == '-v':
+    print 'blah 3.2.1'
+    sys.exit(8)  # really, the backend expects 8 as the return
+
+for arg in sys.argv:
+    if arg.startswith('DELE '):
+        pu = ParsedUrl(sys.argv[-1])
+        filename = os.path.join(pu.path.lstrip('/'), arg.split()[1])
+        sys.exit(subprocess.call(['rm', filename]))
+
+pu = ParsedUrl(sys.argv[-1])
+sys.exit(subprocess.call(['ls', '-l', pu.path.lstrip('/')]))

=== added file 'testing/overrides/bin/ncftpput'
--- testing/overrides/bin/ncftpput	1970-01-01 00:00:00 +0000
+++ testing/overrides/bin/ncftpput	2014-04-28 02:49:55 +0000
@@ -0,0 +1,6 @@
+#!/usr/bin/env python
+
+import subprocess
+import sys
+
+sys.exit(subprocess.call(['cp', sys.argv[-2], sys.argv[-1]]))

=== added file 'testing/overrides/bin/tahoe'
--- testing/overrides/bin/tahoe	1970-01-01 00:00:00 +0000
+++ testing/overrides/bin/tahoe	2014-04-28 02:49:55 +0000
@@ -0,0 +1,10 @@
+#!/usr/bin/env python
+
+import subprocess
+import sys
+
+cmd = []
+for arg in sys.argv[1:]:
+    cmd.append(arg.replace('testfiles:', 'testfiles/'))
+
+sys.exit(subprocess.call(cmd))

=== modified file 'testing/unit/test_backend.py'
--- testing/unit/test_backend.py	2014-04-20 05:58:47 +0000
+++ testing/unit/test_backend.py	2014-04-28 02:49:55 +0000
@@ -19,11 +19,14 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
+import mock
 import unittest
 
 import duplicity.backend
 import duplicity.backends #@UnusedImport
 from duplicity.errors import * #@UnusedWildImport
+from duplicity import globals
+from duplicity import path
 from . import UnitTestCase
 
 
@@ -134,5 +137,130 @@
                           "foo://foo@bar:pass@xxxxxxxxxxx/home")
 
 
+class BackendWrapperTest(UnitTestCase):
+
+    def setUp(self):
+        super(BackendWrapperTest, self).setUp()
+        self.mock = mock.MagicMock()
+        self.backend = duplicity.backend.BackendWrapper(self.mock)
+        self.local = mock.MagicMock()
+        self.remote = 'remote'
+
+    @mock.patch('sys.exit')
+    def test_default_error_exit(self, exit_mock):
+        self.set_global('num_retries', 1)
+        del self.mock._error_code
+        self.mock._put.side_effect = Exception
+        self.backend.put(self.local, self.remote)
+        exit_mock.assert_called_once_with(50)
+
+    @mock.patch('sys.exit')
+    def test_translates_code(self, exit_mock):
+        self.set_global('num_retries', 1)
+        self.mock._error_code.return_value = 12345
+        self.mock._put.side_effect = Exception
+        self.backend.put(self.local, self.remote)
+        exit_mock.assert_called_once_with(12345)
+
+    @mock.patch('sys.exit')
+    def test_uses_exception_code(self, exit_mock):
+        self.set_global('num_retries', 1)
+        self.mock._error_code.return_value = 12345
+        self.mock._put.side_effect = BackendException('error', code=54321)
+        self.backend.put(self.local, self.remote)
+        exit_mock.assert_called_once_with(54321)
+
+    @mock.patch('sys.exit')
+    @mock.patch('time.sleep')  # so no waiting
+    def test_cleans_up(self, time_mock, exit_mock):
+        self.set_global('num_retries', 2)
+        self.mock._retry_cleanup.return_value = None
+        self.mock._put.side_effect = Exception
+        self.backend.put(self.local, self.remote)
+        self.mock._retry_cleanup.assert_called_once_with()
+
+    def test_prefer_lists(self):
+        self.mock._delete.return_value = None
+        self.mock._delete_list.return_value = None
+        self.backend.delete([self.remote])
+        self.assertEqual(self.mock._delete.call_count, 0)
+        self.assertEqual(self.mock._delete_list.call_count, 1)
+        del self.mock._delete_list
+        self.backend.delete([self.remote])
+        self.assertEqual(self.mock._delete.call_count, 1)
+
+        self.mock._query.return_value = None
+        self.mock._query_list.return_value = None
+        self.backend.query_info([self.remote])
+        self.assertEqual(self.mock._query.call_count, 0)
+        self.assertEqual(self.mock._query_list.call_count, 1)
+        del self.mock._query_list
+        self.backend.query_info([self.remote])
+        self.assertEqual(self.mock._query.call_count, 1)
+
+    @mock.patch('sys.exit')
+    @mock.patch('time.sleep')  # so no waiting
+    def test_retries(self, time_mock, exit_mock):
+        self.set_global('num_retries', 2)
+
+        self.mock._get.side_effect = Exception
+        self.backend.get(self.remote, self.local)
+        self.assertEqual(self.mock._get.call_count, globals.num_retries)
+
+        self.mock._put.side_effect = Exception
+        self.backend.put(self.local, self.remote)
+        self.assertEqual(self.mock._put.call_count, globals.num_retries)
+
+        self.mock._list.side_effect = Exception
+        self.backend.list()
+        self.assertEqual(self.mock._list.call_count, globals.num_retries)
+
+        self.mock._delete_list.side_effect = Exception
+        self.backend.delete([self.remote])
+        self.assertEqual(self.mock._delete_list.call_count, globals.num_retries)
+
+        self.mock._query_list.side_effect = Exception
+        self.backend.query_info([self.remote])
+        self.assertEqual(self.mock._query_list.call_count, globals.num_retries)
+
+        del self.mock._delete_list
+        self.mock._delete.side_effect = Exception
+        self.backend.delete([self.remote])
+        self.assertEqual(self.mock._delete.call_count, globals.num_retries)
+
+        del self.mock._query_list
+        self.mock._query.side_effect = Exception
+        self.backend.query_info([self.remote])
+        self.assertEqual(self.mock._query.call_count, globals.num_retries)
+
+        self.mock._move.side_effect = Exception
+        self.backend.move(self.local, self.remote)
+        self.assertEqual(self.mock._move.call_count, globals.num_retries)
+
+    def test_move(self):
+        self.mock._move.return_value = True
+        self.backend.move(self.local, self.remote)
+        self.mock._move.assert_called_once_with(self.local, self.remote)
+        self.assertEqual(self.mock._put.call_count, 0)
+
+    def test_move_fallback_false(self):
+        self.mock._move.return_value = False
+        self.backend.move(self.local, self.remote)
+        self.mock._move.assert_called_once_with(self.local, self.remote)
+        self.mock._put.assert_called_once_with(self.local, self.remote)
+        self.local.delete.assert_called_once_with()
+
+    def test_move_fallback_undefined(self):
+        del self.mock._move
+        self.backend.move(self.local, self.remote)
+        self.mock._put.assert_called_once_with(self.local, self.remote)
+        self.local.delete.assert_called_once_with()
+
+    def test_close(self):
+        self.mock._close.return_value = None
+        self.backend.close()
+        self.mock._close.assert_called_once_with()
+
+
 if __name__ == "__main__":
     unittest.main()

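The retry behaviour that `BackendWrapperTest` pins down can be sketched roughly as follows. This is a simplified model, not the actual `BackendWrapper` implementation; the helper name and the back-off interval are assumptions made for illustration:

```python
import time


def with_retries(operation, num_retries, sleep=time.sleep):
    """Call operation() up to num_retries times, sleeping between attempts.

    Simplified model of the wrapper layer: backends just raise, and this
    layer retries and finally re-raises the last error for reporting.
    """
    last_error = None
    for attempt in range(num_retries):
        try:
            return operation()
        except Exception as e:
            last_error = e
            if attempt + 1 < num_retries:
                sleep(10)  # assumed back-off interval
    raise last_error


# Example: an operation that fails once, then succeeds on the retry.
attempts = []

def flaky():
    attempts.append(1)
    if len(attempts) < 2:
        raise IOError("transient")
    return "ok"

print(with_retries(flaky, num_retries=2, sleep=lambda s: None))  # ok
```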
=== added file 'testing/unit/test_backend_instance.py'
--- testing/unit/test_backend_instance.py	1970-01-01 00:00:00 +0000
+++ testing/unit/test_backend_instance.py	2014-04-28 02:49:55 +0000
@@ -0,0 +1,241 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+#
+# Copyright 2014 Canonical Ltd
+#
+# This file is part of duplicity.
+#
+# Duplicity is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; either version 2 of the License, or (at your
+# option) any later version.
+#
+# Duplicity is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with duplicity; if not, write to the Free Software Foundation,
+# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import os
+import StringIO
+import unittest
+
+import duplicity.backend
+from duplicity import log
+from duplicity import path
+from duplicity.errors import BackendException
+from . import UnitTestCase
+
+
+class BackendInstanceBase(UnitTestCase):
+
+    def setUp(self):
+        UnitTestCase.setUp(self)
+        assert not os.system("rm -rf testfiles")
+        os.makedirs('testfiles')
+        self.backend = None
+        self.local = path.Path('testfiles/local')
+        self.local.writefileobj(StringIO.StringIO("hello"))
+
+    def tearDown(self):
+        if self.backend is None:
+            return
+        if hasattr(self.backend, '_close'):
+            self.backend._close()
+
+    def test_get(self):
+        if self.backend is None:
+            return
+        self.backend._put(self.local, 'a')
+        getfile = path.Path('testfiles/getfile')
+        self.backend._get('a', getfile)
+        self.assertTrue(self.local.compare_data(getfile))
+
+    def test_list(self):
+        if self.backend is None:
+            return
+        self.backend._put(self.local, 'a')
+        self.backend._put(self.local, 'b')
+        # It's OK for backends to create files as a side effect of put (e.g.
+        # the par2 backend does), so only check that at least a and b exist.
+        self.assertTrue('a' in self.backend._list())
+        self.assertTrue('b' in self.backend._list())
+
+    def test_delete(self):
+        if self.backend is None:
+            return
+        if not hasattr(self.backend, '_delete'):
+            self.assertTrue(hasattr(self.backend, '_delete_list'))
+            return
+        self.backend._put(self.local, 'a')
+        self.backend._put(self.local, 'b')
+        self.backend._delete('a')
+        self.assertFalse('a' in self.backend._list())
+        self.assertTrue('b' in self.backend._list())
+
+    def test_delete_clean(self):
+        if self.backend is None:
+            return
+        if not hasattr(self.backend, '_delete'):
+            self.assertTrue(hasattr(self.backend, '_delete_list'))
+            return
+        self.backend._put(self.local, 'a')
+        self.backend._delete('a')
+        self.assertEqual(self.backend._list(), [])
+
+    def test_delete_missing(self):
+        if self.backend is None:
+            return
+        if not hasattr(self.backend, '_delete'):
+            self.assertTrue(hasattr(self.backend, '_delete_list'))
+            return
+        # Backends can either silently ignore this, or throw an error
+        # that gives log.ErrorCode.backend_not_found.
+        try:
+            self.backend._delete('a')
+        except BackendException as e:
+            pass  # Something went wrong, but it was an 'expected' something
+        except Exception as e:
+            code = duplicity.backend._get_code_from_exception(self.backend, 'delete', e)
+            self.assertEqual(code, log.ErrorCode.backend_not_found)
+
+    def test_delete_list(self):
+        if self.backend is None:
+            return
+        if not hasattr(self.backend, '_delete_list'):
+            self.assertTrue(hasattr(self.backend, '_delete'))
+            return
+        self.backend._put(self.local, 'a')
+        self.backend._put(self.local, 'b')
+        self.backend._put(self.local, 'c')
+        self.backend._delete_list(['a', 'd', 'c'])
+        files = self.backend._list()
+        self.assertFalse('a' in files, files)
+        self.assertTrue('b' in files, files)
+        self.assertFalse('c' in files, files)
+
+    def test_move(self):
+        if self.backend is None:
+            return
+        if not hasattr(self.backend, '_move'):
+            return
+
+        copy = path.Path('testfiles/copy')
+        self.local.copy(copy)
+
+        self.backend._move(self.local, 'a')
+        self.assertTrue('a' in self.backend._list())
+        self.assertFalse(self.local.exists())
+
+        getfile = path.Path('testfiles/getfile')
+        self.backend._get('a', getfile)
+        self.assertTrue(copy.compare_data(getfile))
+
+    def test_query_exists(self):
+        if self.backend is None:
+            return
+        if not hasattr(self.backend, '_query'):
+            return
+        self.backend._put(self.local, 'a')
+        info = self.backend._query('a')
+        self.assertEqual(info['size'], self.local.getsize())
+
+    def test_query_missing(self):
+        if self.backend is None:
+            return
+        if not hasattr(self.backend, '_query'):
+            return
+        # Backends can either return -1 themselves, or throw an error
+        # that gives log.ErrorCode.backend_not_found.
+        try:
+            info = self.backend._query('a')
+        except BackendException as e:
+            pass  # Something went wrong, but it was an 'expected' something
+        except Exception as e:
+            code = duplicity.backend._get_code_from_exception(self.backend, 'query', e)
+            self.assertEqual(code, log.ErrorCode.backend_not_found)
+        else:
+            self.assertEqual(info['size'], -1)
+
+    def test_query_list(self):
+        if self.backend is None:
+            return
+        if not hasattr(self.backend, '_query_list'):
+            return
+        self.backend._put(self.local, 'a')
+        self.backend._put(self.local, 'c')
+        info = self.backend._query_list(['a', 'b'])
+        self.assertEqual(info['a']['size'], self.local.getsize())
+        self.assertEqual(info['b']['size'], -1)
+        self.assertFalse('c' in info)
+
+
+class LocalBackendTest(BackendInstanceBase):
+    def setUp(self):
+        super(LocalBackendTest, self).setUp()
+        url = 'file://testfiles/output'
+        self.backend = duplicity.backend.get_backend_object(url)
+        self.assertEqual(self.backend.__class__.__name__, 'LocalBackend')
+
+
+class Par2BackendTest(BackendInstanceBase):
+    def setUp(self):
+        super(Par2BackendTest, self).setUp()
+        url = 'par2+file://testfiles/output'
+        self.backend = duplicity.backend.get_backend_object(url)
+        self.assertEqual(self.backend.__class__.__name__, 'Par2Backend')
+
+    # TODO: Add par2-specific tests here, to confirm that we can recover from
+    # a missing file
+
+
+class RsyncBackendTest(BackendInstanceBase):
+    def setUp(self):
+        super(RsyncBackendTest, self).setUp()
+        os.makedirs('testfiles/output')  # rsync needs it to exist first
+        url = 'rsync://%s/testfiles/output' % os.getcwd()
+        self.backend = duplicity.backend.get_backend_object(url)
+        self.assertEqual(self.backend.__class__.__name__, 'RsyncBackend')
+
+
+class TahoeBackendTest(BackendInstanceBase):
+    def setUp(self):
+        super(TahoeBackendTest, self).setUp()
+        os.makedirs('testfiles/output')
+        url = 'tahoe://testfiles/output'
+        self.backend = duplicity.backend.get_backend_object(url)
+        self.assertEqual(self.backend.__class__.__name__, 'TAHOEBackend')
+
+
+class HSIBackendTest(BackendInstanceBase):
+    def setUp(self):
+        super(HSIBackendTest, self).setUp()
+        os.makedirs('testfiles/output')
+        # hostname is ignored...  Seemingly on purpose
+        url = 'hsi://hostname%s/testfiles/output' % os.getcwd()
+        self.backend = duplicity.backend.get_backend_object(url)
+        self.assertEqual(self.backend.__class__.__name__, 'HSIBackend')
+
+
+class FTPBackendTest(BackendInstanceBase):
+    def setUp(self):
+        super(FTPBackendTest, self).setUp()
+        os.makedirs('testfiles/output')
+        url = 'ftp://user:pass@hostname/testfiles/output'
+        self.backend = duplicity.backend.get_backend_object(url)
+        self.assertEqual(self.backend.__class__.__name__, 'FTPBackend')
+
+
+class FTPSBackendTest(BackendInstanceBase):
+    def setUp(self):
+        super(FTPSBackendTest, self).setUp()
+        os.makedirs('testfiles/output')
+        url = 'ftps://user:pass@hostname/testfiles/output'
+        self.backend = duplicity.backend.get_backend_object(url)
+        self.assertEqual(self.backend.__class__.__name__, 'FTPSBackend')
+
+
+if __name__ == "__main__":
+    unittest.main()

