duplicity-team team mailing list archive

[Merge] lp:~ed.so/duplicity/0.6-readd_sshpexpect into lp:duplicity

 

edso has proposed merging lp:~ed.so/duplicity/0.6-readd_sshpexpect into lp:duplicity.

Requested reviews:
  duplicity-team (duplicity-team)

For more details, see:
https://code.launchpad.net/~ed.so/duplicity/0.6-readd_sshpexpect/+merge/97297

- re-add the ssh pexpect backend as an alternative
- added an --ssh-backend parameter to switch between paramiko and pexpect (see the usage sketch below)
- manpage
-- updated to reflect the above changes
-- added more backend requirements
- Changelog.GNU: removed duplicated entries
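
As a quick usage sketch for reviewers (not part of the patch; host and paths are hypothetical): selecting the re-added pexpect backend together with a custom scp binary would look roughly like

  duplicity --ssh-backend pexpect --use-scp --scp-command /usr/local/bin/scp /home/me scp://uid@backup.example/some_dir

Leaving --ssh-backend unset keeps the default paramiko backend, in which case --scp-command and --sftp-command are ignored with a warning (see the sshbackend.py dispatcher further down).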
=== modified file 'Changelog.GNU'
--- Changelog.GNU	2012-03-13 11:39:53 +0000
+++ Changelog.GNU	2012-03-13 20:58:22 +0000
@@ -4,14 +4,6 @@
 	
 	add missing_host_key prompt to new sshbackend similar to ssh procedure
 
-2012-03-12  edso
-
-	changelog entry
-
-2012-03-12  edso
-
-	add missing_host_key prompt similar to ssh procedure
-
 2012-03-13  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
 
 	Merged in lp:~carlos-abalde/duplicity/gdocs-backend-gdata-2.0.16.-upgrade.
@@ -28,14 +20,6 @@
 	@Ken: would you please announce that sshbackend is paramiko based native python now in the Changelog for the next release? 
 	this was missing in 0.6.18's Changelog
 
-2012-03-08  edso
-
-	changelog entry
-
-2012-03-08  edso
-
-	add ssh_config support (/etc/ssh/ssh_config + ~/.ssh/config) to paramiko sshbackend
-
 2012-03-08  edso  <edgar.soldin@xxxxxx>
 
 	Merged in lp:~ed.so/duplicity/0.6-webdav_fixes.
@@ -48,10 +32,6 @@
 2012-03-08  Kenneth Loafman  <kenneth@xxxxxxxxxxx>
 
 	Merged in lp:~ed.so/duplicity/0.6-manpage
-	
-	- added REQUIREMENTS section
-
-2012-03-08  edso
 
 	- added REQUIREMENTS section
 	- restructure SYNOPSIS/ACTIONS to have commands sorted by backup lifecycle
@@ -82,7 +62,6 @@
 
 2012-02-29  kenneth@xxxxxxxxxxx
 
-	Changes for 0.6.18.
 
 2012-02-29  kenneth@xxxxxxxxxxx
 

=== modified file 'bin/duplicity.1'
--- bin/duplicity.1	2012-03-08 12:59:08 +0000
+++ bin/duplicity.1	2012-03-13 20:58:22 +0000
@@ -4,13 +4,54 @@
 duplicity \- Encrypted incremental backup to local or remote storage.
 
 .SH REQUIREMENTS
-Duplicity requires a POSIX-like operating system. 
+Duplicity requires a POSIX-like operating system with a 
+.B python
+interpreter version 2.4+ installed. 
 It is best used under GNU/Linux.
 
-Some backends also require additional components:
-.IP * 2
-.B "ssh backend"
-(scp/sftp/ssh)
+Some backends also require additional components (probably available as packages for your specific platform):
+.IP * 2
+.B "boto backend"
+(S3 Amazon Web Services)
+.RS
+.IP - 2
+.B boto
+- http://github.com/boto/boto
+.RE
+.IP * 2
+.B "ftp backend"
+.RS
+.IP - 2
+.B NcFTP Client
+- http://www.ncftp.com/
+.RE
+.IP * 2
+.B "ftps backend"
+.RS
+.IP - 2
+.B LFTP Client
+- http://lftp.yar.ru/
+.RE
+.IP * 2
+.B "gio backend"
+(Gnome VFS API)
+.RS
+.IP - 2
+.B PyGObject
+- http://live.gnome.org/PyGObject
+.IP - 2
+.B D-Bus
+(dbus) - http://www.freedesktop.org/wiki/Software/dbus
+.RE
+.IP * 2
+.B "ssh backends"
+(scp/sftp/ssh, see 
+.B --ssh-backend
+)
+.RS
+.IP * 2
+.B ssh paramiko backend
+(default)
 .RS
 .IP - 2
 .B paramiko 
@@ -20,12 +61,12 @@
 Python Cryptography Toolkit - http://www.dlitz.net/software/pycrypto/
 .RE
 .IP * 2
-.B "boto backend"
-(S3 Amazon Web Services)
+.B ssh pexpect backend
 .RS
 .IP - 2
-.B boto
-- http://github.com/boto/boto
+.B sftp/scp client binaries
+OpenSSH - http://www.openssh.com/
+.RE
 .RE
 
 .SH SYNOPSIS
@@ -644,13 +685,33 @@
 
 .TP
 .BI "--scp-command " command
-Deprecated and ignored. The ssh backend does no longer use an external
-scp client program.
+.B (only ssh pexpect backend)
+The
+.I command
+will be used instead of scp to send or receive files.  The default command
+is "scp". To list and delete existing files, the sftp command is used.  See
+.BR --ssh-options
+and
+.BR --sftp-command .
 
 .TP
 .BI "--sftp-command " command
-Deprecated and ignored. The ssh backend does no longer use an external
-sftp client program.
+.B (only ssh pexpect backend)
+The
+.I command
+will be used instead of sftp for listing and deleting files.  The
+default is "sftp". File transfers are done using the sftp command. See
+.BR --ssh-options ,
+.BR --use-scp ,
+and
+.BR --scp-command .
+
+.TP
+.BI --short-filenames
+If this option is specified, the names of the files duplicity writes
+will be shorter (about 30 chars) but less understandable.  This may be
+useful when backing up to MacOS or another OS or FS that doesn't
+support long filenames.
 
 .TP
 .BI "--sign-key " key
@@ -675,28 +736,38 @@
 This password is also used for passphrase-protected ssh keys.
 
 .TP
+.BI "--ssh-backend " backend
+Allows the explicit selection of an ssh backend. Defaults to 
+.BR paramiko .
+Alternatively you might choose 
+.BR pexpect .
+See also
+.B "A NOTE ON SSH BACKENDS"
+
+.TP
 .BI "--ssh-options " options
-Allows you to pass options to the ssh/scp/sftp backend.  The
+Allows you to pass options to the ssh backend.  The
 .I options
-list should be of the form "-oopt1=parm1 -oopt2=parm2" where the option string is
-quoted and the only spaces allowed are between options. Options must 
-be given in the long option format described in
-.BR ssh_config(5) .
-The sftp/scp backend currently supports only one ssh option, IdentityFile
+list should be of the form "-oOpt1=parm1 -oOpt2=parm2" where the option string is
+quoted and the only spaces allowed are between options. The option string
+will be passed verbatim to both scp and sftp, whose command line syntax
+differs slightly; the options should therefore be given in the long option format described in
+.BR ssh_config(5) ,
 like in this example:
-.PP
 .RS
-duplicity --ssh-options="-oIdentityFile=/my/backup/id" /home/me sftp://uid@xxxxxxxxxx/some_dir
 .PP
+.ad l
+duplicity --ssh-options="-oProtocol=2 -oIdentityFile=/my/backup/id" /home/me scp://uid@xxxxxxxxxx/some_dir
+.TP
+.B NOTE:
+.I ssh paramiko backend
+currently supports only the
+.nh
+.B -oIdentityFile
+.hy
+setting.
 .RE
-
-
-.TP
-.BI --short-filenames
-If this option is specified, the names of the files duplicity writes
-will be shorter (about 30 chars) but less understandable.  This may be
-useful when backing up to MacOS or another OS or FS that doesn't
-support long filenames.
+.ad n
 
 .TP
 .BI "--tempdir " directory
@@ -740,9 +811,10 @@
 
 .TP
 .BI --use-scp
-If this option is specified, then the sftp/scp backend will use the
-scp protocol rather than sftp for backend operations. The default is to use
-sftp, because it does not suffer from shell quoting issues like scp.
+If this option is specified, then the ssh backend will use the
+scp protocol rather than sftp for backend operations.
+See also
+.B "A NOTE ON SSH BACKENDS"
 
 .TP
 .BI "--verbosity " level ", -v" level
@@ -862,7 +934,10 @@
 sftp://user[:password]@other.host[:port]/[/]some_dir
 .br
 see also
-.BI "--use-scp"
+.BR --ssh-backend ,
+.B --use-scp
+and
+.B "A NOTE ON SSH BACKENDS"
 .PP
 tahoe://alias/directory
 .PP
@@ -1216,6 +1291,56 @@
 .B from_address_prefix
 will distinguish between different backups.
 
+.SH A NOTE ON SSH BACKENDS
+The 
+.I ssh backends
+support sftp and scp transports. These are often confused, but they are
+fundamentally different protocols. If you plan to access your backend via one
+of them, please inform yourself about the requirements for a server to support
+scp or sftp.
+To add to the confusion, the user can also choose between two ssh
+backends; see
+.nh 
+.B --ssh\-backend
+.hy
+for details.
+.PP
+.BR "SSH paramiko backend " "(selected by default)"
+is a complete reimplementation of the ssh protocols natively in python. Its advantages 
+are speed and maintainability. A minor disadvantage is that extra packages are 
+needed; see
+.nh
+.B REQUIREMENTS
+.hy
+above. In
+.I sftp
+(default) mode all operations are done via the corresponding sftp commands. In
+.I scp
+mode (
+.I --use-scp
+) scp is used for put/get operations, while listing is done via an ssh remote shell.
+.PP
+.B SSH pexpect backend
+is the legacy ssh backend using the command line ssh binaries via pexpect.
+Older versions used
+.I scp
+for get and put operations and
+.I sftp
+for list and
+delete operations.  The current version uses
+.I sftp
+for all four supported
+operations, unless the
+.I --use-scp
+option is used to revert to old behavior. 
+.PP
+.B Why use sftp instead of scp?
+The change to sftp was made in order to allow the remote system to chroot the backup,
+thus providing better security, and because sftp does not suffer from the shell quoting issues of scp. 
+Scp also does not support any kind of file listing, so sftp or ssh access will always be needed 
+in addition for this backend mode to work properly. Sftp does not have these limitations but needs
+an sftp service running on the backend server, which is sometimes not an option.
+
 .SH A NOTE ON UBUNTU ONE
 Connecting to Ubuntu One requires that you be running duplicity inside of an X
 session so that you can be prompted for your credentials if necessary by the

=== added file 'duplicity/backends/ssh_paramiko.py'
--- duplicity/backends/ssh_paramiko.py	1970-01-01 00:00:00 +0000
+++ duplicity/backends/ssh_paramiko.py	2012-03-13 20:58:22 +0000
@@ -0,0 +1,361 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+#
+# Copyright 2002 Ben Escoto <ben@xxxxxxxxxxx>
+# Copyright 2007 Kenneth Loafman <kenneth@xxxxxxxxxxx>
+# Copyright 2011 Alexander Zangerl <az@xxxxxxxxxxxxx>
+# Copyright 2012 edso (ssh_config added)
+#
+# $Id: sshbackend.py,v 1.2 2011/12/31 04:44:12 az Exp $
+#
+# This file is part of duplicity.
+#
+# Duplicity is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; either version 2 of the License, or (at your
+# option) any later version.
+#
+# Duplicity is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with duplicity; if not, write to the Free Software Foundation,
+# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+import re
+import string
+import os
+import errno
+import sys
+import getpass
+from binascii import hexlify
+
+# debian squeeze's paramiko is a bit old, so we silence the randompool deprecation warning
+# note also: passphrased private keys work with squeeze's paramiko only if done with DES, not AES
+import warnings
+warnings.simplefilter("ignore")
+import paramiko
+warnings.resetwarnings()
+
+import duplicity.backend
+from duplicity import globals
+from duplicity import log
+from duplicity.errors import *
+
+read_blocksize=65635            # for doing scp retrievals, where we need to read ourselves
+
+class SSHParamikoBackend(duplicity.backend.Backend):
+    """This backend accesses files using the sftp protocol, or scp when the --use-scp option is given.
+    It does not need any local client programs, but an ssh server and the sftp program must be installed on the remote
+    side (or with --use-scp, the programs scp, ls, mkdir, rm and a POSIX-compliant shell).
+
+    Authentication keys are requested from an ssh agent if present, then ~/.ssh/id_rsa/dsa are tried.
+    If -oIdentityFile=path is present in --ssh-options, then that file is also tried.
+    The passphrase for any of these keys is taken from the URI or FTP_PASSWORD.
+    If none of the above are available, password authentication is attempted (using the URI or FTP_PASSWORD).
+
+    Missing directories on the remote side will be created.
+
+    If --use-scp is active then all operations on the remote side require passing arguments through a shell,
+    which introduces unavoidable quoting issues: directory and file names that contain single quotes will not work.
+    This problem does not exist with sftp.
+    """
+    def __init__(self, parsed_url):
+        duplicity.backend.Backend.__init__(self, parsed_url)
+
+        if parsed_url.path:
+            # remove first leading '/'
+            self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
+        else:
+            self.remote_dir = '.'
+
+        self.client = paramiko.SSHClient()
+        self.client.set_missing_host_key_policy(AgreedAddPolicy())
+        # load known_hosts files
+        # paramiko is very picky wrt format and bails out on any problem...
+        try:
+            if os.path.isfile("/etc/ssh/ssh_known_hosts"):
+                self.client.load_system_host_keys("/etc/ssh/ssh_known_hosts")
+        except Exception, e:
+            raise BackendException("could not load /etc/ssh/ssh_known_hosts, maybe corrupt?")
+        try:
+            # use load_host_keys() to signal it's writable to paramiko
+            # load if file exists or add filename to create it if needed
+            file = os.path.expanduser('~/.ssh/known_hosts')
+            if os.path.isfile(file):
+                self.client.load_host_keys(file)
+            else:
+                self.client._host_keys_filename = file
+        except Exception, e:
+            raise BackendException("could not load ~/.ssh/known_hosts, maybe corrupt?")
+
+        """ the next block reorganizes all host parameters into a
+        dictionary like SSHConfig does. this dictionary 'self.config' 
+        becomes the authoritative source for these values from here on.
+        rationale is that it is easiest to deal wrt overwriting multiple 
+        values from ssh_config file. (ede 03/2012)
+        """
+        self.config={'hostname':parsed_url.hostname}
+        # get system host config entries
+        self.config.update(self.gethostconfig('/etc/ssh/ssh_config',parsed_url.hostname))
+        # update with user's config file
+        self.config.update(self.gethostconfig('~/.ssh/config',parsed_url.hostname))
+        # update with url values
+        ## username from url
+        if parsed_url.username:
+            self.config.update({'user':parsed_url.username})
+        ## username from input
+        if not 'user' in self.config:
+            self.config.update({'user':getpass.getuser()})
+        ## port from url
+        if parsed_url.port:
+            self.config.update({'port':parsed_url.port})
+        ## ensure there is a default of 22 or an int value
+        if 'port' in self.config:
+            self.config.update({'port':int(self.config['port'])})
+        else:
+            self.config.update({'port':22})
+        ## alternative ssh private key, identity file
+        m=re.search("-oidentityfile=(\S+)",globals.ssh_options,re.I)
+        if (m!=None):
+            keyfilename=m.group(1)
+            self.config['identityfile'] = keyfilename
+        ## ensure ~ is expanded and identity exists in dictionary
+        if 'identityfile' in self.config:
+            self.config['identityfile'] = os.path.expanduser(
+                                            self.config['identityfile'])
+        else:
+            self.config['identityfile'] = None
+
+        # get password, enable prompt if askpass is set
+        self.use_getpass = globals.ssh_askpass
+        ## set url values for beautiful login prompt
+        parsed_url.username = self.config['user']
+        parsed_url.hostname = self.config['hostname']
+        password = self.get_password()
+
+        try:
+            self.client.connect(hostname=self.config['hostname'], 
+                                port=self.config['port'], 
+                                username=self.config['user'], 
+                                password=password,
+                                allow_agent=True, 
+                                look_for_keys=True,
+                                key_filename=self.config['identityfile'])
+        except Exception, e:
+            raise BackendException("ssh connection to %s@%s:%d failed: %s" % (
+                                    self.config['user'],
+                                    self.config['hostname'],
+                                    self.config['port'],e))
+        self.client.get_transport().set_keepalive((int)(globals.timeout / 2))
+
+        # scp or sftp?
+        if (globals.use_scp):
+            # sanity-check the directory name
+            if (re.search("'",self.remote_dir)):
+                raise BackendException("cannot handle directory names with single quotes with --use-scp!")
+
+            # make directory if needed
+            self.runremote("test -d '%s' || mkdir -p '%s'" % (self.remote_dir,self.remote_dir),False,"scp mkdir ")
+        else:
+            try:
+                self.sftp=self.client.open_sftp()
+            except Exception, e:
+                raise BackendException("sftp negotiation failed: %s" % e)
+
+
+            # move to the appropriate directory, possibly after creating it and its parents
+            dirs = self.remote_dir.split(os.sep)
+            if len(dirs) > 0:
+                if not dirs[0]:
+                    dirs = dirs[1:]
+                    dirs[0]= '/' + dirs[0]
+                for d in dirs:
+                    if (d == ''):
+                        continue
+                    try:
+                        attrs=self.sftp.stat(d)
+                    except IOError, e:
+                        if e.errno == errno.ENOENT:
+                            try:
+                                self.sftp.mkdir(d)
+                            except Exception, e:
+                                raise BackendException("sftp mkdir %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
+                        else:
+                            raise BackendException("sftp stat %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
+                    try:
+                        self.sftp.chdir(d)
+                    except Exception, e:
+                        raise BackendException("sftp chdir to %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
+
+    def put(self, source_path, remote_filename = None):
+        """transfers a single file to the remote side.
+        In scp mode unavoidable quoting issues will make this fail if the remote directory or file name
+        contain single quotes."""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+        if (globals.use_scp):
+            f=file(source_path.name,'rb')
+            try:
+                chan=self.client.get_transport().open_session()
+                chan.settimeout(globals.timeout)
+                chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
+            except Exception, e:
+                raise BackendException("scp execution failed: %s" % e)
+            # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
+            # if there's a problem: 0x1 or 0x02 and some error text
+            response=chan.recv(1)
+            if (response!="\0"):
+                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            fstat=os.stat(source_path.name)
+            chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
+            response=chan.recv(1)
+            if (response!="\0"):
+                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            chan.sendall(f.read()+'\0')
+            f.close()
+            response=chan.recv(1)
+            if (response!="\0"):
+                raise BackendException("scp remote error: %s" % chan.recv(-1))
+            chan.close()
+        else:
+            try:
+                self.sftp.put(source_path.name,remote_filename)
+            except Exception, e:
+                raise BackendException("sftp put of %s (as %s) failed: %s" % (source_path.name,remote_filename,e))
+
+
+    def get(self, remote_filename, local_path):
+        """retrieves a single file from the remote side.
+        In scp mode unavoidable quoting issues will make this fail if the remote directory or file names
+        contain single quotes."""
+        if (globals.use_scp):
+            try:
+                chan=self.client.get_transport().open_session()
+                chan.settimeout(globals.timeout)
+                chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
+            except Exception, e:
+                raise BackendException("scp execution failed: %s" % e)
+
+            chan.send('\0')     # overall ready indicator
+            msg=chan.recv(-1)
+            m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
+            if (m==None or m.group(3)!=remote_filename):
+                raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
+            chan.recv(1)        # dispose of the newline trailing the C message
+
+            size=int(m.group(2))
+            togo=size
+            f=file(local_path.name,'wb')
+            chan.send('\0')     # ready for data
+            try:
+                while togo>0:
+                    if togo>read_blocksize:
+                        blocksize = read_blocksize
+                    else:
+                        blocksize = togo
+                    buff=chan.recv(blocksize)
+                    f.write(buff)
+                    togo-=len(buff)
+            except Exception, e:
+                raise BackendException("scp get %s failed: %s" % (remote_filename,e))
+
+            msg=chan.recv(1)    # check the final status
+            if msg!='\0':
+                raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
+            f.close()
+            chan.send('\0')     # send final done indicator
+            chan.close()
+        else:
+            try:
+                self.sftp.get(remote_filename,local_path.name)
+            except Exception, e:
+                raise BackendException("sftp get of %s (to %s) failed: %s" % (remote_filename,local_path.name,e))
+        local_path.setdata()
+
+    def list(self):
+        """lists the contents of the one-and-only duplicity dir on the remote side.
+        In scp mode unavoidable quoting issues will make this fail if the directory name
+        contains single quotes."""
+        if (globals.use_scp):
+            output=self.runremote("ls -1 '%s'" % self.remote_dir,False,"scp dir listing ")
+            return output.splitlines()
+        else:
+            try:
+                return self.sftp.listdir()
+            except Exception, e:
+                raise BackendException("sftp listing of %s failed: %s" % (self.sftp.getcwd(),e))
+
+    def delete(self, filename_list):
+        """deletes all files in the list on the remote side. In scp mode unavoidable quoting issues
+        will cause failures if filenames containing single quotes are encountered."""
+        for fn in filename_list:
+            if (globals.use_scp):
+                self.runremote("rm '%s/%s'" % (self.remote_dir,fn),False,"scp rm ")
+            else:
+                try:
+                    self.sftp.remove(fn)
+                except Exception, e:
+                    raise BackendException("sftp rm %s failed: %s" % (fn,e))
+
+    def runremote(self,cmd,ignoreexitcode=False,errorprefix=""):
+        """small convenience function that opens a shell channel, runs remote command and returns
+        stdout of command. throws an exception if exit code!=0 and not ignored"""
+        try:
+            chan=self.client.get_transport().open_session()
+            chan.settimeout(globals.timeout)
+            chan.exec_command(cmd)
+        except Exception, e:
+            raise BackendException("%sexecution failed: %s" % (errorprefix,e))
+        output=chan.recv(-1)
+        res=chan.recv_exit_status()
+        if (res!=0 and not ignoreexitcode):
+            raise BackendException("%sfailed(%d): %s" % (errorprefix,res,chan.recv_stderr(4096)))
+        return output
+
+    def gethostconfig(self, file, host):
+        file = os.path.expanduser(file)
+        if not os.path.isfile(file):
+            return {}
+        
+        sshconfig = paramiko.SSHConfig()
+        try:
+            sshconfig.parse(open(file))
+        except Exception, e:
+            raise BackendException("could not load '%s', maybe corrupt?" % (file))
+        
+        return sshconfig.lookup(host)
+
+class AgreedAddPolicy (paramiko.AutoAddPolicy):
+    """
+    Policy for showing a yes/no prompt and adding the hostname and new 
+    host key to the known host file accordingly.
+    
+    This class simply extends the AutoAddPolicy class with a yes/no prompt.
+    """
+    def missing_host_key(self, client, hostname, key):
+        fp = hexlify(key.get_fingerprint())
+        fingerprint = ':'.join(a+b for a,b in zip(fp[::2], fp[1::2]))
+        question = """The authenticity of host '%s' can't be established.
+%s key fingerprint is %s.
+Are you sure you want to continue connecting (yes/no)? """ % (hostname, key.get_name().upper(), fingerprint)
+        while True:
+            sys.stdout.write(question)
+            choice = raw_input().lower()
+            if choice in ['yes','y']:
+                super(AgreedAddPolicy, self).missing_host_key(client, hostname, key)
+                return
+            elif choice in ['no','n']:
+                raise AuthenticityException( hostname )
+            else:
+                question = "Please type 'yes' or 'no': "
+
+class AuthenticityException (paramiko.SSHException):
+    def __init__(self, hostname):
+        paramiko.SSHException.__init__(self, 'Host key verification for server %s failed.' % hostname)
+
+
+duplicity.backend.register_backend("sftp", SSHParamikoBackend)
+duplicity.backend.register_backend("scp", SSHParamikoBackend)
+duplicity.backend.register_backend("ssh", SSHParamikoBackend)
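
Reviewer note (not part of the patch): the ssh_config handling in the new backend's __init__ boils down to a layered dictionary lookup, where system-wide settings are read first, the user's own config overrides them, and values taken from the URL win last. A minimal standalone sketch of that precedence, assuming paramiko is installed; 'example.host' and the paths queried are purely illustrative:

  import os
  import paramiko

  def host_config(path, host):
      # return the ssh_config section matching 'host', or {} if the file is absent
      path = os.path.expanduser(path)
      if not os.path.isfile(path):
          return {}
      cfg = paramiko.SSHConfig()
      cfg.parse(open(path))
      return cfg.lookup(host)

  host = 'example.host'                                    # illustrative host name
  config = {'hostname': host}
  config.update(host_config('/etc/ssh/ssh_config', host))  # system-wide settings first
  config.update(host_config('~/.ssh/config', host))        # per-user settings override them
  config.setdefault('port', 22)                            # fall back to the standard ssh port
  print config.get('user'), config['port']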

=== added file 'duplicity/backends/ssh_pexpect.py'
--- duplicity/backends/ssh_pexpect.py	1970-01-01 00:00:00 +0000
+++ duplicity/backends/ssh_pexpect.py	2012-03-13 20:58:22 +0000
@@ -0,0 +1,317 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+#
+# Copyright 2002 Ben Escoto <ben@xxxxxxxxxxx>
+# Copyright 2007 Kenneth Loafman <kenneth@xxxxxxxxxxx>
+#
+# This file is part of duplicity.
+#
+# Duplicity is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; either version 2 of the License, or (at your
+# option) any later version.
+#
+# Duplicity is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with duplicity; if not, write to the Free Software Foundation,
+# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+# The following can be redefined to use different shell commands from
+# ssh or scp or to add more arguments.  However, the replacements must
+# have the same syntax.  Also these strings will be executed by the
+# shell, so shouldn't have strange characters in them.
+
+import re
+import string
+import time
+import os
+
+import duplicity.backend
+from duplicity import globals
+from duplicity import log
+from duplicity import pexpect
+from duplicity.errors import * #@UnusedWildImport
+
+class SSHPExpectBackend(duplicity.backend.Backend):
+    """This backend copies files using the scp or sftp command line clients, driven via pexpect."""
+    def __init__(self, parsed_url):
+        """scpBackend initializer"""
+        duplicity.backend.Backend.__init__(self, parsed_url)
+
+        self.scp_command = "scp"
+        if globals.scp_command: self.scp_command = globals.scp_command
+
+        self.sftp_command = "sftp"
+        if globals.sftp_command: self.sftp_command = globals.sftp_command
+
+        # host string of form [user@]hostname
+        if parsed_url.username:
+            self.host_string = parsed_url.username + "@" + parsed_url.hostname
+        else:
+            self.host_string = parsed_url.hostname
+        # make sure remote_dir is always valid
+        if parsed_url.path:
+            # remove leading '/'
+            self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
+        else:
+            self.remote_dir = '.'
+        self.remote_prefix = self.remote_dir + '/'
+        # maybe use different ssh port
+        if parsed_url.port:
+            globals.ssh_options = globals.ssh_options + " -oPort=%s" % parsed_url.port
+        # set some defaults if user has not specified already.
+        if "ServerAliveInterval" not in globals.ssh_options:
+            globals.ssh_options += " -oServerAliveInterval=%d" % ((int)(globals.timeout / 2))
+        if "ServerAliveCountMax" not in globals.ssh_options:
+            globals.ssh_options += " -oServerAliveCountMax=2"
+        # set up password
+        if globals.ssh_askpass:
+            self.password = self.get_password()
+        else:
+            if parsed_url.password:
+                self.password = parsed_url.password
+                globals.ssh_askpass = True
+            else:
+                self.password = ''
+
+    def run_scp_command(self, commandline):
+        """ Run an scp command, responding to password prompts """
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry
+                time.sleep(30)
+            log.Info("Running '%s' (attempt #%d)" % (commandline, n))
+            child = pexpect.spawn(commandline, timeout = None)
+            if globals.ssh_askpass:
+                state = "authorizing"
+            else:
+                state = "copying"
+            while 1:
+                if state == "authorizing":
+                    match = child.expect([pexpect.EOF,
+                                          "(?i)timeout, server not responding",
+                                          "(?i)pass(word|phrase .*):",
+                                          "(?i)permission denied",
+                                          "authenticity"])
+                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                    if match == 0:
+                        log.Warn("Failed to authenticate")
+                        break
+                    elif match == 1:
+                        log.Warn("Timeout waiting to authenticate")
+                        break
+                    elif match == 2:
+                        child.sendline(self.password)
+                        state = "copying"
+                    elif match == 3:
+                        log.Warn("Invalid SSH password")
+                        break
+                    elif match == 4:
+                        log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                        break
+                elif state == "copying":
+                    match = child.expect([pexpect.EOF,
+                                          "(?i)timeout, server not responding",
+                                          "stalled",
+                                          "authenticity",
+                                          "ETA"])
+                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                    if match == 0:
+                        break
+                    elif match == 1:
+                        log.Warn("Timeout waiting for response")
+                        break
+                    elif match == 2:
+                        state = "stalled"
+                    elif match == 3:
+                        log.Warn("Remote host authentication failed (missing known_hosts entry?)")
+                        break
+                elif state == "stalled":
+                    match = child.expect([pexpect.EOF,
+                                          "(?i)timeout, server not responding",
+                                          "ETA"])
+                    log.Debug("State = %s, Before = '%s'" % (state, child.before.strip()))
+                    if match == 0:
+                        break
+                    elif match == 1:
+                        log.Warn("Stalled for too long, aborted copy")
+                        break
+                    elif match == 2:
+                        state = "copying"
+            child.close(force = True)
+            if child.exitstatus == 0:
+                return
+            log.Warn("Running '%s' failed (attempt #%d)" % (commandline, n))
+        log.Warn("Giving up trying to execute '%s' after %d attempts" % (commandline, globals.num_retries))
+        raise BackendException("Error running '%s'" % commandline)
+
+    def run_sftp_command(self, commandline, commands):
+        """ Run an sftp command, responding to password prompts, passing commands from list """
+        maxread = 2000 # expected read buffer size
+        responses = [pexpect.EOF,
+                     "(?i)timeout, server not responding",
+                     "sftp>",
+                     "(?i)pass(word|phrase .*):",
+                     "(?i)permission denied",
+                     "authenticity",
+                     "(?i)no such file or directory",
+                     "Couldn't delete file: No such file or directory",
+                     "Couldn't delete file",
+                     "open(.*): Failure"]
+        max_response_len = max([len(p) for p in responses[1:]])
+        for n in range(1, globals.num_retries+1):
+            if n > 1:
+                # sleep before retry
+                time.sleep(30)
+            log.Info("Running '%s' (attempt #%d)" % (commandline, n))
+            child = pexpect.spawn(commandline, timeout = None, maxread=maxread)
+            cmdloc = 0
+            while 1:
+                match = child.expect(responses,
+                                     searchwindowsize=maxread+max_response_len)
+                log.Debug("State = sftp, Before = '%s'" % (child.before.strip()))
+                if match == 0:
+                    break
+                elif match == 1:
+                    log.Info("Timeout waiting for response")
+                    break
+                if match == 2:
+                    if cmdloc < len(commands):
+                        command = commands[cmdloc]
+                        log.Info("sftp command: '%s'" % (command,))
+                        child.sendline(command)
+                        cmdloc += 1
+                    else:
+                        command = 'quit'
+                        child.sendline(command)
+                        res = child.before
+                elif match == 3:
+                    child.sendline(self.password)
+                elif match == 4:
+                    if not child.before.strip().startswith("mkdir"):
+                        log.Warn("Invalid SSH password")
+                        break
+                elif match == 5:
+                    log.Warn("Host key authenticity could not be verified (missing known_hosts entry?)")
+                    break
+                elif match == 6:
+                    if not child.before.strip().startswith("rm"):
+                        log.Warn("Remote file or directory does not exist in command='%s'" % (commandline,))
+                        break
+                elif match == 7:
+                    if not child.before.strip().startswith("Removing"):
+                        log.Warn("Could not delete file in command='%s'" % (commandline,))
+                        break
+                elif match == 8:
+                    log.Warn("Could not delete file in command='%s'" % (commandline,))
+                    break
+                elif match == 9:
+                    log.Warn("Could not open file in command='%s'" % (commandline,))
+                    break
+            child.close(force = True)
+            if child.exitstatus == 0:
+                return res
+            log.Warn("Running '%s' failed (attempt #%d)" % (commandline, n))
+        log.Warn("Giving up trying to execute '%s' after %d attempts" % (commandline, globals.num_retries))
+        raise BackendException("Error running '%s'" % commandline)
+
+    def put(self, source_path, remote_filename = None):
+        if globals.use_scp:
+            self.put_scp(source_path, remote_filename = remote_filename)
+        else:
+            self.put_sftp(source_path, remote_filename = remote_filename)
+
+    def put_sftp(self, source_path, remote_filename = None):
+        """Use sftp to copy source_dir/filename to remote computer"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+        commands = ["put \"%s\" \"%s.%s.part\"" %
+                    (source_path.name, self.remote_prefix, remote_filename),
+                    "rename \"%s.%s.part\" \"%s%s\"" %
+                    (self.remote_prefix, remote_filename,self.remote_prefix, remote_filename)]
+        commandline = ("%s %s %s" % (self.sftp_command,
+                                     globals.ssh_options,
+                                     self.host_string))
+        self.run_sftp_command(commandline, commands)
+
+    def put_scp(self, source_path, remote_filename = None):
+        """Use scp to copy source_dir/filename to remote computer"""
+        if not remote_filename:
+            remote_filename = source_path.get_filename()
+        commandline = "%s %s %s %s:%s%s" % \
+            (self.scp_command, globals.ssh_options, source_path.name, self.host_string,
+             self.remote_prefix, remote_filename)
+        self.run_scp_command(commandline)
+
+    def get(self, remote_filename, local_path):
+        if globals.use_scp:
+            self.get_scp(remote_filename, local_path)
+        else:
+            self.get_sftp(remote_filename, local_path)
+
+    def get_sftp(self, remote_filename, local_path):
+        """Use sftp to get a remote file"""
+        commands = ["get \"%s%s\" \"%s\"" %
+                    (self.remote_prefix, remote_filename, local_path.name)]
+        commandline = ("%s %s %s" % (self.sftp_command,
+                                     globals.ssh_options,
+                                     self.host_string))
+        self.run_sftp_command(commandline, commands)
+        local_path.setdata()
+        if not local_path.exists():
+            raise BackendException("File %s not found locally after get "
+                                   "from backend" % local_path.name)
+
+    def get_scp(self, remote_filename, local_path):
+        """Use scp to get a remote file"""
+        commandline = "%s %s %s:%s%s %s" % \
+            (self.scp_command, globals.ssh_options, self.host_string, self.remote_prefix,
+             remote_filename, local_path.name)
+        self.run_scp_command(commandline)
+        local_path.setdata()
+        if not local_path.exists():
+            raise BackendException("File %s not found locally after get "
+                                   "from backend" % local_path.name)
+
+    def list(self):
+        """
+        List files available for scp
+
+        Note that this command can get confused when dealing with
+        files with newlines in them, as the embedded newlines cannot
+        be distinguished from the file boundaries.
+        """
+        dirs = self.remote_dir.split(os.sep)
+        if len(dirs) > 0:
+            if not dirs[0] :
+                dirs = dirs[1:]
+                dirs[0]= '/' + dirs[0]
+        mkdir_commands = []
+        for d in dirs:
+            mkdir_commands += ["mkdir \"%s\"" % (d)] + ["cd \"%s\"" % (d)]
+
+        commands = mkdir_commands + ["ls -1"]
+        commandline = ("%s %s %s" % (self.sftp_command,
+                                     globals.ssh_options,
+                                     self.host_string))
+
+        l = self.run_sftp_command(commandline, commands).split('\n')[1:]
+
+        return filter(lambda x: x, map(string.strip, l))
+
+    def delete(self, filename_list):
+        """
+        Runs sftp rm to delete files.  Files must not require quoting.
+        """
+        commands = ["cd \"%s\"" % (self.remote_dir,)]
+        for fn in filename_list:
+            commands.append("rm \"%s\"" % fn)
+        commandline = ("%s %s %s" % (self.sftp_command, globals.ssh_options, self.host_string))
+        self.run_sftp_command(commandline, commands)
+
+duplicity.backend.register_backend("ssh", SSHPExpectBackend)
+duplicity.backend.register_backend("scp", SSHPExpectBackend)
+duplicity.backend.register_backend("sftp", SSHPExpectBackend)
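
Reviewer note (not part of the patch): the re-added pexpect backend is built around the expect/sendline state loop seen in run_scp_command() and run_sftp_command() above. A stripped-down sketch of that pattern — spawn an sftp process, answer a password prompt, feed one command per "sftp>" prompt — where the host, password and commands are placeholders:

  import pexpect

  child = pexpect.spawn("sftp user@example.host", timeout=None)  # placeholder host
  commands = ["cd backups", "ls -1"]
  while True:
      i = child.expect([pexpect.EOF,
                        "(?i)pass(word|phrase .*):",
                        "sftp>"])
      if i == 0:                     # process exited
          break
      elif i == 1:                   # password/passphrase prompt
          child.sendline("secret")   # placeholder password
      else:                          # sftp is ready for the next command
          if commands:
              child.sendline(commands.pop(0))
          else:
              child.sendline("quit")
  child.close(force=True)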

=== modified file 'duplicity/backends/sshbackend.py'
--- duplicity/backends/sshbackend.py	2012-03-12 18:29:42 +0000
+++ duplicity/backends/sshbackend.py	2012-03-13 20:58:22 +0000
@@ -1,11 +1,6 @@
 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
 #
-# Copyright 2002 Ben Escoto <ben@xxxxxxxxxxx>
-# Copyright 2007 Kenneth Loafman <kenneth@xxxxxxxxxxx>
-# Copyright 2011 Alexander Zangerl <az@xxxxxxxxxxxxx>
-# Copyright 2012 edso (ssh_config added)
-#
-# $Id: sshbackend.py,v 1.2 2011/12/31 04:44:12 az Exp $
+# Copyright 2012 edso
 #
 # This file is part of duplicity.
 #
@@ -23,339 +18,17 @@
 # along with duplicity; if not, write to the Free Software Foundation,
 # Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 
-import re
-import string
-import os
-import errno
-import sys
-import getpass
-from binascii import hexlify
-
-# debian squeeze's paramiko is a bit old, so we silence randompool depreciation warning
-# note also: passphrased private keys work with squeeze's paramiko only if done with DES, not AES
-import warnings
-warnings.simplefilter("ignore")
-import paramiko
-warnings.resetwarnings()
-
-import duplicity.backend
-from duplicity import globals
-from duplicity import log
-from duplicity.errors import *
-
-read_blocksize=65635            # for doing scp retrievals, where we need to read ourselves
-
-class SftpBackend(duplicity.backend.Backend):
-    """This backend accesses files using the sftp protocol, or scp when the --use-scp option is given.
-    It does not need any local client programs, but an ssh server and the sftp program must be installed on the remote
-    side (or with --use-scp, the programs scp, ls, mkdir, rm and a POSIX-compliant shell).
-
-    Authentication keys are requested from an ssh agent if present, then ~/.ssh/id_rsa/dsa are tried.
-    If -oIdentityFile=path is present in --ssh-options, then that file is also tried.
-    The passphrase for any of these keys is taken from the URI or FTP_PASSWORD.
-    If none of the above are available, password authentication is attempted (using the URI or FTP_PASSWORD).
-
-    Missing directories on the remote side will be created.
-
-    If --use-scp is active then all operations on the remote side require passing arguments through a shell,
-    which introduces unavoidable quoting issues: directory and file names that contain single quotes will not work.
-    This problem does not exist with sftp.
-    """
-    def __init__(self, parsed_url):
-        duplicity.backend.Backend.__init__(self, parsed_url)
-
-        if parsed_url.path:
-            # remove first leading '/'
-            self.remote_dir = re.sub(r'^/', r'', parsed_url.path, 1)
-        else:
-            self.remote_dir = '.'
-
-        self.client = paramiko.SSHClient()
-        self.client.set_missing_host_key_policy(AgreedAddPolicy())
-        # load known_hosts files
-        # paramiko is very picky wrt format and bails out on any problem...
-        try:
-            if os.path.isfile("/etc/ssh/ssh_known_hosts"):
-                self.client.load_system_host_keys("/etc/ssh/ssh_known_hosts")
-        except Exception, e:
-            raise BackendException("could not load /etc/ssh/ssh_known_hosts, maybe corrupt?")
-        try:
-            # use load_host_keys() to signal it's writable to paramiko
-            # load if file exists or add filename to create it if needed
-            file = os.path.expanduser('~/.ssh/known_hosts')
-            if os.path.isfile(file):
-                self.client.load_host_keys(file)
-            else:
-                self.client._host_keys_filename = file
-        except Exception, e:
-            raise BackendException("could not load ~/.ssh/known_hosts, maybe corrupt?")
-
-        """ the next block reorganizes all host parameters into a
-        dictionary like SSHConfig does. this dictionary 'self.config' 
-        becomes the authorative source for these values from here on.
-        rationale is that it is easiest to deal wrt overwriting multiple 
-        values from ssh_config file. (ede 03/2012)
-        """
-        self.config={'hostname':parsed_url.hostname}
-        # get system host config entries
-        self.config.update(self.gethostconfig('/etc/ssh/ssh_config',parsed_url.hostname))
-        # update with user's config file
-        self.config.update(self.gethostconfig('~/.ssh/config',parsed_url.hostname))
-        # update with url values
-        ## username from url
-        if parsed_url.username:
-            self.config.update({'user':parsed_url.username})
-        ## username from input
-        if not 'user' in self.config:
-            self.config.update({'user':getpass.getuser()})
-        ## port from url
-        if parsed_url.port:
-            self.config.update({'port':parsed_url.port})
-        ## ensure there is deafult 22 or an int value
-        if 'port' in self.config:
-            self.config.update({'port':int(self.config['port'])})
-        else:
-            self.config.update({'port':22})
-        ## alternative ssh private key, identity file
-        m=re.search("-oidentityfile=(\S+)",globals.ssh_options,re.I)
-        if (m!=None):
-            keyfilename=m.group(1)
-            self.config['identityfile'] = keyfilename
-        ## ensure ~ is expanded and identity exists in dictionary
-        if 'identityfile' in self.config:
-            self.config['identityfile'] = os.path.expanduser(
-                                            self.config['identityfile'])
-        else:
-            self.config['identityfile'] = None
-
-        # get password, enable prompt if askpass is set
-        self.use_getpass = globals.ssh_askpass
-        ## set url values for beautiful login prompt
-        parsed_url.username = self.config['user']
-        parsed_url.hostname = self.config['hostname']
-        password = self.get_password()
-
-        try:
-            self.client.connect(hostname=self.config['hostname'], 
-                                port=self.config['port'], 
-                                username=self.config['user'], 
-                                password=password,
-                                allow_agent=True, 
-                                look_for_keys=True,
-                                key_filename=self.config['identityfile'])
-        except Exception, e:
-            raise BackendException("ssh connection to %s@%s:%d failed: %s" % (
-                                    self.config['user'],
-                                    self.config['hostname'],
-                                    self.config['port'],e))
-        self.client.get_transport().set_keepalive((int)(globals.timeout / 2))
-
-        # scp or sftp?
-        if (globals.use_scp):
-            # sanity-check the directory name
-            if (re.search("'",self.remote_dir)):
-                raise BackendException("cannot handle directory names with single quotes with --use-scp!")
-
-            # make directory if needed
-            self.runremote("test -d '%s' || mkdir -p '%s'" % (self.remote_dir,self.remote_dir),False,"scp mkdir ")
-        else:
-            try:
-                self.sftp=self.client.open_sftp()
-            except Exception, e:
-                raise BackendException("sftp negotiation failed: %s" % e)
-
-
-            # move to the appropriate directory, possibly after creating it and its parents
-            dirs = self.remote_dir.split(os.sep)
-            if len(dirs) > 0:
-                if not dirs[0]:
-                    dirs = dirs[1:]
-                    dirs[0]= '/' + dirs[0]
-                for d in dirs:
-                    if (d == ''):
-                        continue
-                    try:
-                        attrs=self.sftp.stat(d)
-                    except IOError, e:
-                        if e.errno == errno.ENOENT:
-                            try:
-                                self.sftp.mkdir(d)
-                            except Exception, e:
-                                raise BackendException("sftp mkdir %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
-                        else:
-                            raise BackendException("sftp stat %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
-                    try:
-                        self.sftp.chdir(d)
-                    except Exception, e:
-                        raise BackendException("sftp chdir to %s failed: %s" % (self.sftp.normalize(".")+"/"+d,e))
-
-    def put(self, source_path, remote_filename = None):
-        """transfers a single file to the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the remote directory or file name
-        contain single quotes."""
-        if not remote_filename:
-            remote_filename = source_path.get_filename()
-        if (globals.use_scp):
-            f=file(source_path.name,'rb')
-            try:
-                chan=self.client.get_transport().open_session()
-                chan.settimeout(globals.timeout)
-                chan.exec_command("scp -t '%s'" % self.remote_dir) # scp in sink mode uses the arg as base directory
-            except Exception, e:
-                raise BackendException("scp execution failed: %s" % e)
-            # scp protocol: one 0x0 after startup, one after the Create meta, one after saving
-            # if there's a problem: 0x1 or 0x02 and some error text
-            response=chan.recv(1)
-            if (response!="\0"):
-                raise BackendException("scp remote error: %s" % chan.recv(-1))
-            fstat=os.stat(source_path.name)
-            chan.send('C%s %d %s\n' %(oct(fstat.st_mode)[-4:], fstat.st_size, remote_filename))
-            response=chan.recv(1)
-            if (response!="\0"):
-                raise BackendException("scp remote error: %s" % chan.recv(-1))
-            chan.sendall(f.read()+'\0')
-            f.close()
-            response=chan.recv(1)
-            if (response!="\0"):
-                raise BackendException("scp remote error: %s" % chan.recv(-1))
-            chan.close()
-        else:
-            try:
-                self.sftp.put(source_path.name,remote_filename)
-            except Exception, e:
-                raise BackendException("sftp put of %s (as %s) failed: %s" % (source_path.name,remote_filename,e))
-
-
-    def get(self, remote_filename, local_path):
-        """retrieves a single file from the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the remote directory or file names
-        contain single quotes."""
-        if (globals.use_scp):
-            try:
-                chan=self.client.get_transport().open_session()
-                chan.settimeout(globals.timeout)
-                chan.exec_command("scp -f '%s/%s'" % (self.remote_dir,remote_filename))
-            except Exception, e:
-                raise BackendException("scp execution failed: %s" % e)
-
-            chan.send('\0')     # overall ready indicator
-            msg=chan.recv(-1)
-            m=re.match(r"C([0-7]{4})\s+(\d+)\s+(\S.*)$",msg)
-            if (m==None or m.group(3)!=remote_filename):
-                raise BackendException("scp get %s failed: incorrect response '%s'" % (remote_filename,msg))
-            chan.recv(1)        # dispose of the newline trailing the C message
-
-            size=int(m.group(2))
-            togo=size
-            f=file(local_path.name,'wb')
-            chan.send('\0')     # ready for data
-            try:
-                while togo>0:
-                    if togo>read_blocksize:
-                        blocksize = read_blocksize
-                    else:
-                        blocksize = togo
-                    buff=chan.recv(blocksize)
-                    f.write(buff)
-                    togo-=len(buff)
-            except Exception, e:
-                raise BackendException("scp get %s failed: %s" % (remote_filename,e))
-
-            msg=chan.recv(1)    # check the final status
-            if msg!='\0':
-                raise BackendException("scp get %s failed: %s" % (remote_filename,chan.recv(-1)))
-            f.close()
-            chan.send('\0')     # send final done indicator
-            chan.close()
-        else:
-            try:
-                self.sftp.get(remote_filename,local_path.name)
-            except Exception, e:
-                raise BackendException("sftp get of %s (to %s) failed: %s" % (remote_filename,local_path.name,e))
-        local_path.setdata()
-
-    def list(self):
-        """lists the contents of the one-and-only duplicity dir on the remote side.
-        In scp mode unavoidable quoting issues will make this fail if the directory name
-        contains single quotes."""
-        if (globals.use_scp):
-            output=self.runremote("ls -1 '%s'" % self.remote_dir,False,"scp dir listing ")
-            return output.splitlines()
-        else:
-            try:
-                return self.sftp.listdir()
-            except Exception, e:
-                raise BackendException("sftp listing of %s failed: %s" % (self.sftp.getcwd(),e))
-
-    def delete(self, filename_list):
-        """deletes all files in the list on the remote side. In scp mode unavoidable quoting issues
-        will cause failures if filenames containing single quotes are encountered."""
-        for fn in filename_list:
-            if (globals.use_scp):
-                self.runremote("rm '%s/%s'" % (self.remote_dir,fn),False,"scp rm ")
-            else:
-                try:
-                    self.sftp.remove(fn)
-                except Exception, e:
-                    raise BackendException("sftp rm %s failed: %s" % (fn,e))
-
-    def runremote(self,cmd,ignoreexitcode=False,errorprefix=""):
-        """small convenience function that opens a shell channel, runs remote command and returns
-        stdout of command. throws an exception if exit code!=0 and not ignored"""
-        try:
-            chan=self.client.get_transport().open_session()
-            chan.settimeout(globals.timeout)
-            chan.exec_command(cmd)
-        except Exception, e:
-            raise BackendException("%sexecution failed: %s" % (errorprefix,e))
-        output=chan.recv(-1)
-        res=chan.recv_exit_status()
-        if (res!=0 and not ignoreexitcode):
-            raise BackendException("%sfailed(%d): %s" % (errorprefix,res,chan.recv_stderr(4096)))
-        return output
-
-    def gethostconfig(self, file, host):
-        file = os.path.expanduser(file)
-        if not os.path.isfile(file):
-            return {}
-        
-        sshconfig = paramiko.SSHConfig()
-        try:
-            sshconfig.parse(open(file))
-        except Exception, e:
-            raise BackendException("could not load '%s', maybe corrupt?" % (file))
-        
-        return sshconfig.lookup(host)
-
-class AgreedAddPolicy (paramiko.AutoAddPolicy):
-    """
-    Policy for showing a yes/no prompt and adding the hostname and new 
-    host key to the known host file accordingly.
-    
-    This class simply extends the AutoAddPolicy class with a yes/no prompt.
-    """
-    def missing_host_key(self, client, hostname, key):
-        fp = hexlify(key.get_fingerprint())
-        fingerprint = ':'.join(a+b for a,b in zip(fp[::2], fp[1::2]))
-        question = """The authenticity of host '%s' can't be established.
-%s key fingerprint is %s.
-Are you sure you want to continue connecting (yes/no)? """ % (hostname, key.get_name().upper(), fingerprint)
-        while True:
-            sys.stdout.write(question)
-            choice = raw_input().lower()
-            if choice in ['yes','y']:
-                super(AgreedAddPolicy, self).missing_host_key(client, hostname, key)
-                return
-            elif choice in ['no','n']:
-                raise AuthenticityException( hostname )
-            else:
-                question = "Please type 'yes' or 'no': "
-
-class AuthenticityException (paramiko.SSHException):
-    def __init__(self, hostname):
-        paramiko.SSHException.__init__(self, 'Host key verification for server %s failed.' % hostname)
-
-
-duplicity.backend.register_backend("sftp", SftpBackend)
-duplicity.backend.register_backend("scp", SftpBackend)
-duplicity.backend.register_backend("ssh", SftpBackend)
+from duplicity import globals, log
+
+def warn_option(option, optionvar):
+    if optionvar:
+        log.Warn("Warning: Option %s is supported by the ssh pexpect backend only and will be ignored." % option)
+
+if globals.ssh_backend and \
+   globals.ssh_backend.lower().strip() == 'pexpect':
+    import ssh_pexpect
+else:
+    warn_option("--scp-command",globals.scp_command)
+    warn_option("--sftp-command",globals.sftp_command)
+    import ssh_paramiko
+

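Note: the hunk above is the complete backend dispatcher, which picks the ssh backend at import time. For reference, here is a minimal, self-contained sketch of the same selection logic -- stand-in names, print instead of log.Warn(); illustrative only, not part of the patch:

    class FakeGlobals(object):
        # stand-ins for the duplicity.globals values set by the command line
        ssh_backend = "paramiko"      # --ssh-backend
        scp_command = None            # --scp-command (pexpect backend only)
        sftp_command = None           # --sftp-command (pexpect backend only)

    def pick_ssh_backend(g):
        """Return the name of the module the dispatcher would import."""
        if g.ssh_backend and g.ssh_backend.lower().strip() == 'pexpect':
            return 'ssh_pexpect'
        for opt, val in (("--scp-command", g.scp_command),
                         ("--sftp-command", g.sftp_command)):
            if val:
                print "Warning: Option %s is ignored by the paramiko backend." % opt
        return 'ssh_paramiko'

    g = FakeGlobals()
    assert pick_ssh_backend(g) == 'ssh_paramiko'
    g.ssh_backend = "PExpect "        # lower-cased and stripped, so this still selects pexpect
    assert pick_ssh_backend(g) == 'ssh_pexpect'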
=== modified file 'duplicity/commandline.py'
--- duplicity/commandline.py	2012-02-05 19:07:35 +0000
+++ duplicity/commandline.py	2012-03-13 20:58:22 +0000
@@ -68,10 +68,6 @@
                           "and will be removed in a future release.\n"
                           "Use of default filenames is strongly suggested.") % opt
 
-def scp_deprecation(o,s,v,p):
-    print >>sys.stderr, "Warning: Option %s is deprecated and ignored. Use --ssh-options instead." % o
-
-
 def expand_fn(filename):
     return os.path.expanduser(os.path.expandvars(filename))
 
@@ -473,15 +469,11 @@
     if sys.version_info[:2] >= (2,6):
         parser.add_option("--s3-use-multiprocessing", action="store_true")
 
-    # scp command to use
-    # TRANSL: noun
-    parser.add_option("--scp-command", nargs=1, type="string",
-                      action="callback", callback=scp_deprecation)
+    # scp command to use (ssh pexpect backend)
+    parser.add_option("--scp-command", metavar=_("command"))
 
-    # sftp command to use
-    # TRANSL: noun
-    parser.add_option("--sftp-command", nargs=1, type="string",
-                      action="callback", callback=scp_deprecation)
+    # sftp command to use (ssh pexpect backend)
+    parser.add_option("--sftp-command", metavar=_("command"))
 
     # If set, use short (< 30 char) filenames for all the remote files.
     parser.add_option("--short-filenames", action="callback",
@@ -498,6 +490,9 @@
     # default to batch mode using public-key encryption
     parser.add_option("--ssh-askpass", action="store_true")
 
+    # allow the user to switch ssh backend
+    parser.add_option("--ssh-backend", metavar=_("paramiko|pexpect"))
+
     # user added ssh options
     parser.add_option("--ssh-options", action="extend", metavar=_("options"))
 

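Note: the two scp/sftp options above return as plain string options (the deprecation callback is gone) and --ssh-backend is new. A rough standalone illustration of how they parse with plain optparse follows -- the option names are taken from the diff, everything else is illustrative and not duplicity code:

    from optparse import OptionParser

    parser = OptionParser()
    # pexpect-only command overrides, plain string options again
    parser.add_option("--scp-command", metavar="command")
    parser.add_option("--sftp-command", metavar="command")
    # backend switch; unset falls back to the "paramiko" default in globals.py
    parser.add_option("--ssh-backend", metavar="paramiko|pexpect")

    opts, args = parser.parse_args(
        ["--ssh-backend", "pexpect", "--sftp-command", "sftp -oPort=2222"])
    print opts.ssh_backend     # pexpect
    print opts.sftp_command    # sftp -oPort=2222
    print opts.scp_command     # None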
=== modified file 'duplicity/globals.py'
--- duplicity/globals.py	2012-02-05 18:13:40 +0000
+++ duplicity/globals.py	2012-03-13 20:58:22 +0000
@@ -198,13 +198,16 @@
 # Wheter to specify --use-agent in GnuPG options
 use_agent = False
 
-# ssh commands to use
-scp_command = "scp"
-sftp_command = "sftp"
+# ssh commands used by the ssh pexpect backend (default to "scp" and "sftp" when unset)
+scp_command = None
+sftp_command = None
 
 # default to batch mode using public-key encryption
 ssh_askpass = False
 
+# default ssh backend is paramiko
+ssh_backend = "paramiko"
+
 # user added ssh options
 ssh_options = ""
 

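Note: scp_command and sftp_command now default to None instead of "scp"/"sftp", so only the pexpect backend needs to resolve them. A short sketch of how a consumer of these globals might do that (illustrative fallback only; the real ssh_pexpect backend may differ):

    from duplicity import globals

    # fall back to the traditional commands when the options were not given
    scp_command = globals.scp_command or "scp"
    sftp_command = globals.sftp_command or "sftp"
    # normalise the backend switch the same way the dispatcher does
    use_pexpect = (globals.ssh_backend or "paramiko").lower().strip() == "pexpect"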
=== added file 'duplicity/pexpect.py'
--- duplicity/pexpect.py	1970-01-01 00:00:00 +0000
+++ duplicity/pexpect.py	2012-03-13 20:58:22 +0000
@@ -0,0 +1,1845 @@
+"""Pexpect is a Python module for spawning child applications and controlling
+them automatically. Pexpect can be used for automating interactive applications
+such as ssh, ftp, passwd, telnet, etc. It can be used to automate setup
+scripts for duplicating software package installations on different servers. It
+can be used for automated software testing. Pexpect is in the spirit of Don
+Libes' Expect, but Pexpect is pure Python. Other Expect-like modules for Python
+require TCL and Expect or require C extensions to be compiled. Pexpect does not
+use C, Expect, or TCL extensions. It should work on any platform that supports
+the standard Python pty module. The Pexpect interface focuses on ease of use so
+that simple tasks are easy.
+
+There are two main interfaces to Pexpect -- the function, run() and the class,
+spawn. You can call the run() function to execute a command and return the
+output. This is a handy replacement for os.system().
+
+For example::
+
+    pexpect.run('ls -la')
+
+The more powerful interface is the spawn class. You can use this to spawn an
+external child command and then interact with the child by sending lines and
+expecting responses.
+
+For example::
+
+    child = pexpect.spawn('scp foo myname@xxxxxxxxxxxxxxxx:.')
+    child.expect ('Password:')
+    child.sendline (mypassword)
+
+This works even for commands that ask for passwords or other input outside of
+the normal stdio streams.
+
+Credits: Noah Spurrier, Richard Holden, Marco Molteni, Kimberley Burchett,
+Robert Stone, Hartmut Goebel, Chad Schroeder, Erick Tryzelaar, Dave Kirby, Ids
+vander Molen, George Todd, Noel Taylor, Nicolas D. Cesar, Alexander Gattin,
+Geoffrey Marshall, Francisco Lourenco, Glen Mabey, Karthik Gurusamy, Fernando
+Perez, Corey Minyard, Jon Cohen, Guillaume Chazarain, Andrew Ryan, Nick
+Craig-Wood, Andrew Stone, Jorgen Grahn (Let me know if I forgot anyone.)
+
+Free, open source, and all that good stuff.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+Pexpect Copyright (c) 2008 Noah Spurrier
+http://pexpect.sourceforge.net/
+
+$Id: pexpect.py,v 1.1 2009/01/06 22:11:37 loafman Exp $
+"""
+
+try:
+    import os, sys, time
+    import select
+    import string
+    import re
+    import struct
+    import resource
+    import types
+    import pty
+    import tty
+    import termios
+    import fcntl
+    import errno
+    import traceback
+    import signal
+except ImportError, e:
+    raise ImportError (str(e) + """
+
+A critical module was not found. Probably this operating system does not
+support it. Pexpect is intended for UNIX-like operating systems.""")
+
+__version__ = '2.3'
+__revision__ = '$Revision: 1.1 $'
+__all__ = ['ExceptionPexpect', 'EOF', 'TIMEOUT', 'spawn', 'run', 'which',
+    'split_command_line', '__version__', '__revision__']
+
+# Exception classes used by this module.
+class ExceptionPexpect(Exception):
+
+    """Base class for all exceptions raised by this module.
+    """
+
+    def __init__(self, value):
+
+        self.value = value
+
+    def __str__(self):
+
+        return str(self.value)
+
+    def get_trace(self):
+
+        """This returns an abbreviated stack trace with lines that only concern
+        the caller. In other words, the stack trace inside the Pexpect module
+        is not included. """
+
+        tblist = traceback.extract_tb(sys.exc_info()[2])
+        #tblist = filter(self.__filter_not_pexpect, tblist)
+        tblist = [item for item in tblist if self.__filter_not_pexpect(item)]
+        tblist = traceback.format_list(tblist)
+        return ''.join(tblist)
+
+    def __filter_not_pexpect(self, trace_list_item):
+
+        """This returns True if list item 0 does not have the string 'pexpect.py' in it. """
+
+        if trace_list_item[0].find('pexpect.py') == -1:
+            return True
+        else:
+            return False
+
+class EOF(ExceptionPexpect):
+
+    """Raised when EOF is read from a child. This usually means the child has exited."""
+
+class TIMEOUT(ExceptionPexpect):
+
+    """Raised when a read does not complete within the timeout. """
+
+##class TIMEOUT_PATTERN(TIMEOUT):
+##    """Raised when the pattern match time exceeds the timeout.
+##    This is different than a read TIMEOUT because the child process may
+##    give output, thus never give a TIMEOUT, but the output
+##    may never match a pattern.
+##    """
+##class MAXBUFFER(ExceptionPexpect):
+##    """Raised when a scan buffer fills before matching an expected pattern."""
+
+def run (command, timeout=-1, withexitstatus=False, events=None, extra_args=None, logfile=None, cwd=None, env=None):
+
+    """
+    This function runs the given command; waits for it to finish; then
+    returns all output as a string. STDERR is included in output. If the full
+    path to the command is not given then the path is searched.
+
+    Note that lines are terminated by CR/LF (\\r\\n) combination even on
+    UNIX-like systems because this is the standard for pseudo ttys. If you set
+    'withexitstatus' to true, then run will return a tuple of (command_output,
+    exitstatus). If 'withexitstatus' is false then this returns just
+    command_output.
+
+    The run() function can often be used instead of creating a spawn instance.
+    For example, the following code uses spawn::
+
+        from pexpect import * #@UnusedWildImport
+        child = spawn('scp foo myname@xxxxxxxxxxxxxxxx:.')
+        child.expect ('(?i)password')
+        child.sendline (mypassword)
+
+    The previous code can be replaced with the following::
+
+        from pexpect import * #@UnusedWildImport
+        run ('scp foo myname@xxxxxxxxxxxxxxxx:.', events={'(?i)password': mypassword})
+
+    Examples
+    ========
+
+    Start the apache daemon on the local machine::
+
+        from pexpect import * #@UnusedWildImport
+        run ("/usr/local/apache/bin/apachectl start")
+
+    Check in a file using SVN::
+
+        from pexpect import * #@UnusedWildImport
+        run ("svn ci -m 'automatic commit' my_file.py")
+
+    Run a command and capture exit status::
+
+        from pexpect import * #@UnusedWildImport
+        (command_output, exitstatus) = run ('ls -l /bin', withexitstatus=1)
+
+    Tricky Examples
+    ===============
+
+    The following will run SSH and execute 'ls -l' on the remote machine. The
+    password 'secret' will be sent if the '(?i)password' pattern is ever seen::
+
+        run ("ssh username@xxxxxxxxxxxxxxxxxxx 'ls -l'", events={'(?i)password':'secret\\n'})
+
+    This will start mencoder to rip a video from DVD. This will also display
+    progress ticks every 5 seconds as it runs. For example::
+
+        from pexpect import * #@UnusedWildImport
+        def print_ticks(d):
+            print d['event_count'],
+        run ("mencoder dvd://1 -o video.avi -oac copy -ovc copy", events={TIMEOUT:print_ticks}, timeout=5)
+
+    The 'events' argument should be a dictionary of patterns and responses.
+    Whenever one of the patterns is seen in the command output, run() will send
+    the associated response string. Note that you should put newlines in your
+    string if Enter is necessary. The responses may also contain callback
+    functions. A callback is a function that takes a dictionary as an argument.
+    The dictionary contains all the locals from the run() function, so you can
+    access the child spawn object or any other variable defined in run()
+    (event_count, child, and extra_args are the most useful). A callback may
+    return True to stop the current run process; otherwise run() continues
+    until the next event. A callback may also return a string which will be
+    sent to the child. 'extra_args' is not used by run() directly. It provides
+    a way to pass data to a callback function through the locals dictionary
+    passed to the callback. """
+
+    if timeout == -1:
+        child = spawn(command, maxread=2000, logfile=logfile, cwd=cwd, env=env)
+    else:
+        child = spawn(command, timeout=timeout, maxread=2000, logfile=logfile, cwd=cwd, env=env)
+    if events is not None:
+        patterns = events.keys()
+        responses = events.values()
+    else:
+        patterns=None # We assume that EOF or TIMEOUT will save us.
+        responses=None
+    child_result_list = []
+    event_count = 0
+    while 1:
+        try:
+            index = child.expect (patterns)
+            if type(child.after) in types.StringTypes:
+                child_result_list.append(child.before + child.after)
+            else: # child.after may have been a TIMEOUT or EOF, so don't cat those.
+                child_result_list.append(child.before)
+            if type(responses[index]) in types.StringTypes:
+                child.send(responses[index])
+            elif type(responses[index]) is types.FunctionType:
+                callback_result = responses[index](locals())
+                sys.stdout.flush()
+                if type(callback_result) in types.StringTypes:
+                    child.send(callback_result)
+                elif callback_result:
+                    break
+            else:
+                raise TypeError ('The callback must be a string or function type.')
+            event_count = event_count + 1
+        except TIMEOUT, e:
+            child_result_list.append(child.before)
+            break
+        except EOF, e:
+            child_result_list.append(child.before)
+            break
+    child_result = ''.join(child_result_list)
+    if withexitstatus:
+        child.close()
+        return (child_result, child.exitstatus)
+    else:
+        return child_result
+
+class spawn (object):
+
+    """This is the main class interface for Pexpect. Use this class to start
+    and control child applications. """
+
+    def __init__(self, command, args=[], timeout=30, maxread=2000, searchwindowsize=None, logfile=None, cwd=None, env=None):
+
+        """This is the constructor. The command parameter may be a string that
+        includes a command and any arguments to the command. For example::
+
+            child = pexpect.spawn ('/usr/bin/ftp')
+            child = pexpect.spawn ('/usr/bin/ssh user@xxxxxxxxxxx')
+            child = pexpect.spawn ('ls -latr /tmp')
+
+        You may also construct it with a list of arguments like so::
+
+            child = pexpect.spawn ('/usr/bin/ftp', [])
+            child = pexpect.spawn ('/usr/bin/ssh', ['user@xxxxxxxxxxx'])
+            child = pexpect.spawn ('ls', ['-latr', '/tmp'])
+
+        After this the child application will be created and will be ready to
+        talk to. For normal use, see expect() and send() and sendline().
+
+        Remember that Pexpect does NOT interpret shell meta characters such as
+        redirect, pipe, or wild cards (>, |, or *). This is a common mistake.
+        If you want to run a command and pipe it through another command then
+        you must also start a shell. For example::
+
+            child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > log_list.txt"')
+            child.expect(pexpect.EOF)
+
+        The second form of spawn (where you pass a list of arguments) is useful
+        in situations where you wish to spawn a command and pass it its own
+        argument list. This can make syntax more clear. For example, the
+        following is equivalent to the previous example::
+
+            shell_cmd = 'ls -l | grep LOG > log_list.txt'
+            child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
+            child.expect(pexpect.EOF)
+
+        The maxread attribute sets the read buffer size. This is the maximum number
+        of bytes that Pexpect will try to read from a TTY at one time. Setting
+        the maxread size to 1 will turn off buffering. Setting the maxread
+        value higher may help performance in cases where large amounts of
+        output are read back from the child. This feature is useful in
+        conjunction with searchwindowsize.
+
+        The searchwindowsize attribute sets how far back in the incoming
+        search buffer Pexpect will search for pattern matches. Every time
+        Pexpect reads some data from the child it will append the data to the
+        incoming buffer. The default is to search from the beginning of the
+        incoming buffer each time new data is read from the child. But this is
+        very inefficient if you are running a command that generates a large
+        amount of data in which you want to match. The searchwindowsize does
+        not affect the size of the incoming data buffer. You will still have
+        access to the full buffer after expect() returns.
+
+        The logfile member turns on or off logging. All input and output will
+        be copied to the given file object. Set logfile to None to stop
+        logging. This is the default. Set logfile to sys.stdout to echo
+        everything to standard output. The logfile is flushed after each write.
+
+        Example log input and output to a file::
+
+            child = pexpect.spawn('some_command')
+            fout = file('mylog.txt','w')
+            child.logfile = fout
+
+        Example log to stdout::
+
+            child = pexpect.spawn('some_command')
+            child.logfile = sys.stdout
+
+        The logfile_read and logfile_send members can be used to separately log
+        the input from the child and output sent to the child. Sometimes you
+        don't want to see everything you write to the child. You only want to
+        log what the child sends back. For example::
+
+            child = pexpect.spawn('some_command')
+            child.logfile_read = sys.stdout
+
+        To separately log output sent to the child use logfile_send::
+
+            self.logfile_send = fout
+
+        The delaybeforesend helps overcome a weird behavior that many users
+        were experiencing. The typical problem was that a user would expect() a
+        "Password:" prompt and then immediately call sendline() to send the
+        password. The user would then see that their password was echoed back
+        to them. Passwords don't normally echo. The problem is caused by the
+        fact that most applications print out the "Password" prompt and then
+        turn off stdin echo, but if you send your password before the
+        application turned off echo, then you get your password echoed.
+        Normally this wouldn't be a problem when interacting with a human at a
+        real keyboard. If you introduce a slight delay just before writing then
+        this seems to clear up the problem. This was such a common problem for
+        many users that I decided that the default pexpect behavior should be
+        to sleep just before writing to the child application. 1/20th of a
+        second (50 ms) seems to be enough to clear up the problem. You can set
+        delaybeforesend to 0 to return to the old behavior. Most Linux machines
+        don't like this to be below 0.03. I don't know why.
+
+        Note that spawn is clever about finding commands on your path.
+        It uses the same logic that "which" uses to find executables.
+
+        If you wish to get the exit status of the child you must call the
+        close() method. The exit or signal status of the child will be stored
+        in self.exitstatus or self.signalstatus. If the child exited normally
+        then exitstatus will store the exit return code and signalstatus will
+        be None. If the child was terminated abnormally with a signal then
+        signalstatus will store the signal value and exitstatus will be None.
+        If you need more detail you can also read the self.status member which
+        stores the status returned by os.waitpid. You can interpret this using
+        os.WIFEXITED/os.WEXITSTATUS or os.WIFSIGNALED/os.TERMSIG. """
+
+        self.STDIN_FILENO = pty.STDIN_FILENO
+        self.STDOUT_FILENO = pty.STDOUT_FILENO
+        self.STDERR_FILENO = pty.STDERR_FILENO
+        self.stdin = sys.stdin
+        self.stdout = sys.stdout
+        self.stderr = sys.stderr
+
+        self.searcher = None
+        self.ignorecase = False
+        self.before = None
+        self.after = None
+        self.match = None
+        self.match_index = None
+        self.terminated = True
+        self.exitstatus = None
+        self.signalstatus = None
+        self.status = None # status returned by os.waitpid
+        self.flag_eof = False
+        self.pid = None
+        self.child_fd = -1 # initially closed
+        self.timeout = timeout
+        self.delimiter = EOF
+        self.logfile = logfile
+        self.logfile_read = None # input from child (read_nonblocking)
+        self.logfile_send = None # output to send (send, sendline)
+        self.maxread = maxread # max bytes to read at one time into buffer
+        self.buffer = '' # This is the read buffer. See maxread.
+        self.searchwindowsize = searchwindowsize # Anything before searchwindowsize point is preserved, but not searched.
+        # Most Linux machines don't like delaybeforesend to be below 0.03 (30 ms).
+        self.delaybeforesend = 0.05 # Sets sleep time used just before sending data to child. Time in seconds.
+        self.delayafterclose = 0.1 # Sets delay in close() method to allow kernel time to update process status. Time in seconds.
+        self.delayafterterminate = 0.1 # Sets delay in terminate() method to allow kernel time to update process status. Time in seconds.
+        self.softspace = False # File-like object.
+        self.name = '<' + repr(self) + '>' # File-like object.
+        self.encoding = None # File-like object.
+        self.closed = True # File-like object.
+        self.cwd = cwd
+        self.env = env
+        self.__irix_hack = (sys.platform.lower().find('irix')>=0) # This flags if we are running on irix
+        # Solaris uses internal __fork_pty(). All others use pty.fork().
+        if (sys.platform.lower().find('solaris')>=0) or (sys.platform.lower().find('sunos5')>=0):
+            self.use_native_pty_fork = False
+        else:
+            self.use_native_pty_fork = True
+
+
+        # allow dummy instances for subclasses that may not use command or args.
+        if command is None:
+            self.command = None
+            self.args = None
+            self.name = '<pexpect factory incomplete>'
+        else:
+            self._spawn (command, args)
+
+    def __del__(self):
+
+        """This makes sure that no system resources are left open. Python only
+        garbage collects Python objects. OS file descriptors are not Python
+        objects, so they must be handled explicitly. If the child file
+        descriptor was opened outside of this class (passed to the constructor)
+        then this does not close it. """
+
+        if not self.closed:
+            # It is possible for __del__ methods to execute during the
+            # teardown of the Python VM itself. Thus self.close() may
+            # trigger an exception because os.close may be None.
+            # -- Fernando Perez
+            try:
+                self.close()
+            except AttributeError:
+                pass
+
+    def __str__(self):
+
+        """This returns a human-readable string that represents the state of
+        the object. """
+
+        s = []
+        s.append(repr(self))
+        s.append('version: ' + __version__ + ' (' + __revision__ + ')')
+        s.append('command: ' + str(self.command))
+        s.append('args: ' + str(self.args))
+        s.append('searcher: ' + str(self.searcher))
+        s.append('buffer (last 100 chars): ' + str(self.buffer)[-100:])
+        s.append('before (last 100 chars): ' + str(self.before)[-100:])
+        s.append('after: ' + str(self.after))
+        s.append('match: ' + str(self.match))
+        s.append('match_index: ' + str(self.match_index))
+        s.append('exitstatus: ' + str(self.exitstatus))
+        s.append('flag_eof: ' + str(self.flag_eof))
+        s.append('pid: ' + str(self.pid))
+        s.append('child_fd: ' + str(self.child_fd))
+        s.append('closed: ' + str(self.closed))
+        s.append('timeout: ' + str(self.timeout))
+        s.append('delimiter: ' + str(self.delimiter))
+        s.append('logfile: ' + str(self.logfile))
+        s.append('logfile_read: ' + str(self.logfile_read))
+        s.append('logfile_send: ' + str(self.logfile_send))
+        s.append('maxread: ' + str(self.maxread))
+        s.append('ignorecase: ' + str(self.ignorecase))
+        s.append('searchwindowsize: ' + str(self.searchwindowsize))
+        s.append('delaybeforesend: ' + str(self.delaybeforesend))
+        s.append('delayafterclose: ' + str(self.delayafterclose))
+        s.append('delayafterterminate: ' + str(self.delayafterterminate))
+        return '\n'.join(s)
+
+    def _spawn(self,command,args=[]):
+
+        """This starts the given command in a child process. This does all the
+        fork/exec type of stuff for a pty. This is called by __init__. If args
+        is empty then command will be parsed (split on spaces) and args will be
+        set to parsed arguments. """
+
+        # The pid and child_fd of this object get set by this method.
+        # Note that it is difficult for this method to fail.
+        # You cannot detect if the child process cannot start.
+        # So the only way you can tell if the child process started
+        # or not is to try to read from the file descriptor. If you get
+        # EOF immediately then it means that the child is already dead.
+        # That may not necessarily be bad because you may have spawned a child
+        # that performs some task; creates no stdout output; and then dies.
+
+        # If command is an int type then it may represent a file descriptor.
+        if type(command) == type(0):
+            raise ExceptionPexpect ('Command is an int type. If this is a file descriptor then maybe you want to use fdpexpect.fdspawn which takes an existing file descriptor instead of a command string.')
+
+        if type (args) != type([]):
+            raise TypeError ('The argument, args, must be a list.')
+
+        if args == []:
+            self.args = split_command_line(command)
+            self.command = self.args[0]
+        else:
+            self.args = args[:] # work with a copy
+            self.args.insert (0, command)
+            self.command = command
+
+        command_with_path = which(self.command)
+        if command_with_path is None:
+            raise ExceptionPexpect ('The command was not found or was not executable: %s.' % self.command)
+        self.command = command_with_path
+        self.args[0] = self.command
+
+        self.name = '<' + ' '.join (self.args) + '>'
+
+        assert self.pid is None, 'The pid member should be None.'
+        assert self.command is not None, 'The command member should not be None.'
+
+        if self.use_native_pty_fork:
+            try:
+                self.pid, self.child_fd = pty.fork()
+            except OSError, e:
+                raise ExceptionPexpect('Error! pty.fork() failed: ' + str(e))
+        else: # Use internal __fork_pty
+            self.pid, self.child_fd = self.__fork_pty()
+
+        if self.pid == 0: # Child
+            try:
+                self.child_fd = sys.stdout.fileno() # used by setwinsize()
+                self.setwinsize(24, 80)
+            except Exception:
+                # Some platforms do not like setwinsize (Cygwin).
+                # This will cause problem when running applications that
+                # are very picky about window size.
+                # This is a serious limitation, but not a show stopper.
+                pass
+            # Do not allow child to inherit open file descriptors from parent.
+            max_fd = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
+            for i in range (3, max_fd):
+                try:
+                    os.close (i)
+                except OSError:
+                    pass
+
+            # I don't know why this works, but ignoring SIGHUP fixes a
+            # problem when trying to start a Java daemon with sudo
+            # (specifically, Tomcat).
+            signal.signal(signal.SIGHUP, signal.SIG_IGN)
+
+            if self.cwd is not None:
+                os.chdir(self.cwd)
+            if self.env is None:
+                os.execv(self.command, self.args)
+            else:
+                os.execvpe(self.command, self.args, self.env)
+
+        # Parent
+        self.terminated = False
+        self.closed = False
+
+    def __fork_pty(self):
+
+        """This implements a substitute for the forkpty system call. This
+        should be more portable than the pty.fork() function. Specifically,
+        this should work on Solaris.
+
+        Modified 10.06.05 by Geoff Marshall: Implemented __fork_pty() method to
+        resolve the issue with Python's pty.fork() not supporting Solaris,
+        particularly ssh. Based on patch to posixmodule.c authored by Noah
+        Spurrier::
+
+            http://mail.python.org/pipermail/python-dev/2003-May/035281.html
+
+        """
+
+        parent_fd, child_fd = os.openpty()
+        if parent_fd < 0 or child_fd < 0:
+            raise ExceptionPexpect, "Error! Could not open pty with os.openpty()."
+
+        pid = os.fork()
+        if pid < 0:
+            raise ExceptionPexpect, "Error! Failed os.fork()."
+        elif pid == 0:
+            # Child.
+            os.close(parent_fd)
+            self.__pty_make_controlling_tty(child_fd)
+
+            os.dup2(child_fd, 0)
+            os.dup2(child_fd, 1)
+            os.dup2(child_fd, 2)
+
+            if child_fd > 2:
+                os.close(child_fd)
+        else:
+            # Parent.
+            os.close(child_fd)
+
+        return pid, parent_fd
+
+    def __pty_make_controlling_tty(self, tty_fd):
+
+        """This makes the pseudo-terminal the controlling tty. This should be
+        more portable than the pty.fork() function. Specifically, this should
+        work on Solaris. """
+
+        child_name = os.ttyname(tty_fd)
+
+        # Disconnect from controlling tty if still connected.
+        fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
+        if fd >= 0:
+            os.close(fd)
+
+        os.setsid()
+
+        # Verify we are disconnected from controlling tty
+        try:
+            fd = os.open("/dev/tty", os.O_RDWR | os.O_NOCTTY);
+            if fd >= 0:
+                os.close(fd)
+                raise ExceptionPexpect, "Error! We are not disconnected from a controlling tty."
+        except Exception:
+            # Good! We are disconnected from a controlling tty.
+            pass
+
+        # Verify we can open child pty.
+        fd = os.open(child_name, os.O_RDWR);
+        if fd < 0:
+            raise ExceptionPexpect, "Error! Could not open child pty, " + child_name
+        else:
+            os.close(fd)
+
+        # Verify we now have a controlling tty.
+        fd = os.open("/dev/tty", os.O_WRONLY)
+        if fd < 0:
+            raise ExceptionPexpect, "Error! Could not open controlling tty, /dev/tty"
+        else:
+            os.close(fd)
+
+    def fileno (self):   # File-like object.
+
+        """This returns the file descriptor of the pty for the child.
+        """
+
+        return self.child_fd
+
+    def close (self, force=True):   # File-like object.
+
+        """This closes the connection with the child application. Note that
+        calling close() more than once is valid. This emulates standard Python
+        behavior with files. Set force to True if you want to make sure that
+        the child is terminated (SIGKILL is sent if the child ignores SIGHUP
+        and SIGINT). """
+
+        if not self.closed:
+            self.flush()
+            os.close (self.child_fd)
+            time.sleep(self.delayafterclose) # Give kernel time to update process status.
+            if self.isalive():
+                if not self.terminate(force):
+                    raise ExceptionPexpect ('close() could not terminate the child using terminate()')
+            self.child_fd = -1
+            self.closed = True
+            #self.pid = None
+
+    def flush (self):   # File-like object.
+
+        """This does nothing. It is here to support the interface for a
+        File-like object. """
+
+        pass
+
+    def isatty (self):   # File-like object.
+
+        """This returns True if the file descriptor is open and connected to a
+        tty(-like) device, else False. """
+
+        return os.isatty(self.child_fd)
+
+    def waitnoecho (self, timeout=-1):
+
+        """This waits until the terminal ECHO flag is set False. This returns
+        True if the echo mode is off. This returns False if the ECHO flag was
+        not set False before the timeout. This can be used to detect when the
+        child is waiting for a password. Usually a child application will turn
+        off echo mode when it is waiting for the user to enter a password. For
+        example, instead of expecting the "password:" prompt you can wait for
+        the child to set ECHO off::
+
+            p = pexpect.spawn ('ssh user@xxxxxxxxxxx')
+            p.waitnoecho()
+            p.sendline(mypassword)
+
+        If timeout is None then this method will block forever until the ECHO
+        flag is False.
+
+        """
+
+        if timeout == -1:
+            timeout = self.timeout
+        if timeout is not None:
+            end_time = time.time() + timeout
+        while True:
+            if not self.getecho():
+                return True
+            if timeout < 0 and timeout is not None:
+                return False
+            if timeout is not None:
+                timeout = end_time - time.time()
+            time.sleep(0.1)
+
+    def getecho (self):
+
+        """This returns the terminal echo mode. This returns True if echo is
+        on or False if echo is off. Child applications that are expecting you
+        to enter a password often set ECHO False. See waitnoecho(). """
+
+        attr = termios.tcgetattr(self.child_fd)
+        if attr[3] & termios.ECHO:
+            return True
+        return False
+
+    def setecho (self, state):
+
+        """This sets the terminal echo mode on or off. Note that anything the
+        child sent before the echo will be lost, so you should be sure that
+        your input buffer is empty before you call setecho(). For example, the
+        following will work as expected::
+
+            p = pexpect.spawn('cat')
+            p.sendline ('1234') # We will see this twice (once from tty echo and again from cat).
+            p.expect (['1234'])
+            p.expect (['1234'])
+            p.setecho(False) # Turn off tty echo
+            p.sendline ('abcd') # We will see this only once (echoed by cat).
+            p.sendline ('wxyz') # We will see this only once (echoed by cat).
+            p.expect (['abcd'])
+            p.expect (['wxyz'])
+
+        The following WILL NOT WORK because the lines sent before the setecho
+        will be lost::
+
+            p = pexpect.spawn('cat')
+            p.sendline ('1234') # We will see this twice (once from tty echo and again from cat).
+            p.setecho(False) # Turn off tty echo
+            p.sendline ('abcd') # We will see this only once (echoed by cat).
+            p.sendline ('wxyz') # We will see this only once (echoed by cat).
+            p.expect (['1234'])
+            p.expect (['1234'])
+            p.expect (['abcd'])
+            p.expect (['wxyz'])
+        """
+
+        self.child_fd
+        attr = termios.tcgetattr(self.child_fd)
+        if state:
+            attr[3] = attr[3] | termios.ECHO
+        else:
+            attr[3] = attr[3] & ~termios.ECHO
+        # I tried TCSADRAIN and TCSAFLUSH, but these were inconsistent
+        # and blocked on some platforms. TCSADRAIN is probably ideal if it worked.
+        termios.tcsetattr(self.child_fd, termios.TCSANOW, attr)
+
+    def read_nonblocking (self, size = 1, timeout = -1):
+
+        """This reads at most size characters from the child application. It
+        includes a timeout. If the read does not complete within the timeout
+        period then a TIMEOUT exception is raised. If the end of file is read
+        then an EOF exception will be raised. If a log file was set using
+        setlog() then all data will also be written to the log file.
+
+        If timeout is None then the read may block indefinitely. If timeout is -1
+        then the self.timeout value is used. If timeout is 0 then the child is
+        polled and if there was no data immediately ready then this will raise
+        a TIMEOUT exception.
+
+        The timeout refers only to the amount of time to read at least one
+        character. This is not affected by the 'size' parameter, so if you call
+        read_nonblocking(size=100, timeout=30) and only one character is
+        available right away then one character will be returned immediately.
+        It will not wait for 30 seconds for another 99 characters to come in.
+
+        This is a wrapper around os.read(). It uses select.select() to
+        implement the timeout. """
+
+        if self.closed:
+            raise ValueError ('I/O operation on closed file in read_nonblocking().')
+
+        if timeout == -1:
+            timeout = self.timeout
+
+        # Note that some systems such as Solaris do not give an EOF when
+        # the child dies. In fact, you can still try to read
+        # from the child_fd -- it will block forever or until TIMEOUT.
+        # For this case, I test isalive() before doing any reading.
+        # If isalive() is false, then I pretend that this is the same as EOF.
+        if not self.isalive():
+            r,w,e = self.__select([self.child_fd], [], [], 0) # timeout of 0 means "poll" @UnusedVariable
+            if not r:
+                self.flag_eof = True
+                raise EOF ('End Of File (EOF) in read_nonblocking(). Braindead platform.')
+        elif self.__irix_hack:
+            # This is a hack for Irix. It seems that Irix requires a long delay before checking isalive.
+            # This adds a 2 second delay, but only when the child is terminated.
+            r, w, e = self.__select([self.child_fd], [], [], 2) #@UnusedVariable
+            if not r and not self.isalive():
+                self.flag_eof = True
+                raise EOF ('End Of File (EOF) in read_nonblocking(). Pokey platform.')
+
+        r,w,e = self.__select([self.child_fd], [], [], timeout) #@UnusedVariable
+
+        if not r:
+            if not self.isalive():
+                # Some platforms, such as Irix, will claim that their processes are alive;
+                # then timeout on the select; and then finally admit that they are not alive.
+                self.flag_eof = True
+                raise EOF ('End of File (EOF) in read_nonblocking(). Very pokey platform.')
+            else:
+                raise TIMEOUT ('Timeout exceeded in read_nonblocking().')
+
+        if self.child_fd in r:
+            try:
+                s = os.read(self.child_fd, size)
+            except OSError, e: # Linux does this
+                self.flag_eof = True
+                raise EOF ('End Of File (EOF) in read_nonblocking(). Exception style platform.')
+            if s == '': # BSD style
+                self.flag_eof = True
+                raise EOF ('End Of File (EOF) in read_nonblocking(). Empty string style platform.')
+
+            if self.logfile is not None:
+                self.logfile.write (s)
+                self.logfile.flush()
+            if self.logfile_read is not None:
+                self.logfile_read.write (s)
+                self.logfile_read.flush()
+
+            return s
+
+        raise ExceptionPexpect ('Reached an unexpected state in read_nonblocking().')
+
+    def read (self, size = -1):   # File-like object.
+
+        """This reads at most "size" bytes from the file (less if the read hits
+        EOF before obtaining size bytes). If the size argument is negative or
+        omitted, read all data until EOF is reached. The bytes are returned as
+        a string object. An empty string is returned when EOF is encountered
+        immediately. """
+
+        if size == 0:
+            return ''
+        if size < 0:
+            self.expect (self.delimiter) # delimiter default is EOF
+            return self.before
+
+        # I could have done this more directly by not using expect(), but
+        # I deliberately decided to couple read() to expect() so that
+        # I would catch any bugs early and ensure consistent behavior.
+        # It's a little less efficient, but there is less for me to
+        # worry about if I have to later modify read() or expect().
+        # Note, it's OK if size==-1 in the regex. That just means it
+        # will never match anything in which case we stop only on EOF.
+        cre = re.compile('.{%d}' % size, re.DOTALL)
+        index = self.expect ([cre, self.delimiter]) # delimiter default is EOF
+        if index == 0:
+            return self.after ### self.before should be ''. Should I assert this?
+        return self.before
+
+    def readline (self, size = -1):    # File-like object.
+
+        """This reads and returns one entire line. A trailing newline is kept
+        in the string, but may be absent when a file ends with an incomplete
+        line. Note: This readline() looks for a \\r\\n pair even on UNIX
+        because this is what the pseudo tty device returns. So contrary to what
+        you may expect you will receive the newline as \\r\\n. An empty string
+        is returned when EOF is hit immediately. Currently, the size argument is
+        mostly ignored, so this behavior is not standard for a file-like
+        object. If size is 0 then an empty string is returned. """
+
+        if size == 0:
+            return ''
+        index = self.expect (['\r\n', self.delimiter]) # delimiter default is EOF
+        if index == 0:
+            return self.before + '\r\n'
+        else:
+            return self.before
+
+    def __iter__ (self):    # File-like object.
+
+        """This is to support iterators over a file-like object.
+        """
+
+        return self
+
+    def next (self):    # File-like object.
+
+        """This is to support iterators over a file-like object.
+        """
+
+        result = self.readline()
+        if result == "":
+            raise StopIteration
+        return result
+
+    def readlines (self, sizehint = -1):    # File-like object.
+
+        """This reads until EOF using readline() and returns a list containing
+        the lines thus read. The optional "sizehint" argument is ignored. """
+
+        lines = []
+        while True:
+            line = self.readline()
+            if not line:
+                break
+            lines.append(line)
+        return lines
+
+    def write(self, s):   # File-like object.
+
+        """This is similar to send() except that there is no return value.
+        """
+
+        self.send (s)
+
+    def writelines (self, sequence):   # File-like object.
+
+        """This calls write() for each element in the sequence. The sequence
+        can be any iterable object producing strings, typically a list of
+        strings. This does not add line separators. There is no return value.
+        """
+
+        for s in sequence:
+            self.write (s)
+
+    def send(self, s):
+
+        """This sends a string to the child process. This returns the number of
+        bytes written. If a log file was set then the data is also written to
+        the log. """
+
+        time.sleep(self.delaybeforesend)
+        if self.logfile is not None:
+            self.logfile.write (s)
+            self.logfile.flush()
+        if self.logfile_send is not None:
+            self.logfile_send.write (s)
+            self.logfile_send.flush()
+        c = os.write(self.child_fd, s)
+        return c
+
+    def sendline(self, s=''):
+
+        """This is like send(), but it adds a line feed (os.linesep). This
+        returns the number of bytes written. """
+
+        n = self.send(s)
+        n = n + self.send (os.linesep)
+        return n
+
+    def sendcontrol(self, char):
+
+        """This sends a control character to the child such as Ctrl-C or
+        Ctrl-D. For example, to send a Ctrl-G (ASCII 7)::
+
+            child.sendcontrol('g')
+
+        See also, sendintr() and sendeof().
+        """
+
+        char = char.lower()
+        a = ord(char)
+        if a>=97 and a<=122:
+            a = a - ord('a') + 1
+            return self.send (chr(a))
+        d = {'@':0, '`':0,
+            '[':27, '{':27,
+            '\\':28, '|':28,
+            ']':29, '}': 29,
+            '^':30, '~':30,
+            '_':31,
+            '?':127}
+        if char not in d:
+            return 0
+        return self.send (chr(d[char]))
+
+    def sendeof(self):
+
+        """This sends an EOF to the child. This sends a character which causes
+        the pending parent output buffer to be sent to the waiting child
+        program without waiting for end-of-line. If it is the first character
+        of the line, the read() in the user program returns 0, which signifies
+        end-of-file. This means to work as expected a sendeof() has to be
+        called at the beginning of a line. This method does not send a newline.
+        It is the responsibility of the caller to ensure the eof is sent at the
+        beginning of a line. """
+
+        ### Hmmm... how do I send an EOF?
+        ###C  if ((m = write(pty, *buf, p - *buf)) < 0)
+        ###C      return (errno == EWOULDBLOCK) ? n : -1;
+        #fd = sys.stdin.fileno()
+        #old = termios.tcgetattr(fd) # remember current state
+        #attr = termios.tcgetattr(fd)
+        #attr[3] = attr[3] | termios.ICANON # ICANON must be set to recognize EOF
+        #try: # use try/finally to ensure state gets restored
+        #    termios.tcsetattr(fd, termios.TCSADRAIN, attr)
+        #    if hasattr(termios, 'CEOF'):
+        #        os.write (self.child_fd, '%c' % termios.CEOF)
+        #    else:
+        #        # Silly platform does not define CEOF so assume CTRL-D
+        #        os.write (self.child_fd, '%c' % 4)
+        #finally: # restore state
+        #    termios.tcsetattr(fd, termios.TCSADRAIN, old)
+        if hasattr(termios, 'VEOF'):
+            char = termios.tcgetattr(self.child_fd)[6][termios.VEOF]
+        else:
+            # platform does not define VEOF so assume CTRL-D
+            char = chr(4)
+        self.send(char)
+
+    def sendintr(self):
+
+        """This sends a SIGINT to the child. It does not require
+        the SIGINT to be the first character on a line. """
+
+        if hasattr(termios, 'VINTR'):
+            char = termios.tcgetattr(self.child_fd)[6][termios.VINTR]
+        else:
+            # platform does not define VINTR so assume CTRL-C
+            char = chr(3)
+        self.send (char)
+
+    def eof (self):
+
+        """This returns True if the EOF exception was ever raised.
+        """
+
+        return self.flag_eof
+
+    def terminate(self, force=False):
+
+        """This forces a child process to terminate. It starts nicely with
+        SIGHUP and SIGINT. If "force" is True then it moves on to SIGKILL. This
+        returns True if the child was terminated. This returns False if the
+        child could not be terminated. """
+
+        if not self.isalive():
+            return True
+        try:
+            self.kill(signal.SIGHUP)
+            time.sleep(self.delayafterterminate)
+            if not self.isalive():
+                return True
+            self.kill(signal.SIGCONT)
+            time.sleep(self.delayafterterminate)
+            if not self.isalive():
+                return True
+            self.kill(signal.SIGINT)
+            time.sleep(self.delayafterterminate)
+            if not self.isalive():
+                return True
+            if force:
+                self.kill(signal.SIGKILL)
+                time.sleep(self.delayafterterminate)
+                if not self.isalive():
+                    return True
+                else:
+                    return False
+            return False
+        except OSError, e:
+            # I think there are kernel timing issues that sometimes cause
+            # this to happen. I think isalive() reports True, but the
+            # process is dead to the kernel.
+            # Make one last attempt to see if the kernel is up to date.
+            time.sleep(self.delayafterterminate)
+            if not self.isalive():
+                return True
+            else:
+                return False
+
+    def wait(self):
+
+        """This waits until the child exits. This is a blocking call. This will
+        not read any data from the child, so this will block forever if the
+        child has unread output and has terminated. In other words, the child
+        may have printed output then called exit(); but, technically, the child
+        is still alive until its output is read. """
+
+        if self.isalive():
+            pid, status = os.waitpid(self.pid, 0) #@UnusedVariable
+        else:
+            raise ExceptionPexpect ('Cannot wait for dead child process.')
+        self.exitstatus = os.WEXITSTATUS(status)
+        if os.WIFEXITED (status):
+            self.status = status
+            self.exitstatus = os.WEXITSTATUS(status)
+            self.signalstatus = None
+            self.terminated = True
+        elif os.WIFSIGNALED (status):
+            self.status = status
+            self.exitstatus = None
+            self.signalstatus = os.WTERMSIG(status)
+            self.terminated = True
+        elif os.WIFSTOPPED (status):
+            raise ExceptionPexpect ('Wait was called for a child process that is stopped. This is not supported. Is some other process attempting job control with our child pid?')
+        return self.exitstatus
+
+    def isalive(self):
+
+        """This tests if the child process is running or not. This is
+        non-blocking. If the child was terminated then this will read the
+        exitstatus or signalstatus of the child. This returns True if the child
+        process appears to be running or False if not. It can take literally
+        SECONDS for Solaris to return the right status. """
+
+        if self.terminated:
+            return False
+
+        if self.flag_eof:
+            # This is for Linux, which requires the blocking form of waitpid to get
+            # status of a defunct process. This is super-lame. The flag_eof would have
+            # been set in read_nonblocking(), so this should be safe.
+            waitpid_options = 0
+        else:
+            waitpid_options = os.WNOHANG
+
+        try:
+            pid, status = os.waitpid(self.pid, waitpid_options)
+        except OSError, e: # No child processes
+            if e[0] == errno.ECHILD:
+                raise ExceptionPexpect ('isalive() encountered condition where "terminated" is 0, but there was no child process. Did someone else call waitpid() on our process?')
+            else:
+                raise e
+
+        # I have to do this twice for Solaris. I can't even believe that I figured this out...
+        # If waitpid() returns 0 it means that no child process wishes to
+        # report, and the value of status is undefined.
+        if pid == 0:
+            try:
+                pid, status = os.waitpid(self.pid, waitpid_options) ### os.WNOHANG) # Solaris!
+            except OSError, e: # This should never happen...
+                if e[0] == errno.ECHILD:
+                    raise ExceptionPexpect ('isalive() encountered condition that should never happen. There was no child process. Did someone else call waitpid() on our process?')
+                else:
+                    raise e
+
+            # If pid is still 0 after two calls to waitpid() then
+            # the process really is alive. This seems to work on all platforms, except
+            # for Irix which seems to require a blocking call on waitpid or select, so I let read_nonblocking
+            # take care of this situation (unfortunately, this requires waiting through the timeout).
+            if pid == 0:
+                return True
+
+        if pid == 0:
+            return True
+
+        if os.WIFEXITED (status):
+            self.status = status
+            self.exitstatus = os.WEXITSTATUS(status)
+            self.signalstatus = None
+            self.terminated = True
+        elif os.WIFSIGNALED (status):
+            self.status = status
+            self.exitstatus = None
+            self.signalstatus = os.WTERMSIG(status)
+            self.terminated = True
+        elif os.WIFSTOPPED (status):
+            raise ExceptionPexpect ('isalive() encountered condition where child process is stopped. This is not supported. Is some other process attempting job control with our child pid?')
+        return False
+
+    def kill(self, sig):
+
+        """This sends the given signal to the child application. In keeping
+        with UNIX tradition it has a misleading name. It does not necessarily
+        kill the child unless you send the right signal. """
+
+        # Same as os.kill, but the pid is given for you.
+        if self.isalive():
+            os.kill(self.pid, sig)
+
+    def compile_pattern_list(self, patterns):
+
+        """This compiles a pattern-string or a list of pattern-strings.
+        Patterns must be a StringType, EOF, TIMEOUT, SRE_Pattern, or a list of
+        those. Patterns may also be None which results in an empty list (you
+        might do this if waiting for an EOF or TIMEOUT condition without
+        expecting any pattern).
+
+        This is used by expect() when calling expect_list(). Thus expect() is
+        nothing more than::
+
+             cpl = self.compile_pattern_list(pl)
+             return self.expect_list(cpl, timeout)
+
+        If you are using expect() within a loop it may be more
+        efficient to compile the patterns first and then call expect_list().
+        This avoids calls in a loop to compile_pattern_list()::
+
+             cpl = self.compile_pattern_list(my_pattern)
+             while some_condition:
+                ...
+                i = self.expect_list(cpl, timeout)
+                ...
+        """
+
+        if patterns is None:
+            return []
+        if type(patterns) is not types.ListType:
+            patterns = [patterns]
+
+        compile_flags = re.DOTALL # Allow dot to match \n
+        if self.ignorecase:
+            compile_flags = compile_flags | re.IGNORECASE
+        compiled_pattern_list = []
+        for p in patterns:
+            if type(p) in types.StringTypes:
+                compiled_pattern_list.append(re.compile(p, compile_flags))
+            elif p is EOF:
+                compiled_pattern_list.append(EOF)
+            elif p is TIMEOUT:
+                compiled_pattern_list.append(TIMEOUT)
+            elif type(p) is type(re.compile('')):
+                compiled_pattern_list.append(p)
+            else:
+                raise TypeError ('Argument must be one of StringTypes, EOF, TIMEOUT, SRE_Pattern, or a list of those type. %s' % str(type(p)))
+
+        return compiled_pattern_list
+
+    def expect(self, pattern, timeout = -1, searchwindowsize=None):
+
+        """This seeks through the stream until a pattern is matched. The
+        pattern is overloaded and may take several types. The pattern can be a
+        StringType, EOF, a compiled re, or a list of any of those types.
+        Strings will be compiled to re types. This returns the index into the
+        pattern list. If the pattern was not a list this returns index 0 on a
+        successful match. This may raise exceptions for EOF or TIMEOUT. To
+        avoid the EOF or TIMEOUT exceptions add EOF or TIMEOUT to the pattern
+        list. That will cause expect to match an EOF or TIMEOUT condition
+        instead of raising an exception.
+
+        If you pass a list of patterns and more than one matches, the first match
+        in the stream is chosen. If more than one pattern matches at that point,
+        the leftmost in the pattern list is chosen. For example::
+
+            # the input is 'foobar'
+            index = p.expect (['bar', 'foo', 'foobar'])
+            # returns 1 ('foo') even though 'foobar' is a "better" match
+
+        Please note, however, that buffering can affect this behavior, since
+        input arrives in unpredictable chunks. For example::
+
+            # the input is 'foobar'
+            index = p.expect (['foobar', 'foo'])
+            # returns 0 ('foobar') if all input is available at once,
+            # but returns 1 ('foo') if parts of the final 'bar' arrive late
+
+        After a match is found the instance attributes 'before', 'after' and
+        'match' will be set. You can see all the data read before the match in
+        'before'. You can see the data that was matched in 'after'. The
+        re.MatchObject used in the re match will be in 'match'. If an error
+        occurred then 'before' will be set to all the data read so far and
+        'after' and 'match' will be None.
+
+        If timeout is -1 then timeout will be set to the self.timeout value.
+
+        A list entry may be EOF or TIMEOUT instead of a string. This will
+        catch these exceptions and return the index of the list entry instead
+        of raising the exception. The attribute 'after' will be set to the
+        exception type. The attribute 'match' will be None. This allows you to
+        write code like this::
+
+                index = p.expect (['good', 'bad', pexpect.EOF, pexpect.TIMEOUT])
+                if index == 0:
+                    do_something()
+                elif index == 1:
+                    do_something_else()
+                elif index == 2:
+                    do_some_other_thing()
+                elif index == 3:
+                    do_something_completely_different()
+
+        instead of code like this::
+
+                try:
+                    index = p.expect (['good', 'bad'])
+                    if index == 0:
+                        do_something()
+                    elif index == 1:
+                        do_something_else()
+                except EOF:
+                    do_some_other_thing()
+                except TIMEOUT:
+                    do_something_completely_different()
+
+        These two forms are equivalent. It all depends on what you want. You
+        can also just expect the EOF if you are waiting for all output of a
+        child to finish. For example::
+
+                p = pexpect.spawn('/bin/ls')
+                p.expect (pexpect.EOF)
+                print p.before
+
+        If you are trying to optimize for speed then see expect_list().
+        """
+
+        compiled_pattern_list = self.compile_pattern_list(pattern)
+        return self.expect_list(compiled_pattern_list, timeout, searchwindowsize)
+
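+    # Sketch of the before/after/match bookkeeping described above, assuming a
+    # hypothetical child that prints a password prompt (host name is made up):
+    #
+    #     child = spawn('ssh user@example.invalid')
+    #     child.expect('(?i)password:')
+    #     child.before    # everything read up to the match
+    #     child.after     # the matched text itself
+    #     child.match     # the underlying re.MatchObject
+    #     child.sendline('secret')
+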
+    def expect_list(self, pattern_list, timeout = -1, searchwindowsize = -1):
+
+        """This takes a list of compiled regular expressions and returns the
+        index into the pattern_list that matched the child output. The list may
+        also contain EOF or TIMEOUT (which are not compiled regular
+        expressions). This method is similar to the expect() method except that
+        expect_list() does not recompile the pattern list on every call. This
+        may help if you are trying to optimize for speed, otherwise just use
+        the expect() method.  This is called by expect(). If timeout==-1 then
+        the self.timeout value is used. If searchwindowsize==-1 then the
+        self.searchwindowsize value is used. """
+
+        return self.expect_loop(searcher_re(pattern_list), timeout, searchwindowsize)
+
+    def expect_exact(self, pattern_list, timeout = -1, searchwindowsize = -1):
+
+        """This is similar to expect(), but uses plain string matching instead
+        of compiled regular expressions in 'pattern_list'. The 'pattern_list'
+        may be a string; a list or other sequence of strings; or TIMEOUT and
+        EOF.
+
+        This call might be faster than expect() for two reasons: string
+        searching is faster than RE matching and it is possible to limit the
+        search to just the end of the input buffer.
+
+        This method is also useful when you don't want to have to worry about
+        escaping regular expression characters that you want to match."""
+
+        if type(pattern_list) in types.StringTypes or pattern_list in (TIMEOUT, EOF):
+            pattern_list = [pattern_list]
+        return self.expect_loop(searcher_string(pattern_list), timeout, searchwindowsize)
+
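+    # Sketch contrasting expect_exact() with expect(), assuming 'child' runs a
+    # shell whose prompt ends in '$ ': the string is matched literally, so no
+    # regular expression escaping is needed.
+    #
+    #     child.expect_exact('$ ')         # literal match
+    #     child.expect(re.escape('$ '))    # equivalent, but via a compiled re
+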
+    def expect_loop(self, searcher, timeout = -1, searchwindowsize = -1):
+
+        """This is the common loop used inside expect. The 'searcher' should be
+        an instance of searcher_re or searcher_string, which describes how and what
+        to search for in the input.
+
+        See expect() for other arguments, return value and exceptions. """
+
+        self.searcher = searcher
+
+        if timeout == -1:
+            timeout = self.timeout
+        if timeout is not None:
+            end_time = time.time() + timeout
+        if searchwindowsize == -1:
+            searchwindowsize = self.searchwindowsize
+
+        try:
+            incoming = self.buffer
+            freshlen = len(incoming)
+            while True: # Keep reading until exception or return.
+                index = searcher.search(incoming, freshlen, searchwindowsize)
+                if index >= 0:
+                    self.buffer = incoming[searcher.end : ]
+                    self.before = incoming[ : searcher.start]
+                    self.after = incoming[searcher.start : searcher.end]
+                    self.match = searcher.match
+                    self.match_index = index
+                    return self.match_index
+                # No match at this point
+                if timeout is not None and timeout < 0:
+                    raise TIMEOUT ('Timeout exceeded in expect_any().')
+                # Still have time left, so read more data
+                c = self.read_nonblocking (self.maxread, timeout)
+                freshlen = len(c)
+                time.sleep (0.0001)
+                incoming = incoming + c
+                if timeout is not None:
+                    timeout = end_time - time.time()
+        except EOF, e:
+            self.buffer = ''
+            self.before = incoming
+            self.after = EOF
+            index = searcher.eof_index
+            if index >= 0:
+                self.match = EOF
+                self.match_index = index
+                return self.match_index
+            else:
+                self.match = None
+                self.match_index = None
+                raise EOF (str(e) + '\n' + str(self))
+        except TIMEOUT, e:
+            self.buffer = incoming
+            self.before = incoming
+            self.after = TIMEOUT
+            index = searcher.timeout_index
+            if index >= 0:
+                self.match = TIMEOUT
+                self.match_index = index
+                return self.match_index
+            else:
+                self.match = None
+                self.match_index = None
+                raise TIMEOUT (str(e) + '\n' + str(self))
+        except Exception:
+            self.before = incoming
+            self.after = None
+            self.match = None
+            self.match_index = None
+            raise
+
+    def getwinsize(self):
+
+        """This returns the terminal window size of the child tty. The return
+        value is a tuple of (rows, cols). """
+
+        TIOCGWINSZ = getattr(termios, 'TIOCGWINSZ', 1074295912L)
+        s = struct.pack('HHHH', 0, 0, 0, 0)
+        x = fcntl.ioctl(self.fileno(), TIOCGWINSZ, s)
+        return struct.unpack('HHHH', x)[0:2]
+
+    def setwinsize(self, r, c):
+
+        """This sets the terminal window size of the child tty. This will cause
+        a SIGWINCH signal to be sent to the child. This does not change the
+        physical window size. It changes the size reported to TTY-aware
+        applications like vi or curses -- applications that respond to the
+        SIGWINCH signal. """
+
+        # Check for buggy platforms. Some Python versions on some platforms
+        # (notably OSF1 Alpha and RedHat 7.1) truncate the value for
+        # termios.TIOCSWINSZ. It is not clear why this happens.
+        # These platforms don't seem to handle the signed int very well;
+        # yet other platforms like OpenBSD have a large negative value for
+        # TIOCSWINSZ and they don't have a truncate problem.
+        # Newer versions of Linux have totally different values for TIOCSWINSZ.
+        # Note that this fix is a hack.
+        TIOCSWINSZ = getattr(termios, 'TIOCSWINSZ', -2146929561)
+        if TIOCSWINSZ == 2148037735L: # L is not required in Python >= 2.2.
+            TIOCSWINSZ = -2146929561 # Same bits, but with sign.
+        # Note, assume ws_xpixel and ws_ypixel are zero.
+        s = struct.pack('HHHH', r, c, 0, 0)
+        fcntl.ioctl(self.fileno(), TIOCSWINSZ, s)
+
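+    # Sketch of mirroring the parent terminal size into the child, assuming
+    # stdout is a tty (fcntl, struct, termios and sys are imported above):
+    #
+    #     packed = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ,
+    #                          struct.pack('HHHH', 0, 0, 0, 0))
+    #     rows, cols = struct.unpack('HHHH', packed)[0:2]
+    #     child.setwinsize(rows, cols)
+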
+    def interact(self, escape_character = chr(29), input_filter = None, output_filter = None):
+
+        """This gives control of the child process to the interactive user (the
+        human at the keyboard). Keystrokes are sent to the child process, and
+        the stdout and stderr output of the child process is printed. This
+        simply echoes the child stdout and child stderr to the real stdout and
+        it echoes the real stdin to the child stdin. When the user types the
+        escape_character this method will stop. The default for
+        escape_character is ^]. This should not be confused with ASCII 27 --
+        the ESC character. ASCII 29 was chosen for historical merit because
+        this is the character used by 'telnet' as the escape character. The
+        escape_character will not be sent to the child process.
+
+        You may pass in optional input and output filter functions. These
+        functions should take a string and return a string. The output_filter
+        will be passed all the output from the child process. The input_filter
+        will be passed all the keyboard input from the user. The input_filter
+        is run BEFORE the check for the escape_character.
+
+        Note that if you change the window size of the parent the SIGWINCH
+        signal will not be passed through to the child. If you want the child
+        window size to change when the parent's window size changes then do
+        something like the following example::
+
+            import pexpect, struct, fcntl, termios, signal, sys
+            def sigwinch_passthrough (sig, data):
+                s = struct.pack("HHHH", 0, 0, 0, 0)
+                a = struct.unpack('hhhh', fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ , s))
+                global p
+                p.setwinsize(a[0],a[1])
+            p = pexpect.spawn('/bin/bash') # Note this is global and used in sigwinch_passthrough.
+            signal.signal(signal.SIGWINCH, sigwinch_passthrough)
+            p.interact()
+        """
+
+        # Flush the buffer.
+        self.stdout.write (self.buffer)
+        self.stdout.flush()
+        self.buffer = ''
+        mode = tty.tcgetattr(self.STDIN_FILENO)
+        tty.setraw(self.STDIN_FILENO)
+        try:
+            self.__interact_copy(escape_character, input_filter, output_filter)
+        finally:
+            tty.tcsetattr(self.STDIN_FILENO, tty.TCSAFLUSH, mode)
+
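+    # Sketch of handing a spawned shell to the user while logging its output,
+    # assuming '/bin/bash' exists and 'session.log' is a writable path; typing
+    # ^] (ASCII 29) hands control back to the script.
+    #
+    #     child = spawn('/bin/bash')
+    #     child.logfile = open('session.log', 'w')
+    #     child.interact()
+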
+    def __interact_writen(self, fd, data):
+
+        """This is used by the interact() method.
+        """
+
+        while data != '' and self.isalive():
+            n = os.write(fd, data)
+            data = data[n:]
+
+    def __interact_read(self, fd):
+
+        """This is used by the interact() method.
+        """
+
+        return os.read(fd, 1000)
+
+    def __interact_copy(self, escape_character = None, input_filter = None, output_filter = None):
+
+        """This is used by the interact() method.
+        """
+
+        while self.isalive():
+            r,w,e = self.__select([self.child_fd, self.STDIN_FILENO], [], []) #@UnusedVariable
+            if self.child_fd in r:
+                data = self.__interact_read(self.child_fd)
+                if output_filter: data = output_filter(data)
+                if self.logfile is not None:
+                    self.logfile.write (data)
+                    self.logfile.flush()
+                os.write(self.STDOUT_FILENO, data)
+            if self.STDIN_FILENO in r:
+                data = self.__interact_read(self.STDIN_FILENO)
+                if input_filter: data = input_filter(data)
+                i = data.rfind(escape_character)
+                if i != -1:
+                    data = data[:i]
+                    self.__interact_writen(self.child_fd, data)
+                    break
+                self.__interact_writen(self.child_fd, data)
+
+    def __select (self, iwtd, owtd, ewtd, timeout=None):
+
+        """This is a wrapper around select.select() that ignores signals. If
+        select.select raises a select.error exception and errno is an EINTR
+        error then it is ignored. Mainly this is used to ignore sigwinch
+        (terminal resize). """
+
+        # if select() is interrupted by a signal (errno==EINTR) then
+        # we loop back and enter the select() again.
+        if timeout is not None:
+            end_time = time.time() + timeout
+        while True:
+            try:
+                return select.select (iwtd, owtd, ewtd, timeout)
+            except select.error, e:
+                if e[0] == errno.EINTR:
+                    # if we loop back we have to subtract the amount of time we already waited.
+                    if timeout is not None:
+                        timeout = end_time - time.time()
+                        if timeout < 0:
+                            return ([],[],[])
+                else: # something else caused the select.error, so this really is an exception
+                    raise
+
+##############################################################################
+# The following methods are no longer supported or allowed.
+
+    def setmaxread (self, maxread):
+
+        """This method is no longer supported or allowed. I don't like getters
+        and setters without a good reason. """
+
+        raise ExceptionPexpect ('This method is no longer supported or allowed. Just assign a value to the maxread member variable.')
+
+    def setlog (self, fileobject):
+
+        """This method is no longer supported or allowed.
+        """
+
+        raise ExceptionPexpect ('This method is no longer supported or allowed. Just assign a value to the logfile member variable.')
+
+##############################################################################
+# End of spawn class
+##############################################################################
+
+class searcher_string (object):
+
+    """This is a plain string search helper for the spawn.expect_any() method.
+
+    Attributes:
+
+        eof_index     - index of EOF, or -1
+        timeout_index - index of TIMEOUT, or -1
+
+    After a successful match by the search() method the following attributes
+    are available:
+
+        start - index into the buffer, first byte of match
+        end   - index into the buffer, first byte after match
+        match - the matching string itself
+    """
+
+    def __init__(self, strings):
+
+        """This creates an instance of searcher_string. This argument 'strings'
+        may be a list; a sequence of strings; or the EOF or TIMEOUT types. """
+
+        self.eof_index = -1
+        self.timeout_index = -1
+        self._strings = []
+        for n, s in zip(range(len(strings)), strings):
+            if s is EOF:
+                self.eof_index = n
+                continue
+            if s is TIMEOUT:
+                self.timeout_index = n
+                continue
+            self._strings.append((n, s))
+
+    def __str__(self):
+
+        """This returns a human-readable string that represents the state of
+        the object."""
+
+        ss =  [ (ns[0],'    %d: "%s"' % ns) for ns in self._strings ]
+        ss.append((-1,'searcher_string:'))
+        if self.eof_index >= 0:
+            ss.append ((self.eof_index,'    %d: EOF' % self.eof_index))
+        if self.timeout_index >= 0:
+            ss.append ((self.timeout_index,'    %d: TIMEOUT' % self.timeout_index))
+        ss.sort()
+        ss = zip(*ss)[1]
+        return '\n'.join(ss)
+
+    def search(self, buffer, freshlen, searchwindowsize=None):
+
+        """This searches 'buffer' for the first occurence of one of the search
+        strings.  'freshlen' must indicate the number of bytes at the end of
+        'buffer' which have not been searched before. It helps to avoid
+        searching the same, possibly big, buffer over and over again.
+
+        See class spawn for the 'searchwindowsize' argument.
+
+        If there is a match this returns the index of that string, and sets
+        'start', 'end' and 'match'. Otherwise, this returns -1. """
+
+        absurd_match = len(buffer)
+        first_match = absurd_match
+
+        # 'freshlen' helps a lot here. Further optimizations could
+        # possibly include:
+        #
+        # using something like the Boyer-Moore Fast String Searching
+        # Algorithm; pre-compiling the search through a list of
+        # strings into something that can scan the input once to
+        # search for all N strings; realize that if we search for
+        # ['bar', 'baz'] and the input is '...foo' we need not bother
+        # rescanning until we've read three more bytes.
+        #
+        # Sadly, I don't know enough about this interesting topic. /grahn
+
+        for index, s in self._strings:
+            if searchwindowsize is None:
+                # the match, if any, can only be in the fresh data,
+                # or at the very end of the old data
+                offset = -(freshlen+len(s))
+            else:
+                # better obey searchwindowsize
+                offset = -searchwindowsize
+            n = buffer.find(s, offset)
+            if n >= 0 and n < first_match:
+                first_match = n
+                best_index, best_match = index, s
+        if first_match == absurd_match:
+            return -1
+        self.match = best_match
+        self.start = first_match
+        self.end = self.start + len(self.match)
+        return best_index
+
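+# Sketch of how expect_exact() drives this helper through expect_loop(), with a
+# hypothetical 'Password:' prompt; most callers never build a searcher directly:
+#
+#     searcher = searcher_string(['Password:', EOF, TIMEOUT])
+#     index = child.expect_loop(searcher, timeout=30)
+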
+class searcher_re (object):
+
+    """This is regular expression string search helper for the
+    spawn.expect_any() method.
+
+    Attributes:
+
+        eof_index     - index of EOF, or -1
+        timeout_index - index of TIMEOUT, or -1
+
+    After a successful match by the search() method the following attributes
+    are available:
+
+        start - index into the buffer, first byte of match
+        end   - index into the buffer, first byte after match
+        match - the re.match object returned by a successful re.search
+
+    """
+
+    def __init__(self, patterns):
+
+        """This creates an instance that searches for 'patterns' Where
+        'patterns' may be a list or other sequence of compiled regular
+        expressions, or the EOF or TIMEOUT types."""
+
+        self.eof_index = -1
+        self.timeout_index = -1
+        self._searches = []
+        for n, s in zip(range(len(patterns)), patterns):
+            if s is EOF:
+                self.eof_index = n
+                continue
+            if s is TIMEOUT:
+                self.timeout_index = n
+                continue
+            self._searches.append((n, s))
+
+    def __str__(self):
+
+        """This returns a human-readable string that represents the state of
+        the object."""
+
+        ss =  [ (n,'    %d: re.compile("%s")' % (n,str(s.pattern))) for n,s in self._searches]
+        ss.append((-1,'searcher_re:'))
+        if self.eof_index >= 0:
+            ss.append ((self.eof_index,'    %d: EOF' % self.eof_index))
+        if self.timeout_index >= 0:
+            ss.append ((self.timeout_index,'    %d: TIMEOUT' % self.timeout_index))
+        ss.sort()
+        ss = zip(*ss)[1]
+        return '\n'.join(ss)
+
+    def search(self, buffer, freshlen, searchwindowsize=None):
+
+        """This searches 'buffer' for the first occurence of one of the regular
+        expressions. 'freshlen' must indicate the number of bytes at the end of
+        'buffer' which have not been searched before.
+
+        See class spawn for the 'searchwindowsize' argument.
+
+        If there is a match this returns the index of that string, and sets
+        'start', 'end' and 'match'. Otherwise, returns -1."""
+
+        absurd_match = len(buffer)
+        first_match = absurd_match
+        # 'freshlen' doesn't help here -- we cannot predict the
+        # length of a match, and the re module provides no help.
+        if searchwindowsize is None:
+            searchstart = 0
+        else:
+            searchstart = max(0, len(buffer)-searchwindowsize)
+        for index, s in self._searches:
+            match = s.search(buffer, searchstart)
+            if match is None:
+                continue
+            n = match.start()
+            if n < first_match:
+                first_match = n
+                the_match = match
+                best_index = index
+        if first_match == absurd_match:
+            return -1
+        self.start = first_match
+        self.match = the_match
+        self.end = self.match.end()
+        return best_index
+
+def which (filename):
+
+    """This takes a given filename; tries to find it in the environment path;
+    then checks if it is executable. This returns the full path to the filename
+    if found and executable. Otherwise this returns None."""
+
+    # Special case where filename already contains a path.
+    if os.path.dirname(filename) != '':
+        if os.access (filename, os.X_OK):
+            return filename
+
+    if not os.environ.has_key('PATH') or os.environ['PATH'] == '':
+        p = os.defpath
+    else:
+        p = os.environ['PATH']
+
+    # Oddly enough this was the one line that made Pexpect
+    # incompatible with Python 1.5.2.
+    #pathlist = p.split (os.pathsep)
+    pathlist = string.split (p, os.pathsep)
+
+    for path in pathlist:
+        f = os.path.join(path, filename)
+        if os.access(f, os.X_OK):
+            return f
+    return None
+
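+# Minimal usage sketch; the result is an absolute path or None:
+#
+#     which('ls')          # e.g. '/bin/ls' on most systems
+#     which('no-such-cmd') # None
+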
+def split_command_line(command_line):
+
+    """This splits a command line into a list of arguments. It splits arguments
+    on spaces, but handles embedded quotes, doublequotes, and escaped
+    characters. It's impossible to do this with a regular expression, so I
+    wrote a little state machine to parse the command line. """
+
+    arg_list = []
+    arg = ''
+
+    # Constants to name the states we can be in.
+    state_basic = 0
+    state_esc = 1
+    state_singlequote = 2
+    state_doublequote = 3
+    state_whitespace = 4 # The state of consuming whitespace between commands.
+    state = state_basic
+
+    for c in command_line:
+        if state == state_basic or state == state_whitespace:
+            if c == '\\': # Escape the next character
+                state = state_esc
+            elif c == r"'": # Handle single quote
+                state = state_singlequote
+            elif c == r'"': # Handle double quote
+                state = state_doublequote
+            elif c.isspace():
+                # Add arg to arg_list if we aren't in the middle of whitespace.
+                if state == state_whitespace:
+                    pass # Do nothing.
+                else:
+                    arg_list.append(arg)
+                    arg = ''
+                    state = state_whitespace
+            else:
+                arg = arg + c
+                state = state_basic
+        elif state == state_esc:
+            arg = arg + c
+            state = state_basic
+        elif state == state_singlequote:
+            if c == r"'":
+                state = state_basic
+            else:
+                arg = arg + c
+        elif state == state_doublequote:
+            if c == r'"':
+                state = state_basic
+            else:
+                arg = arg + c
+
+    if arg != '':
+        arg_list.append(arg)
+    return arg_list
+
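+# Minimal sketch of the quoting rules handled by the state machine above:
+#
+#     split_command_line('scp -P 22 "my file.txt" user@host:')
+#     # -> ['scp', '-P', '22', 'my file.txt', 'user@host:']
+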
+# vi:ts=4:sw=4:expandtab:ft=python:

