cloud-init-dev team mailing list archive

[Merge] ~raharper/cloud-init:ubuntu/devel/newupstream-20180426 into cloud-init:ubuntu/devel

 

Ryan Harper has proposed merging ~raharper/cloud-init:ubuntu/devel/newupstream-20180426 into cloud-init:ubuntu/devel.

Commit message:
cloud-init (18.2-27-g6ef92c98-0ubuntu1) bionic; urgency=medium

  * New upstream snapshot.
    - IBMCloud: recognize provisioning environment during debug boots.
      (LP: #1767166)
    - net: detect unstable network names and trigger a settle if needed
      (LP: #1766287)
    - IBMCloud: improve documentation in datasource.
    - sysconfig: dhcp6 subnet type should not imply dhcpv4 [Vitaly Kuznetsov]
    - packages/debian/control.in: add missing dependency on iproute2.
      (LP: #1766711)
    - DataSourceSmartOS: add locking of serial device.
      [Mike Gerdts] (LP: #1746605)
    - DataSourceSmartOS: sdc:hostname is ignored [Mike Gerdts] (LP: #1765085)
    - DataSourceSmartOS: list() should always return a list
      [Mike Gerdts] (LP: #1763480)
    - schema: in validation, raise ImportError if strict but no jsonschema.
    - set_passwords: Add newline to end of sshd config, only restart if
      updated. (LP: #1677205)
    - pylint: pay attention to unused variable warnings.
    - doc: Add documentation for AliYun datasource. [Junjie Wang]
    - Schema: do not warn on duplicate items in commands. (LP: #1764264)

 -- Ryan Harper <ryan.harper@xxxxxxxxxxxxx>  Thu, 26 Apr 2018 16:33:59 -0500
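
For reviewers of the set_passwords change above (LP: #1677205): ssh_pwauth handling now lives in a standalone helper, handle_ssh_pwauth(), which rewrites sshd_config through ssh_util.update_ssh_config() and restarts the service only when the file actually changed. A minimal sketch of the new behavior, assuming this snapshot is importable (the True/False paths would touch the real /etc/ssh/sshd_config, so only the no-op value is exercised here):

    from cloudinit.config.cc_set_passwords import handle_ssh_pwauth

    # "unchanged" (or None) leaves PasswordAuthentication alone and never
    # restarts sshd; unrecognized values only log a warning.
    handle_ssh_pwauth("unchanged", service_cmd=["systemctl"], service_name="ssh")

    # True/False map to PasswordAuthentication yes/no, and the restart
    # command ordering depends on the init tooling:
    #   ["systemctl", "restart", "ssh"]   if "systemctl" is in service_cmd
    #   ["service", "ssh", "restart"]     otherwise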


Requested reviews:
  cloud-init commiters (cloud-init-dev)
Related bugs:
  Bug #1746605 in cloud-init: "DataSourceSmartOS needs locking"
  https://bugs.launchpad.net/cloud-init/+bug/1746605
  Bug #1765085 in cloud-init: "DataSourceSmartOS ignores sdc:hostname"
  https://bugs.launchpad.net/cloud-init/+bug/1765085
  Bug #1766287 in cloud-init: "18.04 minimal images on GCE intermittently fail to set up networking "
  https://bugs.launchpad.net/cloud-init/+bug/1766287
  Bug #1766711 in cloud-init: "cloud-init missing dependency on iproute2"
  https://bugs.launchpad.net/cloud-init/+bug/1766711
  Bug #1767166 in cloud-init: "IBMCloud datasource does not recognize provisioning in debug mode."
  https://bugs.launchpad.net/cloud-init/+bug/1767166

For more details, see:
https://code.launchpad.net/~raharper/cloud-init/+git/cloud-init/+merge/344560
-- 
Your team cloud-init commiters is requested to review the proposed merge of ~raharper/cloud-init:ubuntu/devel/newupstream-20180426 into cloud-init:ubuntu/devel.
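
Also worth calling out for LP: #1766287 (the intermittent GCE networking failure): find_fallback_nic() in the diff below now settles udev when interface names have not yet been renamed. A condensed sketch of that check, assuming a Linux system with this snapshot and udevadm available:

    from cloudinit import net, util

    # name_assign_type 3 (NET_NAME_USER) or 4 (NET_NAME_RENAMED) means user
    # space already renamed the interface, i.e. its name is stable.
    unstable = [dev for dev in net.get_devicelist()
                if dev != 'lo' and not net.is_renamed(dev)]

    # find_fallback_nic() only waits when stable ifnames are in effect
    # (no net.ifnames=0 on the kernel command line) and something is unstable.
    if unstable and 'net.ifnames=0' not in util.get_cmdline():
        util.udevadm_settle()
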
diff --git a/.pylintrc b/.pylintrc
index 0bdfa59..3bfa0c8 100644
--- a/.pylintrc
+++ b/.pylintrc
@@ -28,7 +28,7 @@ jobs=4
 # W0703(broad-except)
 # W1401(anomalous-backslash-in-string)
 
-disable=C, F, I, R, W0105, W0107, W0201, W0212, W0221, W0222, W0223, W0231, W0311, W0511, W0602, W0603, W0611, W0612, W0613, W0621, W0622, W0631, W0703, W1401
+disable=C, F, I, R, W0105, W0107, W0201, W0212, W0221, W0222, W0223, W0231, W0311, W0511, W0602, W0603, W0611, W0613, W0621, W0622, W0631, W0703, W1401
 
 
 [REPORTS]
diff --git a/cloudinit/analyze/dump.py b/cloudinit/analyze/dump.py
index b071aa1..1f3060d 100644
--- a/cloudinit/analyze/dump.py
+++ b/cloudinit/analyze/dump.py
@@ -112,7 +112,7 @@ def parse_ci_logline(line):
             return None
         event_description = stage_to_description[event_name]
     else:
-        (pymodloglvl, event_type, event_name) = eventstr.split()[0:3]
+        (_pymodloglvl, event_type, event_name) = eventstr.split()[0:3]
         event_description = eventstr.split(event_name)[1].strip()
 
     event = {
diff --git a/cloudinit/cmd/tests/test_main.py b/cloudinit/cmd/tests/test_main.py
index dbe421c..e2c54ae 100644
--- a/cloudinit/cmd/tests/test_main.py
+++ b/cloudinit/cmd/tests/test_main.py
@@ -56,7 +56,7 @@ class TestMain(FilesystemMockingTestCase):
         cmdargs = myargs(
             debug=False, files=None, force=False, local=False, reporter=None,
             subcommand='init')
-        (item1, item2) = wrap_and_call(
+        (_item1, item2) = wrap_and_call(
             'cloudinit.cmd.main',
             {'util.close_stdin': True,
              'netinfo.debug_info': 'my net debug info',
@@ -85,7 +85,7 @@ class TestMain(FilesystemMockingTestCase):
         cmdargs = myargs(
             debug=False, files=None, force=False, local=False, reporter=None,
             subcommand='init')
-        (item1, item2) = wrap_and_call(
+        (_item1, item2) = wrap_and_call(
             'cloudinit.cmd.main',
             {'util.close_stdin': True,
              'netinfo.debug_info': 'my net debug info',
@@ -133,7 +133,7 @@ class TestMain(FilesystemMockingTestCase):
             self.assertEqual(main.LOG, log)
             self.assertIsNone(args)
 
-        (item1, item2) = wrap_and_call(
+        (_item1, item2) = wrap_and_call(
             'cloudinit.cmd.main',
             {'util.close_stdin': True,
              'netinfo.debug_info': 'my net debug info',
diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
index afaca46..e18944e 100644
--- a/cloudinit/config/cc_apt_configure.py
+++ b/cloudinit/config/cc_apt_configure.py
@@ -378,7 +378,7 @@ def apply_debconf_selections(cfg, target=None):
 
     # get a complete list of packages listed in input
     pkgs_cfgd = set()
-    for key, content in selsets.items():
+    for _key, content in selsets.items():
         for line in content.splitlines():
             if line.startswith("#"):
                 continue
diff --git a/cloudinit/config/cc_bootcmd.py b/cloudinit/config/cc_bootcmd.py
index 233da1e..db64f0a 100644
--- a/cloudinit/config/cc_bootcmd.py
+++ b/cloudinit/config/cc_bootcmd.py
@@ -63,7 +63,6 @@ schema = {
             'additionalProperties': False,
             'minItems': 1,
             'required': [],
-            'uniqueItems': True
         }
     }
 }
diff --git a/cloudinit/config/cc_disk_setup.py b/cloudinit/config/cc_disk_setup.py
index c3e8c48..943089e 100644
--- a/cloudinit/config/cc_disk_setup.py
+++ b/cloudinit/config/cc_disk_setup.py
@@ -680,13 +680,13 @@ def read_parttbl(device):
     reliable way to probe the partition table.
     """
     blkdev_cmd = [BLKDEV_CMD, '--rereadpt', device]
-    udevadm_settle()
+    util.udevadm_settle()
     try:
         util.subp(blkdev_cmd)
     except Exception as e:
         util.logexc(LOG, "Failed reading the partition table %s" % e)
 
-    udevadm_settle()
+    util.udevadm_settle()
 
 
 def exec_mkpart_mbr(device, layout):
@@ -737,14 +737,10 @@ def exec_mkpart(table_type, device, layout):
     return get_dyn_func("exec_mkpart_%s", table_type, device, layout)
 
 
-def udevadm_settle():
-    util.subp(['udevadm', 'settle'])
-
-
 def assert_and_settle_device(device):
     """Assert that device exists and settle so it is fully recognized."""
     if not os.path.exists(device):
-        udevadm_settle()
+        util.udevadm_settle()
         if not os.path.exists(device):
             raise RuntimeError("Device %s did not exist and was not created "
                                "with a udevamd settle." % device)
@@ -752,7 +748,7 @@ def assert_and_settle_device(device):
     # Whether or not the device existed above, it is possible that udev
     # events that would populate udev database (for reading by lsdname) have
     # not yet finished. So settle again.
-    udevadm_settle()
+    util.udevadm_settle()
 
 
 def mkpart(device, definition):
diff --git a/cloudinit/config/cc_emit_upstart.py b/cloudinit/config/cc_emit_upstart.py
index 69dc2d5..eb9fbe6 100644
--- a/cloudinit/config/cc_emit_upstart.py
+++ b/cloudinit/config/cc_emit_upstart.py
@@ -43,7 +43,7 @@ def is_upstart_system():
         del myenv['UPSTART_SESSION']
     check_cmd = ['initctl', 'version']
     try:
-        (out, err) = util.subp(check_cmd, env=myenv)
+        (out, _err) = util.subp(check_cmd, env=myenv)
         return 'upstart' in out
     except util.ProcessExecutionError as e:
         LOG.debug("'%s' returned '%s', not using upstart",
diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
index 013e69b..82f29e1 100644
--- a/cloudinit/config/cc_resizefs.py
+++ b/cloudinit/config/cc_resizefs.py
@@ -89,13 +89,11 @@ def _resize_zfs(mount_point, devpth):
 
 
 def _get_dumpfs_output(mount_point):
-    dumpfs_res, err = util.subp(['dumpfs', '-m', mount_point])
-    return dumpfs_res
+    return util.subp(['dumpfs', '-m', mount_point])[0]
 
 
 def _get_gpart_output(part):
-    gpart_res, err = util.subp(['gpart', 'show', part])
-    return gpart_res
+    return util.subp(['gpart', 'show', part])[0]
 
 
 def _can_skip_resize_ufs(mount_point, devpth):
@@ -113,7 +111,7 @@ def _can_skip_resize_ufs(mount_point, devpth):
         if not line.startswith('#'):
             newfs_cmd = shlex.split(line)
             opt_value = 'O:Ua:s:b:d:e:f:g:h:i:jk:m:o:'
-            optlist, args = getopt.getopt(newfs_cmd[1:], opt_value)
+            optlist, _args = getopt.getopt(newfs_cmd[1:], opt_value)
             for o, a in optlist:
                 if o == "-s":
                     cur_fs_sz = int(a)
diff --git a/cloudinit/config/cc_rh_subscription.py b/cloudinit/config/cc_rh_subscription.py
index 530808c..1c67943 100644
--- a/cloudinit/config/cc_rh_subscription.py
+++ b/cloudinit/config/cc_rh_subscription.py
@@ -209,8 +209,7 @@ class SubscriptionManager(object):
                 cmd.append("--serverurl={0}".format(self.server_hostname))
 
             try:
-                return_out, return_err = self._sub_man_cli(cmd,
-                                                           logstring_val=True)
+                return_out = self._sub_man_cli(cmd, logstring_val=True)[0]
             except util.ProcessExecutionError as e:
                 if e.stdout == "":
                     self.log_warn("Registration failed due "
@@ -233,8 +232,7 @@ class SubscriptionManager(object):
 
             # Attempting to register the system only
             try:
-                return_out, return_err = self._sub_man_cli(cmd,
-                                                           logstring_val=True)
+                return_out = self._sub_man_cli(cmd, logstring_val=True)[0]
             except util.ProcessExecutionError as e:
                 if e.stdout == "":
                     self.log_warn("Registration failed due "
@@ -257,7 +255,7 @@ class SubscriptionManager(object):
                .format(self.servicelevel)]
 
         try:
-            return_out, return_err = self._sub_man_cli(cmd)
+            return_out = self._sub_man_cli(cmd)[0]
         except util.ProcessExecutionError as e:
             if e.stdout.rstrip() != '':
                 for line in e.stdout.split("\n"):
@@ -275,7 +273,7 @@ class SubscriptionManager(object):
     def _set_auto_attach(self):
         cmd = ['attach', '--auto']
         try:
-            return_out, return_err = self._sub_man_cli(cmd)
+            return_out = self._sub_man_cli(cmd)[0]
         except util.ProcessExecutionError as e:
             self.log_warn("Auto-attach failed with: {0}".format(e))
             return False
@@ -294,12 +292,12 @@ class SubscriptionManager(object):
 
         # Get all available pools
         cmd = ['list', '--available', '--pool-only']
-        results, errors = self._sub_man_cli(cmd)
+        results = self._sub_man_cli(cmd)[0]
         available = (results.rstrip()).split("\n")
 
         # Get all consumed pools
         cmd = ['list', '--consumed', '--pool-only']
-        results, errors = self._sub_man_cli(cmd)
+        results = self._sub_man_cli(cmd)[0]
         consumed = (results.rstrip()).split("\n")
 
         return available, consumed
@@ -311,14 +309,14 @@ class SubscriptionManager(object):
         '''
 
         cmd = ['repos', '--list-enabled']
-        return_out, return_err = self._sub_man_cli(cmd)
+        return_out = self._sub_man_cli(cmd)[0]
         active_repos = []
         for repo in return_out.split("\n"):
             if "Repo ID:" in repo:
                 active_repos.append((repo.split(':')[1]).strip())
 
         cmd = ['repos', '--list-disabled']
-        return_out, return_err = self._sub_man_cli(cmd)
+        return_out = self._sub_man_cli(cmd)[0]
 
         inactive_repos = []
         for repo in return_out.split("\n"):
diff --git a/cloudinit/config/cc_runcmd.py b/cloudinit/config/cc_runcmd.py
index 539cbd5..b6f6c80 100644
--- a/cloudinit/config/cc_runcmd.py
+++ b/cloudinit/config/cc_runcmd.py
@@ -66,7 +66,6 @@ schema = {
             'additionalProperties': False,
             'minItems': 1,
             'required': [],
-            'uniqueItems': True
         }
     }
 }
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index bb24d57..5ef9737 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -68,16 +68,57 @@ import re
 import sys
 
 from cloudinit.distros import ug_util
-from cloudinit import ssh_util
+from cloudinit import log as logging
+from cloudinit.ssh_util import update_ssh_config
 from cloudinit import util
 
 from string import ascii_letters, digits
 
+LOG = logging.getLogger(__name__)
+
 # We are removing certain 'painful' letters/numbers
 PW_SET = (''.join([x for x in ascii_letters + digits
                    if x not in 'loLOI01']))
 
 
+def handle_ssh_pwauth(pw_auth, service_cmd=None, service_name="ssh"):
+    """Apply sshd PasswordAuthentication changes.
+
+    @param pw_auth: config setting from 'pw_auth'.
+                    Best given as True, False, or "unchanged".
+    @param service_cmd: The service command list (['service'])
+    @param service_name: The name of the sshd service for the system.
+
+    @return: None"""
+    cfg_name = "PasswordAuthentication"
+    if service_cmd is None:
+        service_cmd = ["service"]
+
+    if util.is_true(pw_auth):
+        cfg_val = 'yes'
+    elif util.is_false(pw_auth):
+        cfg_val = 'no'
+    else:
+        bmsg = "Leaving ssh config '%s' unchanged." % cfg_name
+        if pw_auth is None or pw_auth.lower() == 'unchanged':
+            LOG.debug("%s ssh_pwauth=%s", bmsg, pw_auth)
+        else:
+            LOG.warning("%s Unrecognized value: ssh_pwauth=%s", bmsg, pw_auth)
+        return
+
+    updated = update_ssh_config({cfg_name: cfg_val})
+    if not updated:
+        LOG.debug("No need to restart ssh service, %s not updated.", cfg_name)
+        return
+
+    if 'systemctl' in service_cmd:
+        cmd = list(service_cmd) + ["restart", service_name]
+    else:
+        cmd = list(service_cmd) + [service_name, "restart"]
+    util.subp(cmd)
+    LOG.debug("Restarted the ssh daemon.")
+
+
 def handle(_name, cfg, cloud, log, args):
     if len(args) != 0:
         # if run from command line, and give args, wipe the chpasswd['list']
@@ -170,65 +211,9 @@ def handle(_name, cfg, cloud, log, args):
             if expired_users:
                 log.debug("Expired passwords for: %s users", expired_users)
 
-    change_pwauth = False
-    pw_auth = None
-    if 'ssh_pwauth' in cfg:
-        if util.is_true(cfg['ssh_pwauth']):
-            change_pwauth = True
-            pw_auth = 'yes'
-        elif util.is_false(cfg['ssh_pwauth']):
-            change_pwauth = True
-            pw_auth = 'no'
-        elif str(cfg['ssh_pwauth']).lower() == 'unchanged':
-            log.debug('Leaving auth line unchanged')
-            change_pwauth = False
-        elif not str(cfg['ssh_pwauth']).strip():
-            log.debug('Leaving auth line unchanged')
-            change_pwauth = False
-        elif not cfg['ssh_pwauth']:
-            log.debug('Leaving auth line unchanged')
-            change_pwauth = False
-        else:
-            msg = 'Unrecognized value %s for ssh_pwauth' % cfg['ssh_pwauth']
-            util.logexc(log, msg)
-
-    if change_pwauth:
-        replaced_auth = False
-
-        # See: man sshd_config
-        old_lines = ssh_util.parse_ssh_config(ssh_util.DEF_SSHD_CFG)
-        new_lines = []
-        i = 0
-        for (i, line) in enumerate(old_lines):
-            # Keywords are case-insensitive and arguments are case-sensitive
-            if line.key == 'passwordauthentication':
-                log.debug("Replacing auth line %s with %s", i + 1, pw_auth)
-                replaced_auth = True
-                line.value = pw_auth
-            new_lines.append(line)
-
-        if not replaced_auth:
-            log.debug("Adding new auth line %s", i + 1)
-            replaced_auth = True
-            new_lines.append(ssh_util.SshdConfigLine('',
-                                                     'PasswordAuthentication',
-                                                     pw_auth))
-
-        lines = [str(l) for l in new_lines]
-        util.write_file(ssh_util.DEF_SSHD_CFG, "\n".join(lines),
-                        copy_mode=True)
-
-        try:
-            cmd = cloud.distro.init_cmd  # Default service
-            cmd.append(cloud.distro.get_option('ssh_svcname', 'ssh'))
-            cmd.append('restart')
-            if 'systemctl' in cmd:  # Switch action ordering
-                cmd[1], cmd[2] = cmd[2], cmd[1]
-            cmd = filter(None, cmd)  # Remove empty arguments
-            util.subp(cmd)
-            log.debug("Restarted the ssh daemon")
-        except Exception:
-            util.logexc(log, "Restarting of the ssh daemon failed")
+    handle_ssh_pwauth(
+        cfg.get('ssh_pwauth'), service_cmd=cloud.distro.init_cmd,
+        service_name=cloud.distro.get_option('ssh_svcname', 'ssh'))
 
     if len(errors):
         log.debug("%s errors occured, re-raising the last one", len(errors))
diff --git a/cloudinit/config/cc_snap.py b/cloudinit/config/cc_snap.py
index 34a53fd..90724b8 100644
--- a/cloudinit/config/cc_snap.py
+++ b/cloudinit/config/cc_snap.py
@@ -110,7 +110,6 @@ schema = {
                     'additionalItems': False,  # Reject non-string & non-list
                     'minItems': 1,
                     'minProperties': 1,
-                    'uniqueItems': True
                 },
                 'squashfuse_in_container': {
                     'type': 'boolean'
@@ -204,12 +203,12 @@ def maybe_install_squashfuse(cloud):
         return
     try:
         cloud.distro.update_package_sources()
-    except Exception as e:
+    except Exception:
         util.logexc(LOG, "Package update failed")
         raise
     try:
         cloud.distro.install_packages(['squashfuse'])
-    except Exception as e:
+    except Exception:
         util.logexc(LOG, "Failed to install squashfuse")
         raise
 
diff --git a/cloudinit/config/cc_snappy.py b/cloudinit/config/cc_snappy.py
index bab80bb..15bee2d 100644
--- a/cloudinit/config/cc_snappy.py
+++ b/cloudinit/config/cc_snappy.py
@@ -213,7 +213,7 @@ def render_snap_op(op, name, path=None, cfgfile=None, config=None):
 
 def read_installed_packages():
     ret = []
-    for (name, date, version, dev) in read_pkg_data():
+    for (name, _date, _version, dev) in read_pkg_data():
         if dev:
             ret.append(NAMESPACE_DELIM.join([name, dev]))
         else:
@@ -222,7 +222,7 @@ def read_installed_packages():
 
 
 def read_pkg_data():
-    out, err = util.subp([SNAPPY_CMD, "list"])
+    out, _err = util.subp([SNAPPY_CMD, "list"])
     pkg_data = []
     for line in out.splitlines()[1:]:
         toks = line.split(sep=None, maxsplit=3)
diff --git a/cloudinit/config/cc_ubuntu_advantage.py b/cloudinit/config/cc_ubuntu_advantage.py
index 16b1868..5e082bd 100644
--- a/cloudinit/config/cc_ubuntu_advantage.py
+++ b/cloudinit/config/cc_ubuntu_advantage.py
@@ -87,7 +87,6 @@ schema = {
                     'additionalItems': False,  # Reject non-string & non-list
                     'minItems': 1,
                     'minProperties': 1,
-                    'uniqueItems': True
                 }
             },
             'additionalProperties': False,  # Reject keys not in schema
@@ -149,12 +148,12 @@ def maybe_install_ua_tools(cloud):
         return
     try:
         cloud.distro.update_package_sources()
-    except Exception as e:
+    except Exception:
         util.logexc(LOG, "Package update failed")
         raise
     try:
         cloud.distro.install_packages(['ubuntu-advantage-tools'])
-    except Exception as e:
+    except Exception:
         util.logexc(LOG, "Failed to install ubuntu-advantage-tools")
         raise
 
diff --git a/cloudinit/config/schema.py b/cloudinit/config/schema.py
index ca7d0d5..76826e0 100644
--- a/cloudinit/config/schema.py
+++ b/cloudinit/config/schema.py
@@ -297,8 +297,8 @@ def get_schema():
 
     configs_dir = os.path.dirname(os.path.abspath(__file__))
     potential_handlers = find_modules(configs_dir)
-    for (fname, mod_name) in potential_handlers.items():
-        mod_locs, looked_locs = importer.find_module(
+    for (_fname, mod_name) in potential_handlers.items():
+        mod_locs, _looked_locs = importer.find_module(
             mod_name, ['cloudinit.config'], ['schema'])
         if mod_locs:
             mod = importer.import_module(mod_locs[0])
diff --git a/cloudinit/config/tests/test_set_passwords.py b/cloudinit/config/tests/test_set_passwords.py
new file mode 100644
index 0000000..b051ec8
--- /dev/null
+++ b/cloudinit/config/tests/test_set_passwords.py
@@ -0,0 +1,71 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import mock
+
+from cloudinit.config import cc_set_passwords as setpass
+from cloudinit.tests.helpers import CiTestCase
+from cloudinit import util
+
+MODPATH = "cloudinit.config.cc_set_passwords."
+
+
+class TestHandleSshPwauth(CiTestCase):
+    """Test cc_set_passwords handling of ssh_pwauth in handle_ssh_pwauth."""
+
+    with_logs = True
+
+    @mock.patch(MODPATH + "util.subp")
+    def test_unknown_value_logs_warning(self, m_subp):
+        setpass.handle_ssh_pwauth("floo")
+        self.assertIn("Unrecognized value: ssh_pwauth=floo",
+                      self.logs.getvalue())
+        m_subp.assert_not_called()
+
+    @mock.patch(MODPATH + "update_ssh_config", return_value=True)
+    @mock.patch(MODPATH + "util.subp")
+    def test_systemctl_as_service_cmd(self, m_subp, m_update_ssh_config):
+        """If systemctl in service cmd: systemctl restart name."""
+        setpass.handle_ssh_pwauth(
+            True, service_cmd=["systemctl"], service_name="myssh")
+        self.assertEqual(mock.call(["systemctl", "restart", "myssh"]),
+                         m_subp.call_args)
+
+    @mock.patch(MODPATH + "update_ssh_config", return_value=True)
+    @mock.patch(MODPATH + "util.subp")
+    def test_service_as_service_cmd(self, m_subp, m_update_ssh_config):
+        """If systemctl in service cmd: systemctl restart name."""
+        setpass.handle_ssh_pwauth(
+            True, service_cmd=["service"], service_name="myssh")
+        self.assertEqual(mock.call(["service", "myssh", "restart"]),
+                         m_subp.call_args)
+
+    @mock.patch(MODPATH + "update_ssh_config", return_value=False)
+    @mock.patch(MODPATH + "util.subp")
+    def test_not_restarted_if_not_updated(self, m_subp, m_update_ssh_config):
+        """If config is not updated, then no system restart should be done."""
+        setpass.handle_ssh_pwauth(True)
+        m_subp.assert_not_called()
+        self.assertIn("No need to restart ssh", self.logs.getvalue())
+
+    @mock.patch(MODPATH + "update_ssh_config", return_value=True)
+    @mock.patch(MODPATH + "util.subp")
+    def test_unchanged_does_nothing(self, m_subp, m_update_ssh_config):
+        """If 'unchanged', then no updates to config and no restart."""
+        setpass.handle_ssh_pwauth(
+            "unchanged", service_cmd=["systemctl"], service_name="myssh")
+        m_update_ssh_config.assert_not_called()
+        m_subp.assert_not_called()
+
+    @mock.patch(MODPATH + "util.subp")
+    def test_valid_change_values(self, m_subp):
+        """If value is a valid changen value, then update should be called."""
+        upname = MODPATH + "update_ssh_config"
+        optname = "PasswordAuthentication"
+        for value in util.FALSE_STRINGS + util.TRUE_STRINGS:
+            optval = "yes" if value in util.TRUE_STRINGS else "no"
+            with mock.patch(upname, return_value=False) as m_update:
+                setpass.handle_ssh_pwauth(value)
+                m_update.assert_called_with({optname: optval})
+        m_subp.assert_not_called()
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/config/tests/test_snap.py b/cloudinit/config/tests/test_snap.py
index c5b4a9d..34c80f1 100644
--- a/cloudinit/config/tests/test_snap.py
+++ b/cloudinit/config/tests/test_snap.py
@@ -9,7 +9,7 @@ from cloudinit.config.cc_snap import (
 from cloudinit.config.schema import validate_cloudconfig_schema
 from cloudinit import util
 from cloudinit.tests.helpers import (
-    CiTestCase, mock, wrap_and_call, skipUnlessJsonSchema)
+    CiTestCase, SchemaTestCaseMixin, mock, wrap_and_call, skipUnlessJsonSchema)
 
 
 SYSTEM_USER_ASSERTION = """\
@@ -245,9 +245,10 @@ class TestRunCommands(CiTestCase):
 
 
 @skipUnlessJsonSchema()
-class TestSchema(CiTestCase):
+class TestSchema(CiTestCase, SchemaTestCaseMixin):
 
     with_logs = True
+    schema = schema
 
     def test_schema_warns_on_snap_not_as_dict(self):
         """If the snap configuration is not a dict, emit a warning."""
@@ -340,6 +341,30 @@ class TestSchema(CiTestCase):
             {'snap': {'assertions': {'01': 'also valid'}}}, schema)
         self.assertEqual('', self.logs.getvalue())
 
+    def test_duplicates_are_fine_array_array(self):
+        """Duplicated commands array/array entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': [["echo", "bye"], ["echo" "bye"]]},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_array_string(self):
+        """Duplicated commands array/string entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': ["echo bye", "echo bye"]},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_dict_array(self):
+        """Duplicated commands dict/array entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': {'00': ["echo", "bye"], '01': ["echo", "bye"]}},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_dict_string(self):
+        """Duplicated commands dict/string entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': {'00': "echo bye", '01': "echo bye"}},
+            "command entries can be duplicate.")
+
 
 class TestHandle(CiTestCase):
 
diff --git a/cloudinit/config/tests/test_ubuntu_advantage.py b/cloudinit/config/tests/test_ubuntu_advantage.py
index f2a59fa..f1beeff 100644
--- a/cloudinit/config/tests/test_ubuntu_advantage.py
+++ b/cloudinit/config/tests/test_ubuntu_advantage.py
@@ -7,7 +7,8 @@ from cloudinit.config.cc_ubuntu_advantage import (
     handle, maybe_install_ua_tools, run_commands, schema)
 from cloudinit.config.schema import validate_cloudconfig_schema
 from cloudinit import util
-from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJsonSchema
+from cloudinit.tests.helpers import (
+    CiTestCase, mock, SchemaTestCaseMixin, skipUnlessJsonSchema)
 
 
 # Module path used in mocks
@@ -105,9 +106,10 @@ class TestRunCommands(CiTestCase):
 
 
 @skipUnlessJsonSchema()
-class TestSchema(CiTestCase):
+class TestSchema(CiTestCase, SchemaTestCaseMixin):
 
     with_logs = True
+    schema = schema
 
     def test_schema_warns_on_ubuntu_advantage_not_as_dict(self):
         """If ubuntu-advantage configuration is not a dict, emit a warning."""
@@ -169,6 +171,30 @@ class TestSchema(CiTestCase):
             {'ubuntu-advantage': {'commands': {'01': 'also valid'}}}, schema)
         self.assertEqual('', self.logs.getvalue())
 
+    def test_duplicates_are_fine_array_array(self):
+        """Duplicated commands array/array entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': [["echo", "bye"], ["echo" "bye"]]},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_array_string(self):
+        """Duplicated commands array/string entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': ["echo bye", "echo bye"]},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_dict_array(self):
+        """Duplicated commands dict/array entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': {'00': ["echo", "bye"], '01': ["echo", "bye"]}},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_dict_string(self):
+        """Duplicated commands dict/string entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': {'00': "echo bye", '01': "echo bye"}},
+            "command entries can be duplicate.")
+
 
 class TestHandle(CiTestCase):
 
diff --git a/cloudinit/distros/freebsd.py b/cloudinit/distros/freebsd.py
index 099fac5..5b1718a 100644
--- a/cloudinit/distros/freebsd.py
+++ b/cloudinit/distros/freebsd.py
@@ -113,7 +113,7 @@ class Distro(distros.Distro):
         n = re.search(r'\d+$', dev)
         index = n.group(0)
 
-        (out, err) = util.subp(['ifconfig', '-a'])
+        (out, _err) = util.subp(['ifconfig', '-a'])
         ifconfigoutput = [x for x in (out.strip()).splitlines()
                           if len(x.split()) > 0]
         bsddev = 'NOT_FOUND'
diff --git a/cloudinit/distros/ubuntu.py b/cloudinit/distros/ubuntu.py
index fdc1f62..6815410 100644
--- a/cloudinit/distros/ubuntu.py
+++ b/cloudinit/distros/ubuntu.py
@@ -25,7 +25,7 @@ class Distro(debian.Distro):
     def preferred_ntp_clients(self):
         """The preferred ntp client is dependent on the version."""
         if not self._preferred_ntp_clients:
-            (name, version, codename) = util.system_info()['dist']
+            (_name, _version, codename) = util.system_info()['dist']
             # Xenial cloud-init only installed ntp, UbuntuCore has timesyncd.
             if codename == "xenial" and not util.system_is_snappy():
                 self._preferred_ntp_clients = ['ntp']
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index f69c0ef..43226bd 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -107,6 +107,21 @@ def is_bond(devname):
     return os.path.exists(sys_dev_path(devname, "bonding"))
 
 
+def is_renamed(devname):
+    """
+    /* interface name assignment types (sysfs name_assign_type attribute) */
+    #define NET_NAME_UNKNOWN	0	/* unknown origin (not exposed to user) */
+    #define NET_NAME_ENUM		1	/* enumerated by kernel */
+    #define NET_NAME_PREDICTABLE	2	/* predictably named by the kernel */
+    #define NET_NAME_USER		3	/* provided by user-space */
+    #define NET_NAME_RENAMED	4	/* renamed by user-space */
+    """
+    name_assign_type = read_sys_net_safe(devname, 'name_assign_type')
+    if name_assign_type and name_assign_type in ['3', '4']:
+        return True
+    return False
+
+
 def is_vlan(devname):
     uevent = str(read_sys_net_safe(devname, "uevent"))
     return 'DEVTYPE=vlan' in uevent.splitlines()
@@ -180,6 +195,17 @@ def find_fallback_nic(blacklist_drivers=None):
     if not blacklist_drivers:
         blacklist_drivers = []
 
+    if 'net.ifnames=0' in util.get_cmdline():
+        LOG.debug('Stable ifnames disabled by net.ifnames=0 in /proc/cmdline')
+    else:
+        unstable = [device for device in get_devicelist()
+                    if device != 'lo' and not is_renamed(device)]
+        if len(unstable):
+            LOG.debug('Found unstable nic names: %s; calling udevadm settle',
+                      unstable)
+            msg = 'Waiting for udev events to settle'
+            util.log_time(LOG.debug, msg, func=util.udevadm_settle)
+
     # get list of interfaces that could have connections
     invalid_interfaces = set(['lo'])
     potential_interfaces = set([device for device in get_devicelist()
@@ -295,7 +321,7 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
 
     def _version_2(netcfg):
         renames = []
-        for key, ent in netcfg.get('ethernets', {}).items():
+        for ent in netcfg.get('ethernets', {}).values():
             # only rename if configured to do so
             name = ent.get('set-name')
             if not name:
diff --git a/cloudinit/net/cmdline.py b/cloudinit/net/cmdline.py
index 9e9fe0f..f89a0f7 100755
--- a/cloudinit/net/cmdline.py
+++ b/cloudinit/net/cmdline.py
@@ -65,7 +65,7 @@ def _klibc_to_config_entry(content, mac_addrs=None):
         iface['mac_address'] = mac_addrs[name]
 
     # Handle both IPv4 and IPv6 values
-    for v, pre in (('ipv4', 'IPV4'), ('ipv6', 'IPV6')):
+    for pre in ('IPV4', 'IPV6'):
         # if no IPV4ADDR or IPV6ADDR, then go on.
         if pre + "ADDR" not in data:
             continue
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
index 087c0c0..12cf509 100644
--- a/cloudinit/net/dhcp.py
+++ b/cloudinit/net/dhcp.py
@@ -216,7 +216,7 @@ def networkd_get_option_from_leases(keyname, leases_d=None):
     if leases_d is None:
         leases_d = NETWORKD_LEASES_DIR
     leases = networkd_load_leases(leases_d=leases_d)
-    for ifindex, data in sorted(leases.items()):
+    for _ifindex, data in sorted(leases.items()):
         if data.get(keyname):
             return data[keyname]
     return None
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 39d89c4..e53b9f1 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -287,7 +287,6 @@ class Renderer(renderer.Renderer):
             if subnet_type == 'dhcp6':
                 iface_cfg['IPV6INIT'] = True
                 iface_cfg['DHCPV6C'] = True
-                iface_cfg['BOOTPROTO'] = 'dhcp'
             elif subnet_type in ['dhcp4', 'dhcp']:
                 iface_cfg['BOOTPROTO'] = 'dhcp'
             elif subnet_type == 'static':
@@ -364,7 +363,7 @@ class Renderer(renderer.Renderer):
 
     @classmethod
     def _render_subnet_routes(cls, iface_cfg, route_cfg, subnets):
-        for i, subnet in enumerate(subnets, start=len(iface_cfg.children)):
+        for _, subnet in enumerate(subnets, start=len(iface_cfg.children)):
             for route in subnet.get('routes', []):
                 is_ipv6 = subnet.get('ipv6') or is_ipv6_addr(route['gateway'])
 
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 276556e..5c017d1 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -199,6 +199,7 @@ class TestGenerateFallbackConfig(CiTestCase):
         self.sysdir = self.tmp_dir() + '/'
         self.m_sys_path.return_value = self.sysdir
         self.addCleanup(sys_mock.stop)
+        self.add_patch('cloudinit.net.util.udevadm_settle', 'm_settle')
 
     def test_generate_fallback_finds_connected_eth_with_mac(self):
         """generate_fallback_config finds any connected device with a mac."""
diff --git a/cloudinit/reporting/events.py b/cloudinit/reporting/events.py
index 4f62d2f..e5dfab3 100644
--- a/cloudinit/reporting/events.py
+++ b/cloudinit/reporting/events.py
@@ -192,7 +192,7 @@ class ReportEventStack(object):
 
     def _childrens_finish_info(self):
         for cand_result in (status.FAIL, status.WARN):
-            for name, (value, msg) in self.children.items():
+            for _name, (value, _msg) in self.children.items():
                 if value == cand_result:
                     return (value, self.message)
         return (self.result, self.message)
diff --git a/cloudinit/sources/DataSourceAliYun.py b/cloudinit/sources/DataSourceAliYun.py
index 22279d0..858e082 100644
--- a/cloudinit/sources/DataSourceAliYun.py
+++ b/cloudinit/sources/DataSourceAliYun.py
@@ -45,7 +45,7 @@ def _is_aliyun():
 
 def parse_public_keys(public_keys):
     keys = []
-    for key_id, key_body in public_keys.items():
+    for _key_id, key_body in public_keys.items():
         if isinstance(key_body, str):
             keys.append(key_body.strip())
         elif isinstance(key_body, list):
diff --git a/cloudinit/sources/DataSourceAltCloud.py b/cloudinit/sources/DataSourceAltCloud.py
index e1d0055..f6e86f3 100644
--- a/cloudinit/sources/DataSourceAltCloud.py
+++ b/cloudinit/sources/DataSourceAltCloud.py
@@ -29,7 +29,6 @@ CLOUD_INFO_FILE = '/etc/sysconfig/cloud-info'
 
 # Shell command lists
 CMD_PROBE_FLOPPY = ['modprobe', 'floppy']
-CMD_UDEVADM_SETTLE = ['udevadm', 'settle', '--timeout=5']
 
 META_DATA_NOT_SUPPORTED = {
     'block-device-mapping': {},
@@ -196,9 +195,7 @@ class DataSourceAltCloud(sources.DataSource):
 
         # udevadm settle for floppy device
         try:
-            cmd = CMD_UDEVADM_SETTLE
-            cmd.append('--exit-if-exists=' + floppy_dev)
-            (cmd_out, _err) = util.subp(cmd)
+            (cmd_out, _err) = util.udevadm_settle(exists=floppy_dev, timeout=5)
             LOG.debug('Command: %s\nOutput%s', ' '.join(cmd), cmd_out)
         except ProcessExecutionError as _err:
             util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), _err)
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 0ee622e..a71197a 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -107,31 +107,24 @@ def find_dev_from_busdev(camcontrol_out, busdev):
     return None
 
 
-def get_dev_storvsc_sysctl():
+def execute_or_debug(cmd, fail_ret=None):
     try:
-        sysctl_out, err = util.subp(['sysctl', 'dev.storvsc'])
+        return util.subp(cmd)[0]
     except util.ProcessExecutionError:
-        LOG.debug("Fail to execute sysctl dev.storvsc")
-        sysctl_out = ""
-    return sysctl_out
+        LOG.debug("Failed to execute: %s", ' '.join(cmd))
+        return fail_ret
+
+
+def get_dev_storvsc_sysctl():
+    return execute_or_debug(["sysctl", "dev.storvsc"], fail_ret="")
 
 
 def get_camcontrol_dev_bus():
-    try:
-        camcontrol_b_out, err = util.subp(['camcontrol', 'devlist', '-b'])
-    except util.ProcessExecutionError:
-        LOG.debug("Fail to execute camcontrol devlist -b")
-        return None
-    return camcontrol_b_out
+    return execute_or_debug(['camcontrol', 'devlist', '-b'])
 
 
 def get_camcontrol_dev():
-    try:
-        camcontrol_out, err = util.subp(['camcontrol', 'devlist'])
-    except util.ProcessExecutionError:
-        LOG.debug("Fail to execute camcontrol devlist")
-        return None
-    return camcontrol_out
+    return execute_or_debug(['camcontrol', 'devlist'])
 
 
 def get_resource_disk_on_freebsd(port_id):
@@ -474,7 +467,7 @@ class DataSourceAzure(sources.DataSource):
            before we go into our polling loop."""
         try:
             get_metadata_from_fabric(None, lease['unknown-245'])
-        except Exception as exc:
+        except Exception:
             LOG.warning(
                 "Error communicating with Azure fabric; You may experience."
                 "connectivity issues.", exc_info=True)
@@ -492,7 +485,7 @@ class DataSourceAzure(sources.DataSource):
         jump back into the polling loop in order to retrieve the ovf_env."""
         if not ret:
             return False
-        (md, self.userdata_raw, cfg, files) = ret
+        (_md, self.userdata_raw, cfg, _files) = ret
         path = REPROVISION_MARKER_FILE
         if (cfg.get('PreprovisionedVm') is True or
                 os.path.isfile(path)):
@@ -528,7 +521,7 @@ class DataSourceAzure(sources.DataSource):
                   self.ds_cfg['agent_command'])
         try:
             fabric_data = metadata_func()
-        except Exception as exc:
+        except Exception:
             LOG.warning(
                 "Error communicating with Azure fabric; You may experience."
                 "connectivity issues.", exc_info=True)
diff --git a/cloudinit/sources/DataSourceIBMCloud.py b/cloudinit/sources/DataSourceIBMCloud.py
index 02b3d56..01106ec 100644
--- a/cloudinit/sources/DataSourceIBMCloud.py
+++ b/cloudinit/sources/DataSourceIBMCloud.py
@@ -8,17 +8,11 @@ There are 2 different api exposed launch methods.
  * template: This is the legacy method of launching instances.
    When booting from an image template, the system boots first into
    a "provisioning" mode.  There, host <-> guest mechanisms are utilized
-   to execute code in the guest and provision it.
+   to execute code in the guest and configure it.  The configuration
+   includes configuring the system network and possibly installing
+   packages and other software stack.
 
-   Cloud-init will disable itself when it detects that it is in the
-   provisioning mode.  It detects this by the presence of
-   a file '/root/provisioningConfiguration.cfg'.
-
-   When provided with user-data, the "first boot" will contain a
-   ConfigDrive-like disk labeled with 'METADATA'.  If there is no user-data
-   provided, then there is no data-source.
-
-   Cloud-init never does any network configuration in this mode.
+   After the provisioning is finished, the system reboots.
 
  * os_code: Essentially "launch by OS Code" (Operating System Code).
    This is a more modern approach.  There is no specific "provisioning" boot.
@@ -30,11 +24,73 @@ There are 2 different api exposed launch methods.
    mean that 1 in 8^16 (~4 billion) Xen ConfigDrive systems will be
    incorrectly identified as IBMCloud.
 
+The combination of these 2 launch methods and with or without user-data
+creates 6 boot scenarios.
+ A. os_code with user-data
+ B. os_code without user-data
+    Cloud-init is fully operational in this mode.
+
+    There is a block device attached with label 'config-2'.
+    As it differs from OpenStack's config-2, we have to differentiate.
+    We do so by requiring the UUID on the filesystem to be "9796-932E".
+
+    This disk will have the following files. Specifically note, there
+    is no versioned path to the meta-data, only 'latest':
+      openstack/latest/meta_data.json
+      openstack/latest/network_data.json
+      openstack/latest/user_data [optional]
+      openstack/latest/vendor_data.json
+
+    vendor_data.json as of 2018-04 looks like this:
+      {"cloud-init":"#!/bin/bash\necho 'root:$6$<snip>' | chpasswd -e"}
+
+    The only difference between A and B in this mode is the presence
+    of user_data on the config disk.
+
+ C. template, provisioning boot with user-data
+ D. template, provisioning boot without user-data.
+    With ds-identify cloud-init is fully disabled in this mode.
+    Without ds-identify, cloud-init None datasource will be used.
+
+    This is currently identified by the presence of
+    /root/provisioningConfiguration.cfg . That file is placed into the
+    system before it is booted.
+
+    The difference between C and D is the presence of the METADATA disk
+    as described in E below.  There is no METADATA disk attached unless
+    user-data is provided.
+
+ E. template, post-provisioning boot with user-data.
+    Cloud-init is fully operational in this mode.
+
+    This is identified by a block device with filesystem label "METADATA".
+    The looks similar to a version-1 OpenStack config drive.  It will
+    have the following files:
+
+       openstack/latest/user_data
+       openstack/latest/meta_data.json
+       openstack/content/interfaces
+       meta.js
+
+    meta.js contains something similar to user_data.  cloud-init ignores it.
+    cloud-init ignores the 'interfaces' style file here.
+    In this mode, cloud-init has networking code disabled.  It relies
+    on the provisioning boot to have configured networking.
+
+ F. template, post-provisioning boot without user-data.
+    With ds-identify, cloud-init will be fully disabled.
+    Without ds-identify, cloud-init None datasource will be used.
+
+    There is no information available to identify this scenario.
+
+    The user will be able to ssh in as as root with their public keys that
+    have been installed into /root/ssh/.authorized_keys
+    during the provisioning stage.
+
 TODO:
  * is uuid (/sys/hypervisor/uuid) stable for life of an instance?
    it seems it is not the same as data's uuid in the os_code case
    but is in the template case.
-
 """
 import base64
 import json
@@ -138,8 +194,30 @@ def _is_xen():
     return os.path.exists("/proc/xen")
 
 
-def _is_ibm_provisioning():
-    return os.path.exists("/root/provisioningConfiguration.cfg")
+def _is_ibm_provisioning(
+        prov_cfg="/root/provisioningConfiguration.cfg",
+        inst_log="/root/swinstall.log",
+        boot_ref="/proc/1/environ"):
+    """Return boolean indicating if this boot is ibm provisioning boot."""
+    if os.path.exists(prov_cfg):
+        msg = "config '%s' exists." % prov_cfg
+        result = True
+        if os.path.exists(inst_log):
+            if os.path.exists(boot_ref):
+                result = (os.stat(inst_log).st_mtime >
+                          os.stat(boot_ref).st_mtime)
+                msg += (" log '%s' from %s boot." %
+                        (inst_log, "current" if result else "previous"))
+            else:
+                msg += (" log '%s' existed, but no reference file '%s'." %
+                        (inst_log, boot_ref))
+                result = False
+        else:
+            msg += " log '%s' did not exist." % inst_log
+    else:
+        result, msg = (False, "config '%s' did not exist." % prov_cfg)
+    LOG.debug("ibm_provisioning=%s: %s", result, msg)
+    return result
 
 
 def get_ibm_platform():
@@ -189,7 +267,7 @@ def get_ibm_platform():
         else:
             return (Platforms.TEMPLATE_LIVE_METADATA, metadata_path)
     elif _is_ibm_provisioning():
-            return (Platforms.TEMPLATE_PROVISIONING_NODATA, None)
+        return (Platforms.TEMPLATE_PROVISIONING_NODATA, None)
     return not_found
 
 
diff --git a/cloudinit/sources/DataSourceMAAS.py b/cloudinit/sources/DataSourceMAAS.py
index 6ac8863..aa56add 100644
--- a/cloudinit/sources/DataSourceMAAS.py
+++ b/cloudinit/sources/DataSourceMAAS.py
@@ -204,7 +204,7 @@ def read_maas_seed_url(seed_url, read_file_or_url=None, timeout=None,
         seed_url = seed_url[:-1]
 
     md = {}
-    for path, dictname, binary, optional in DS_FIELDS:
+    for path, _dictname, binary, optional in DS_FIELDS:
         if version is None:
             url = "%s/%s" % (seed_url, path)
         else:
diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
index dc914a7..178ccb0 100644
--- a/cloudinit/sources/DataSourceOVF.py
+++ b/cloudinit/sources/DataSourceOVF.py
@@ -556,7 +556,7 @@ def search_file(dirpath, filename):
     if not dirpath or not filename:
         return None
 
-    for root, dirs, files in os.walk(dirpath):
+    for root, _dirs, files in os.walk(dirpath):
         if filename in files:
             return os.path.join(root, filename)
 
diff --git a/cloudinit/sources/DataSourceOpenStack.py b/cloudinit/sources/DataSourceOpenStack.py
index e55a763..fb166ae 100644
--- a/cloudinit/sources/DataSourceOpenStack.py
+++ b/cloudinit/sources/DataSourceOpenStack.py
@@ -86,7 +86,7 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
             md_urls.append(md_url)
             url2base[md_url] = url
 
-        (max_wait, timeout, retries) = self._get_url_settings()
+        (max_wait, timeout, _retries) = self._get_url_settings()
         start_time = time.time()
         avail_url = url_helper.wait_for_url(urls=md_urls, max_wait=max_wait,
                                             timeout=timeout)
@@ -106,7 +106,7 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
         except IOError:
             return False
 
-        (max_wait, timeout, retries) = self._get_url_settings()
+        (_max_wait, timeout, retries) = self._get_url_settings()
 
         try:
             results = util.log_time(LOG.debug,
diff --git a/cloudinit/sources/DataSourceSmartOS.py b/cloudinit/sources/DataSourceSmartOS.py
index c8998b4..4ea00eb 100644
--- a/cloudinit/sources/DataSourceSmartOS.py
+++ b/cloudinit/sources/DataSourceSmartOS.py
@@ -11,7 +11,7 @@
 #    SmartOS hosts use a serial console (/dev/ttyS1) on KVM Linux Guests
 #        The meta-data is transmitted via key/value pairs made by
 #        requests on the console. For example, to get the hostname, you
-#        would send "GET hostname" on /dev/ttyS1.
+#        would send "GET sdc:hostname" on /dev/ttyS1.
 #        For Linux Guests running in LX-Brand Zones on SmartOS hosts
 #        a socket (/native/.zonecontrol/metadata.sock) is used instead
 #        of a serial console.
@@ -23,6 +23,7 @@
 import base64
 import binascii
 import errno
+import fcntl
 import json
 import os
 import random
@@ -273,8 +274,14 @@ class DataSourceSmartOS(sources.DataSource):
         write_boot_content(u_data, u_data_f)
 
         # Handle the cloud-init regular meta
+
+        # The hostname may or may not be qualified with the local domain name.
+        # This follows section 3.14 of RFC 2132.
         if not md['local-hostname']:
-            md['local-hostname'] = md['instance-id']
+            if md['hostname']:
+                md['local-hostname'] = md['hostname']
+            else:
+                md['local-hostname'] = md['instance-id']
 
         ud = None
         if md['user-data']:
@@ -455,9 +462,9 @@ class JoyentMetadataClient(object):
 
     def list(self):
         result = self.request(rtype='KEYS')
-        if result:
-            result = result.split('\n')
-        return result
+        if not result:
+            return []
+        return result.split('\n')
 
     def put(self, key, val):
         param = b' '.join([base64.b64encode(i.encode())
@@ -520,6 +527,7 @@ class JoyentMetadataSerialClient(JoyentMetadataClient):
             if not ser.isOpen():
                 raise SystemError("Unable to open %s" % self.device)
             self.fp = ser
+            fcntl.lockf(ser, fcntl.LOCK_EX)
         self._flush()
         self._negotiate()
 
diff --git a/cloudinit/sources/helpers/digitalocean.py b/cloudinit/sources/helpers/digitalocean.py
index 693f8d5..0e7ccca 100644
--- a/cloudinit/sources/helpers/digitalocean.py
+++ b/cloudinit/sources/helpers/digitalocean.py
@@ -41,10 +41,9 @@ def assign_ipv4_link_local(nic=None):
                            "address")
 
     try:
-        (result, _err) = util.subp(ip_addr_cmd)
+        util.subp(ip_addr_cmd)
         LOG.debug("assigned ip4LL address '%s' to '%s'", addr, nic)
-
-        (result, _err) = util.subp(ip_link_cmd)
+        util.subp(ip_link_cmd)
         LOG.debug("brought device '%s' up", nic)
     except Exception:
         util.logexc(LOG, "ip4LL address assignment of '%s' to '%s' failed."
@@ -75,7 +74,7 @@ def del_ipv4_link_local(nic=None):
     ip_addr_cmd = ['ip', 'addr', 'flush', 'dev', nic]
 
     try:
-        (result, _err) = util.subp(ip_addr_cmd)
+        util.subp(ip_addr_cmd)
         LOG.debug("removed ip4LL addresses from %s", nic)
 
     except Exception as e:
diff --git a/cloudinit/sources/helpers/openstack.py b/cloudinit/sources/helpers/openstack.py
index 26f3168..a4cf066 100644
--- a/cloudinit/sources/helpers/openstack.py
+++ b/cloudinit/sources/helpers/openstack.py
@@ -638,7 +638,7 @@ def convert_net_json(network_json=None, known_macs=None):
             known_macs = net.get_interfaces_by_mac()
 
         # go through and fill out the link_id_info with names
-        for link_id, info in link_id_info.items():
+        for _link_id, info in link_id_info.items():
             if info.get('name'):
                 continue
             if info.get('mac') in known_macs:
diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py
index 2d8900e..3ef8c62 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_nic.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py
@@ -73,7 +73,7 @@ class NicConfigurator(object):
         The mac address(es) are in the lower case
         """
         cmd = ['ip', 'addr', 'show']
-        (output, err) = util.subp(cmd)
+        output, _err = util.subp(cmd)
         sections = re.split(r'\n\d+: ', '\n' + output)[1:]
 
         macPat = r'link/ether (([0-9A-Fa-f]{2}[:]){5}([0-9A-Fa-f]{2}))'
diff --git a/cloudinit/sources/helpers/vmware/imc/config_passwd.py b/cloudinit/sources/helpers/vmware/imc/config_passwd.py
index 75cfbaa..8c91fa4 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_passwd.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_passwd.py
@@ -56,10 +56,10 @@ class PasswordConfigurator(object):
         LOG.info('Expiring password.')
         for user in uidUserList:
             try:
-                out, err = util.subp(['passwd', '--expire', user])
+                util.subp(['passwd', '--expire', user])
             except util.ProcessExecutionError as e:
                 if os.path.exists('/usr/bin/chage'):
-                    out, e = util.subp(['chage', '-d', '0', user])
+                    util.subp(['chage', '-d', '0', user])
                 else:
                     LOG.warning('Failed to expire password for %s with error: '
                                 '%s', user, e)
diff --git a/cloudinit/sources/helpers/vmware/imc/guestcust_util.py b/cloudinit/sources/helpers/vmware/imc/guestcust_util.py
index 4407525..a590f32 100644
--- a/cloudinit/sources/helpers/vmware/imc/guestcust_util.py
+++ b/cloudinit/sources/helpers/vmware/imc/guestcust_util.py
@@ -91,7 +91,7 @@ def enable_nics(nics):
 
     for attempt in range(0, enableNicsWaitRetries):
         logger.debug("Trying to connect interfaces, attempt %d", attempt)
-        (out, err) = set_customization_status(
+        (out, _err) = set_customization_status(
             GuestCustStateEnum.GUESTCUST_STATE_RUNNING,
             GuestCustEventEnum.GUESTCUST_EVENT_ENABLE_NICS,
             nics)
@@ -104,7 +104,7 @@ def enable_nics(nics):
             return
 
         for count in range(0, enableNicsWaitCount):
-            (out, err) = set_customization_status(
+            (out, _err) = set_customization_status(
                 GuestCustStateEnum.GUESTCUST_STATE_RUNNING,
                 GuestCustEventEnum.GUESTCUST_EVENT_QUERY_NICS,
                 nics)
diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py
index e7fda22..452e921 100644
--- a/cloudinit/sources/tests/test_init.py
+++ b/cloudinit/sources/tests/test_init.py
@@ -278,7 +278,7 @@ class TestDataSource(CiTestCase):
         base_args = get_args(DataSource.get_hostname)  # pylint: disable=W1505
         # Import all DataSource subclasses so we can inspect them.
         modules = util.find_modules(os.path.dirname(os.path.dirname(__file__)))
-        for loc, name in modules.items():
+        for _loc, name in modules.items():
             mod_locs, _ = importer.find_module(name, ['cloudinit.sources'], [])
             if mod_locs:
                 importer.import_module(mod_locs[0])
diff --git a/cloudinit/ssh_util.py b/cloudinit/ssh_util.py
index 882517f..73c3177 100644
--- a/cloudinit/ssh_util.py
+++ b/cloudinit/ssh_util.py
@@ -279,24 +279,28 @@ class SshdConfigLine(object):
 
 
 def parse_ssh_config(fname):
+    if not os.path.isfile(fname):
+        return []
+    return parse_ssh_config_lines(util.load_file(fname).splitlines())
+
+
+def parse_ssh_config_lines(lines):
     # See: man sshd_config
     # The file contains keyword-argument pairs, one per line.
     # Lines starting with '#' and empty lines are interpreted as comments.
     # Note: key-words are case-insensitive and arguments are case-sensitive
-    lines = []
-    if not os.path.isfile(fname):
-        return lines
-    for line in util.load_file(fname).splitlines():
+    ret = []
+    for line in lines:
         line = line.strip()
         if not line or line.startswith("#"):
-            lines.append(SshdConfigLine(line))
+            ret.append(SshdConfigLine(line))
             continue
         try:
             key, val = line.split(None, 1)
         except ValueError:
             key, val = line.split('=', 1)
-        lines.append(SshdConfigLine(line, key, val))
-    return lines
+        ret.append(SshdConfigLine(line, key, val))
+    return ret
 
 
 def parse_ssh_config_map(fname):
@@ -310,4 +314,56 @@ def parse_ssh_config_map(fname):
         ret[line.key] = line.value
     return ret
 
+
+def update_ssh_config(updates, fname=DEF_SSHD_CFG):
+    """Read fname, and update if changes are necessary.
+
+    @param updates: dictionary of desired values {Option: value}
+    @return: boolean indicating if an update was done."""
+    lines = parse_ssh_config(fname)
+    changed = update_ssh_config_lines(lines=lines, updates=updates)
+    if changed:
+        util.write_file(
+            fname, "\n".join([str(l) for l in lines]) + "\n", copy_mode=True)
+    return len(changed) != 0
+
+
+def update_ssh_config_lines(lines, updates):
+    """Update the ssh config lines per updates.
+
+    @param lines: array of SshdConfigLine.  This array is updated in place.
+    @param updates: dictionary of desired values {Option: value}
+    @return: A list of keys in updates that were changed."""
+    found = set()
+    changed = []
+
+    # Keywords are case-insensitive and arguments are case-sensitive
+    casemap = dict([(k.lower(), k) for k in updates.keys()])
+
+    for (i, line) in enumerate(lines, start=1):
+        if not line.key:
+            continue
+        if line.key in casemap:
+            key = casemap[line.key]
+            value = updates[key]
+            found.add(key)
+            if line.value == value:
+                LOG.debug("line %d: option %s already set to %s",
+                          i, key, value)
+            else:
+                changed.append(key)
+                LOG.debug("line %d: option %s updated %s -> %s", i,
+                          key, line.value, value)
+                line.value = value
+
+    if len(found) != len(updates):
+        for key, value in updates.items():
+            if key in found:
+                continue
+            changed.append(key)
+            lines.append(SshdConfigLine('', key, value))
+            LOG.debug("line %d: option %s added with %s",
+                      len(lines), key, value)
+    return changed
+
 # vi: ts=4 expandtab
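
A minimal sketch of how the new update_ssh_config() helper behaves, for
reviewers unfamiliar with it.  The 'PasswordAuthentication' option is an
illustrative choice, and the tie-in to set_passwords ("only restart if
updated") comes from the changelog rather than this hunk:

    from cloudinit import ssh_util

    # Option names are matched case-insensitively against existing
    # sshd_config lines; values are written case-sensitively.  The helper
    # returns True only if the file content actually changed, which is what
    # allows callers to skip restarting sshd when nothing was rewritten.
    changed = ssh_util.update_ssh_config({'PasswordAuthentication': 'no'})
    if changed:
        print("sshd_config rewritten; sshd would need a restart")
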
diff --git a/cloudinit/templater.py b/cloudinit/templater.py
index 9a087e1..7e7acb8 100644
--- a/cloudinit/templater.py
+++ b/cloudinit/templater.py
@@ -147,7 +147,7 @@ def render_string(content, params):
     Warning: py2 str with non-ascii chars will cause UnicodeDecodeError."""
     if not params:
         params = {}
-    template_type, renderer, content = detect_template(content)
+    _template_type, renderer, content = detect_template(content)
     return renderer(content, params)
 
 # vi: ts=4 expandtab
diff --git a/cloudinit/tests/helpers.py b/cloudinit/tests/helpers.py
index 82fd347..117a9cf 100644
--- a/cloudinit/tests/helpers.py
+++ b/cloudinit/tests/helpers.py
@@ -8,6 +8,7 @@ import os
 import shutil
 import sys
 import tempfile
+import time
 import unittest
 
 import mock
@@ -24,6 +25,8 @@ try:
 except ImportError:
     from ConfigParser import ConfigParser
 
+from cloudinit.config.schema import (
+    SchemaValidationError, validate_cloudconfig_schema)
 from cloudinit import helpers as ch
 from cloudinit import util
 
@@ -261,7 +264,8 @@ class FilesystemMockingTestCase(ResourceUsingTestCase):
             os.path: [('isfile', 1), ('exists', 1),
                       ('islink', 1), ('isdir', 1), ('lexists', 1)],
             os: [('listdir', 1), ('mkdir', 1),
-                 ('lstat', 1), ('symlink', 2)]
+                 ('lstat', 1), ('symlink', 2),
+                 ('stat', 1)]
         }
 
         if hasattr(os, 'scandir'):
@@ -312,6 +316,23 @@ class HttprettyTestCase(CiTestCase):
         super(HttprettyTestCase, self).tearDown()
 
 
+class SchemaTestCaseMixin(unittest2.TestCase):
+
+    def assertSchemaValid(self, cfg, msg="Valid Schema failed validation."):
+        """Assert the config is valid per self.schema.
+
+        If there is only one top level key in the schema properties, then
+        the cfg will be put under that key."""
+        props = list(self.schema.get('properties'))
+        # put cfg under top level key if there is only one in the schema
+        if len(props) == 1:
+            cfg = {props[0]: cfg}
+        try:
+            validate_cloudconfig_schema(cfg, self.schema, strict=True)
+        except SchemaValidationError:
+            self.fail(msg)
+
+
 def populate_dir(path, files):
     if not os.path.exists(path):
         os.makedirs(path)
@@ -330,11 +351,20 @@ def populate_dir(path, files):
     return ret
 
 
+def populate_dir_with_ts(path, data):
+    """data is {'file': ('contents', mtime)}.  mtime relative to now."""
+    populate_dir(path, dict((k, v[0]) for k, v in data.items()))
+    btime = time.time()
+    for fpath, (_contents, mtime) in data.items():
+        ts = btime + mtime if mtime else btime
+        os.utime(os.path.sep.join((path, fpath)), (ts, ts))
+
+
 def dir2dict(startdir, prefix=None):
     flist = {}
     if prefix is None:
         prefix = startdir
-    for root, dirs, files in os.walk(startdir):
+    for root, _dirs, files in os.walk(startdir):
         for fname in files:
             fpath = os.path.join(root, fname)
             key = fpath[len(prefix):]
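
For context on the new populate_dir_with_ts() helper: each value is a
(contents, mtime-offset) pair, the offset being in seconds relative to
"now", so negative offsets fake files left over from a previous boot.  A
small sketch mirroring how the IBMCloud provisioning tests further down
drive it (the directory is illustrative; the tests use self.tmp_dir()):

    from cloudinit.tests import helpers

    rootd = '/tmp/fake-root'
    helpers.populate_dir_with_ts(rootd, {
        '/root/provisioningConfiguration.cfg': ('key=value\n', -10),
        '/root/swinstall.log': ('log data\n', -30),   # 30s older than "now"
        '/proc/1/environ': ('PWD=/', 0),              # written "now"
    })
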
diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
index 3f37dbb..3c05a43 100644
--- a/cloudinit/tests/test_util.py
+++ b/cloudinit/tests/test_util.py
@@ -135,7 +135,7 @@ class TestGetHostnameFqdn(CiTestCase):
     def test_get_hostname_fqdn_from_passes_metadata_only_to_cloud(self):
         """Calls to cloud.get_hostname pass the metadata_only parameter."""
         mycloud = FakeCloud('cloudhost', 'cloudhost.mycloud.com')
-        hostname, fqdn = util.get_hostname_fqdn(
+        _hn, _fqdn = util.get_hostname_fqdn(
             cfg={}, cloud=mycloud, metadata_only=True)
         self.assertEqual(
             [{'fqdn': True, 'metadata_only': True},
@@ -212,4 +212,53 @@ class TestBlkid(CiTestCase):
                                   capture=True, decode="replace")
 
 
+@mock.patch('cloudinit.util.subp')
+class TestUdevadmSettle(CiTestCase):
+    def test_with_no_params(self, m_subp):
+        """called with no parameters."""
+        util.udevadm_settle()
+        m_subp.assert_called_once_with(['udevadm', 'settle'])
+
+    def test_with_exists_and_not_exists(self, m_subp):
+        """with exists=file where file does not exist should invoke subp."""
+        mydev = self.tmp_path("mydev")
+        util.udevadm_settle(exists=mydev)
+        m_subp.assert_called_once_with(
+            ['udevadm', 'settle', '--exit-if-exists=%s' % mydev])
+
+    def test_with_exists_and_file_exists(self, m_subp):
+        """with exists=file where file does exist should not invoke subp."""
+        mydev = self.tmp_path("mydev")
+        util.write_file(mydev, "foo\n")
+        util.udevadm_settle(exists=mydev)
+        self.assertIsNone(m_subp.call_args)
+
+    def test_with_timeout_int(self, m_subp):
+        """timeout can be an integer."""
+        timeout = 9
+        util.udevadm_settle(timeout=timeout)
+        m_subp.assert_called_once_with(
+            ['udevadm', 'settle', '--timeout=%s' % timeout])
+
+    def test_with_timeout_string(self, m_subp):
+        """timeout can be a string."""
+        timeout = "555"
+        util.udevadm_settle(timeout=timeout)
+        m_subp.assert_called_once_with(
+            ['udevadm', 'settle', '--timeout=%s' % timeout])
+
+    def test_with_exists_and_timeout(self, m_subp):
+        """test call with both exists and timeout."""
+        mydev = self.tmp_path("mydev")
+        timeout = "3"
+        util.udevadm_settle(exists=mydev, timeout=timeout)
+        m_subp.assert_called_once_with(
+            ['udevadm', 'settle', '--exit-if-exists=%s' % mydev,
+             '--timeout=%s' % timeout])
+
+    def test_subp_exception_raises_to_caller(self, m_subp):
+        m_subp.side_effect = util.ProcessExecutionError("BOOM")
+        self.assertRaises(util.ProcessExecutionError, util.udevadm_settle)
+
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 03a573a..1de07b1 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -519,7 +519,7 @@ def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret,
         resource_owner_secret=token_secret,
         signature_method=oauth1.SIGNATURE_PLAINTEXT,
         timestamp=timestamp)
-    uri, signed_headers, body = client.sign(url)
+    _uri, signed_headers, _body = client.sign(url)
     return signed_headers
 
 # vi: ts=4 expandtab
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 1717b52..2828ca3 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -2214,7 +2214,7 @@ def parse_mtab(path):
 def find_freebsd_part(label_part):
     if label_part.startswith("/dev/label/"):
         target_label = label_part[5:]
-        (label_part, err) = subp(['glabel', 'status', '-s'])
+        (label_part, _err) = subp(['glabel', 'status', '-s'])
         for labels in label_part.split("\n"):
             items = labels.split()
             if len(items) > 0 and items[0].startswith(target_label):
@@ -2727,4 +2727,19 @@ def mount_is_read_write(mount_point):
     mount_opts = result[-1].split(',')
     return mount_opts[0] == 'rw'
 
+
+def udevadm_settle(exists=None, timeout=None):
+    """Invoke udevadm settle with optional exists and timeout parameters"""
+    settle_cmd = ["udevadm", "settle"]
+    if exists:
+        # skip the settle if the requested path already exists
+        if os.path.exists(exists):
+            return
+        settle_cmd.extend(['--exit-if-exists=%s' % exists])
+    if timeout:
+        settle_cmd.extend(['--timeout=%s' % timeout])
+
+    return subp(settle_cmd)
+
+
 # vi: ts=4 expandtab
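
A quick illustration of the new util.udevadm_settle() helper (the device
path and timeout below are made-up values): it returns immediately when the
path given via exists= is already present, and otherwise shells out to
'udevadm settle' with the optional flags.  Per the new tests in
tests/unittests/test_net.py, generate_fallback_config() uses it when
interface names look unstable:

    from cloudinit import util

    # Skip the settle entirely if /dev/xvdb already exists; otherwise run
    # 'udevadm settle --exit-if-exists=/dev/xvdb --timeout=30'.
    util.udevadm_settle(exists='/dev/xvdb', timeout=30)

    # With no arguments this is just 'udevadm settle'.
    util.udevadm_settle()
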
diff --git a/debian/changelog b/debian/changelog
index 45016a5..7199b4f 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,28 @@
+cloud-init (18.2-27-g6ef92c98-0ubuntu1) bionic; urgency=medium
+
+  * New upstream snapshot.
+    - IBMCloud: recognize provisioning environment during debug boots.
+      (LP: #1767166)
+    - net: detect unstable network names and trigger a settle if needed
+      (LP: #1766287)
+    - IBMCloud: improve documentation in datasource.
+    - sysconfig: dhcp6 subnet type should not imply dhcpv4 [Vitaly Kuznetsov]
+    - packages/debian/control.in: add missing dependency on iproute2.
+      (LP: #1766711)
+    - DataSourceSmartOS: add locking of serial device.
+      [Mike Gerdts] (LP: #1746605)
+    - DataSourceSmartOS: sdc:hostname is ignored [Mike Gerdts] (LP: #1765085)
+    - DataSourceSmartOS: list() should always return a list
+      [Mike Gerdts] (LP: #1763480)
+    - schema: in validation, raise ImportError if strict but no jsonschema.
+    - set_passwords: Add newline to end of sshd config, only restart if
+      updated. (LP: #1677205)
+    - pylint: pay attention to unused variable warnings.
+    - doc: Add documentation for AliYun datasource. [Junjie Wang]
+    - Schema: do not warn on duplicate items in commands. (LP: #1764264)
+
+ -- Ryan Harper <ryan.harper@xxxxxxxxxxxxx>  Thu, 26 Apr 2018 16:33:59 -0500
+
 cloud-init (18.2-14-g6d48d265-0ubuntu1) bionic; urgency=medium
 
   * New upstream snapshot.
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index 7e2854d..38ba75d 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -80,6 +80,7 @@ Follow for more information.
 .. toctree::
    :maxdepth: 2
 
+   datasources/aliyun.rst
    datasources/altcloud.rst
    datasources/azure.rst
    datasources/cloudsigma.rst
diff --git a/doc/rtd/topics/datasources/aliyun.rst b/doc/rtd/topics/datasources/aliyun.rst
new file mode 100644
index 0000000..3f4f40c
--- /dev/null
+++ b/doc/rtd/topics/datasources/aliyun.rst
@@ -0,0 +1,74 @@
+.. _datasource_aliyun:
+
+Alibaba Cloud (AliYun)
+======================
+The ``AliYun`` datasource reads data from Alibaba Cloud ECS.  Support has
+been present in cloud-init since version 0.7.9.
+
+Metadata Service
+----------------
+The Alibaba Cloud metadata service is available at the well-known URL
+``http://100.100.100.200/``.  For more information, see the Alibaba Cloud ECS
+documentation on `metadata
+<https://www.alibabacloud.com/help/zh/faq-detail/49122.htm>`__.
+
+Versions
+^^^^^^^^
+Like the EC2 metadata service, Alibaba Cloud's metadata service provides
+versioned data under specific paths.  As of April 2018, there are only
+``2016-01-01`` and ``latest`` versions.
+
+It is expected that the dated version will maintain a stable interface, but
+``latest`` may change content at a future date.
+
+Cloud-init uses the ``2016-01-01`` version.
+
+You can list the versions available to your instance with:
+
+.. code-block:: shell-session
+
+    $ curl http://100.100.100.200/
+    2016-01-01
+    latest
+
+Metadata
+^^^^^^^^
+Instance metadata can be queried at
+``http://100.100.100.200/2016-01-01/meta-data``
+
+.. code-block:: shell-session
+
+    $ curl http://100.100.100.200/2016-01-01/meta-data
+    dns-conf/
+    eipv4
+    hostname
+    image-id
+    instance-id
+    instance/
+    mac
+    network-type
+    network/
+    ntp-conf/
+    owner-account-id
+    private-ipv4
+    public-keys/
+    region-id
+    serial-number
+    source-address
+    sub-private-ipv4-list
+    vpc-cidr-block
+    vpc-id
+
+Userdata
+^^^^^^^^
+If provided, user-data will appear at
+``http://100.100.100.200/2016-01-01/user-data``.
+If no user-data is provided, this will return a 404.
+
+.. code-block:: shell-session
+
+    $ curl http://100.100.100.200/2016-01-01/user-data
+    #!/bin/sh
+    echo "Hello World."
+
+.. vi: textwidth=78
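
To complement the curl transcripts in the new document, a short Python
sketch of the same lookups; this is purely illustrative and not part of the
datasource code.  The 404 behaviour for absent user-data is the one
described in the Userdata section:

    import urllib.error
    import urllib.request

    MD = 'http://100.100.100.200/2016-01-01'

    # instance-id is one of the keys in the meta-data listing above
    with urllib.request.urlopen(MD + '/meta-data/instance-id') as resp:
        print(resp.read().decode())

    # user-data returns 404 when none was provided for the instance
    try:
        with urllib.request.urlopen(MD + '/user-data') as resp:
            print(resp.read().decode())
    except urllib.error.HTTPError as e:
        if e.code == 404:
            print('no user-data provided')
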
diff --git a/packages/debian/control.in b/packages/debian/control.in
index 46da6df..e9ed64f 100644
--- a/packages/debian/control.in
+++ b/packages/debian/control.in
@@ -11,6 +11,7 @@ Package: cloud-init
 Architecture: all
 Depends: ${misc:Depends},
          ${${python}:Depends},
+         iproute2,
          isc-dhcp-client
 Recommends: eatmydata, sudo, software-properties-common, gdisk
 XB-Python-Version: ${python:Versions}
diff --git a/tests/cloud_tests/bddeb.py b/tests/cloud_tests/bddeb.py
index b9cfcfa..f04d0cd 100644
--- a/tests/cloud_tests/bddeb.py
+++ b/tests/cloud_tests/bddeb.py
@@ -113,7 +113,7 @@ def bddeb(args):
     @return_value: fail count
     """
     LOG.info('preparing to build cloud-init deb')
-    (res, failed) = run_stage('build deb', [partial(setup_build, args)])
+    _res, failed = run_stage('build deb', [partial(setup_build, args)])
     return failed
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/collect.py b/tests/cloud_tests/collect.py
index d4f9135..1ba7285 100644
--- a/tests/cloud_tests/collect.py
+++ b/tests/cloud_tests/collect.py
@@ -25,7 +25,8 @@ def collect_script(instance, base_dir, script, script_name):
         script.encode(), rcs=False,
         description='collect: {}'.format(script_name))
     if err:
-        LOG.debug("collect script %s had stderr: %s", script_name, err)
+        LOG.debug("collect script %s exited '%s' and had stderr: %s",
+                  script_name, exit, err)
     if not isinstance(out, bytes):
         raise util.PlatformError(
             "Collection of '%s' returned type %s, expected bytes: %s" %
diff --git a/tests/cloud_tests/platforms/instances.py b/tests/cloud_tests/platforms/instances.py
index 3bad021..cc439d2 100644
--- a/tests/cloud_tests/platforms/instances.py
+++ b/tests/cloud_tests/platforms/instances.py
@@ -108,7 +108,7 @@ class Instance(TargetBase):
                 return client
             except (ConnectionRefusedError, AuthenticationException,
                     BadHostKeyException, ConnectionResetError, SSHException,
-                    OSError) as e:
+                    OSError):
                 retries -= 1
                 time.sleep(10)
 
diff --git a/tests/cloud_tests/platforms/lxd/instance.py b/tests/cloud_tests/platforms/lxd/instance.py
index 0d957bc..1c17c78 100644
--- a/tests/cloud_tests/platforms/lxd/instance.py
+++ b/tests/cloud_tests/platforms/lxd/instance.py
@@ -152,9 +152,8 @@ class LXDInstance(Instance):
                 return fp.read()
 
         try:
-            stdout, stderr = subp(
-                ['lxc', 'console', '--show-log', self.name], decode=False)
-            return stdout
+            return subp(['lxc', 'console', '--show-log', self.name],
+                        decode=False)[0]
         except ProcessExecutionError as e:
             raise PlatformError(
                 "console log",
@@ -214,11 +213,10 @@ def _has_proper_console_support():
             reason = "LXD Driver version not 3.x+ (%s)" % dver
         else:
             try:
-                stdout, stderr = subp(['lxc', 'console', '--help'],
-                                      decode=False)
+                stdout = subp(['lxc', 'console', '--help'], decode=False)[0]
                 if not (b'console' in stdout and b'log' in stdout):
                     reason = "no '--log' in lxc console --help"
-            except ProcessExecutionError as e:
+            except ProcessExecutionError:
                 reason = "no 'console' command in lxc client"
 
     if reason:
diff --git a/tests/cloud_tests/setup_image.py b/tests/cloud_tests/setup_image.py
index 6d24211..4e19570 100644
--- a/tests/cloud_tests/setup_image.py
+++ b/tests/cloud_tests/setup_image.py
@@ -25,10 +25,9 @@ def installed_package_version(image, package, ensure_installed=True):
     else:
         raise NotImplementedError
 
-    msg = 'query version for package: {}'.format(package)
-    (out, err, exit) = image.execute(
-        cmd, description=msg, rcs=(0,) if ensure_installed else range(0, 256))
-    return out.strip()
+    return image.execute(
+        cmd, description='query version for package: {}'.format(package),
+        rcs=(0,) if ensure_installed else range(0, 256))[0].strip()
 
 
 def install_deb(args, image):
@@ -54,7 +53,7 @@ def install_deb(args, image):
          remote_path], description=msg)
     # check installed deb version matches package
     fmt = ['-W', "--showformat=${Version}"]
-    (out, err, exit) = image.execute(['dpkg-deb'] + fmt + [remote_path])
+    out = image.execute(['dpkg-deb'] + fmt + [remote_path])[0]
     expected_version = out.strip()
     found_version = installed_package_version(image, 'cloud-init')
     if expected_version != found_version:
@@ -85,7 +84,7 @@ def install_rpm(args, image):
     image.execute(['rpm', '-U', remote_path], description=msg)
 
     fmt = ['--queryformat', '"%{VERSION}"']
-    (out, err, exit) = image.execute(['rpm', '-q'] + fmt + [remote_path])
+    (out, _err, _exit) = image.execute(['rpm', '-q'] + fmt + [remote_path])
     expected_version = out.strip()
     found_version = installed_package_version(image, 'cloud-init')
     if expected_version != found_version:
diff --git a/tests/cloud_tests/testcases/base.py b/tests/cloud_tests/testcases/base.py
index 4fda8f9..0d1916b 100644
--- a/tests/cloud_tests/testcases/base.py
+++ b/tests/cloud_tests/testcases/base.py
@@ -159,7 +159,7 @@ class CloudTestCase(unittest.TestCase):
         expected_net_keys = [
             'public-ipv4s', 'ipv4-associations', 'local-hostname',
             'public-hostname']
-        for mac, mac_data in macs.items():
+        for mac_data in macs.values():
             for key in expected_net_keys:
                 self.assertIn(key, mac_data)
         self.assertIsNotNone(
diff --git a/tests/cloud_tests/testcases/examples/including_user_groups.py b/tests/cloud_tests/testcases/examples/including_user_groups.py
index 93b7a82..4067348 100644
--- a/tests/cloud_tests/testcases/examples/including_user_groups.py
+++ b/tests/cloud_tests/testcases/examples/including_user_groups.py
@@ -42,7 +42,7 @@ class TestUserGroups(base.CloudTestCase):
 
     def test_user_root_in_secret(self):
         """Test root user is in 'secret' group."""
-        user, _, groups = self.get_data_file('root_groups').partition(":")
+        _user, _, groups = self.get_data_file('root_groups').partition(":")
         self.assertIn("secret", groups.split(),
                       msg="User root is not in group 'secret'")
 
diff --git a/tests/cloud_tests/testcases/modules/user_groups.py b/tests/cloud_tests/testcases/modules/user_groups.py
index 93b7a82..4067348 100644
--- a/tests/cloud_tests/testcases/modules/user_groups.py
+++ b/tests/cloud_tests/testcases/modules/user_groups.py
@@ -42,7 +42,7 @@ class TestUserGroups(base.CloudTestCase):
 
     def test_user_root_in_secret(self):
         """Test root user is in 'secret' group."""
-        user, _, groups = self.get_data_file('root_groups').partition(":")
+        _user, _, groups = self.get_data_file('root_groups').partition(":")
         self.assertIn("secret", groups.split(),
                       msg="User root is not in group 'secret'")
 
diff --git a/tests/cloud_tests/util.py b/tests/cloud_tests/util.py
index 3dd4996..06f7d86 100644
--- a/tests/cloud_tests/util.py
+++ b/tests/cloud_tests/util.py
@@ -358,7 +358,7 @@ class TargetBase(object):
         # when sh is invoked with '-c', then the first argument is "$0"
         # which is commonly understood as the "program name".
         # 'read_data' is the program name, and 'remote_path' is '$1'
-        stdout, stderr, rc = self._execute(
+        stdout, _stderr, rc = self._execute(
             ["sh", "-c", 'exec cat "$1"', 'read_data', remote_path])
         if rc != 0:
             raise RuntimeError("Failed to read file '%s'" % remote_path)
diff --git a/tests/unittests/test__init__.py b/tests/unittests/test__init__.py
index 25878d7..f1ab02e 100644
--- a/tests/unittests/test__init__.py
+++ b/tests/unittests/test__init__.py
@@ -214,7 +214,7 @@ class TestCmdlineUrl(CiTestCase):
     def test_no_key_found(self, m_read):
         cmdline = "ro mykey=http://example.com/foo root=foo"
         fpath = self.tmp_path("ccpath")
-        lvl, msg = main.attempt_cmdline_url(
+        lvl, _msg = main.attempt_cmdline_url(
             fpath, network=True, cmdline=cmdline)
 
         m_read.assert_not_called()
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 3e8b791..88fe76c 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -214,7 +214,7 @@ scbus-1 on xpt0 bus 0
                 self.assertIn(tag, x)
 
         def tags_equal(x, y):
-            for x_tag, x_val in x.items():
+            for x_val in x.values():
                 y_val = y.get(x_val.tag)
                 self.assertEqual(x_val.text, y_val.text)
 
@@ -1216,7 +1216,7 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         fake_resp.return_value = mock.MagicMock(status_code=200, text=content,
                                                 content=content)
         dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
-        md, ud, cfg, d = dsa._reprovision()
+        md, _ud, cfg, _d = dsa._reprovision()
         self.assertEqual(md['local-hostname'], hostname)
         self.assertEqual(cfg['system_info']['default_user']['name'], username)
         self.assertEqual(fake_resp.call_args_list,
diff --git a/tests/unittests/test_datasource/test_ibmcloud.py b/tests/unittests/test_datasource/test_ibmcloud.py
index 621cfe4..e639ae4 100644
--- a/tests/unittests/test_datasource/test_ibmcloud.py
+++ b/tests/unittests/test_datasource/test_ibmcloud.py
@@ -259,4 +259,54 @@ class TestReadMD(test_helpers.CiTestCase):
                          ret['metadata'])
 
 
+class TestIsIBMProvisioning(test_helpers.FilesystemMockingTestCase):
+    """Test the _is_ibm_provisioning method."""
+    inst_log = "/root/swinstall.log"
+    prov_cfg = "/root/provisioningConfiguration.cfg"
+    boot_ref = "/proc/1/environ"
+    with_logs = True
+
+    def _call_with_root(self, rootd):
+        self.reRoot(rootd)
+        return ibm._is_ibm_provisioning()
+
+    def test_no_config(self):
+        """No provisioning config means not provisioning."""
+        self.assertFalse(self._call_with_root(self.tmp_dir()))
+
+    def test_config_only(self):
+        """A provisioning config without a log means provisioning."""
+        rootd = self.tmp_dir()
+        test_helpers.populate_dir(rootd, {self.prov_cfg: "key=value"})
+        self.assertTrue(self._call_with_root(rootd))
+
+    def test_config_with_old_log(self):
+        """A config with a log from previous boot is not provisioning."""
+        rootd = self.tmp_dir()
+        data = {self.prov_cfg: ("key=value\nkey2=val2\n", -10),
+                self.inst_log: ("log data\n", -30),
+                self.boot_ref: ("PWD=/", 0)}
+        test_helpers.populate_dir_with_ts(rootd, data)
+        self.assertFalse(self._call_with_root(rootd=rootd))
+        self.assertIn("from previous boot", self.logs.getvalue())
+
+    def test_config_with_new_log(self):
+        """A config with a log from this boot is provisioning."""
+        rootd = self.tmp_dir()
+        data = {self.prov_cfg: ("key=value\nkey2=val2\n", -10),
+                self.inst_log: ("log data\n", 30),
+                self.boot_ref: ("PWD=/", 0)}
+        test_helpers.populate_dir_with_ts(rootd, data)
+        self.assertTrue(self._call_with_root(rootd=rootd))
+        self.assertIn("from current boot", self.logs.getvalue())
+
+    def test_config_and_log_no_reference(self):
+        """If the config and log existed, but no reference, assume not."""
+        rootd = self.tmp_dir()
+        test_helpers.populate_dir(
+            rootd, {self.prov_cfg: "key=value", self.inst_log: "log data\n"})
+        self.assertFalse(self._call_with_root(rootd=rootd))
+        self.assertIn("no reference file", self.logs.getvalue())
+
+
 # vi: ts=4 expandtab
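
The new TestIsIBMProvisioning cases above encode the intended behaviour of
_is_ibm_provisioning().  The rough sketch below restates that logic as the
test expectations imply it; it is a reviewer aid inferred from the tests,
not a copy of the datasource implementation:

    import os

    PROV_CFG = '/root/provisioningConfiguration.cfg'
    INST_LOG = '/root/swinstall.log'
    BOOT_REF = '/proc/1/environ'

    def is_ibm_provisioning_sketch():
        """Provisioning iff the config exists and any install log is newer
        than the boot reference file."""
        if not os.path.exists(PROV_CFG):
            return False   # no config -> not provisioning
        if not os.path.exists(INST_LOG):
            return True    # config but no log yet -> provisioning
        if not os.path.exists(BOOT_REF):
            return False   # nothing to compare the log against -> assume not
        # a log written after this boot started means still provisioning
        return os.path.getmtime(INST_LOG) > os.path.getmtime(BOOT_REF)
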
diff --git a/tests/unittests/test_datasource/test_maas.py b/tests/unittests/test_datasource/test_maas.py
index 6e4031c..c84d067 100644
--- a/tests/unittests/test_datasource/test_maas.py
+++ b/tests/unittests/test_datasource/test_maas.py
@@ -53,7 +53,7 @@ class TestMAASDataSource(CiTestCase):
         my_d = os.path.join(self.tmp, "valid_extra")
         populate_dir(my_d, data)
 
-        ud, md, vd = DataSourceMAAS.read_maas_seed_dir(my_d)
+        ud, md, _vd = DataSourceMAAS.read_maas_seed_dir(my_d)
 
         self.assertEqual(userdata, ud)
         for key in ('instance-id', 'local-hostname'):
@@ -149,7 +149,7 @@ class TestMAASDataSource(CiTestCase):
             'meta-data/local-hostname': 'test-hostname',
             'meta-data/vendor-data': yaml.safe_dump(expected_vd).encode(),
         }
-        ud, md, vd = self.mock_read_maas_seed_url(
+        _ud, md, vd = self.mock_read_maas_seed_url(
             valid, "http://example.com/foo";)
 
         self.assertEqual(valid['meta-data/instance-id'], md['instance-id'])
diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py
index 70d50de..cdbd1e1 100644
--- a/tests/unittests/test_datasource/test_nocloud.py
+++ b/tests/unittests/test_datasource/test_nocloud.py
@@ -51,9 +51,6 @@ class TestNoCloudDataSource(CiTestCase):
         class PsuedoException(Exception):
             pass
 
-        def my_find_devs_with(*args, **kwargs):
-            raise PsuedoException
-
         self.mocks.enter_context(
             mock.patch.object(util, 'find_devs_with',
                               side_effect=PsuedoException))
diff --git a/tests/unittests/test_datasource/test_smartos.py b/tests/unittests/test_datasource/test_smartos.py
index 2bea7a1..706e8eb 100644
--- a/tests/unittests/test_datasource/test_smartos.py
+++ b/tests/unittests/test_datasource/test_smartos.py
@@ -16,23 +16,27 @@ from __future__ import print_function
 
 from binascii import crc32
 import json
+import multiprocessing
 import os
 import os.path
 import re
 import shutil
+import signal
 import stat
 import tempfile
+import unittest2
 import uuid
 
 from cloudinit import serial
 from cloudinit.sources import DataSourceSmartOS
 from cloudinit.sources.DataSourceSmartOS import (
-    convert_smartos_network_data as convert_net)
+    convert_smartos_network_data as convert_net,
+    SMARTOS_ENV_KVM, SERIAL_DEVICE, get_smartos_environ)
 
 import six
 
 from cloudinit import helpers as c_helpers
-from cloudinit.util import b64e
+from cloudinit.util import (b64e, subp)
 
 from cloudinit.tests.helpers import mock, FilesystemMockingTestCase, TestCase
 
@@ -319,6 +323,12 @@ MOCK_RETURNS = {
 
 DMI_DATA_RETURN = 'smartdc'
 
+# Useful for calculating the length of a frame body.  A SUCCESS body is
+# followed by a payload, or is one character shorter (no trailing space)
+# when SUCCESS carries no payload.  See Section 4.3 of
+# https://eng.joyent.com/mdata/protocol.html.
+SUCCESS_LEN = len('0123abcd SUCCESS ')
+NOTFOUND_LEN = len('0123abcd NOTFOUND')
+
 
 class PsuedoJoyentClient(object):
     def __init__(self, data=None):
@@ -431,6 +441,34 @@ class TestSmartOSDataSource(FilesystemMockingTestCase):
         self.assertEqual(MOCK_RETURNS['hostname'],
                          dsrc.metadata['local-hostname'])
 
+    def test_hostname_if_no_sdc_hostname(self):
+        my_returns = MOCK_RETURNS.copy()
+        my_returns['sdc:hostname'] = 'sdc-' + my_returns['hostname']
+        dsrc = self._get_ds(mockdata=my_returns)
+        ret = dsrc.get_data()
+        self.assertTrue(ret)
+        self.assertEqual(my_returns['hostname'],
+                         dsrc.metadata['local-hostname'])
+
+    def test_sdc_hostname_if_no_hostname(self):
+        my_returns = MOCK_RETURNS.copy()
+        my_returns['sdc:hostname'] = 'sdc-' + my_returns['hostname']
+        del my_returns['hostname']
+        dsrc = self._get_ds(mockdata=my_returns)
+        ret = dsrc.get_data()
+        self.assertTrue(ret)
+        self.assertEqual(my_returns['sdc:hostname'],
+                         dsrc.metadata['local-hostname'])
+
+    def test_sdc_uuid_if_no_hostname_or_sdc_hostname(self):
+        my_returns = MOCK_RETURNS.copy()
+        del my_returns['hostname']
+        dsrc = self._get_ds(mockdata=my_returns)
+        ret = dsrc.get_data()
+        self.assertTrue(ret)
+        self.assertEqual(my_returns['sdc:uuid'],
+                         dsrc.metadata['local-hostname'])
+
     def test_userdata(self):
         dsrc = self._get_ds(mockdata=MOCK_RETURNS)
         ret = dsrc.get_data()
@@ -651,7 +689,7 @@ class TestJoyentMetadataClient(FilesystemMockingTestCase):
         self.response_parts = {
             'command': 'SUCCESS',
             'crc': 'b5a9ff00',
-            'length': 17 + len(b64e(self.metadata_value)),
+            'length': SUCCESS_LEN + len(b64e(self.metadata_value)),
             'payload': b64e(self.metadata_value),
             'request_id': '{0:08x}'.format(self.request_id),
         }
@@ -787,7 +825,7 @@ class TestJoyentMetadataClient(FilesystemMockingTestCase):
     def test_get_metadata_returns_None_if_value_not_found(self):
         self.response_parts['payload'] = ''
         self.response_parts['command'] = 'NOTFOUND'
-        self.response_parts['length'] = 17
+        self.response_parts['length'] = NOTFOUND_LEN
         client = self._get_client()
         client._checksum = lambda _: self.response_parts['crc']
         self.assertIsNone(client.get('some_key'))
@@ -838,6 +876,22 @@ class TestJoyentMetadataClient(FilesystemMockingTestCase):
         client.open_transport()
         self.assertTrue(reader.emptied)
 
+    def test_list_metadata_returns_list(self):
+        parts = ['foo', 'bar']
+        value = b64e('\n'.join(parts))
+        self.response_parts['payload'] = value
+        self.response_parts['crc'] = '40873553'
+        self.response_parts['length'] = SUCCESS_LEN + len(value)
+        client = self._get_client()
+        self.assertEqual(client.list(), parts)
+
+    def test_list_metadata_returns_empty_list_if_no_customer_metadata(self):
+        del self.response_parts['payload']
+        self.response_parts['length'] = SUCCESS_LEN - 1
+        self.response_parts['crc'] = '14e563ba'
+        client = self._get_client()
+        self.assertEqual(client.list(), [])
+
 
 class TestNetworkConversion(TestCase):
     def test_convert_simple(self):
@@ -973,4 +1027,63 @@ class TestNetworkConversion(TestCase):
         found = convert_net(SDC_NICS_SINGLE_GATEWAY)
         self.assertEqual(expected, found)
 
+
+@unittest2.skipUnless(get_smartos_environ() == SMARTOS_ENV_KVM,
+                      "Only supported on KVM and bhyve guests under SmartOS")
+@unittest2.skipUnless(os.access(SERIAL_DEVICE, os.W_OK),
+                      "Requires write access to " + SERIAL_DEVICE)
+class TestSerialConcurrency(TestCase):
+    """
+       This class tests locking on an actual serial port, and as such can only
+       be run in a kvm or bhyve guest running on a SmartOS host.  A test run on
+       a metadata socket will not be valid because a metadata socket ensures
+       there is only one session over a connection.  In contrast, in the
+       absence of proper locking, multiple processes opening the same serial
+       port can corrupt each other's exchanges with the metadata server.
+    """
+    def setUp(self):
+        self.mdata_proc = multiprocessing.Process(target=self.start_mdata_loop)
+        self.mdata_proc.start()
+        super(TestSerialConcurrency, self).setUp()
+
+    def tearDown(self):
+        # os.kill() rather than mdata_proc.terminate() to avoid console spam.
+        os.kill(self.mdata_proc.pid, signal.SIGKILL)
+        self.mdata_proc.join()
+        super(TestSerialConcurrency, self).tearDown()
+
+    def start_mdata_loop(self):
+        """
+           The mdata-get command is repeatedly run in a separate process so
+           that it may try to race with metadata operations performed in the
+           main test process.  Use of mdata-get is better than two processes
+           using the protocol implementation in DataSourceSmartOS because we
+           are testing to be sure that cloud-init and mdata-get respect each
+           other's locks.
+        """
+        rcs = list(range(0, 256))
+        while True:
+            subp(['mdata-get', 'sdc:routes'], rcs=rcs)
+
+    def test_all_keys(self):
+        self.assertIsNotNone(self.mdata_proc.pid)
+        ds = DataSourceSmartOS
+        keys = [tup[0] for tup in ds.SMARTOS_ATTRIB_MAP.values()]
+        keys.extend(ds.SMARTOS_ATTRIB_JSON.values())
+
+        client = ds.jmc_client_factory()
+        self.assertIsNotNone(client)
+
+        # The behavior that we are testing for was observed with mdata-get
+        # running 10 times at roughly the same time as cloud-init fetched
+        # each key once.  cloud-init would regularly see failures before
+        # making it through all keys once.
+        for _ in range(0, 3):
+            for key in keys:
+                # We don't care about the return value, just that it doesn't
+                # throw any exceptions.
+                client.get(key)
+
+        self.assertIsNone(self.mdata_proc.exitcode)
+
 # vi: ts=4 expandtab
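
The SUCCESS_LEN / NOTFOUND_LEN constants above replace the previous magic
number 17; the frame-length arithmetic they encode, as exercised by the new
list() tests, looks like this (values illustrative):

    from cloudinit.util import b64e

    SUCCESS_LEN = len('0123abcd SUCCESS ')   # 17, includes trailing space
    NOTFOUND_LEN = len('0123abcd NOTFOUND')  # 17, no payload follows

    payload = b64e('foo\nbar')               # body of a list() response
    length_with_payload = SUCCESS_LEN + len(payload)
    length_empty_success = SUCCESS_LEN - 1   # no payload, no trailing space
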
diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py
index 5364398..ad7fe41 100644
--- a/tests/unittests/test_ds_identify.py
+++ b/tests/unittests/test_ds_identify.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+from collections import namedtuple
 import copy
 import os
 from uuid import uuid4
@@ -7,7 +8,7 @@ from uuid import uuid4
 from cloudinit import safeyaml
 from cloudinit import util
 from cloudinit.tests.helpers import (
-    CiTestCase, dir2dict, populate_dir)
+    CiTestCase, dir2dict, populate_dir, populate_dir_with_ts)
 
 from cloudinit.sources import DataSourceIBMCloud as dsibm
 
@@ -66,7 +67,6 @@ P_SYS_VENDOR = "sys/class/dmi/id/sys_vendor"
 P_SEED_DIR = "var/lib/cloud/seed"
 P_DSID_CFG = "etc/cloud/ds-identify.cfg"
 
-IBM_PROVISIONING_CHECK_PATH = "/root/provisioningConfiguration.cfg"
 IBM_CONFIG_UUID = "9796-932E"
 
 MOCK_VIRT_IS_KVM = {'name': 'detect_virt', 'RET': 'kvm', 'ret': 0}
@@ -74,11 +74,17 @@ MOCK_VIRT_IS_VMWARE = {'name': 'detect_virt', 'RET': 'vmware', 'ret': 0}
 MOCK_VIRT_IS_XEN = {'name': 'detect_virt', 'RET': 'xen', 'ret': 0}
 MOCK_UNAME_IS_PPC64 = {'name': 'uname', 'out': UNAME_PPC64EL, 'ret': 0}
 
+shell_true = 0
+shell_false = 1
 
-class TestDsIdentify(CiTestCase):
+CallReturn = namedtuple('CallReturn',
+                        ['rc', 'stdout', 'stderr', 'cfg', 'files'])
+
+
+class DsIdentifyBase(CiTestCase):
     dsid_path = os.path.realpath('tools/ds-identify')
 
-    def call(self, rootd=None, mocks=None, args=None, files=None,
+    def call(self, rootd=None, mocks=None, func="main", args=None, files=None,
              policy_dmi=DI_DEFAULT_POLICY,
              policy_no_dmi=DI_DEFAULT_POLICY_NO_DMI,
              ec2_strict_id=DI_EC2_STRICT_ID_DEFAULT):
@@ -135,7 +141,7 @@ class TestDsIdentify(CiTestCase):
                 mocklines.append(write_mock(d))
 
         endlines = [
-            'main %s' % ' '.join(['"%s"' % s for s in args])
+            func + ' ' + ' '.join(['"%s"' % s for s in args])
         ]
 
         with open(wrap, "w") as fp:
@@ -159,7 +165,7 @@ class TestDsIdentify(CiTestCase):
                 cfg = {"_INVALID_YAML": contents,
                        "_EXCEPTION": str(e)}
 
-        return rc, out, err, cfg, dir2dict(rootd)
+        return CallReturn(rc, out, err, cfg, dir2dict(rootd))
 
     def _call_via_dict(self, data, rootd=None, **kwargs):
         # return output of self.call with a dict input like VALID_CFG[item]
@@ -190,6 +196,8 @@ class TestDsIdentify(CiTestCase):
                 _print_run_output(rc, out, err, cfg, files)
         return rc, out, err, cfg, files
 
+
+class TestDsIdentify(DsIdentifyBase):
     def test_wb_print_variables(self):
         """_print_info reports an array of discovered variables to stderr."""
         data = VALID_CFG['Azure-dmi-detection']
@@ -250,7 +258,10 @@ class TestDsIdentify(CiTestCase):
         Template provisioning with user-data has METADATA disk,
         datasource should return not found."""
         data = copy.deepcopy(VALID_CFG['IBMCloud-metadata'])
-        data['files'] = {IBM_PROVISIONING_CHECK_PATH: 'xxx'}
+        # make the 'is_ibm_provisioning' mock return shell_true (0),
+        # i.e. report that provisioning is still in progress
+        isprov_m = [m for m in data['mocks']
+                    if m["name"] == "is_ibm_provisioning"][0]
+        isprov_m['ret'] = shell_true
         return self._check_via_dict(data, RC_NOT_FOUND)
 
     def test_ibmcloud_template_userdata(self):
@@ -265,7 +276,8 @@ class TestDsIdentify(CiTestCase):
 
         no disks attached.  Datasource should return not found."""
         data = copy.deepcopy(VALID_CFG['IBMCloud-nodisks'])
-        data['files'] = {IBM_PROVISIONING_CHECK_PATH: 'xxx'}
+        data['mocks'].append(
+            {'name': 'is_ibm_provisioning', 'ret': shell_true})
         return self._check_via_dict(data, RC_NOT_FOUND)
 
     def test_ibmcloud_template_no_userdata(self):
@@ -446,6 +458,47 @@ class TestDsIdentify(CiTestCase):
         self._test_ds_found('Hetzner')
 
 
+class TestIsIBMProvisioning(DsIdentifyBase):
+    """Test the is_ibm_provisioning method in ds-identify."""
+
+    inst_log = "/root/swinstall.log"
+    prov_cfg = "/root/provisioningConfiguration.cfg"
+    boot_ref = "/proc/1/environ"
+    funcname = "is_ibm_provisioning"
+
+    def test_no_config(self):
+        """No provisioning config means not provisioning."""
+        ret = self.call(files={}, func=self.funcname)
+        self.assertEqual(shell_false, ret.rc)
+
+    def test_config_only(self):
+        """A provisioning config without a log means provisioning."""
+        ret = self.call(files={self.prov_cfg: "key=value"}, func=self.funcname)
+        self.assertEqual(shell_true, ret.rc)
+
+    def test_config_with_old_log(self):
+        """A config with a log from previous boot is not provisioning."""
+        rootd = self.tmp_dir()
+        data = {self.prov_cfg: ("key=value\nkey2=val2\n", -10),
+                self.inst_log: ("log data\n", -30),
+                self.boot_ref: ("PWD=/", 0)}
+        populate_dir_with_ts(rootd, data)
+        ret = self.call(rootd=rootd, func=self.funcname)
+        self.assertEqual(shell_false, ret.rc)
+        self.assertIn("from previous boot", ret.stderr)
+
+    def test_config_with_new_log(self):
+        """A config with a log from this boot is provisioning."""
+        rootd = self.tmp_dir()
+        data = {self.prov_cfg: ("key=value\nkey2=val2\n", -10),
+                self.inst_log: ("log data\n", 30),
+                self.boot_ref: ("PWD=/", 0)}
+        populate_dir_with_ts(rootd, data)
+        ret = self.call(rootd=rootd, func=self.funcname)
+        self.assertEqual(shell_true, ret.rc)
+        self.assertIn("from current boot", ret.stderr)
+
+
 def blkid_out(disks=None):
     """Convert a list of disk dictionaries into blkid content."""
     if disks is None:
@@ -639,6 +692,7 @@ VALID_CFG = {
         'ds': 'IBMCloud',
         'mocks': [
             MOCK_VIRT_IS_XEN,
+            {'name': 'is_ibm_provisioning', 'ret': shell_false},
             {'name': 'blkid', 'ret': 0,
              'out': blkid_out(
                  [{'DEVNAME': 'xvda1', 'TYPE': 'vfat', 'PARTUUID': uuid4()},
@@ -652,6 +706,7 @@ VALID_CFG = {
         'ds': 'IBMCloud',
         'mocks': [
             MOCK_VIRT_IS_XEN,
+            {'name': 'is_ibm_provisioning', 'ret': shell_false},
             {'name': 'blkid', 'ret': 0,
              'out': blkid_out(
                  [{'DEVNAME': 'xvda1', 'TYPE': 'ext3', 'PARTUUID': uuid4(),
@@ -669,6 +724,7 @@ VALID_CFG = {
         'ds': 'IBMCloud',
         'mocks': [
             MOCK_VIRT_IS_XEN,
+            {'name': 'is_ibm_provisioning', 'ret': shell_false},
             {'name': 'blkid', 'ret': 0,
              'out': blkid_out(
                  [{'DEVNAME': 'xvda1', 'TYPE': 'vfat', 'PARTUUID': uuid4()},
diff --git a/tests/unittests/test_handler/test_handler_apt_source_v3.py b/tests/unittests/test_handler/test_handler_apt_source_v3.py
index 7bb1b7c..e486862 100644
--- a/tests/unittests/test_handler/test_handler_apt_source_v3.py
+++ b/tests/unittests/test_handler/test_handler_apt_source_v3.py
@@ -528,7 +528,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         expected = sorted([npre + suff for opre, npre, suff in files])
         # create files
-        for (opre, npre, suff) in files:
+        for (opre, _npre, suff) in files:
             fpath = os.path.join(apt_lists_d, opre + suff)
             util.write_file(fpath, content=fpath)
 
diff --git a/tests/unittests/test_handler/test_handler_bootcmd.py b/tests/unittests/test_handler/test_handler_bootcmd.py
index 29fc25e..b137526 100644
--- a/tests/unittests/test_handler/test_handler_bootcmd.py
+++ b/tests/unittests/test_handler/test_handler_bootcmd.py
@@ -1,9 +1,10 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.config import cc_bootcmd
+from cloudinit.config.cc_bootcmd import handle, schema
 from cloudinit.sources import DataSourceNone
 from cloudinit import (distros, helpers, cloud, util)
-from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJsonSchema
+from cloudinit.tests.helpers import (
+    CiTestCase, mock, SchemaTestCaseMixin, skipUnlessJsonSchema)
 
 import logging
 import tempfile
@@ -50,7 +51,7 @@ class TestBootcmd(CiTestCase):
         """When the provided config doesn't contain bootcmd, skip it."""
         cfg = {}
         mycloud = self._get_cloud('ubuntu')
-        cc_bootcmd.handle('notimportant', cfg, mycloud, LOG, None)
+        handle('notimportant', cfg, mycloud, LOG, None)
         self.assertIn(
             "Skipping module named notimportant, no 'bootcmd' key",
             self.logs.getvalue())
@@ -60,7 +61,7 @@ class TestBootcmd(CiTestCase):
         invalid_config = {'bootcmd': 1}
         cc = self._get_cloud('ubuntu')
         with self.assertRaises(TypeError) as context_manager:
-            cc_bootcmd.handle('cc_bootcmd', invalid_config, cc, LOG, [])
+            handle('cc_bootcmd', invalid_config, cc, LOG, [])
         self.assertIn('Failed to shellify bootcmd', self.logs.getvalue())
         self.assertEqual(
             "Input to shellify was type 'int'. Expected list or tuple.",
@@ -76,7 +77,7 @@ class TestBootcmd(CiTestCase):
         invalid_config = {'bootcmd': 1}
         cc = self._get_cloud('ubuntu')
         with self.assertRaises(TypeError):
-            cc_bootcmd.handle('cc_bootcmd', invalid_config, cc, LOG, [])
+            handle('cc_bootcmd', invalid_config, cc, LOG, [])
         self.assertIn(
             'Invalid config:\nbootcmd: 1 is not of type \'array\'',
             self.logs.getvalue())
@@ -93,7 +94,7 @@ class TestBootcmd(CiTestCase):
             'bootcmd': ['ls /', 20, ['wget', 'http://stuff/blah'], {'a': 'n'}]}
         cc = self._get_cloud('ubuntu')
         with self.assertRaises(TypeError) as context_manager:
-            cc_bootcmd.handle('cc_bootcmd', invalid_config, cc, LOG, [])
+            handle('cc_bootcmd', invalid_config, cc, LOG, [])
         expected_warnings = [
             'bootcmd.1: 20 is not valid under any of the given schemas',
             'bootcmd.3: {\'a\': \'n\'} is not valid under any of the given'
@@ -117,7 +118,7 @@ class TestBootcmd(CiTestCase):
             'echo {0} $INSTANCE_ID > {1}'.format(my_id, out_file)]}
 
         with mock.patch(self._etmpfile_path, FakeExtendedTempFile):
-            cc_bootcmd.handle('cc_bootcmd', valid_config, cc, LOG, [])
+            handle('cc_bootcmd', valid_config, cc, LOG, [])
         self.assertEqual(my_id + ' iid-datasource-none\n',
                          util.load_file(out_file))
 
@@ -128,7 +129,7 @@ class TestBootcmd(CiTestCase):
 
         with mock.patch(self._etmpfile_path, FakeExtendedTempFile):
             with self.assertRaises(util.ProcessExecutionError) as ctxt_manager:
-                cc_bootcmd.handle('does-not-matter', valid_config, cc, LOG, [])
+                handle('does-not-matter', valid_config, cc, LOG, [])
         self.assertIn(
             'Unexpected error while running command.\n'
             "Command: ['/bin/sh',",
@@ -138,4 +139,21 @@ class TestBootcmd(CiTestCase):
             self.logs.getvalue())
 
 
+@skipUnlessJsonSchema()
+class TestSchema(CiTestCase, SchemaTestCaseMixin):
+    """Directly test schema rather than through handle."""
+
+    schema = schema
+
+    def test_duplicates_are_fine_array_array(self):
+        """Duplicated commands array/array entries are allowed."""
+        self.assertSchemaValid(
+            ["byebye", "byebye"], 'command entries can be duplicate')
+
+    def test_duplicates_are_fine_array_string(self):
+        """Duplicated commands array/string entries are allowed."""
+        self.assertSchemaValid(
+            ["echo bye", "echo bye"], "command entries can be duplicate.")
+
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_handler/test_handler_ntp.py b/tests/unittests/test_handler/test_handler_ntp.py
index 02676aa..17c5355 100644
--- a/tests/unittests/test_handler/test_handler_ntp.py
+++ b/tests/unittests/test_handler/test_handler_ntp.py
@@ -76,7 +76,7 @@ class TestNtp(FilesystemMockingTestCase):
             template = TIMESYNCD_TEMPLATE
         else:
             template = NTP_TEMPLATE
-        (confpath, template_fn) = self._generate_template(template=template)
+        (confpath, _template_fn) = self._generate_template(template=template)
         ntpconfig = copy.deepcopy(dcfg[client])
         ntpconfig['confpath'] = confpath
         ntpconfig['template_name'] = os.path.basename(confpath)
diff --git a/tests/unittests/test_handler/test_handler_runcmd.py b/tests/unittests/test_handler/test_handler_runcmd.py
index dbbb271..9ce334a 100644
--- a/tests/unittests/test_handler/test_handler_runcmd.py
+++ b/tests/unittests/test_handler/test_handler_runcmd.py
@@ -1,10 +1,11 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.config import cc_runcmd
+from cloudinit.config.cc_runcmd import handle, schema
 from cloudinit.sources import DataSourceNone
 from cloudinit import (distros, helpers, cloud, util)
 from cloudinit.tests.helpers import (
-    FilesystemMockingTestCase, skipUnlessJsonSchema)
+    CiTestCase, FilesystemMockingTestCase, SchemaTestCaseMixin,
+    skipUnlessJsonSchema)
 
 import logging
 import os
@@ -35,7 +36,7 @@ class TestRuncmd(FilesystemMockingTestCase):
         """When the provided config doesn't contain runcmd, skip it."""
         cfg = {}
         mycloud = self._get_cloud('ubuntu')
-        cc_runcmd.handle('notimportant', cfg, mycloud, LOG, None)
+        handle('notimportant', cfg, mycloud, LOG, None)
         self.assertIn(
             "Skipping module named notimportant, no 'runcmd' key",
             self.logs.getvalue())
@@ -44,7 +45,7 @@ class TestRuncmd(FilesystemMockingTestCase):
         """Commands which can't be converted to shell will raise errors."""
         invalid_config = {'runcmd': 1}
         cc = self._get_cloud('ubuntu')
-        cc_runcmd.handle('cc_runcmd', invalid_config, cc, LOG, [])
+        handle('cc_runcmd', invalid_config, cc, LOG, [])
         self.assertIn(
             'Failed to shellify 1 into file'
             ' /var/lib/cloud/instances/iid-datasource-none/scripts/runcmd',
@@ -59,7 +60,7 @@ class TestRuncmd(FilesystemMockingTestCase):
         """
         invalid_config = {'runcmd': 1}
         cc = self._get_cloud('ubuntu')
-        cc_runcmd.handle('cc_runcmd', invalid_config, cc, LOG, [])
+        handle('cc_runcmd', invalid_config, cc, LOG, [])
         self.assertIn(
             'Invalid config:\nruncmd: 1 is not of type \'array\'',
             self.logs.getvalue())
@@ -75,7 +76,7 @@ class TestRuncmd(FilesystemMockingTestCase):
         invalid_config = {
             'runcmd': ['ls /', 20, ['wget', 'http://stuff/blah'], {'a': 'n'}]}
         cc = self._get_cloud('ubuntu')
-        cc_runcmd.handle('cc_runcmd', invalid_config, cc, LOG, [])
+        handle('cc_runcmd', invalid_config, cc, LOG, [])
         expected_warnings = [
             'runcmd.1: 20 is not valid under any of the given schemas',
             'runcmd.3: {\'a\': \'n\'} is not valid under any of the given'
@@ -90,7 +91,7 @@ class TestRuncmd(FilesystemMockingTestCase):
         """Valid runcmd schema is written to a runcmd shell script."""
         valid_config = {'runcmd': [['ls', '/']]}
         cc = self._get_cloud('ubuntu')
-        cc_runcmd.handle('cc_runcmd', valid_config, cc, LOG, [])
+        handle('cc_runcmd', valid_config, cc, LOG, [])
         runcmd_file = os.path.join(
             self.new_root,
             'var/lib/cloud/instances/iid-datasource-none/scripts/runcmd')
@@ -99,4 +100,22 @@ class TestRuncmd(FilesystemMockingTestCase):
         self.assertEqual(0o700, stat.S_IMODE(file_stat.st_mode))
 
 
+@skipUnlessJsonSchema()
+class TestSchema(CiTestCase, SchemaTestCaseMixin):
+    """Directly test schema rather than through handle."""
+
+    schema = schema
+
+    def test_duplicates_are_fine_array_array(self):
+        """Duplicated commands array/array entries are allowed."""
+        self.assertSchemaValid(
+            [["echo", "bye"], ["echo", "bye"]],
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_array_string(self):
+        """Duplicated commands array/string entries are allowed."""
+        self.assertSchemaValid(
+            ["echo bye", "echo bye"],
+            "command entries can be duplicate.")
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index c12a487..fac8267 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -553,6 +553,43 @@ NETWORK_CONFIGS = {
                 """),
         },
     },
+    'dhcpv6_only': {
+        'expected_eni': textwrap.dedent("""\
+            auto lo
+            iface lo inet loopback
+
+            auto iface0
+            iface iface0 inet6 dhcp
+        """).rstrip(' '),
+        'expected_netplan': textwrap.dedent("""
+            network:
+                version: 2
+                ethernets:
+                    iface0:
+                        dhcp6: true
+        """).rstrip(' '),
+        'yaml': textwrap.dedent("""\
+            version: 1
+            config:
+              - type: 'physical'
+                name: 'iface0'
+                subnets:
+                - {'type': 'dhcp6'}
+        """).rstrip(' '),
+        'expected_sysconfig': {
+            'ifcfg-iface0': textwrap.dedent("""\
+                BOOTPROTO=none
+                DEVICE=iface0
+                DHCPV6C=yes
+                IPV6INIT=yes
+                DEVICE=iface0
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                TYPE=Ethernet
+                USERCTL=no
+                """),
+        },
+    },
     'all': {
         'expected_eni': ("""\
 auto lo
@@ -740,7 +777,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                                            """miimon=100"
                 BONDING_SLAVE0=eth1
                 BONDING_SLAVE1=eth2
-                BOOTPROTO=dhcp
+                BOOTPROTO=none
                 DEVICE=bond0
                 DHCPV6C=yes
                 IPV6INIT=yes
@@ -1405,6 +1442,7 @@ DEFAULT_DEV_ATTRS = {
         "address": "07-1C-C6-75-A4-BE",
         "device/driver": None,
         "device/device": None,
+        "name_assign_type": "4",
     }
 }
 
@@ -1452,11 +1490,14 @@ class TestGenerateFallbackConfig(CiTestCase):
             'eth0': {
                 'bridge': False, 'carrier': False, 'dormant': False,
                 'operstate': 'down', 'address': '00:11:22:33:44:55',
-                'device/driver': 'hv_netsvc', 'device/device': '0x3'},
+                'device/driver': 'hv_netsvc', 'device/device': '0x3',
+                'name_assign_type': '4'},
             'eth1': {
                 'bridge': False, 'carrier': False, 'dormant': False,
                 'operstate': 'down', 'address': '00:11:22:33:44:55',
-                'device/driver': 'mlx4_core', 'device/device': '0x7'},
+                'device/driver': 'mlx4_core', 'device/device': '0x7',
+                'name_assign_type': '4'},
+
         }
 
         tmp_dir = self.tmp_dir()
@@ -1512,11 +1553,13 @@ iface eth0 inet dhcp
             'eth1': {
                 'bridge': False, 'carrier': False, 'dormant': False,
                 'operstate': 'down', 'address': '00:11:22:33:44:55',
-                'device/driver': 'hv_netsvc', 'device/device': '0x3'},
+                'device/driver': 'hv_netsvc', 'device/device': '0x3',
+                'name_assign_type': '4'},
             'eth0': {
                 'bridge': False, 'carrier': False, 'dormant': False,
                 'operstate': 'down', 'address': '00:11:22:33:44:55',
-                'device/driver': 'mlx4_core', 'device/device': '0x7'},
+                'device/driver': 'mlx4_core', 'device/device': '0x7',
+                'name_assign_type': '4'},
         }
 
         tmp_dir = self.tmp_dir()
@@ -1565,6 +1608,65 @@ iface eth1 inet dhcp
         ]
         self.assertEqual(", ".join(expected_rule) + '\n', contents.lstrip())
 
+    @mock.patch("cloudinit.util.udevadm_settle")
+    @mock.patch("cloudinit.net.sys_dev_path")
+    @mock.patch("cloudinit.net.read_sys_net")
+    @mock.patch("cloudinit.net.get_devicelist")
+    def test_unstable_names(self, mock_get_devicelist, mock_read_sys_net,
+                            mock_sys_dev_path, mock_settle):
+        """verify that udevadm settle is called when we find unstable names"""
+        devices = {
+            'eth0': {
+                'bridge': False, 'carrier': False, 'dormant': False,
+                'operstate': 'down', 'address': '00:11:22:33:44:55',
+                'device/driver': 'hv_netsvc', 'device/device': '0x3',
+                'name_assign_type': False},
+            'ens4': {
+                'bridge': False, 'carrier': False, 'dormant': False,
+                'operstate': 'down', 'address': '00:11:22:33:44:55',
+                'device/driver': 'mlx4_core', 'device/device': '0x7',
+                'name_assign_type': '4'},
+
+        }
+
+        tmp_dir = self.tmp_dir()
+        _setup_test(tmp_dir, mock_get_devicelist,
+                    mock_read_sys_net, mock_sys_dev_path,
+                    dev_attrs=devices)
+        net.generate_fallback_config(config_driver=True)
+        self.assertEqual(1, mock_settle.call_count)
+
+    @mock.patch("cloudinit.util.get_cmdline")
+    @mock.patch("cloudinit.util.udevadm_settle")
+    @mock.patch("cloudinit.net.sys_dev_path")
+    @mock.patch("cloudinit.net.read_sys_net")
+    @mock.patch("cloudinit.net.get_devicelist")
+    def test_unstable_names_disabled(self, mock_get_devicelist,
+                                     mock_read_sys_net, mock_sys_dev_path,
+                                     mock_settle, m_get_cmdline):
+        """verify udevadm settle not called when cmdline has net.ifnames=0"""
+        devices = {
+            'eth0': {
+                'bridge': False, 'carrier': False, 'dormant': False,
+                'operstate': 'down', 'address': '00:11:22:33:44:55',
+                'device/driver': 'hv_netsvc', 'device/device': '0x3',
+                'name_assign_type': False},
+            'ens4': {
+                'bridge': False, 'carrier': False, 'dormant': False,
+                'operstate': 'down', 'address': '00:11:22:33:44:55',
+                'device/driver': 'mlx4_core', 'device/device': '0x7',
+                'name_assign_type': '4'},
+
+        }
+
+        m_get_cmdline.return_value = 'net.ifnames=0'
+        tmp_dir = self.tmp_dir()
+        _setup_test(tmp_dir, mock_get_devicelist,
+                    mock_read_sys_net, mock_sys_dev_path,
+                    dev_attrs=devices)
+        net.generate_fallback_config(config_driver=True)
+        self.assertEqual(0, mock_settle.call_count)
+
 
 class TestSysConfigRendering(CiTestCase):
 
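The two unstable-names tests above drive net.generate_fallback_config() against device fixtures that differ only in name_assign_type: '4' is the sysfs value for NET_NAME_RENAMED (udev has already applied its final name), while False stands in for an attribute that could not be read, i.e. a kernel-assigned name that udev may still rename. The sketch below is illustrative only and is not the cloud-init implementation (the helper name maybe_settle is an assumption); it mirrors what the tests assert: settle once via udevadm when any name still looks unstable, unless net.ifnames=0 on the kernel command line has disabled predictable naming.

# Illustrative sketch, not cloud-init code: settle once if any interface
# name still looks unstable and predictable naming is not disabled.
NET_NAME_RENAMED = '4'  # sysfs name_assign_type: udev already renamed it

def maybe_settle(devices, cmdline, settle):
    """devices: dict of name -> sysfs attrs (as in the fixtures above);
    settle: a callable such as util.udevadm_settle."""
    if 'net.ifnames=0' in cmdline.split():
        return False
    for attrs in devices.values():
        # A missing/unreadable name_assign_type, or any value other than
        # NET_NAME_RENAMED, means the name may still change.
        if attrs.get('name_assign_type') != NET_NAME_RENAMED:
            settle()
            return True
    return False

With the fixtures above, maybe_settle(devices, '', mock_settle) settles exactly once, and passing cmdline='net.ifnames=0' skips the settle entirely, matching the two call-count assertions.
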
@@ -1829,6 +1931,12 @@ USERCTL=no
         self._compare_files_to_expected(entry['expected_sysconfig'], found)
         self._assert_headers(found)
 
+    def test_dhcpv6_only_config(self):
+        entry = NETWORK_CONFIGS['dhcpv6_only']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self._compare_files_to_expected(entry['expected_sysconfig'], found)
+        self._assert_headers(found)
+
 
 class TestEniNetRendering(CiTestCase):
 
@@ -2277,6 +2385,13 @@ class TestNetplanRoundTrip(CiTestCase):
             entry['expected_netplan'].splitlines(),
             files['/etc/netplan/50-cloud-init.yaml'].splitlines())
 
+    def testsimple_render_dhcpv6_only(self):
+        entry = NETWORK_CONFIGS['dhcpv6_only']
+        files = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self.assertEqual(
+            entry['expected_netplan'].splitlines(),
+            files['/etc/netplan/50-cloud-init.yaml'].splitlines())
+
     def testsimple_render_all(self):
         entry = NETWORK_CONFIGS['all']
         files = self._render_and_read(network_config=yaml.load(entry['yaml']))
@@ -2345,6 +2460,13 @@ class TestEniRoundTrip(CiTestCase):
             entry['expected_eni'].splitlines(),
             files['/etc/network/interfaces'].splitlines())
 
+    def testsimple_render_dhcpv6_only(self):
+        entry = NETWORK_CONFIGS['dhcpv6_only']
+        files = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self.assertEqual(
+            entry['expected_eni'].splitlines(),
+            files['/etc/network/interfaces'].splitlines())
+
     def testsimple_render_v4_and_v6_static(self):
         entry = NETWORK_CONFIGS['v4_and_v6_static']
         files = self._render_and_read(network_config=yaml.load(entry['yaml']))
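The three *_dhcpv6_only additions in this file all render the shared NETWORK_CONFIGS['dhcpv6_only'] entry (defined earlier in test_net.py and not shown in this hunk) through the sysconfig, netplan and eni renderers in turn, checking each renderer's output for a dhcp6-only subnet against its expected file. For orientation only, a v1 network config of that general shape looks like the generic example below; this is not the actual fixture and may differ from it in detail.

# Generic example of a v1 network config with a dhcp6-only subnet; the
# real NETWORK_CONFIGS['dhcpv6_only'] fixture lives elsewhere in
# test_net.py and is not reproduced here.
DHCPV6_ONLY_EXAMPLE = {
    'version': 1,
    'config': [{
        'type': 'physical',
        'name': 'iface0',
        'mac_address': '00:11:22:33:44:55',
        'subnets': [{'type': 'dhcp6'}],  # no dhcp4 subnet alongside it
    }],
}
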
diff --git a/tests/unittests/test_sshutil.py b/tests/unittests/test_sshutil.py
index 4c62c8b..73ae897 100644
--- a/tests/unittests/test_sshutil.py
+++ b/tests/unittests/test_sshutil.py
@@ -4,6 +4,7 @@ from mock import patch
 
 from cloudinit import ssh_util
 from cloudinit.tests import helpers as test_helpers
+from cloudinit import util
 
 
 VALID_CONTENT = {
@@ -56,7 +57,7 @@ TEST_OPTIONS = (
     'user \"root\".\';echo;sleep 10"')
 
 
-class TestAuthKeyLineParser(test_helpers.TestCase):
+class TestAuthKeyLineParser(test_helpers.CiTestCase):
 
     def test_simple_parse(self):
         # test key line with common 3 fields (keytype, base64, comment)
@@ -126,7 +127,7 @@ class TestAuthKeyLineParser(test_helpers.TestCase):
         self.assertFalse(key.valid())
 
 
-class TestUpdateAuthorizedKeys(test_helpers.TestCase):
+class TestUpdateAuthorizedKeys(test_helpers.CiTestCase):
 
     def test_new_keys_replace(self):
         """new entries with the same base64 should replace old."""
@@ -168,7 +169,7 @@ class TestUpdateAuthorizedKeys(test_helpers.TestCase):
         self.assertEqual(expected, found)
 
 
-class TestParseSSHConfig(test_helpers.TestCase):
+class TestParseSSHConfig(test_helpers.CiTestCase):
 
     def setUp(self):
         self.load_file_patch = patch('cloudinit.ssh_util.util.load_file')
@@ -235,4 +236,94 @@ class TestParseSSHConfig(test_helpers.TestCase):
         self.assertEqual('foo', ret[0].key)
         self.assertEqual('bar', ret[0].value)
 
+
+class TestUpdateSshConfigLines(test_helpers.CiTestCase):
+    """Test the update_ssh_config_lines method."""
+    exlines = [
+        "#PasswordAuthentication yes",
+        "UsePAM yes",
+        "# Comment line",
+        "AcceptEnv LANG LC_*",
+        "X11Forwarding no",
+    ]
+    pwauth = "PasswordAuthentication"
+
+    def check_line(self, line, opt, val):
+        self.assertEqual(line.key, opt.lower())
+        self.assertEqual(line.value, val)
+        self.assertIn(opt, str(line))
+        self.assertIn(val, str(line))
+
+    def test_new_option_added(self):
+        """A single update of non-existing option."""
+        lines = ssh_util.parse_ssh_config_lines(list(self.exlines))
+        result = ssh_util.update_ssh_config_lines(lines, {'MyKey': 'MyVal'})
+        self.assertEqual(['MyKey'], result)
+        self.check_line(lines[-1], "MyKey", "MyVal")
+
+    def test_commented_out_not_updated_but_appended(self):
+        """Implementation does not un-comment and update lines."""
+        lines = ssh_util.parse_ssh_config_lines(list(self.exlines))
+        result = ssh_util.update_ssh_config_lines(lines, {self.pwauth: "no"})
+        self.assertEqual([self.pwauth], result)
+        self.check_line(lines[-1], self.pwauth, "no")
+
+    def test_single_option_updated(self):
+        """A single update should have change made and line updated."""
+        opt, val = ("UsePAM", "no")
+        lines = ssh_util.parse_ssh_config_lines(list(self.exlines))
+        result = ssh_util.update_ssh_config_lines(lines, {opt: val})
+        self.assertEqual([opt], result)
+        self.check_line(lines[1], opt, val)
+
+    def test_multiple_updates_with_add(self):
+        """Verify multiple updates some added some changed, some not."""
+        updates = {"UsePAM": "no", "X11Forwarding": "no", "NewOpt": "newval",
+                   "AcceptEnv": "LANG ADD LC_*"}
+        lines = ssh_util.parse_ssh_config_lines(list(self.exlines))
+        result = ssh_util.update_ssh_config_lines(lines, updates)
+        self.assertEqual(set(["UsePAM", "NewOpt", "AcceptEnv"]), set(result))
+        self.check_line(lines[3], "AcceptEnv", updates["AcceptEnv"])
+
+    def test_return_empty_if_no_changes(self):
+        """If there are no changes, then return should be empty list."""
+        updates = {"UsePAM": "yes"}
+        lines = ssh_util.parse_ssh_config_lines(list(self.exlines))
+        result = ssh_util.update_ssh_config_lines(lines, updates)
+        self.assertEqual([], result)
+        self.assertEqual(self.exlines, [str(l) for l in lines])
+
+    def test_keycase_not_modified(self):
+        """Original case of key should not be changed on update.
+        This behavior is to keep original config as much intact as can be."""
+        updates = {"usepam": "no"}
+        lines = ssh_util.parse_ssh_config_lines(list(self.exlines))
+        result = ssh_util.update_ssh_config_lines(lines, updates)
+        self.assertEqual(["usepam"], result)
+        self.assertEqual("UsePAM no", str(lines[1]))
+
+
+class TestUpdateSshConfig(test_helpers.CiTestCase):
+    cfgdata = '\n'.join(["#Option val", "MyKey ORIG_VAL", ""])
+
+    def test_modified(self):
+        mycfg = self.tmp_path("ssh_config_1")
+        util.write_file(mycfg, self.cfgdata)
+        ret = ssh_util.update_ssh_config({"MyKey": "NEW_VAL"}, mycfg)
+        self.assertTrue(ret)
+        found = util.load_file(mycfg)
+        self.assertEqual(self.cfgdata.replace("ORIG_VAL", "NEW_VAL"), found)
+        # assert there is a newline at end of file (LP: #1677205)
+        self.assertEqual('\n', found[-1])
+
+    def test_not_modified(self):
+        mycfg = self.tmp_path("ssh_config_2")
+        util.write_file(mycfg, self.cfgdata)
+        with patch("cloudinit.ssh_util.util.write_file") as m_write_file:
+            ret = ssh_util.update_ssh_config({"MyKey": "ORIG_VAL"}, mycfg)
+        self.assertFalse(ret)
+        self.assertEqual(self.cfgdata, util.load_file(mycfg))
+        m_write_file.assert_not_called()
+
+
 # vi: ts=4 expandtab
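Taken together, TestUpdateSshConfigLines and TestUpdateSshConfig pin down the contract of the ssh_util config updater: option names are matched case-insensitively while the file's original casing is preserved, commented-out options are appended rather than un-commented in place, the return value lists only the keys that actually changed, nothing is written when no change is needed, and the written file always ends in a newline (LP: #1677205). A minimal sketch of that contract follows; update_config_text is a hypothetical helper written for illustration, not the ssh_util API.

# Hypothetical helper mirroring the behavior the tests above assert; the
# real implementation lives in cloudinit/ssh_util.py.
def update_config_text(text, updates):
    """Return (new_text, changed_keys) for sshd_config-style text."""
    lines = text.splitlines()
    changed, seen = [], set()
    for i, line in enumerate(lines):
        stripped = line.strip()
        if not stripped or stripped.startswith('#'):
            continue  # comments are never un-commented in place
        key = stripped.split(None, 1)[0]
        for opt, val in updates.items():
            if key.lower() != opt.lower():
                continue
            seen.add(opt)
            if stripped.split(None, 1)[1:] != [val]:
                lines[i] = '%s %s' % (key, val)  # keep the file's casing
                changed.append(opt)
    for opt, val in updates.items():
        if opt not in seen:
            lines.append('%s %s' % (opt, val))  # append new/commented keys
            changed.append(opt)
    # Always end the file with a newline (LP: #1677205).
    return '\n'.join(lines) + '\n', changed
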
diff --git a/tests/unittests/test_templating.py b/tests/unittests/test_templating.py
index 1080e13..20c87ef 100644
--- a/tests/unittests/test_templating.py
+++ b/tests/unittests/test_templating.py
@@ -50,12 +50,12 @@ class TestTemplates(test_helpers.CiTestCase):
     def test_detection(self):
         blob = "## template:cheetah"
 
-        (template_type, renderer, contents) = templater.detect_template(blob)
+        (template_type, _renderer, contents) = templater.detect_template(blob)
         self.assertIn("cheetah", template_type)
         self.assertEqual("", contents.strip())
 
         blob = "blahblah $blah"
-        (template_type, renderer, contents) = templater.detect_template(blob)
+        (template_type, _renderer, _contents) = templater.detect_template(blob)
         self.assertIn("cheetah", template_type)
         self.assertEqual(blob, contents)
 
diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
index e04ea03..84941c7 100644
--- a/tests/unittests/test_util.py
+++ b/tests/unittests/test_util.py
@@ -774,11 +774,11 @@ class TestSubp(helpers.CiTestCase):
 
     def test_subp_reads_env(self):
         with mock.patch.dict("os.environ", values={'FOO': 'BAR'}):
-            out, err = util.subp(self.printenv + ['FOO'], capture=True)
+            out, _err = util.subp(self.printenv + ['FOO'], capture=True)
         self.assertEqual('FOO=BAR', out.splitlines()[0])
 
     def test_subp_env_and_update_env(self):
-        out, err = util.subp(
+        out, _err = util.subp(
             self.printenv + ['FOO', 'HOME', 'K1', 'K2'], capture=True,
             env={'FOO': 'BAR'},
             update_env={'HOME': '/myhome', 'K2': 'V2'})
@@ -788,7 +788,7 @@ class TestSubp(helpers.CiTestCase):
     def test_subp_update_env(self):
         extra = {'FOO': 'BAR', 'HOME': '/root', 'K1': 'V1'}
         with mock.patch.dict("os.environ", values=extra):
-            out, err = util.subp(
+            out, _err = util.subp(
                 self.printenv + ['FOO', 'HOME', 'K1', 'K2'], capture=True,
                 update_env={'HOME': '/myhome', 'K2': 'V2'})
 
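The out, err -> out, _err renames here (and the matching _renderer/_contents renames in test_templating.py above) mark values that are intentionally ignored: pylint's unused-variable check skips names matching its dummy-variables regex, which by default accepts a leading underscore, so the warning stays useful for genuinely forgotten variables. A small self-contained illustration of the convention:

import subprocess

def get_stdout(cmd):
    # The leading underscore tells pylint (via its default
    # dummy-variables-rgx) that the captured stderr is intentionally unused.
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _err = proc.communicate()
    return out
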
diff --git a/tools/ds-identify b/tools/ds-identify
index 9a2db5c..7fff5d1 100755
--- a/tools/ds-identify
+++ b/tools/ds-identify
@@ -125,6 +125,7 @@ DI_ON_NOTFOUND=""
 DI_EC2_STRICT_ID_DEFAULT="true"
 
 _IS_IBM_CLOUD=""
+_IS_IBM_PROVISIONING=""
 
 error() {
     set -- "ERROR:" "$@";
@@ -1006,7 +1007,25 @@ dscheck_Hetzner() {
 }
 
 is_ibm_provisioning() {
-    [ -f "${PATH_ROOT}/root/provisioningConfiguration.cfg" ]
+    local pcfg="${PATH_ROOT}/root/provisioningConfiguration.cfg"
+    local logf="${PATH_ROOT}/root/swinstall.log"
+    local is_prov=false msg="config '$pcfg' did not exist."
+    if [ -f "$pcfg" ]; then
+        msg="config '$pcfg' exists."
+        is_prov=true
+        if [ -f "$logf" ]; then
+            if [ "$logf" -nt "$PATH_PROC_1_ENVIRON" ]; then
+                msg="$msg log '$logf' from current boot."
+            else
+                is_prov=false
+                msg="$msg log '$logf' from previous boot."
+            fi
+        else
+            msg="$msg log '$logf' did not exist."
+        fi
+    fi
+    debug 2 "ibm_provisioning=$is_prov: $msg"
+    [ "$is_prov" = "true" ]
 }
 
 is_ibm_cloud() {

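The reworked is_ibm_provisioning() no longer treats the mere presence of /root/provisioningConfiguration.cfg as proof that the system is being provisioned: when /root/swinstall.log also exists but is older than /proc/1/environ, the log was written on a previous boot, so the current boot is treated as a debug boot rather than a provisioning run. The same decision logic, rendered in Python purely for readability (ds-identify itself is shell; this sketch is not part of it):

# Python rendering of the is_ibm_provisioning() shell logic above, for
# readability only; the authoritative check is in tools/ds-identify.
import os

def is_ibm_provisioning(path_root='', proc_1_environ='/proc/1/environ'):
    pcfg = path_root + '/root/provisioningConfiguration.cfg'
    logf = path_root + '/root/swinstall.log'
    if not os.path.isfile(pcfg):
        return False  # no provisioning config at all
    if not os.path.isfile(logf):
        return True   # config present and no install log yet
    # A swinstall.log older than /proc/1/environ was written on a previous
    # boot, so this is a debug boot, not a provisioning run.
    return os.path.getmtime(logf) > os.path.getmtime(proc_1_environ)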