
cloud-init-dev team mailing list archive

[Merge] ~chad.smith/cloud-init:ubuntu/artful into cloud-init:ubuntu/artful

 

Chad Smith has proposed merging ~chad.smith/cloud-init:ubuntu/artful into cloud-init:ubuntu/artful.

Commit message:
New upstream snapshot of cloud-init 18.3 for SRU into Artful.

Requested reviews:
  cloud-init committers (cloud-init-dev)
Related bugs:
  Bug #1768600 in cloud-init: "UTF-8 support in User Data (text/x-shellscript) is broken"
  https://bugs.launchpad.net/cloud-init/+bug/1768600
  Bug #1770462 in cloud-init: "Allow empty stages"
  https://bugs.launchpad.net/cloud-init/+bug/1770462
  Bug #1771468 in cloud-init: "Allow a way to explicitly disable sudo for a user"
  https://bugs.launchpad.net/cloud-init/+bug/1771468
  Bug #1776701 in cloud-init: "ec2: xenial unnecessary openstack datasource probes during discovery"
  https://bugs.launchpad.net/cloud-init/+bug/1776701
  Bug #1776958 in cloud-init: "error creating lxdbr0."
  https://bugs.launchpad.net/cloud-init/+bug/1776958
  Bug #1777743 in cloud-init: "Release 18.3"
  https://bugs.launchpad.net/cloud-init/+bug/1777743

For more details, see:
https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348310
-- 
The attached diff has been truncated due to its size.
Your team cloud-init committers is requested to review the proposed merge of ~chad.smith/cloud-init:ubuntu/artful into cloud-init:ubuntu/artful.
diff --git a/.pylintrc b/.pylintrc
index 0bdfa59..3bfa0c8 100644
--- a/.pylintrc
+++ b/.pylintrc
@@ -28,7 +28,7 @@ jobs=4
 # W0703(broad-except)
 # W1401(anomalous-backslash-in-string)
 
-disable=C, F, I, R, W0105, W0107, W0201, W0212, W0221, W0222, W0223, W0231, W0311, W0511, W0602, W0603, W0611, W0612, W0613, W0621, W0622, W0631, W0703, W1401
+disable=C, F, I, R, W0105, W0107, W0201, W0212, W0221, W0222, W0223, W0231, W0311, W0511, W0602, W0603, W0611, W0613, W0621, W0622, W0631, W0703, W1401
 
 
 [REPORTS]
diff --git a/ChangeLog b/ChangeLog
index daa7ccf..72c5287 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,117 @@
+18.3:
+ - docs: represent sudo:false in docs for user_groups config module
+ - Explicitly prevent `sudo` access for user module
+   [Jacob Bednarz] (LP: #1771468)
+ - lxd: Delete default network and detach device if lxd-init created them.
+   (LP: #1776958)
+ - openstack: avoid unneeded metadata probe on non-openstack platforms
+   (LP: #1776701)
+ - stages: fix tracebacks if a module stage is undefined or empty
+   [Robert Schweikert] (LP: #1770462)
+ - Be more safe on string/bytes when writing multipart user-data to disk.
+   (LP: #1768600)
+ - Fix get_proc_env for pids that have non-utf8 content in environment.
+   (LP: #1775371)
+ - tests: fix salt_minion integration test on bionic and later
+ - tests: provide human-readable integration test summary when --verbose
+ - tests: skip chrony integration tests on lxd running artful or older
+ - test: add optional --preserve-instance arg to integration tests
+ - netplan: fix mtu if provided by network config for all rendered types
+   (LP: #1774666)
+ - tests: remove pip install workarounds for pylxd, take upstream fix.
+ - subp: support combine_capture argument.
+ - tests: ordered tox dependencies for pylxd install
+ - util: add get_linux_distro function to replace platform.dist
+   [Robert Schweikert] (LP: #1745235)
+ - pyflakes: fix unused variable references identified by pyflakes 2.0.0.
+ - Do not use the systemd_prefix macro, not available in this environment
+   [Robert Schweikert]
+ - doc: Add config info to ec2, openstack and cloudstack datasource docs
+ - Enable SmartOS network metadata to work with netplan via per-subnet
+   routes [Dan McDonald] (LP: #1763512)
+ - openstack: Allow discovery in init-local using dhclient in a sandbox.
+   (LP: #1749717)
+ - tests: Avoid using https in httpretty, improve HttPretty test case.
+   (LP: #1771659)
+ - yaml_load/schema: Add invalid line and column nums to error message
+ - Azure: Ignore NTFS mount errors when checking ephemeral drive
+   [Paul Meyer]
+ - packages/brpm: Get proper dependencies for cmdline distro.
+ - packages: Make rpm spec files patch in package version like in debs.
+ - tools/run-container: replace tools/run-centos with more generic.
+ - Update version.version_string to contain packaged version. (LP: #1770712)
+ - cc_mounts: Do not add devices to fstab that are already present.
+   [Lars Kellogg-Stedman]
+ - ds-identify: ensure that we have certain tokens in PATH. (LP: #1771382)
+ - tests: enable Ubuntu Cosmic in integration tests [Joshua Powers]
+ - read_file_or_url: move to url_helper, fix bug in its FileResponse.
+ - cloud_tests: help pylint [Ryan Harper]
+ - flake8: fix flake8 errors in previous commit.
+ - typos: Fix spelling mistakes in cc_mounts.py log messages [Stephen Ford]
+ - tests: restructure SSH and initial connections [Joshua Powers]
+ - ds-identify: recognize container-other as a container, test SmartOS.
+ - cloud-config.service: run After snap.seeded.service. (LP: #1767131)
+ - tests: do not rely on host /proc/cmdline in test_net.py
+   [Lars Kellogg-Stedman] (LP: #1769952)
+ - ds-identify: Remove dupe call to is_ds_enabled, improve debug message.
+ - SmartOS: fix get_interfaces for nics that do not have addr_assign_type.
+ - tests: fix package and ca_cert cloud_tests on bionic
+   (LP: #1769985)
+ - ds-identify: make shellcheck 0.4.6 happy with ds-identify.
+ - pycodestyle: Fix deprecated string literals, move away from flake8.
+ - azure: Add reported ready marker file. [Joshua Chan] (LP: #1765214)
+ - tools: Support adding a release suffix through packages/bddeb.
+ - FreeBSD: Invoke growfs on ufs filesystems such that it does not prompt.
+   [Harm Weites] (LP: #1404745)
+ - tools: Re-use the orig tarball in packages/bddeb if it is around.
+ - netinfo: fix netdev_pformat when a nic does not have an address
+   assigned. (LP: #1766302)
+ - collect-logs: add -v flag, write to stderr, limit journal to single
+   boot. (LP: #1766335)
+ - IBMCloud: Disable config-drive and nocloud only if IBMCloud is enabled.
+   (LP: #1766401)
+ - Add reporting events and log_time around early source of blocking time
+   [Ryan Harper]
+ - IBMCloud: recognize provisioning environment during debug boots.
+   (LP: #1767166)
+ - net: detect unstable network names and trigger a settle if needed
+   [Ryan Harper] (LP: #1766287)
+ - IBMCloud: improve documentation in datasource.
+ - sysconfig: dhcp6 subnet type should not imply dhcpv4 [Vitaly Kuznetsov]
+ - packages/debian/control.in: add missing dependency on iproute2.
+   (LP: #1766711)
+ - DataSourceSmartOS: add locking of serial device.
+   [Mike Gerdts] (LP: #1746605)
+ - DataSourceSmartOS: sdc:hostname is ignored [Mike Gerdts] (LP: #1765085)
+ - DataSourceSmartOS: list() should always return a list
+   [Mike Gerdts] (LP: #1763480)
+ - schema: in validation, raise ImportError if strict but no jsonschema.
+ - set_passwords: Add newline to end of sshd config, only restart if
+   updated. (LP: #1677205)
+ - pylint: pay attention to unused variable warnings.
+ - doc: Add documentation for AliYun datasource. [Junjie Wang]
+ - Schema: do not warn on duplicate items in commands. (LP: #1764264)
+ - net: Depend on iproute2's ip instead of net-tools ifconfig or route
+ - DataSourceSmartOS: fix hang when metadata service is down
+   [Mike Gerdts] (LP: #1667735)
+ - DataSourceSmartOS: change default fs on ephemeral disk from ext3 to
+   ext4. [Mike Gerdts] (LP: #1763511)
+ - pycodestyle: Fix invalid escape sequences in string literals.
+ - Implement bash completion script for cloud-init command line
+   [Ryan Harper]
+ - tools: Fix make-tarball cli tool usage for development
+ - renderer: support unicode in render_from_file.
+ - Implement ntp client spec with auto support for distro selection
+   [Ryan Harper] (LP: #1749722)
+ - Apport: add Brightbox, IBM, LXD, and OpenTelekomCloud to list of clouds.
+ - tests: fix ec2 integration network metadata validation
+ - tests: fix integration tests to support lxd 3.0 release
+ - correct documentation to match correct attribute name usage.
+   [Dominic Schlegel] (LP: #1420018)
+ - cc_resizefs, util: handle no /dev/zfs [Ryan Harper]
+ - doc: Fix links in OpenStack datasource documentation.
+   [Dominic Schlegel] (LP: #1721660)
+
 18.2:
  - Hetzner: Exit early if dmi system-manufacturer is not Hetzner.
  - Add missing dependency on isc-dhcp-client to trunk ubuntu packaging.
diff --git a/MANIFEST.in b/MANIFEST.in
index 1a4d771..57a85ea 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1,5 +1,6 @@
 include *.py MANIFEST.in LICENSE* ChangeLog
 global-include *.txt *.rst *.ini *.in *.conf *.cfg *.sh
+graft bash_completion
 graft config
 graft doc
 graft packages
diff --git a/bash_completion/cloud-init b/bash_completion/cloud-init
new file mode 100644
index 0000000..581432c
--- /dev/null
+++ b/bash_completion/cloud-init
@@ -0,0 +1,77 @@
+# Copyright (C) 2018 Canonical Ltd.
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+# bash completion for cloud-init cli
+_cloudinit_complete()
+{
+
+    local cur_word prev_word
+    cur_word="${COMP_WORDS[COMP_CWORD]}"
+    prev_word="${COMP_WORDS[COMP_CWORD-1]}"
+
+    subcmds="analyze clean collect-logs devel dhclient-hook features init modules single status"
+    base_params="--help --file --version --debug --force"
+    case ${COMP_CWORD} in
+        1)
+            COMPREPLY=($(compgen -W "$base_params $subcmds" -- $cur_word))
+            ;;
+        2)
+            case ${prev_word} in
+                analyze)
+                    COMPREPLY=($(compgen -W "--help blame dump show" -- $cur_word))
+                    ;;
+                clean)
+                    COMPREPLY=($(compgen -W "--help --logs --reboot --seed" -- $cur_word))
+                    ;;
+                collect-logs)
+                    COMPREPLY=($(compgen -W "--help --tarfile --include-userdata" -- $cur_word))
+                    ;;
+                devel)
+                    COMPREPLY=($(compgen -W "--help schema" -- $cur_word))
+                    ;;
+                dhclient-hook|features)
+                    COMPREPLY=($(compgen -W "--help" -- $cur_word))
+                    ;;
+                init)
+                    COMPREPLY=($(compgen -W "--help --local" -- $cur_word))
+                    ;;
+                modules)
+                    COMPREPLY=($(compgen -W "--help --mode" -- $cur_word))
+                    ;;
+
+                single)
+                    COMPREPLY=($(compgen -W "--help --name --frequency --report" -- $cur_word))
+                    ;;
+                status)
+                    COMPREPLY=($(compgen -W "--help --long --wait" -- $cur_word))
+                    ;;
+            esac
+            ;;
+        3)
+            case ${prev_word} in
+                blame|dump)
+                    COMPREPLY=($(compgen -W "--help --infile --outfile" -- $cur_word))
+                    ;;
+                --mode)
+                    COMPREPLY=($(compgen -W "--help init config final" -- $cur_word))
+                    ;;
+                --frequency)
+                    COMPREPLY=($(compgen -W "--help instance always once" -- $cur_word))
+                    ;;
+                schema)
+                    COMPREPLY=($(compgen -W "--help --config-file --doc --annotate" -- $cur_word))
+                    ;;
+                show)
+                    COMPREPLY=($(compgen -W "--help --format --infile --outfile" -- $cur_word))
+                    ;;
+            esac
+            ;;
+        *)
+            COMPREPLY=()
+            ;;
+    esac
+}
+complete -F _cloudinit_complete cloud-init
+
+# vi: syntax=bash expandtab
diff --git a/cloudinit/analyze/__main__.py b/cloudinit/analyze/__main__.py
index 3ba5903..f861365 100644
--- a/cloudinit/analyze/__main__.py
+++ b/cloudinit/analyze/__main__.py
@@ -69,7 +69,7 @@ def analyze_blame(name, args):
     """
     (infh, outfh) = configure_io(args)
     blame_format = '     %ds (%n)'
-    r = re.compile('(^\s+\d+\.\d+)', re.MULTILINE)
+    r = re.compile(r'(^\s+\d+\.\d+)', re.MULTILINE)
     for idx, record in enumerate(show.show_events(_get_events(infh),
                                                   blame_format)):
         srecs = sorted(filter(r.match, record), reverse=True)
diff --git a/cloudinit/analyze/dump.py b/cloudinit/analyze/dump.py
index b071aa1..1f3060d 100644
--- a/cloudinit/analyze/dump.py
+++ b/cloudinit/analyze/dump.py
@@ -112,7 +112,7 @@ def parse_ci_logline(line):
             return None
         event_description = stage_to_description[event_name]
     else:
-        (pymodloglvl, event_type, event_name) = eventstr.split()[0:3]
+        (_pymodloglvl, event_type, event_name) = eventstr.split()[0:3]
         event_description = eventstr.split(event_name)[1].strip()
 
     event = {
diff --git a/cloudinit/apport.py b/cloudinit/apport.py
index 618b016..130ff26 100644
--- a/cloudinit/apport.py
+++ b/cloudinit/apport.py
@@ -13,10 +13,29 @@ except ImportError:
 
 
 KNOWN_CLOUD_NAMES = [
-    'Amazon - Ec2', 'AliYun', 'AltCloud', 'Azure', 'Bigstep', 'CloudSigma',
-    'CloudStack', 'DigitalOcean', 'GCE - Google Compute Engine',
-    'Hetzner Cloud', 'MAAS', 'NoCloud', 'OpenNebula', 'OpenStack', 'OVF',
-    'Scaleway', 'SmartOS', 'VMware', 'Other']
+    'AliYun',
+    'AltCloud',
+    'Amazon - Ec2',
+    'Azure',
+    'Bigstep',
+    'Brightbox',
+    'CloudSigma',
+    'CloudStack',
+    'DigitalOcean',
+    'GCE - Google Compute Engine',
+    'Hetzner Cloud',
+    'IBM - (aka SoftLayer or BlueMix)',
+    'LXD',
+    'MAAS',
+    'NoCloud',
+    'OpenNebula',
+    'OpenStack',
+    'OVF',
+    'OpenTelekomCloud',
+    'Scaleway',
+    'SmartOS',
+    'VMware',
+    'Other']
 
 # Potentially clear text collected logs
 CLOUDINIT_LOG = '/var/log/cloud-init.log'
diff --git a/cloudinit/cmd/devel/logs.py b/cloudinit/cmd/devel/logs.py
index 35ca478..df72520 100644
--- a/cloudinit/cmd/devel/logs.py
+++ b/cloudinit/cmd/devel/logs.py
@@ -11,6 +11,7 @@ from cloudinit.temp_utils import tempdir
 from datetime import datetime
 import os
 import shutil
+import sys
 
 
 CLOUDINIT_LOGS = ['/var/log/cloud-init.log', '/var/log/cloud-init-output.log']
@@ -31,6 +32,8 @@ def get_parser(parser=None):
         parser = argparse.ArgumentParser(
             prog='collect-logs',
             description='Collect and tar all cloud-init debug info')
+    parser.add_argument('--verbose', '-v', action='count', default=0,
+                        dest='verbosity', help="Be more verbose.")
     parser.add_argument(
         "--tarfile", '-t', default='cloud-init.tar.gz',
         help=('The tarfile to create containing all collected logs.'
@@ -43,17 +46,33 @@ def get_parser(parser=None):
     return parser
 
 
-def _write_command_output_to_file(cmd, filename):
+def _write_command_output_to_file(cmd, filename, msg, verbosity):
     """Helper which runs a command and writes output or error to filename."""
     try:
         out, _ = subp(cmd)
     except ProcessExecutionError as e:
         write_file(filename, str(e))
+        _debug("collecting %s failed.\n" % msg, 1, verbosity)
     else:
         write_file(filename, out)
+        _debug("collected %s\n" % msg, 1, verbosity)
+        return out
 
 
-def collect_logs(tarfile, include_userdata):
+def _debug(msg, level, verbosity):
+    if level <= verbosity:
+        sys.stderr.write(msg)
+
+
+def _collect_file(path, out_dir, verbosity):
+    if os.path.isfile(path):
+        copy(path, out_dir)
+        _debug("collected file: %s\n" % path, 1, verbosity)
+    else:
+        _debug("file %s did not exist\n" % path, 2, verbosity)
+
+
+def collect_logs(tarfile, include_userdata, verbosity=0):
     """Collect all cloud-init logs and tar them up into the provided tarfile.
 
     @param tarfile: The path of the tar-gzipped file to create.
@@ -64,28 +83,46 @@ def collect_logs(tarfile, include_userdata):
     log_dir = 'cloud-init-logs-{0}'.format(date)
     with tempdir(dir='/tmp') as tmp_dir:
         log_dir = os.path.join(tmp_dir, log_dir)
-        _write_command_output_to_file(
+        version = _write_command_output_to_file(
+            ['cloud-init', '--version'],
+            os.path.join(log_dir, 'version'),
+            "cloud-init --version", verbosity)
+        dpkg_ver = _write_command_output_to_file(
             ['dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'],
-            os.path.join(log_dir, 'version'))
+            os.path.join(log_dir, 'dpkg-version'),
+            "dpkg version", verbosity)
+        if not version:
+            version = dpkg_ver if dpkg_ver else "not-available"
+        _debug("collected cloud-init version: %s\n" % version, 1, verbosity)
         _write_command_output_to_file(
-            ['dmesg'], os.path.join(log_dir, 'dmesg.txt'))
+            ['dmesg'], os.path.join(log_dir, 'dmesg.txt'),
+            "dmesg output", verbosity)
         _write_command_output_to_file(
-            ['journalctl', '-o', 'short-precise'],
-            os.path.join(log_dir, 'journal.txt'))
+            ['journalctl', '--boot=0', '-o', 'short-precise'],
+            os.path.join(log_dir, 'journal.txt'),
+            "systemd journal of current boot", verbosity)
+
         for log in CLOUDINIT_LOGS:
-            copy(log, log_dir)
+            _collect_file(log, log_dir, verbosity)
         if include_userdata:
-            copy(USER_DATA_FILE, log_dir)
+            _collect_file(USER_DATA_FILE, log_dir, verbosity)
         run_dir = os.path.join(log_dir, 'run')
         ensure_dir(run_dir)
-        shutil.copytree(CLOUDINIT_RUN_DIR, os.path.join(run_dir, 'cloud-init'))
+        if os.path.exists(CLOUDINIT_RUN_DIR):
+            shutil.copytree(CLOUDINIT_RUN_DIR,
+                            os.path.join(run_dir, 'cloud-init'))
+            _debug("collected dir %s\n" % CLOUDINIT_RUN_DIR, 1, verbosity)
+        else:
+            _debug("directory '%s' did not exist\n" % CLOUDINIT_RUN_DIR, 1,
+                   verbosity)
         with chdir(tmp_dir):
             subp(['tar', 'czvf', tarfile, log_dir.replace(tmp_dir + '/', '')])
+    sys.stderr.write("Wrote %s\n" % tarfile)
 
 
 def handle_collect_logs_args(name, args):
     """Handle calls to 'cloud-init collect-logs' as a subcommand."""
-    collect_logs(args.tarfile, args.userdata)
+    collect_logs(args.tarfile, args.userdata, args.verbosity)
 
 
 def main():
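
A self-contained sketch of the two-level verbosity scheme the collect-logs
changes above implement: argparse's action='count' turns repeated -v flags
into an integer, and _debug only writes messages whose level fits within it.
The messages below are illustrative, not taken from cloud-init.

    import argparse
    import sys

    def _debug(msg, level, verbosity):
        # Emit msg on stderr only when the requested verbosity reaches level.
        if level <= verbosity:
            sys.stderr.write(msg)

    parser = argparse.ArgumentParser(prog='collect-logs')
    # Each repeated -v bumps the count: no flag -> 0, -v -> 1, -vv -> 2.
    parser.add_argument('--verbose', '-v', action='count', default=0,
                        dest='verbosity', help='Be more verbose.')
    args = parser.parse_args(['-vv'])

    _debug('collected file: /var/log/cloud-init.log\n', 1, args.verbosity)
    _debug('file /tmp/example did not exist\n', 2, args.verbosity)
    _debug('suppressed unless verbosity reaches 3\n', 3, args.verbosity)
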
diff --git a/cloudinit/cmd/devel/tests/test_logs.py b/cloudinit/cmd/devel/tests/test_logs.py
index dc4947c..98b4756 100644
--- a/cloudinit/cmd/devel/tests/test_logs.py
+++ b/cloudinit/cmd/devel/tests/test_logs.py
@@ -4,6 +4,7 @@ from cloudinit.cmd.devel import logs
 from cloudinit.util import ensure_dir, load_file, subp, write_file
 from cloudinit.tests.helpers import FilesystemMockingTestCase, wrap_and_call
 from datetime import datetime
+import mock
 import os
 
 
@@ -27,11 +28,13 @@ class TestCollectLogs(FilesystemMockingTestCase):
         date = datetime.utcnow().date().strftime('%Y-%m-%d')
         date_logdir = 'cloud-init-logs-{0}'.format(date)
 
+        version_out = '/usr/bin/cloud-init 18.2fake\n'
         expected_subp = {
             ('dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'):
                 '0.7fake\n',
+            ('cloud-init', '--version'): version_out,
             ('dmesg',): 'dmesg-out\n',
-            ('journalctl', '-o', 'short-precise'): 'journal-out\n',
+            ('journalctl', '--boot=0', '-o', 'short-precise'): 'journal-out\n',
             ('tar', 'czvf', output_tarfile, date_logdir): ''
         }
 
@@ -44,9 +47,12 @@ class TestCollectLogs(FilesystemMockingTestCase):
                 subp(cmd)  # Pass through tar cmd so we can check output
             return expected_subp[cmd_tuple], ''
 
+        fake_stderr = mock.MagicMock()
+
         wrap_and_call(
             'cloudinit.cmd.devel.logs',
             {'subp': {'side_effect': fake_subp},
+             'sys.stderr': {'new': fake_stderr},
              'CLOUDINIT_LOGS': {'new': [log1, log2]},
              'CLOUDINIT_RUN_DIR': {'new': self.run_dir}},
             logs.collect_logs, output_tarfile, include_userdata=False)
@@ -55,7 +61,9 @@ class TestCollectLogs(FilesystemMockingTestCase):
         out_logdir = self.tmp_path(date_logdir, self.new_root)
         self.assertEqual(
             '0.7fake\n',
-            load_file(os.path.join(out_logdir, 'version')))
+            load_file(os.path.join(out_logdir, 'dpkg-version')))
+        self.assertEqual(version_out,
+                         load_file(os.path.join(out_logdir, 'version')))
         self.assertEqual(
             'cloud-init-log',
             load_file(os.path.join(out_logdir, 'cloud-init.log')))
@@ -72,6 +80,7 @@ class TestCollectLogs(FilesystemMockingTestCase):
             'results',
             load_file(
                 os.path.join(out_logdir, 'run', 'cloud-init', 'results.json')))
+        fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
 
     def test_collect_logs_includes_optional_userdata(self):
         """collect-logs include userdata when --include-userdata is set."""
@@ -88,11 +97,13 @@ class TestCollectLogs(FilesystemMockingTestCase):
         date = datetime.utcnow().date().strftime('%Y-%m-%d')
         date_logdir = 'cloud-init-logs-{0}'.format(date)
 
+        version_out = '/usr/bin/cloud-init 18.2fake\n'
         expected_subp = {
             ('dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'):
                 '0.7fake',
+            ('cloud-init', '--version'): version_out,
             ('dmesg',): 'dmesg-out\n',
-            ('journalctl', '-o', 'short-precise'): 'journal-out\n',
+            ('journalctl', '--boot=0', '-o', 'short-precise'): 'journal-out\n',
             ('tar', 'czvf', output_tarfile, date_logdir): ''
         }
 
@@ -105,9 +116,12 @@ class TestCollectLogs(FilesystemMockingTestCase):
                 subp(cmd)  # Pass through tar cmd so we can check output
             return expected_subp[cmd_tuple], ''
 
+        fake_stderr = mock.MagicMock()
+
         wrap_and_call(
             'cloudinit.cmd.devel.logs',
             {'subp': {'side_effect': fake_subp},
+             'sys.stderr': {'new': fake_stderr},
              'CLOUDINIT_LOGS': {'new': [log1, log2]},
              'CLOUDINIT_RUN_DIR': {'new': self.run_dir},
              'USER_DATA_FILE': {'new': userdata}},
@@ -118,3 +132,4 @@ class TestCollectLogs(FilesystemMockingTestCase):
         self.assertEqual(
             'user-data',
             load_file(os.path.join(out_logdir, 'user-data.txt')))
+        fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
index 3f2dbb9..d6ba90f 100644
--- a/cloudinit/cmd/main.py
+++ b/cloudinit/cmd/main.py
@@ -187,7 +187,7 @@ def attempt_cmdline_url(path, network=True, cmdline=None):
     data = None
     header = b'#cloud-config'
     try:
-        resp = util.read_file_or_url(**kwargs)
+        resp = url_helper.read_file_or_url(**kwargs)
         if resp.ok():
             data = resp.contents
             if not resp.contents.startswith(header):
diff --git a/cloudinit/cmd/tests/test_main.py b/cloudinit/cmd/tests/test_main.py
index dbe421c..e2c54ae 100644
--- a/cloudinit/cmd/tests/test_main.py
+++ b/cloudinit/cmd/tests/test_main.py
@@ -56,7 +56,7 @@ class TestMain(FilesystemMockingTestCase):
         cmdargs = myargs(
             debug=False, files=None, force=False, local=False, reporter=None,
             subcommand='init')
-        (item1, item2) = wrap_and_call(
+        (_item1, item2) = wrap_and_call(
             'cloudinit.cmd.main',
             {'util.close_stdin': True,
              'netinfo.debug_info': 'my net debug info',
@@ -85,7 +85,7 @@ class TestMain(FilesystemMockingTestCase):
         cmdargs = myargs(
             debug=False, files=None, force=False, local=False, reporter=None,
             subcommand='init')
-        (item1, item2) = wrap_and_call(
+        (_item1, item2) = wrap_and_call(
             'cloudinit.cmd.main',
             {'util.close_stdin': True,
              'netinfo.debug_info': 'my net debug info',
@@ -133,7 +133,7 @@ class TestMain(FilesystemMockingTestCase):
             self.assertEqual(main.LOG, log)
             self.assertIsNone(args)
 
-        (item1, item2) = wrap_and_call(
+        (_item1, item2) = wrap_and_call(
             'cloudinit.cmd.main',
             {'util.close_stdin': True,
              'netinfo.debug_info': 'my net debug info',
diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
index 5b9cbca..e18944e 100644
--- a/cloudinit/config/cc_apt_configure.py
+++ b/cloudinit/config/cc_apt_configure.py
@@ -121,7 +121,7 @@ and https protocols respectively. The ``proxy`` key also exists as an alias for
 All source entries in ``apt-sources`` that match regex in
 ``add_apt_repo_match`` will be added to the system using
 ``add-apt-repository``. If ``add_apt_repo_match`` is not specified, it defaults
-to ``^[\w-]+:\w``
+to ``^[\\w-]+:\\w``
 
 **Add source list entries:**
 
@@ -378,7 +378,7 @@ def apply_debconf_selections(cfg, target=None):
 
     # get a complete list of packages listed in input
     pkgs_cfgd = set()
-    for key, content in selsets.items():
+    for _key, content in selsets.items():
         for line in content.splitlines():
             if line.startswith("#"):
                 continue
diff --git a/cloudinit/config/cc_bootcmd.py b/cloudinit/config/cc_bootcmd.py
index 233da1e..db64f0a 100644
--- a/cloudinit/config/cc_bootcmd.py
+++ b/cloudinit/config/cc_bootcmd.py
@@ -63,7 +63,6 @@ schema = {
             'additionalProperties': False,
             'minItems': 1,
             'required': [],
-            'uniqueItems': True
         }
     }
 }
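
The one-line schema change above drops uniqueItems so that repeated bootcmd
entries no longer fail validation (LP: #1764264). A minimal demonstration
with the third-party jsonschema package, using a pared-down copy of the
bootcmd schema:

    from jsonschema import validate  # pip install jsonschema

    bootcmd_schema = {
        'type': 'array',
        'items': {'oneOf': [
            {'type': 'array', 'items': {'type': 'string'}},
            {'type': 'string'}]},
        'minItems': 1,
        # 'uniqueItems': True was removed; duplicates now validate.
    }
    validate(['echo once', 'echo once'], bootcmd_schema)  # raises nothing
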
diff --git a/cloudinit/config/cc_disable_ec2_metadata.py b/cloudinit/config/cc_disable_ec2_metadata.py
index c56319b..885b313 100644
--- a/cloudinit/config/cc_disable_ec2_metadata.py
+++ b/cloudinit/config/cc_disable_ec2_metadata.py
@@ -32,13 +32,23 @@ from cloudinit.settings import PER_ALWAYS
 
 frequency = PER_ALWAYS
 
-REJECT_CMD = ['route', 'add', '-host', '169.254.169.254', 'reject']
+REJECT_CMD_IF = ['route', 'add', '-host', '169.254.169.254', 'reject']
+REJECT_CMD_IP = ['ip', 'route', 'add', 'prohibit', '169.254.169.254']
 
 
 def handle(name, cfg, _cloud, log, _args):
     disabled = util.get_cfg_option_bool(cfg, "disable_ec2_metadata", False)
     if disabled:
-        util.subp(REJECT_CMD, capture=False)
+        reject_cmd = None
+        if util.which('ip'):
+            reject_cmd = REJECT_CMD_IP
+        elif util.which('ifconfig'):
+            reject_cmd = REJECT_CMD_IF
+        else:
+            log.error(('Neither "route" nor "ip" command found, unable to '
+                       'manipulate routing table'))
+            return
+        util.subp(reject_cmd, capture=False)
     else:
         log.debug(("Skipping module named %s,"
                    " disabling the ec2 route not enabled"), name)
diff --git a/cloudinit/config/cc_disk_setup.py b/cloudinit/config/cc_disk_setup.py
index c3e8c48..943089e 100644
--- a/cloudinit/config/cc_disk_setup.py
+++ b/cloudinit/config/cc_disk_setup.py
@@ -680,13 +680,13 @@ def read_parttbl(device):
     reliable way to probe the partition table.
     """
     blkdev_cmd = [BLKDEV_CMD, '--rereadpt', device]
-    udevadm_settle()
+    util.udevadm_settle()
     try:
         util.subp(blkdev_cmd)
     except Exception as e:
         util.logexc(LOG, "Failed reading the partition table %s" % e)
 
-    udevadm_settle()
+    util.udevadm_settle()
 
 
 def exec_mkpart_mbr(device, layout):
@@ -737,14 +737,10 @@ def exec_mkpart(table_type, device, layout):
     return get_dyn_func("exec_mkpart_%s", table_type, device, layout)
 
 
-def udevadm_settle():
-    util.subp(['udevadm', 'settle'])
-
-
 def assert_and_settle_device(device):
     """Assert that device exists and settle so it is fully recognized."""
     if not os.path.exists(device):
-        udevadm_settle()
+        util.udevadm_settle()
         if not os.path.exists(device):
             raise RuntimeError("Device %s did not exist and was not created "
                                "with a udevamd settle." % device)
@@ -752,7 +748,7 @@ def assert_and_settle_device(device):
     # Whether or not the device existed above, it is possible that udev
     # events that would populate udev database (for reading by lsdname) have
     # not yet finished. So settle again.
-    udevadm_settle()
+    util.udevadm_settle()
 
 
 def mkpart(device, definition):
diff --git a/cloudinit/config/cc_emit_upstart.py b/cloudinit/config/cc_emit_upstart.py
index 69dc2d5..eb9fbe6 100644
--- a/cloudinit/config/cc_emit_upstart.py
+++ b/cloudinit/config/cc_emit_upstart.py
@@ -43,7 +43,7 @@ def is_upstart_system():
         del myenv['UPSTART_SESSION']
     check_cmd = ['initctl', 'version']
     try:
-        (out, err) = util.subp(check_cmd, env=myenv)
+        (out, _err) = util.subp(check_cmd, env=myenv)
         return 'upstart' in out
     except util.ProcessExecutionError as e:
         LOG.debug("'%s' returned '%s', not using upstart",
diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
index 09374d2..ac72ac4 100644
--- a/cloudinit/config/cc_lxd.py
+++ b/cloudinit/config/cc_lxd.py
@@ -47,11 +47,16 @@ lxd-bridge will be configured accordingly.
             domain: <domain>
 """
 
+from cloudinit import log as logging
 from cloudinit import util
 import os
 
 distros = ['ubuntu']
 
+LOG = logging.getLogger(__name__)
+
+_DEFAULT_NETWORK_NAME = "lxdbr0"
+
 
 def handle(name, cfg, cloud, log, args):
     # Get config
@@ -109,6 +114,7 @@ def handle(name, cfg, cloud, log, args):
     # Set up lxd-bridge if bridge config is given
     dconf_comm = "debconf-communicate"
     if bridge_cfg:
+        net_name = bridge_cfg.get("name", _DEFAULT_NETWORK_NAME)
         if os.path.exists("/etc/default/lxd-bridge") \
                 and util.which(dconf_comm):
             # Bridge configured through packaging
@@ -135,15 +141,18 @@ def handle(name, cfg, cloud, log, args):
         else:
             # Built-in LXD bridge support
             cmd_create, cmd_attach = bridge_to_cmd(bridge_cfg)
+            maybe_cleanup_default(
+                net_name=net_name, did_init=bool(init_cfg),
+                create=bool(cmd_create), attach=bool(cmd_attach))
             if cmd_create:
                 log.debug("Creating lxd bridge: %s" %
                           " ".join(cmd_create))
-                util.subp(cmd_create)
+                _lxc(cmd_create)
 
             if cmd_attach:
                 log.debug("Setting up default lxd bridge: %s" %
                           " ".join(cmd_create))
-                util.subp(cmd_attach)
+                _lxc(cmd_attach)
 
     elif bridge_cfg:
         raise RuntimeError(
@@ -204,10 +213,10 @@ def bridge_to_cmd(bridge_cfg):
     if bridge_cfg.get("mode") == "none":
         return None, None
 
-    bridge_name = bridge_cfg.get("name", "lxdbr0")
+    bridge_name = bridge_cfg.get("name", _DEFAULT_NETWORK_NAME)
     cmd_create = []
-    cmd_attach = ["lxc", "network", "attach-profile", bridge_name,
-                  "default", "eth0", "--force-local"]
+    cmd_attach = ["network", "attach-profile", bridge_name,
+                  "default", "eth0"]
 
     if bridge_cfg.get("mode") == "existing":
         return None, cmd_attach
@@ -215,7 +224,7 @@ def bridge_to_cmd(bridge_cfg):
     if bridge_cfg.get("mode") != "new":
         raise Exception("invalid bridge mode \"%s\"" % bridge_cfg.get("mode"))
 
-    cmd_create = ["lxc", "network", "create", bridge_name]
+    cmd_create = ["network", "create", bridge_name]
 
     if bridge_cfg.get("ipv4_address") and bridge_cfg.get("ipv4_netmask"):
         cmd_create.append("ipv4.address=%s/%s" %
@@ -247,8 +256,47 @@ def bridge_to_cmd(bridge_cfg):
     if bridge_cfg.get("domain"):
         cmd_create.append("dns.domain=%s" % bridge_cfg.get("domain"))
 
-    cmd_create.append("--force-local")
-
     return cmd_create, cmd_attach
 
+
+def _lxc(cmd):
+    env = {'LC_ALL': 'C'}
+    util.subp(['lxc'] + list(cmd) + ["--force-local"], update_env=env)
+
+
+def maybe_cleanup_default(net_name, did_init, create, attach,
+                          profile="default", nic_name="eth0"):
+    """Newer versions of lxc (3.0.1+) create a lxdbr0 network when
+    'lxd init --auto' is run.  Older versions did not.
+
+    By removing anything that lxd-init created, we simply leave the add/attach
+    code intact.
+
+    https://github.com/lxc/lxd/issues/4649"""
+    if net_name != _DEFAULT_NETWORK_NAME or not did_init:
+        return
+
+    fail_assume_enoent = " failed. Assuming it did not exist."
+    succeeded = " succeeded."
+    if create:
+        msg = "Deletion of lxd network '%s'" % net_name
+        try:
+            _lxc(["network", "delete", net_name])
+            LOG.debug(msg + succeeded)
+        except util.ProcessExecutionError as e:
+            if e.exit_code != 1:
+                raise e
+            LOG.debug(msg + fail_assume_enoent)
+
+    if attach:
+        msg = "Removal of device '%s' from profile '%s'" % (nic_name, profile)
+        try:
+            _lxc(["profile", "device", "remove", profile, nic_name])
+            LOG.debug(msg + succeeded)
+        except util.ProcessExecutionError as e:
+            if e.exit_code != 1:
+                raise e
+            LOG.debug(msg + fail_assume_enoent)
+
+
 # vi: ts=4 expandtab
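
To summarize maybe_cleanup_default in isolation (a sketch, not the patch
itself): nothing happens unless 'lxd init' actually ran and the configured
bridge is the default lxdbr0; the network is deleted only when one is about
to be created, and the profile device is detached only when one is about to
be attached.

    def cleanup_actions(net_name, did_init, create, attach,
                        profile='default', nic_name='eth0'):
        """Return the lxc subcommands maybe_cleanup_default would run."""
        if net_name != 'lxdbr0' or not did_init:
            return []
        cmds = []
        if create:
            cmds.append(['network', 'delete', net_name])
        if attach:
            cmds.append(['profile', 'device', 'remove', profile, nic_name])
        return cmds

    # lxd init created lxdbr0; the bridge config both creates and attaches:
    print(cleanup_actions('lxdbr0', did_init=True, create=True, attach=True))
    # [['network', 'delete', 'lxdbr0'],
    #  ['profile', 'device', 'remove', 'default', 'eth0']]

In the patch, each of these lists is passed to _lxc, which prepends 'lxc'
and appends '--force-local'.
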
diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py
index f14a4fc..339baba 100644
--- a/cloudinit/config/cc_mounts.py
+++ b/cloudinit/config/cc_mounts.py
@@ -76,6 +76,7 @@ DEVICE_NAME_FILTER = r"^([x]{0,1}[shv]d[a-z][0-9]*|sr[0-9]+)$"
 DEVICE_NAME_RE = re.compile(DEVICE_NAME_FILTER)
 WS = re.compile("[%s]+" % (whitespace))
 FSTAB_PATH = "/etc/fstab"
+MNT_COMMENT = "comment=cloudconfig"
 
 LOG = logging.getLogger(__name__)
 
@@ -232,8 +233,8 @@ def setup_swapfile(fname, size=None, maxsize=None):
     if str(size).lower() == "auto":
         try:
             memsize = util.read_meminfo()['total']
-        except IOError as e:
-            LOG.debug("Not creating swap. failed to read meminfo")
+        except IOError:
+            LOG.debug("Not creating swap: failed to read meminfo")
             return
 
         util.ensure_dir(tdir)
@@ -280,17 +281,17 @@ def handle_swapcfg(swapcfg):
 
     if os.path.exists(fname):
         if not os.path.exists("/proc/swaps"):
-            LOG.debug("swap file %s existed. no /proc/swaps. Being safe.",
-                      fname)
+            LOG.debug("swap file %s exists, but no /proc/swaps exists, "
+                      "being safe", fname)
             return fname
         try:
             for line in util.load_file("/proc/swaps").splitlines():
                 if line.startswith(fname + " "):
-                    LOG.debug("swap file %s already in use.", fname)
+                    LOG.debug("swap file %s already in use", fname)
                     return fname
-            LOG.debug("swap file %s existed, but not in /proc/swaps", fname)
+            LOG.debug("swap file %s exists, but not in /proc/swaps", fname)
         except Exception:
-            LOG.warning("swap file %s existed. Error reading /proc/swaps",
+            LOG.warning("swap file %s exists. Error reading /proc/swaps",
                         fname)
             return fname
 
@@ -327,6 +328,22 @@ def handle(_name, cfg, cloud, log, _args):
 
     LOG.debug("mounts configuration is %s", cfgmnt)
 
+    fstab_lines = []
+    fstab_devs = {}
+    fstab_removed = []
+
+    for line in util.load_file(FSTAB_PATH).splitlines():
+        if MNT_COMMENT in line:
+            fstab_removed.append(line)
+            continue
+
+        try:
+            toks = WS.split(line)
+        except Exception:
+            pass
+        fstab_devs[toks[0]] = line
+        fstab_lines.append(line)
+
     for i in range(len(cfgmnt)):
         # skip something that wasn't a list
         if not isinstance(cfgmnt[i], list):
@@ -336,12 +353,17 @@ def handle(_name, cfg, cloud, log, _args):
 
         start = str(cfgmnt[i][0])
         sanitized = sanitize_devname(start, cloud.device_name_to_device, log)
+        if sanitized != start:
+            log.debug("changed %s => %s" % (start, sanitized))
+
         if sanitized is None:
-            log.debug("Ignorming nonexistant named mount %s", start)
+            log.debug("Ignoring nonexistent named mount %s", start)
+            continue
+        elif sanitized in fstab_devs:
+            log.info("Device %s already defined in fstab: %s",
+                     sanitized, fstab_devs[sanitized])
             continue
 
-        if sanitized != start:
-            log.debug("changed %s => %s" % (start, sanitized))
         cfgmnt[i][0] = sanitized
 
         # in case the user did not quote a field (likely fs-freq, fs_passno)
@@ -373,11 +395,17 @@ def handle(_name, cfg, cloud, log, _args):
     for defmnt in defmnts:
         start = defmnt[0]
         sanitized = sanitize_devname(start, cloud.device_name_to_device, log)
-        if sanitized is None:
-            log.debug("Ignoring nonexistant default named mount %s", start)
-            continue
         if sanitized != start:
             log.debug("changed default device %s => %s" % (start, sanitized))
+
+        if sanitized is None:
+            log.debug("Ignoring nonexistent default named mount %s", start)
+            continue
+        elif sanitized in fstab_devs:
+            log.debug("Device %s already defined in fstab: %s",
+                      sanitized, fstab_devs[sanitized])
+            continue
+
         defmnt[0] = sanitized
 
         cfgmnt_has = False
@@ -397,7 +425,7 @@ def handle(_name, cfg, cloud, log, _args):
     actlist = []
     for x in cfgmnt:
         if x[1] is None:
-            log.debug("Skipping non-existent device named %s", x[0])
+            log.debug("Skipping nonexistent device named %s", x[0])
         else:
             actlist.append(x)
 
@@ -406,34 +434,21 @@ def handle(_name, cfg, cloud, log, _args):
         actlist.append([swapret, "none", "swap", "sw", "0", "0"])
 
     if len(actlist) == 0:
-        log.debug("No modifications to fstab needed.")
+        log.debug("No modifications to fstab needed")
         return
 
-    comment = "comment=cloudconfig"
     cc_lines = []
     needswap = False
     dirs = []
     for line in actlist:
         # write 'comment' in the fs_mntops, entry,  claiming this
-        line[3] = "%s,%s" % (line[3], comment)
+        line[3] = "%s,%s" % (line[3], MNT_COMMENT)
         if line[2] == "swap":
             needswap = True
         if line[1].startswith("/"):
             dirs.append(line[1])
         cc_lines.append('\t'.join(line))
 
-    fstab_lines = []
-    removed = []
-    for line in util.load_file(FSTAB_PATH).splitlines():
-        try:
-            toks = WS.split(line)
-            if toks[3].find(comment) != -1:
-                removed.append(line)
-                continue
-        except Exception:
-            pass
-        fstab_lines.append(line)
-
     for d in dirs:
         try:
             util.ensure_dir(d)
@@ -441,7 +456,7 @@ def handle(_name, cfg, cloud, log, _args):
             util.logexc(log, "Failed to make '%s' config-mount", d)
 
     sadds = [WS.sub(" ", n) for n in cc_lines]
-    sdrops = [WS.sub(" ", n) for n in removed]
+    sdrops = [WS.sub(" ", n) for n in fstab_removed]
 
     sops = (["- " + drop for drop in sdrops if drop not in sadds] +
             ["+ " + add for add in sadds if add not in sdrops])
diff --git a/cloudinit/config/cc_ntp.py b/cloudinit/config/cc_ntp.py
index cbd0237..9e074bd 100644
--- a/cloudinit/config/cc_ntp.py
+++ b/cloudinit/config/cc_ntp.py
@@ -10,20 +10,95 @@ from cloudinit.config.schema import (
     get_schema_doc, validate_cloudconfig_schema)
 from cloudinit import log as logging
 from cloudinit.settings import PER_INSTANCE
+from cloudinit import temp_utils
 from cloudinit import templater
 from cloudinit import type_utils
 from cloudinit import util
 
+import copy
 import os
+import six
 from textwrap import dedent
 
 LOG = logging.getLogger(__name__)
 
 frequency = PER_INSTANCE
 NTP_CONF = '/etc/ntp.conf'
-TIMESYNCD_CONF = '/etc/systemd/timesyncd.conf.d/cloud-init.conf'
 NR_POOL_SERVERS = 4
-distros = ['centos', 'debian', 'fedora', 'opensuse', 'sles', 'ubuntu']
+distros = ['centos', 'debian', 'fedora', 'opensuse', 'rhel', 'sles', 'ubuntu']
+
+NTP_CLIENT_CONFIG = {
+    'chrony': {
+        'check_exe': 'chronyd',
+        'confpath': '/etc/chrony.conf',
+        'packages': ['chrony'],
+        'service_name': 'chrony',
+        'template_name': 'chrony.conf.{distro}',
+        'template': None,
+    },
+    'ntp': {
+        'check_exe': 'ntpd',
+        'confpath': NTP_CONF,
+        'packages': ['ntp'],
+        'service_name': 'ntp',
+        'template_name': 'ntp.conf.{distro}',
+        'template': None,
+    },
+    'ntpdate': {
+        'check_exe': 'ntpdate',
+        'confpath': NTP_CONF,
+        'packages': ['ntpdate'],
+        'service_name': 'ntpdate',
+        'template_name': 'ntp.conf.{distro}',
+        'template': None,
+    },
+    'systemd-timesyncd': {
+        'check_exe': '/lib/systemd/systemd-timesyncd',
+        'confpath': '/etc/systemd/timesyncd.conf.d/cloud-init.conf',
+        'packages': [],
+        'service_name': 'systemd-timesyncd',
+        'template_name': 'timesyncd.conf',
+        'template': None,
+    },
+}
+
+# This is Distro-specific configuration overrides of the base config
+DISTRO_CLIENT_CONFIG = {
+    'debian': {
+        'chrony': {
+            'confpath': '/etc/chrony/chrony.conf',
+        },
+    },
+    'opensuse': {
+        'chrony': {
+            'service_name': 'chronyd',
+        },
+        'ntp': {
+            'confpath': '/etc/ntp.conf',
+            'service_name': 'ntpd',
+        },
+        'systemd-timesyncd': {
+            'check_exe': '/usr/lib/systemd/systemd-timesyncd',
+        },
+    },
+    'sles': {
+        'chrony': {
+            'service_name': 'chronyd',
+        },
+        'ntp': {
+            'confpath': '/etc/ntp.conf',
+            'service_name': 'ntpd',
+        },
+        'systemd-timesyncd': {
+            'check_exe': '/usr/lib/systemd/systemd-timesyncd',
+        },
+    },
+    'ubuntu': {
+        'chrony': {
+            'confpath': '/etc/chrony/chrony.conf',
+        },
+    },
+}
 
 
 # The schema definition for each cloud-config module is a strict contract for
@@ -48,7 +123,34 @@ schema = {
     'distros': distros,
     'examples': [
         dedent("""\
+        # Override ntp with chrony configuration on Ubuntu
+        ntp:
+          enabled: true
+          ntp_client: chrony  # Uses cloud-init default chrony configuration
+        """),
+        dedent("""\
+        # Provide a custom ntp client configuration
         ntp:
+          enabled: true
+          ntp_client: myntpclient
+          config:
+             confpath: /etc/myntpclient/myntpclient.conf
+             check_exe: myntpclientd
+             packages:
+               - myntpclient
+             service_name: myntpclient
+             template: |
+                 ## template:jinja
+                 # My NTP Client config
+                 {% if pools -%}# pools{% endif %}
+                 {% for pool in pools -%}
+                 pool {{pool}} iburst
+                 {% endfor %}
+                 {%- if servers %}# servers
+                 {% endif %}
+                 {% for server in servers -%}
+                 server {{server}} iburst
+                 {% endfor %}
           pools: [0.int.pool.ntp.org, 1.int.pool.ntp.org, ntp.myorg.org]
           servers:
             - ntp.server.local
@@ -83,79 +185,159 @@ schema = {
                         List of ntp servers. If both pools and servers are
                          empty, 4 default pool servers will be provided with
                          the format ``{0-3}.{distro}.pool.ntp.org``.""")
-                }
+                },
+                'ntp_client': {
+                    'type': 'string',
+                    'default': 'auto',
+                    'description': dedent("""\
+                        Name of an NTP client to use to configure system NTP.
+                         When unprovided or 'auto' the default client preferred
+                         by the distribution will be used. The following
+                         built-in client names can be used to override existing
+                         configuration defaults: chrony, ntp, ntpdate,
+                         systemd-timesyncd."""),
+                },
+                'enabled': {
+                    'type': 'boolean',
+                    'default': True,
+                    'description': dedent("""\
+                        Attempt to enable ntp clients if set to True.  If set
+                         to False, ntp client will not be configured or
+                         installed"""),
+                },
+                'config': {
+                    'description': dedent("""\
+                        Configuration settings or overrides for the
+                         ``ntp_client`` specified."""),
+                    'type': ['object'],
+                    'properties': {
+                        'confpath': {
+                            'type': 'string',
+                            'description': dedent("""\
+                                The path to where the ``ntp_client``
+                                 configuration is written."""),
+                        },
+                        'check_exe': {
+                            'type': 'string',
+                            'description': dedent("""\
+                                The executable name for the ``ntp_client``.
+                                 For example, ntp service ``check_exe`` is
+                                 'ntpd' because it runs the ntpd binary."""),
+                        },
+                        'packages': {
+                            'type': 'array',
+                            'items': {
+                                'type': 'string',
+                            },
+                            'uniqueItems': True,
+                            'description': dedent("""\
+                                List of packages needed to be installed for the
+                                 selected ``ntp_client``."""),
+                        },
+                        'service_name': {
+                            'type': 'string',
+                            'description': dedent("""\
+                                The systemd or sysvinit service name used to
+                                 start and stop the ``ntp_client``
+                                 service."""),
+                        },
+                        'template': {
+                            'type': 'string',
+                            'description': dedent("""\
+                                Inline template allowing users to define their
+                                 own ``ntp_client`` configuration template.
+                                 The value must start with '## template:jinja'
+                                 to enable use of templating support.
+                                """),
+                        },
+                    },
+                    # Don't use REQUIRED_NTP_CONFIG_KEYS to allow for override
+                    # of builtin client values.
+                    'required': [],
+                    'minProperties': 1,  # If we have config, define something
+                    'additionalProperties': False
+                },
             },
             'required': [],
             'additionalProperties': False
         }
     }
 }
-
-__doc__ = get_schema_doc(schema)  # Supplement python help()
+REQUIRED_NTP_CONFIG_KEYS = frozenset([
+    'check_exe', 'confpath', 'packages', 'service_name'])
 
 
-def handle(name, cfg, cloud, log, _args):
-    """Enable and configure ntp."""
-    if 'ntp' not in cfg:
-        LOG.debug(
-            "Skipping module named %s, not present or disabled by cfg", name)
-        return
-    ntp_cfg = cfg['ntp']
-    if ntp_cfg is None:
-        ntp_cfg = {}  # Allow empty config which will install the package
+__doc__ = get_schema_doc(schema)  # Supplement python help()
 
-    # TODO drop this when validate_cloudconfig_schema is strict=True
-    if not isinstance(ntp_cfg, (dict)):
-        raise RuntimeError(
-            "'ntp' key existed in config, but not a dictionary type,"
-            " is a {_type} instead".format(_type=type_utils.obj_name(ntp_cfg)))
 
-    validate_cloudconfig_schema(cfg, schema)
-    if ntp_installable():
-        service_name = 'ntp'
-        confpath = NTP_CONF
-        template_name = None
-        packages = ['ntp']
-        check_exe = 'ntpd'
-    else:
-        service_name = 'systemd-timesyncd'
-        confpath = TIMESYNCD_CONF
-        template_name = 'timesyncd.conf'
-        packages = []
-        check_exe = '/lib/systemd/systemd-timesyncd'
-
-    rename_ntp_conf()
-    # ensure when ntp is installed it has a configuration file
-    # to use instead of starting up with packaged defaults
-    write_ntp_config_template(ntp_cfg, cloud, confpath, template=template_name)
-    install_ntp(cloud.distro.install_packages, packages=packages,
-                check_exe=check_exe)
+def distro_ntp_client_configs(distro):
+    """Construct a distro-specific ntp client config dictionary by merging
+       distro specific changes into base config.
 
-    try:
-        reload_ntp(service_name, systemd=cloud.distro.uses_systemd())
-    except util.ProcessExecutionError as e:
-        LOG.exception("Failed to reload/start ntp service: %s", e)
-        raise
+    @param distro: String providing the distro class name.
+    @returns: Dict of distro configurations for ntp clients.
+    """
+    dcfg = DISTRO_CLIENT_CONFIG
+    cfg = copy.copy(NTP_CLIENT_CONFIG)
+    if distro in dcfg:
+        cfg = util.mergemanydict([cfg, dcfg[distro]], reverse=True)
+    return cfg
 
 
-def ntp_installable():
-    """Check if we can install ntp package
+def select_ntp_client(ntp_client, distro):
+    """Determine which ntp client is to be used, consulting the distro
+       for its preference.
 
-    Ubuntu-Core systems do not have an ntp package available, so
-    we always return False.  Other systems require package managers to install
-    the ntp package If we fail to find one of the package managers, then we
-    cannot install ntp.
+    @param ntp_client: String name of the ntp client to use.
+    @param distro: Distro class instance.
+    @returns: Dict of the selected ntp client or {} if none selected.
     """
-    if util.system_is_snappy():
-        return False
 
-    if any(map(util.which, ['apt-get', 'dnf', 'yum', 'zypper'])):
-        return True
+    # construct distro-specific ntp_client_config dict
+    distro_cfg = distro_ntp_client_configs(distro.name)
+
+    # user specified client, return its config
+    if ntp_client and ntp_client != 'auto':
+        LOG.debug('Selected NTP client "%s" via user-data configuration',
+                  ntp_client)
+        return distro_cfg.get(ntp_client, {})
+
+    # default to auto if unset in distro
+    distro_ntp_client = distro.get_option('ntp_client', 'auto')
+
+    clientcfg = {}
+    if distro_ntp_client == "auto":
+        for client in distro.preferred_ntp_clients:
+            cfg = distro_cfg.get(client)
+            if util.which(cfg.get('check_exe')):
+                LOG.debug('Selected NTP client "%s", already installed',
+                          client)
+                clientcfg = cfg
+                break
+
+        if not clientcfg:
+            client = distro.preferred_ntp_clients[0]
+            LOG.debug(
+                'Selected distro preferred NTP client "%s", not yet installed',
+                client)
+            clientcfg = distro_cfg.get(client)
+    else:
+        LOG.debug('Selected NTP client "%s" via distro system config',
+                  distro_ntp_client)
+        clientcfg = distro_cfg.get(distro_ntp_client, {})
+
+    return clientcfg
 
-    return False
 
+def install_ntp_client(install_func, packages=None, check_exe="ntpd"):
+    """Install ntp client package if not already installed.
 
-def install_ntp(install_func, packages=None, check_exe="ntpd"):
+    @param install_func: function.  This parameter is invoked with the contents
+    of the packages parameter.
+    @param packages: list.  This parameter defaults to ['ntp'].
+    @param check_exe: string.  The name of a binary whose presence indicates
+    that the specified package is already installed.
+    """
     if util.which(check_exe):
         return
     if packages is None:
@@ -164,15 +346,23 @@ def install_ntp(install_func, packages=None, check_exe="ntpd"):
     install_func(packages)
 
 
-def rename_ntp_conf(config=None):
-    """Rename any existing ntp.conf file"""
-    if config is None:  # For testing
-        config = NTP_CONF
-    if os.path.exists(config):
-        util.rename(config, config + ".dist")
+def rename_ntp_conf(confpath=None):
+    """Rename any existing ntp client config file
+
+    @param confpath: string. Specify a path to an existing ntp client
+    configuration file.
+    """
+    if os.path.exists(confpath):
+        util.rename(confpath, confpath + ".dist")
 
 
 def generate_server_names(distro):
+    """Generate a list of server names to populate an ntp client configuration
+    file.
+
+    @param distro: string.  Specify the distro name.
+    @returns: list.  A list of strings representing ntp servers for this distro.
+    """
     names = []
     pool_distro = distro
     # For legal reasons x.pool.sles.ntp.org does not exist,
@@ -185,34 +375,60 @@ def generate_server_names(distro):
     return names
 
 
-def write_ntp_config_template(cfg, cloud, path, template=None):
-    servers = cfg.get('servers', [])
-    pools = cfg.get('pools', [])
+def write_ntp_config_template(distro_name, servers=None, pools=None,
+                              path=None, template_fn=None, template=None):
+    """Render a ntp client configuration for the specified client.
+
+    @param distro_name: string.  The distro class name.
+    @param servers: A list of strings specifying ntp servers. Defaults to empty
+    list.
+    @param pools: A list of strings specifying ntp pools. Defaults to empty
+    list.
+    @param path: A string to specify where to write the rendered template.
+    @param template_fn: A string to specify the template source file.
+    @param template: A string specifying the contents of the template. This
+    content will be written to a temporary file before being used to render
+    the configuration file.
+
+    @raises: ValueError when path is None.
+    @raises: ValueError when template_fn is None and template is None.
+    """
+    if not servers:
+        servers = []
+    if not pools:
+        pools = []
 
     if len(servers) == 0 and len(pools) == 0:
-        pools = generate_server_names(cloud.distro.name)
+        pools = generate_server_names(distro_name)
         LOG.debug(
             'Adding distro default ntp pool servers: %s', ','.join(pools))
 
-    params = {
-        'servers': servers,
-        'pools': pools,
-    }
+    if not path:
+        raise ValueError('Invalid value for path parameter')
 
-    if template is None:
-        template = 'ntp.conf.%s' % cloud.distro.name
+    if not template_fn and not template:
+        raise ValueError('Neither template_fn nor template provided')
 
-    template_fn = cloud.get_template_filename(template)
-    if not template_fn:
-        template_fn = cloud.get_template_filename('ntp.conf')
-        if not template_fn:
-            raise RuntimeError(
-                'No template found, not rendering {path}'.format(path=path))
+    params = {'servers': servers, 'pools': pools}
+    if template:
+        tfile = temp_utils.mkstemp(prefix='template_name-', suffix=".tmpl")
+        template_fn = tfile[1]  # filepath is second item in tuple
+        util.write_file(template_fn, content=template)
 
     templater.render_to_file(template_fn, path, params)
+    # clean up temporary template
+    if template:
+        util.del_file(template_fn)
 
 
 def reload_ntp(service, systemd=False):
+    """Restart or reload an ntp system service.
+
+    @param service: A string specifying the name of the service to be affected.
+    @param systemd: A boolean indicating if the distro uses systemd, defaults
+    to False.
+    """
     if systemd:
         cmd = ['systemctl', 'reload-or-restart', service]
     else:
@@ -220,4 +436,117 @@ def reload_ntp(service, systemd=False):
     util.subp(cmd, capture=True)
 
 
+def supplemental_schema_validation(ntp_config):
+    """Validate user-provided ntp:config option values.
+
+    This function supplements flexible jsonschema validation with specific
+    value checks to aid in triage of invalid user-provided configuration.
+
+    @param ntp_config: Dictionary of configuration values under 'ntp'.
+
+    @raises: ValueError describing invalid values provided.
+    """
+    errors = []
+    missing = REQUIRED_NTP_CONFIG_KEYS.difference(set(ntp_config.keys()))
+    if missing:
+        keys = ', '.join(sorted(missing))
+        errors.append(
+            'Missing required ntp:config keys: {keys}'.format(keys=keys))
+    elif not any([ntp_config.get('template'),
+                  ntp_config.get('template_name')]):
+        errors.append(
+            'Either ntp:config:template or ntp:config:template_name values'
+            ' are required')
+    for key, value in sorted(ntp_config.items()):
+        keypath = 'ntp:config:' + key
+        if key == 'confpath':
+            if not all([value, isinstance(value, six.string_types)]):
+                errors.append(
+                    'Expected a config file path {keypath}.'
+                    ' Found ({value})'.format(keypath=keypath, value=value))
+        elif key == 'packages':
+            if not isinstance(value, list):
+                errors.append(
+                    'Expected a list of required package names for {keypath}.'
+                    ' Found ({value})'.format(keypath=keypath, value=value))
+        elif key in ('template', 'template_name'):
+            if value is None:  # Either template or template_name can be none
+                continue
+            if not isinstance(value, six.string_types):
+                errors.append(
+                    'Expected a string type for {keypath}.'
+                    ' Found ({value})'.format(keypath=keypath, value=value))
+        elif not isinstance(value, six.string_types):
+            errors.append(
+                'Expected a string type for {keypath}.'
+                ' Found ({value})'.format(keypath=keypath, value=value))
+
+    if errors:
+        raise ValueError('Invalid ntp configuration:\n{errors}'.format(
+            errors='\n'.join(errors)))
+
+
+def handle(name, cfg, cloud, log, _args):
+    """Enable and configure ntp."""
+    if 'ntp' not in cfg:
+        LOG.debug(
+            "Skipping module named %s, not present or disabled by cfg", name)
+        return
+    ntp_cfg = cfg['ntp']
+    if ntp_cfg is None:
+        ntp_cfg = {}  # Allow empty config which will install the package
+
+    # TODO drop this when validate_cloudconfig_schema is strict=True
+    if not isinstance(ntp_cfg, (dict)):
+        raise RuntimeError(
+            "'ntp' key existed in config, but not a dictionary type,"
+            " is a {_type} instead".format(_type=type_utils.obj_name(ntp_cfg)))
+
+    validate_cloudconfig_schema(cfg, schema)
+
+    # Allow users to explicitly enable/disable
+    enabled = ntp_cfg.get('enabled', True)
+    if util.is_false(enabled):
+        LOG.debug("Skipping module named %s, disabled by cfg", name)
+        return
+
+    # Select which client is going to be used and get the configuration
+    ntp_client_config = select_ntp_client(ntp_cfg.get('ntp_client'),
+                                          cloud.distro)
+
+    # Allow user ntp config to override distro configurations
+    ntp_client_config = util.mergemanydict(
+        [ntp_client_config, ntp_cfg.get('config', {})], reverse=True)
+
+    supplemental_schema_validation(ntp_client_config)
+    rename_ntp_conf(confpath=ntp_client_config.get('confpath'))
+
+    template_fn = None
+    if not ntp_client_config.get('template'):
+        template_name = (
+            ntp_client_config.get('template_name').replace('{distro}',
+                                                           cloud.distro.name))
+        template_fn = cloud.get_template_filename(template_name)
+        if not template_fn:
+            msg = ('No template found, not rendering %s' %
+                   ntp_client_config.get('template_name'))
+            raise RuntimeError(msg)
+
+    write_ntp_config_template(cloud.distro.name,
+                              servers=ntp_cfg.get('servers', []),
+                              pools=ntp_cfg.get('pools', []),
+                              path=ntp_client_config.get('confpath'),
+                              template_fn=template_fn,
+                              template=ntp_client_config.get('template'))
+
+    install_ntp_client(cloud.distro.install_packages,
+                       packages=ntp_client_config['packages'],
+                       check_exe=ntp_client_config['check_exe'])
+    try:
+        reload_ntp(ntp_client_config['service_name'],
+                   systemd=cloud.distro.uses_systemd())
+    except util.ProcessExecutionError as e:
+        LOG.exception("Failed to reload/start ntp service: %s", e)
+        raise
+
 # vi: ts=4 expandtab
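For context on the new selection flow: select_ntp_client() prefers an already-installed client from the distro's preference list and only falls back to installing the first preference. Below is a minimal standalone sketch of that decision order, with an illustrative CLIENTS table and shutil.which standing in for util.which:

    import shutil

    # Illustrative stand-ins; the real tables live in cc_ntp.py and distros.
    CLIENTS = {
        'chrony': {'check_exe': 'chronyd'},
        'systemd-timesyncd': {'check_exe': '/lib/systemd/systemd-timesyncd'},
        'ntp': {'check_exe': 'ntpd'},
    }
    PREFERRED = ['chrony', 'systemd-timesyncd', 'ntp']

    def pick_client(user_choice=None):
        """Sketch of select_ntp_client()'s decision order."""
        if user_choice and user_choice != 'auto':
            return user_choice                   # explicit user-data wins
        for client in PREFERRED:
            if shutil.which(CLIENTS[client]['check_exe']):
                return client                    # already installed
        return PREFERRED[0]                      # install the distro favorite

    print(pick_client())   # e.g. 'systemd-timesyncd' on a stock cloud image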
diff --git a/cloudinit/config/cc_phone_home.py b/cloudinit/config/cc_phone_home.py
index 878069b..3be0d1c 100644
--- a/cloudinit/config/cc_phone_home.py
+++ b/cloudinit/config/cc_phone_home.py
@@ -41,6 +41,7 @@ keys to post. Available keys are:
 """
 
 from cloudinit import templater
+from cloudinit import url_helper
 from cloudinit import util
 
 from cloudinit.settings import PER_INSTANCE
@@ -136,9 +137,9 @@ def handle(name, cfg, cloud, log, args):
     }
     url = templater.render_string(url, url_params)
     try:
-        util.read_file_or_url(url, data=real_submit_keys,
-                              retries=tries, sec_between=3,
-                              ssl_details=util.fetch_ssl_details(cloud.paths))
+        url_helper.read_file_or_url(
+            url, data=real_submit_keys, retries=tries, sec_between=3,
+            ssl_details=util.fetch_ssl_details(cloud.paths))
     except Exception:
         util.logexc(log, "Failed to post phone home data to %s in %s tries",
                     url, tries)
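This hunk switches cc_phone_home to the canonical url_helper.read_file_or_url rather than the util wrapper; the call shape is unchanged. As a quick sanity check (requires the cloudinit package on the path; the helper serves file:// URLs without any network):

    from cloudinit import url_helper

    # file:// and http(s):// are handled by the same entry point.
    resp = url_helper.read_file_or_url('file:///etc/hostname')
    print(resp.contents)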
diff --git a/cloudinit/config/cc_power_state_change.py b/cloudinit/config/cc_power_state_change.py
index 4da3a58..50b3747 100644
--- a/cloudinit/config/cc_power_state_change.py
+++ b/cloudinit/config/cc_power_state_change.py
@@ -74,7 +74,7 @@ def givecmdline(pid):
         if util.is_FreeBSD():
             (output, _err) = util.subp(['procstat', '-c', str(pid)])
             line = output.splitlines()[1]
-            m = re.search('\d+ (\w|\.|-)+\s+(/\w.+)', line)
+            m = re.search(r'\d+ (\w|\.|-)+\s+(/\w.+)', line)
             return m.group(2)
         else:
             return util.load_file("/proc/%s/cmdline" % pid)
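The regex changes in this hunk (and the matching ones in cc_rsyslog.py and distros/freebsd.py below) only add the r prefix; the compiled patterns are identical, but plain strings containing escapes such as \d draw an invalid-escape warning on Python 3.6+. A short demonstration, with the warning escalated so it is visible (the exact exception type varies by interpreter version):

    import re
    import warnings

    warnings.simplefilter('error', DeprecationWarning)
    try:
        exec(r"re.compile('\d+$')")       # plain string literal with \d
    except (DeprecationWarning, SyntaxError) as err:
        print(err)                        # invalid escape sequence '\d'
    re.compile(r'\d+$')                   # raw string: silent, same pattern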
diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
index 013e69b..2edddd0 100644
--- a/cloudinit/config/cc_resizefs.py
+++ b/cloudinit/config/cc_resizefs.py
@@ -81,7 +81,7 @@ def _resize_xfs(mount_point, devpth):
 
 
 def _resize_ufs(mount_point, devpth):
-    return ('growfs', devpth)
+    return ('growfs', '-y', devpth)
 
 
 def _resize_zfs(mount_point, devpth):
@@ -89,13 +89,11 @@ def _resize_zfs(mount_point, devpth):
 
 
 def _get_dumpfs_output(mount_point):
-    dumpfs_res, err = util.subp(['dumpfs', '-m', mount_point])
-    return dumpfs_res
+    return util.subp(['dumpfs', '-m', mount_point])[0]
 
 
 def _get_gpart_output(part):
-    gpart_res, err = util.subp(['gpart', 'show', part])
-    return gpart_res
+    return util.subp(['gpart', 'show', part])[0]
 
 
 def _can_skip_resize_ufs(mount_point, devpth):
@@ -113,7 +111,7 @@ def _can_skip_resize_ufs(mount_point, devpth):
         if not line.startswith('#'):
             newfs_cmd = shlex.split(line)
             opt_value = 'O:Ua:s:b:d:e:f:g:h:i:jk:m:o:'
-            optlist, args = getopt.getopt(newfs_cmd[1:], opt_value)
+            optlist, _args = getopt.getopt(newfs_cmd[1:], opt_value)
             for o, a in optlist:
                 if o == "-s":
                     cur_fs_sz = int(a)
diff --git a/cloudinit/config/cc_rh_subscription.py b/cloudinit/config/cc_rh_subscription.py
index 530808c..1c67943 100644
--- a/cloudinit/config/cc_rh_subscription.py
+++ b/cloudinit/config/cc_rh_subscription.py
@@ -209,8 +209,7 @@ class SubscriptionManager(object):
                 cmd.append("--serverurl={0}".format(self.server_hostname))
 
             try:
-                return_out, return_err = self._sub_man_cli(cmd,
-                                                           logstring_val=True)
+                return_out = self._sub_man_cli(cmd, logstring_val=True)[0]
             except util.ProcessExecutionError as e:
                 if e.stdout == "":
                     self.log_warn("Registration failed due "
@@ -233,8 +232,7 @@ class SubscriptionManager(object):
 
             # Attempting to register the system only
             try:
-                return_out, return_err = self._sub_man_cli(cmd,
-                                                           logstring_val=True)
+                return_out = self._sub_man_cli(cmd, logstring_val=True)[0]
             except util.ProcessExecutionError as e:
                 if e.stdout == "":
                     self.log_warn("Registration failed due "
@@ -257,7 +255,7 @@ class SubscriptionManager(object):
                .format(self.servicelevel)]
 
         try:
-            return_out, return_err = self._sub_man_cli(cmd)
+            return_out = self._sub_man_cli(cmd)[0]
         except util.ProcessExecutionError as e:
             if e.stdout.rstrip() != '':
                 for line in e.stdout.split("\n"):
@@ -275,7 +273,7 @@ class SubscriptionManager(object):
     def _set_auto_attach(self):
         cmd = ['attach', '--auto']
         try:
-            return_out, return_err = self._sub_man_cli(cmd)
+            return_out = self._sub_man_cli(cmd)[0]
         except util.ProcessExecutionError as e:
             self.log_warn("Auto-attach failed with: {0}".format(e))
             return False
@@ -294,12 +292,12 @@ class SubscriptionManager(object):
 
         # Get all available pools
         cmd = ['list', '--available', '--pool-only']
-        results, errors = self._sub_man_cli(cmd)
+        results = self._sub_man_cli(cmd)[0]
         available = (results.rstrip()).split("\n")
 
         # Get all consumed pools
         cmd = ['list', '--consumed', '--pool-only']
-        results, errors = self._sub_man_cli(cmd)
+        results = self._sub_man_cli(cmd)[0]
         consumed = (results.rstrip()).split("\n")
 
         return available, consumed
@@ -311,14 +309,14 @@ class SubscriptionManager(object):
         '''
 
         cmd = ['repos', '--list-enabled']
-        return_out, return_err = self._sub_man_cli(cmd)
+        return_out = self._sub_man_cli(cmd)[0]
         active_repos = []
         for repo in return_out.split("\n"):
             if "Repo ID:" in repo:
                 active_repos.append((repo.split(':')[1]).strip())
 
         cmd = ['repos', '--list-disabled']
-        return_out, return_err = self._sub_man_cli(cmd)
+        return_out = self._sub_man_cli(cmd)[0]
 
         inactive_repos = []
         for repo in return_out.split("\n"):
diff --git a/cloudinit/config/cc_rsyslog.py b/cloudinit/config/cc_rsyslog.py
index af08788..27d2366 100644
--- a/cloudinit/config/cc_rsyslog.py
+++ b/cloudinit/config/cc_rsyslog.py
@@ -203,8 +203,8 @@ LOG = logging.getLogger(__name__)
 COMMENT_RE = re.compile(r'[ ]*[#]+[ ]*')
 HOST_PORT_RE = re.compile(
     r'^(?P<proto>[@]{0,2})'
-    '(([[](?P<bracket_addr>[^\]]*)[\]])|(?P<addr>[^:]*))'
-    '([:](?P<port>[0-9]+))?$')
+    r'(([[](?P<bracket_addr>[^\]]*)[\]])|(?P<addr>[^:]*))'
+    r'([:](?P<port>[0-9]+))?$')
 
 
 def reload_syslog(command=DEF_RELOAD, systemd=False):
diff --git a/cloudinit/config/cc_runcmd.py b/cloudinit/config/cc_runcmd.py
index 539cbd5..b6f6c80 100644
--- a/cloudinit/config/cc_runcmd.py
+++ b/cloudinit/config/cc_runcmd.py
@@ -66,7 +66,6 @@ schema = {
             'additionalProperties': False,
             'minItems': 1,
             'required': [],
-            'uniqueItems': True
         }
     }
 }
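Dropping uniqueItems is a behaviour fix: jsonschema rejects repeated array entries under that keyword, and repeating a command in runcmd (or in the snap/ubuntu-advantage commands lists changed below) is legitimate. A minimal reproduction with the jsonschema library, assuming it is installed:

    from jsonschema import Draft4Validator

    schema = {'type': 'array', 'uniqueItems': True}
    errors = list(
        Draft4Validator(schema).iter_errors(['echo bye', 'echo bye']))
    print(errors[0].message)
    # -> "['echo bye', 'echo bye'] has non-unique elements" (or similar);
    # with 'uniqueItems' removed, the same input validates cleanly.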
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index bb24d57..5ef9737 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -68,16 +68,57 @@ import re
 import sys
 
 from cloudinit.distros import ug_util
-from cloudinit import ssh_util
+from cloudinit import log as logging
+from cloudinit.ssh_util import update_ssh_config
 from cloudinit import util
 
 from string import ascii_letters, digits
 
+LOG = logging.getLogger(__name__)
+
 # We are removing certain 'painful' letters/numbers
 PW_SET = (''.join([x for x in ascii_letters + digits
                    if x not in 'loLOI01']))
 
 
+def handle_ssh_pwauth(pw_auth, service_cmd=None, service_name="ssh"):
+    """Apply sshd PasswordAuthentication changes.
+
+    @param pw_auth: config setting from 'ssh_pwauth'.
+                    Best given as True, False, or "unchanged".
+    @param service_cmd: The service command list (['service'])
+    @param service_name: The name of the sshd service for the system.
+
+    @return: None"""
+    cfg_name = "PasswordAuthentication"
+    if service_cmd is None:
+        service_cmd = ["service"]
+
+    if util.is_true(pw_auth):
+        cfg_val = 'yes'
+    elif util.is_false(pw_auth):
+        cfg_val = 'no'
+    else:
+        bmsg = "Leaving ssh config '%s' unchanged." % cfg_name
+        if pw_auth is None or pw_auth.lower() == 'unchanged':
+            LOG.debug("%s ssh_pwauth=%s", bmsg, pw_auth)
+        else:
+            LOG.warning("%s Unrecognized value: ssh_pwauth=%s", bmsg, pw_auth)
+        return
+
+    updated = update_ssh_config({cfg_name: cfg_val})
+    if not updated:
+        LOG.debug("No need to restart ssh service, %s not updated.", cfg_name)
+        return
+
+    if 'systemctl' in service_cmd:
+        cmd = list(service_cmd) + ["restart", service_name]
+    else:
+        cmd = list(service_cmd) + [service_name, "restart"]
+    util.subp(cmd)
+    LOG.debug("Restarted the ssh daemon.")
+
+
 def handle(_name, cfg, cloud, log, args):
     if len(args) != 0:
         # if run from command line, and give args, wipe the chpasswd['list']
@@ -170,65 +211,9 @@ def handle(_name, cfg, cloud, log, args):
             if expired_users:
                 log.debug("Expired passwords for: %s users", expired_users)
 
-    change_pwauth = False
-    pw_auth = None
-    if 'ssh_pwauth' in cfg:
-        if util.is_true(cfg['ssh_pwauth']):
-            change_pwauth = True
-            pw_auth = 'yes'
-        elif util.is_false(cfg['ssh_pwauth']):
-            change_pwauth = True
-            pw_auth = 'no'
-        elif str(cfg['ssh_pwauth']).lower() == 'unchanged':
-            log.debug('Leaving auth line unchanged')
-            change_pwauth = False
-        elif not str(cfg['ssh_pwauth']).strip():
-            log.debug('Leaving auth line unchanged')
-            change_pwauth = False
-        elif not cfg['ssh_pwauth']:
-            log.debug('Leaving auth line unchanged')
-            change_pwauth = False
-        else:
-            msg = 'Unrecognized value %s for ssh_pwauth' % cfg['ssh_pwauth']
-            util.logexc(log, msg)
-
-    if change_pwauth:
-        replaced_auth = False
-
-        # See: man sshd_config
-        old_lines = ssh_util.parse_ssh_config(ssh_util.DEF_SSHD_CFG)
-        new_lines = []
-        i = 0
-        for (i, line) in enumerate(old_lines):
-            # Keywords are case-insensitive and arguments are case-sensitive
-            if line.key == 'passwordauthentication':
-                log.debug("Replacing auth line %s with %s", i + 1, pw_auth)
-                replaced_auth = True
-                line.value = pw_auth
-            new_lines.append(line)
-
-        if not replaced_auth:
-            log.debug("Adding new auth line %s", i + 1)
-            replaced_auth = True
-            new_lines.append(ssh_util.SshdConfigLine('',
-                                                     'PasswordAuthentication',
-                                                     pw_auth))
-
-        lines = [str(l) for l in new_lines]
-        util.write_file(ssh_util.DEF_SSHD_CFG, "\n".join(lines),
-                        copy_mode=True)
-
-        try:
-            cmd = cloud.distro.init_cmd  # Default service
-            cmd.append(cloud.distro.get_option('ssh_svcname', 'ssh'))
-            cmd.append('restart')
-            if 'systemctl' in cmd:  # Switch action ordering
-                cmd[1], cmd[2] = cmd[2], cmd[1]
-            cmd = filter(None, cmd)  # Remove empty arguments
-            util.subp(cmd)
-            log.debug("Restarted the ssh daemon")
-        except Exception:
-            util.logexc(log, "Restarting of the ssh daemon failed")
+    handle_ssh_pwauth(
+        cfg.get('ssh_pwauth'), service_cmd=cloud.distro.init_cmd,
+        service_name=cloud.distro.get_option('ssh_svcname', 'ssh'))
 
     if len(errors):
         log.debug("%s errors occured, re-raising the last one", len(errors))
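The inline ssh_pwauth handling above becomes the unit-testable handle_ssh_pwauth() helper. A sketch of the calls the new handle() path makes; note these would really edit /etc/ssh/sshd_config and restart sshd, so treat them as illustration only:

    from cloudinit.config.cc_set_passwords import handle_ssh_pwauth

    # True -> PasswordAuthentication yes; the service restarts only if
    # update_ssh_config() reports that the file actually changed.
    handle_ssh_pwauth(True, service_cmd=['systemctl'], service_name='ssh')

    # 'unchanged' (or None) leaves sshd_config untouched and just logs.
    handle_ssh_pwauth('unchanged')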
diff --git a/cloudinit/config/cc_snap.py b/cloudinit/config/cc_snap.py
index 34a53fd..90724b8 100644
--- a/cloudinit/config/cc_snap.py
+++ b/cloudinit/config/cc_snap.py
@@ -110,7 +110,6 @@ schema = {
                     'additionalItems': False,  # Reject non-string & non-list
                     'minItems': 1,
                     'minProperties': 1,
-                    'uniqueItems': True
                 },
                 'squashfuse_in_container': {
                     'type': 'boolean'
@@ -204,12 +203,12 @@ def maybe_install_squashfuse(cloud):
         return
     try:
         cloud.distro.update_package_sources()
-    except Exception as e:
+    except Exception:
         util.logexc(LOG, "Package update failed")
         raise
     try:
         cloud.distro.install_packages(['squashfuse'])
-    except Exception as e:
+    except Exception:
         util.logexc(LOG, "Failed to install squashfuse")
         raise
 
diff --git a/cloudinit/config/cc_snappy.py b/cloudinit/config/cc_snappy.py
index bab80bb..15bee2d 100644
--- a/cloudinit/config/cc_snappy.py
+++ b/cloudinit/config/cc_snappy.py
@@ -213,7 +213,7 @@ def render_snap_op(op, name, path=None, cfgfile=None, config=None):
 
 def read_installed_packages():
     ret = []
-    for (name, date, version, dev) in read_pkg_data():
+    for (name, _date, _version, dev) in read_pkg_data():
         if dev:
             ret.append(NAMESPACE_DELIM.join([name, dev]))
         else:
@@ -222,7 +222,7 @@ def read_installed_packages():
 
 
 def read_pkg_data():
-    out, err = util.subp([SNAPPY_CMD, "list"])
+    out, _err = util.subp([SNAPPY_CMD, "list"])
     pkg_data = []
     for line in out.splitlines()[1:]:
         toks = line.split(sep=None, maxsplit=3)
diff --git a/cloudinit/config/cc_ubuntu_advantage.py b/cloudinit/config/cc_ubuntu_advantage.py
index 16b1868..5e082bd 100644
--- a/cloudinit/config/cc_ubuntu_advantage.py
+++ b/cloudinit/config/cc_ubuntu_advantage.py
@@ -87,7 +87,6 @@ schema = {
                     'additionalItems': False,  # Reject non-string & non-list
                     'minItems': 1,
                     'minProperties': 1,
-                    'uniqueItems': True
                 }
             },
             'additionalProperties': False,  # Reject keys not in schema
@@ -149,12 +148,12 @@ def maybe_install_ua_tools(cloud):
         return
     try:
         cloud.distro.update_package_sources()
-    except Exception as e:
+    except Exception:
         util.logexc(LOG, "Package update failed")
         raise
     try:
         cloud.distro.install_packages(['ubuntu-advantage-tools'])
-    except Exception as e:
+    except Exception:
         util.logexc(LOG, "Failed to install ubuntu-advantage-tools")
         raise
 
diff --git a/cloudinit/config/cc_users_groups.py b/cloudinit/config/cc_users_groups.py
index b215e95..c95bdaa 100644
--- a/cloudinit/config/cc_users_groups.py
+++ b/cloudinit/config/cc_users_groups.py
@@ -54,8 +54,9 @@ config keys for an entry in ``users`` are as follows:
     - ``ssh_authorized_keys``: Optional. List of ssh keys to add to user's
       authkeys file. Default: none
     - ``ssh_import_id``: Optional. SSH id to import for user. Default: none
-    - ``sudo``: Optional. Sudo rule to use, or list of sudo rules to use.
-      Default: none.
+    - ``sudo``: Optional. Sudo rule to use, a list of sudo rules to use, or
+      False. Default: none. Omitting the sudo key, or setting it to none or
+      false, results in no sudo rules being written for the user.
     - ``system``: Optional. Create user as system user with no home directory.
       Default: false
     - ``uid``: Optional. The user's ID. Default: The next available value.
@@ -82,6 +83,9 @@ config keys for an entry in ``users`` are as follows:
 
     users:
         - default
+        # User explicitly omitted from sudo permission; also default behavior.
+        - name: <some_restricted_user>
+          sudo: false
         - name: <username>
           expiredate: <date>
           gecos: <comment>
diff --git a/cloudinit/config/schema.py b/cloudinit/config/schema.py
index ca7d0d5..080a6d0 100644
--- a/cloudinit/config/schema.py
+++ b/cloudinit/config/schema.py
@@ -4,7 +4,7 @@
 from __future__ import print_function
 
 from cloudinit import importer
-from cloudinit.util import find_modules, read_file_or_url
+from cloudinit.util import find_modules, load_file
 
 import argparse
 from collections import defaultdict
@@ -93,20 +93,33 @@ def validate_cloudconfig_schema(config, schema, strict=False):
 def annotated_cloudconfig_file(cloudconfig, original_content, schema_errors):
     """Return contents of the cloud-config file annotated with schema errors.
 
-    @param cloudconfig: YAML-loaded object from the original_content.
+    @param cloudconfig: YAML-loaded dict from the original_content or empty
+        dict if unparseable.
     @param original_content: The contents of a cloud-config file
     @param schema_errors: List of tuples from a JSONSchemaValidationError. The
         tuples consist of (schemapath, error_message).
     """
     if not schema_errors:
         return original_content
-    schemapaths = _schemapath_for_cloudconfig(cloudconfig, original_content)
+    schemapaths = {}
+    if cloudconfig:
+        schemapaths = _schemapath_for_cloudconfig(
+            cloudconfig, original_content)
     errors_by_line = defaultdict(list)
     error_count = 1
     error_footer = []
     annotated_content = []
     for path, msg in schema_errors:
-        errors_by_line[schemapaths[path]].append(msg)
+        match = re.match(r'format-l(?P<line>\d+)\.c(?P<col>\d+).*', path)
+        if match:
+            line, col = match.groups()
+            errors_by_line[int(line)].append(msg)
+        else:
+            col = None
+            errors_by_line[schemapaths[path]].append(msg)
+        if col is not None:
+            msg = 'Line {line} column {col}: {msg}'.format(
+                line=line, col=col, msg=msg)
         error_footer.append('# E{0}: {1}'.format(error_count, msg))
         error_count += 1
     lines = original_content.decode().split('\n')
@@ -139,21 +152,34 @@ def validate_cloudconfig_file(config_path, schema, annotate=False):
     """
     if not os.path.exists(config_path):
         raise RuntimeError('Configfile {0} does not exist'.format(config_path))
-    content = read_file_or_url('file://{0}'.format(config_path)).contents
+    content = load_file(config_path, decode=False)
     if not content.startswith(CLOUD_CONFIG_HEADER):
         errors = (
-            ('header', 'File {0} needs to begin with "{1}"'.format(
+            ('format-l1.c1', 'File {0} needs to begin with "{1}"'.format(
                 config_path, CLOUD_CONFIG_HEADER.decode())),)
-        raise SchemaValidationError(errors)
-
+        error = SchemaValidationError(errors)
+        if annotate:
+            print(annotated_cloudconfig_file({}, content, error.schema_errors))
+        raise error
     try:
         cloudconfig = yaml.safe_load(content)
-    except yaml.parser.ParserError as e:
-        errors = (
-            ('format', 'File {0} is not valid yaml. {1}'.format(
-                config_path, str(e))),)
-        raise SchemaValidationError(errors)
-
+    except (yaml.YAMLError) as e:
+        line = column = 1
+        mark = None
+        if hasattr(e, 'context_mark') and getattr(e, 'context_mark'):
+            mark = getattr(e, 'context_mark')
+        elif hasattr(e, 'problem_mark') and getattr(e, 'problem_mark'):
+            mark = getattr(e, 'problem_mark')
+        if mark:
+            line = mark.line + 1
+            column = mark.column + 1
+        errors = (('format-l{line}.c{col}'.format(line=line, col=column),
+                   'File {0} is not valid yaml. {1}'.format(
+                       config_path, str(e))),)
+        error = SchemaValidationError(errors)
+        if annotate:
+            print(annotated_cloudconfig_file({}, content, error.schema_errors))
+        raise error
     try:
         validate_cloudconfig_schema(
             cloudconfig, schema, strict=True)
@@ -176,7 +202,7 @@ def _schemapath_for_cloudconfig(config, original_content):
     list_index = 0
     RE_YAML_INDENT = r'^(\s*)'
     scopes = []
-    for line_number, line in enumerate(content_lines):
+    for line_number, line in enumerate(content_lines, 1):
         indent_depth = len(re.match(RE_YAML_INDENT, line).groups()[0])
         line = line.strip()
         if not line or line.startswith('#'):
@@ -208,8 +234,8 @@ def _schemapath_for_cloudconfig(config, original_content):
                 scopes.append((indent_depth + 2, key + '.0'))
                 for inner_list_index in range(0, len(yaml.safe_load(value))):
                     list_key = key + '.' + str(inner_list_index)
-                    schema_line_numbers[list_key] = line_number + 1
-        schema_line_numbers[key] = line_number + 1
+                    schema_line_numbers[list_key] = line_number
+        schema_line_numbers[key] = line_number
     return schema_line_numbers
 
 
@@ -297,8 +323,8 @@ def get_schema():
 
     configs_dir = os.path.dirname(os.path.abspath(__file__))
     potential_handlers = find_modules(configs_dir)
-    for (fname, mod_name) in potential_handlers.items():
-        mod_locs, looked_locs = importer.find_module(
+    for (_fname, mod_name) in potential_handlers.items():
+        mod_locs, _looked_locs = importer.find_module(
             mod_name, ['cloudinit.config'], ['schema'])
         if mod_locs:
             mod = importer.import_module(mod_locs[0])
@@ -337,9 +363,11 @@ def handle_schema_args(name, args):
         try:
             validate_cloudconfig_file(
                 args.config_file, full_schema, args.annotate)
-        except (SchemaValidationError, RuntimeError) as e:
+        except SchemaValidationError as e:
             if not args.annotate:
                 error(str(e))
+        except RuntimeError as e:
+            error(str(e))
         else:
             print("Valid cloud-config file {0}".format(args.config_file))
     if args.doc:
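The new format-l<line>.c<col> pseudo-paths let --annotate point at the offending position even when the YAML never parsed. The marks come straight from PyYAML and are zero-based, hence the +1 shifts above; a standalone sketch:

    import yaml

    try:
        yaml.safe_load('#cloud-config\nbootcmd:\n- [ echo\n')
    except yaml.YAMLError as e:
        mark = (getattr(e, 'context_mark', None) or
                getattr(e, 'problem_mark', None))
        if mark:
            print('format-l{0}.c{1}'.format(mark.line + 1, mark.column + 1))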
diff --git a/cloudinit/config/tests/test_disable_ec2_metadata.py b/cloudinit/config/tests/test_disable_ec2_metadata.py
new file mode 100644
index 0000000..67646b0
--- /dev/null
+++ b/cloudinit/config/tests/test_disable_ec2_metadata.py
@@ -0,0 +1,50 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Tests cc_disable_ec2_metadata handler"""
+
+import cloudinit.config.cc_disable_ec2_metadata as ec2_meta
+
+from cloudinit.tests.helpers import CiTestCase, mock
+
+import logging
+
+LOG = logging.getLogger(__name__)
+
+DISABLE_CFG = {'disable_ec2_metadata': 'true'}
+
+
+class TestEC2MetadataRoute(CiTestCase):
+
+    with_logs = True
+
+    @mock.patch('cloudinit.config.cc_disable_ec2_metadata.util.which')
+    @mock.patch('cloudinit.config.cc_disable_ec2_metadata.util.subp')
+    def test_disable_ifconfig(self, m_subp, m_which):
+        """Set the route if ifconfig command is available"""
+        m_which.side_effect = lambda x: x if x == 'ifconfig' else None
+        ec2_meta.handle('foo', DISABLE_CFG, None, LOG, None)
+        m_subp.assert_called_with(
+            ['route', 'add', '-host', '169.254.169.254', 'reject'],
+            capture=False)
+
+    @mock.patch('cloudinit.config.cc_disable_ec2_metadata.util.which')
+    @mock.patch('cloudinit.config.cc_disable_ec2_metadata.util.subp')
+    def test_disable_ip(self, m_subp, m_which):
+        """Set the route if ip command is available"""
+        m_which.side_effect = lambda x: x if x == 'ip' else None
+        ec2_meta.handle('foo', DISABLE_CFG, None, LOG, None)
+        m_subp.assert_called_with(
+            ['ip', 'route', 'add', 'prohibit', '169.254.169.254'],
+            capture=False)
+
+    @mock.patch('cloudinit.config.cc_disable_ec2_metadata.util.which')
+    @mock.patch('cloudinit.config.cc_disable_ec2_metadata.util.subp')
+    def test_disable_no_tool(self, m_subp, m_which):
+        """Log error when neither route nor ip commands are available"""
+        m_which.return_value = None  # Find neither ifconfig nor ip
+        ec2_meta.handle('foo', DISABLE_CFG, None, LOG, None)
+        self.assertEqual(
+            [mock.call('ip'), mock.call('ifconfig')], m_which.call_args_list)
+        m_subp.assert_not_called()
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/config/tests/test_set_passwords.py b/cloudinit/config/tests/test_set_passwords.py
new file mode 100644
index 0000000..b051ec8
--- /dev/null
+++ b/cloudinit/config/tests/test_set_passwords.py
@@ -0,0 +1,71 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import mock
+
+from cloudinit.config import cc_set_passwords as setpass
+from cloudinit.tests.helpers import CiTestCase
+from cloudinit import util
+
+MODPATH = "cloudinit.config.cc_set_passwords."
+
+
+class TestHandleSshPwauth(CiTestCase):
+    """Test cc_set_passwords handling of ssh_pwauth in handle_ssh_pwauth."""
+
+    with_logs = True
+
+    @mock.patch(MODPATH + "util.subp")
+    def test_unknown_value_logs_warning(self, m_subp):
+        setpass.handle_ssh_pwauth("floo")
+        self.assertIn("Unrecognized value: ssh_pwauth=floo",
+                      self.logs.getvalue())
+        m_subp.assert_not_called()
+
+    @mock.patch(MODPATH + "update_ssh_config", return_value=True)
+    @mock.patch(MODPATH + "util.subp")
+    def test_systemctl_as_service_cmd(self, m_subp, m_update_ssh_config):
+        """If systemctl in service cmd: systemctl restart name."""
+        setpass.handle_ssh_pwauth(
+            True, service_cmd=["systemctl"], service_name="myssh")
+        self.assertEqual(mock.call(["systemctl", "restart", "myssh"]),
+                         m_subp.call_args)
+
+    @mock.patch(MODPATH + "update_ssh_config", return_value=True)
+    @mock.patch(MODPATH + "util.subp")
+    def test_service_as_service_cmd(self, m_subp, m_update_ssh_config):
+        """If systemctl in service cmd: systemctl restart name."""
+        setpass.handle_ssh_pwauth(
+            True, service_cmd=["service"], service_name="myssh")
+        self.assertEqual(mock.call(["service", "myssh", "restart"]),
+                         m_subp.call_args)
+
+    @mock.patch(MODPATH + "update_ssh_config", return_value=False)
+    @mock.patch(MODPATH + "util.subp")
+    def test_not_restarted_if_not_updated(self, m_subp, m_update_ssh_config):
+        """If config is not updated, then no system restart should be done."""
+        setpass.handle_ssh_pwauth(True)
+        m_subp.assert_not_called()
+        self.assertIn("No need to restart ssh", self.logs.getvalue())
+
+    @mock.patch(MODPATH + "update_ssh_config", return_value=True)
+    @mock.patch(MODPATH + "util.subp")
+    def test_unchanged_does_nothing(self, m_subp, m_update_ssh_config):
+        """If 'unchanged', then no updates to config and no restart."""
+        setpass.handle_ssh_pwauth(
+            "unchanged", service_cmd=["systemctl"], service_name="myssh")
+        m_update_ssh_config.assert_not_called()
+        m_subp.assert_not_called()
+
+    @mock.patch(MODPATH + "util.subp")
+    def test_valid_change_values(self, m_subp):
+        """If value is a valid changen value, then update should be called."""
+        upname = MODPATH + "update_ssh_config"
+        optname = "PasswordAuthentication"
+        for value in util.FALSE_STRINGS + util.TRUE_STRINGS:
+            optval = "yes" if value in util.TRUE_STRINGS else "no"
+            with mock.patch(upname, return_value=False) as m_update:
+                setpass.handle_ssh_pwauth(value)
+                m_update.assert_called_with({optname: optval})
+        m_subp.assert_not_called()
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/config/tests/test_snap.py b/cloudinit/config/tests/test_snap.py
index c5b4a9d..34c80f1 100644
--- a/cloudinit/config/tests/test_snap.py
+++ b/cloudinit/config/tests/test_snap.py
@@ -9,7 +9,7 @@ from cloudinit.config.cc_snap import (
 from cloudinit.config.schema import validate_cloudconfig_schema
 from cloudinit import util
 from cloudinit.tests.helpers import (
-    CiTestCase, mock, wrap_and_call, skipUnlessJsonSchema)
+    CiTestCase, SchemaTestCaseMixin, mock, wrap_and_call, skipUnlessJsonSchema)
 
 
 SYSTEM_USER_ASSERTION = """\
@@ -245,9 +245,10 @@ class TestRunCommands(CiTestCase):
 
 
 @skipUnlessJsonSchema()
-class TestSchema(CiTestCase):
+class TestSchema(CiTestCase, SchemaTestCaseMixin):
 
     with_logs = True
+    schema = schema
 
     def test_schema_warns_on_snap_not_as_dict(self):
         """If the snap configuration is not a dict, emit a warning."""
@@ -340,6 +341,30 @@ class TestSchema(CiTestCase):
             {'snap': {'assertions': {'01': 'also valid'}}}, schema)
         self.assertEqual('', self.logs.getvalue())
 
+    def test_duplicates_are_fine_array_array(self):
+        """Duplicated commands array/array entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': [["echo", "bye"], ["echo" "bye"]]},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_array_string(self):
+        """Duplicated commands array/string entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': ["echo bye", "echo bye"]},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_dict_array(self):
+        """Duplicated commands dict/array entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': {'00': ["echo", "bye"], '01': ["echo", "bye"]}},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_dict_string(self):
+        """Duplicated commands dict/string entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': {'00': "echo bye", '01': "echo bye"}},
+            "command entries can be duplicate.")
+
 
 class TestHandle(CiTestCase):
 
diff --git a/cloudinit/config/tests/test_ubuntu_advantage.py b/cloudinit/config/tests/test_ubuntu_advantage.py
index f2a59fa..f1beeff 100644
--- a/cloudinit/config/tests/test_ubuntu_advantage.py
+++ b/cloudinit/config/tests/test_ubuntu_advantage.py
@@ -7,7 +7,8 @@ from cloudinit.config.cc_ubuntu_advantage import (
     handle, maybe_install_ua_tools, run_commands, schema)
 from cloudinit.config.schema import validate_cloudconfig_schema
 from cloudinit import util
-from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJsonSchema
+from cloudinit.tests.helpers import (
+    CiTestCase, mock, SchemaTestCaseMixin, skipUnlessJsonSchema)
 
 
 # Module path used in mocks
@@ -105,9 +106,10 @@ class TestRunCommands(CiTestCase):
 
 
 @skipUnlessJsonSchema()
-class TestSchema(CiTestCase):
+class TestSchema(CiTestCase, SchemaTestCaseMixin):
 
     with_logs = True
+    schema = schema
 
     def test_schema_warns_on_ubuntu_advantage_not_as_dict(self):
         """If ubuntu-advantage configuration is not a dict, emit a warning."""
@@ -169,6 +171,30 @@ class TestSchema(CiTestCase):
             {'ubuntu-advantage': {'commands': {'01': 'also valid'}}}, schema)
         self.assertEqual('', self.logs.getvalue())
 
+    def test_duplicates_are_fine_array_array(self):
+        """Duplicated commands array/array entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': [["echo", "bye"], ["echo" "bye"]]},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_array_string(self):
+        """Duplicated commands array/string entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': ["echo bye", "echo bye"]},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_dict_array(self):
+        """Duplicated commands dict/array entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': {'00': ["echo", "bye"], '01': ["echo", "bye"]}},
+            "command entries can be duplicate.")
+
+    def test_duplicates_are_fine_dict_string(self):
+        """Duplicated commands dict/string entries are allowed."""
+        self.assertSchemaValid(
+            {'commands': {'00': "echo bye", '01': "echo bye"}},
+            "command entries can be duplicate.")
+
 
 class TestHandle(CiTestCase):
 
diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
index 55260ea..ab0b077 100755
--- a/cloudinit/distros/__init__.py
+++ b/cloudinit/distros/__init__.py
@@ -49,6 +49,9 @@ LOG = logging.getLogger(__name__)
 # It could break when Amazon adds new regions and new AZs.
 _EC2_AZ_RE = re.compile('^[a-z][a-z]-(?:[a-z]+-)+[0-9][a-z]$')
 
+# Default NTP Client Configurations
+PREFERRED_NTP_CLIENTS = ['chrony', 'systemd-timesyncd', 'ntp', 'ntpdate']
+
 
 @six.add_metaclass(abc.ABCMeta)
 class Distro(object):
@@ -60,6 +63,7 @@ class Distro(object):
     tz_zone_dir = "/usr/share/zoneinfo"
     init_cmd = ['service']  # systemctl, service etc
     renderer_configs = {}
+    _preferred_ntp_clients = None
 
     def __init__(self, name, cfg, paths):
         self._paths = paths
@@ -339,6 +343,14 @@ class Distro(object):
             contents.write("%s\n" % (eh))
             util.write_file(self.hosts_fn, contents.getvalue(), mode=0o644)
 
+    @property
+    def preferred_ntp_clients(self):
+        """Allow distro to determine the preferred ntp client list"""
+        if not self._preferred_ntp_clients:
+            self._preferred_ntp_clients = list(PREFERRED_NTP_CLIENTS)
+
+        return self._preferred_ntp_clients
+
     def _bring_up_interface(self, device_name):
         cmd = ['ifup', device_name]
         LOG.debug("Attempting to run bring up interface %s using command %s",
@@ -519,7 +531,7 @@ class Distro(object):
             self.lock_passwd(name)
 
         # Configure sudo access
-        if 'sudo' in kwargs:
+        if 'sudo' in kwargs and kwargs['sudo'] is not False:
             self.write_sudo_rules(name, kwargs['sudo'])
 
         # Import SSH keys
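The one-line sudo guard is the user-visible part of LP: #1771468: previously a user entry with sudo: false had the literal False written through write_sudo_rules(); now False short-circuits exactly like an absent key. A condensed sketch of the new guard, with kwargs mimicking a parsed users: entry and print standing in for write_sudo_rules:

    def configure_sudo(name, kwargs, write_sudo_rules):
        """Sketch of the guard now used in Distro.create_user()."""
        # 'sudo' absent, or explicitly False, means: write no rules at all.
        if 'sudo' in kwargs and kwargs['sudo'] is not False:
            write_sudo_rules(name, kwargs['sudo'])

    configure_sudo('bob', {'sudo': False}, print)      # writes nothing
    configure_sudo('alice', {'sudo': 'ALL=(ALL) NOPASSWD:ALL'}, print)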
diff --git a/cloudinit/distros/freebsd.py b/cloudinit/distros/freebsd.py
index 754d3df..ff22d56 100644
--- a/cloudinit/distros/freebsd.py
+++ b/cloudinit/distros/freebsd.py
@@ -110,15 +110,15 @@ class Distro(distros.Distro):
         if dev.startswith('lo'):
             return dev
 
-        n = re.search('\d+$', dev)
+        n = re.search(r'\d+$', dev)
         index = n.group(0)
 
-        (out, err) = util.subp(['ifconfig', '-a'])
+        (out, _err) = util.subp(['ifconfig', '-a'])
         ifconfigoutput = [x for x in (out.strip()).splitlines()
                           if len(x.split()) > 0]
         bsddev = 'NOT_FOUND'
         for line in ifconfigoutput:
-            m = re.match('^\w+', line)
+            m = re.match(r'^\w+', line)
             if m:
                 if m.group(0).startswith('lo'):
                     continue
@@ -128,7 +128,7 @@ class Distro(distros.Distro):
                 break
 
         # Replace the index with the one we're after.
-        bsddev = re.sub('\d+$', index, bsddev)
+        bsddev = re.sub(r'\d+$', index, bsddev)
         LOG.debug("Using network interface %s", bsddev)
         return bsddev
 
@@ -266,7 +266,7 @@ class Distro(distros.Distro):
             self.lock_passwd(name)
 
         # Configure sudo access
-        if 'sudo' in kwargs:
+        if 'sudo' in kwargs and kwargs['sudo'] is not False:
             self.write_sudo_rules(name, kwargs['sudo'])
 
         # Import SSH keys
diff --git a/cloudinit/distros/opensuse.py b/cloudinit/distros/opensuse.py
index 162dfa0..9f90e95 100644
--- a/cloudinit/distros/opensuse.py
+++ b/cloudinit/distros/opensuse.py
@@ -208,4 +208,28 @@ class Distro(distros.Distro):
                                             nameservers, searchservers)
         return dev_names
 
+    @property
+    def preferred_ntp_clients(self):
+        """The preferred ntp client is dependent on the version."""
+
+        """Allow distro to determine the preferred ntp client list"""
+        if not self._preferred_ntp_clients:
+            distro_info = util.system_info()['dist']
+            name = distro_info[0]
+            major_ver = int(distro_info[1].split('.')[0])
+
+            # This is horribly complicated because of a case of
+            # "we do not care if versions should be increasing syndrome"
+            if (
+                (major_ver >= 15 and 'openSUSE' not in name) or
+                (major_ver >= 15 and 'openSUSE' in name and major_ver != 42)
+            ):
+                self._preferred_ntp_clients = ['chrony',
+                                               'systemd-timesyncd', 'ntp']
+            else:
+                self._preferred_ntp_clients = ['ntp',
+                                               'systemd-timesyncd', 'chrony']
+
+        return self._preferred_ntp_clients
+
 # vi: ts=4 expandtab
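Spelling out the branch above: SLES 15+ and openSUSE Leap 15+ prefer chrony, while openSUSE 42.x (whose major version exceeds 15 despite predating Leap 15) and older releases keep ntp first. A reduced sketch with illustrative version strings:

    def suse_preference(name, version):
        """Sketch of the opensuse.py preferred_ntp_clients branch."""
        major_ver = int(version.split('.')[0])
        if ((major_ver >= 15 and 'openSUSE' not in name) or
                (major_ver >= 15 and 'openSUSE' in name and
                 major_ver != 42)):
            return ['chrony', 'systemd-timesyncd', 'ntp']
        return ['ntp', 'systemd-timesyncd', 'chrony']

    print(suse_preference('SLES', '15'))             # chrony first
    print(suse_preference('openSUSE Leap', '42.3'))  # ntp first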
diff --git a/cloudinit/distros/ubuntu.py b/cloudinit/distros/ubuntu.py
index 82ca34f..6815410 100644
--- a/cloudinit/distros/ubuntu.py
+++ b/cloudinit/distros/ubuntu.py
@@ -10,12 +10,31 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit.distros import debian
+from cloudinit.distros import PREFERRED_NTP_CLIENTS
 from cloudinit import log as logging
+from cloudinit import util
+
+import copy
 
 LOG = logging.getLogger(__name__)
 
 
 class Distro(debian.Distro):
+
+    @property
+    def preferred_ntp_clients(self):
+        """The preferred ntp client is dependent on the version."""
+        if not self._preferred_ntp_clients:
+            (_name, _version, codename) = util.system_info()['dist']
+            # Xenial cloud-init only installed ntp, UbuntuCore has timesyncd.
+            if codename == "xenial" and not util.system_is_snappy():
+                self._preferred_ntp_clients = ['ntp']
+            else:
+                self._preferred_ntp_clients = (
+                    copy.deepcopy(PREFERRED_NTP_CLIENTS))
+        return self._preferred_ntp_clients
+
     pass
 
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/ec2_utils.py b/cloudinit/ec2_utils.py
index dc3f0fc..3b7b17f 100644
--- a/cloudinit/ec2_utils.py
+++ b/cloudinit/ec2_utils.py
@@ -150,11 +150,9 @@ def get_instance_userdata(api_version='latest',
         # NOT_FOUND occurs) and just in that case returning an empty string.
         exception_cb = functools.partial(_skip_retry_on_codes,
                                          SKIP_USERDATA_CODES)
-        response = util.read_file_or_url(ud_url,
-                                         ssl_details=ssl_details,
-                                         timeout=timeout,
-                                         retries=retries,
-                                         exception_cb=exception_cb)
+        response = url_helper.read_file_or_url(
+            ud_url, ssl_details=ssl_details, timeout=timeout,
+            retries=retries, exception_cb=exception_cb)
         user_data = response.contents
     except url_helper.UrlError as e:
         if e.code not in SKIP_USERDATA_CODES:
@@ -169,9 +167,9 @@ def _get_instance_metadata(tree, api_version='latest',
                            ssl_details=None, timeout=5, retries=5,
                            leaf_decoder=None):
     md_url = url_helper.combine_url(metadata_address, api_version, tree)
-    caller = functools.partial(util.read_file_or_url,
-                               ssl_details=ssl_details, timeout=timeout,
-                               retries=retries)
+    caller = functools.partial(
+        url_helper.read_file_or_url, ssl_details=ssl_details,
+        timeout=timeout, retries=retries)
 
     def mcaller(url):
         return caller(url).contents
diff --git a/cloudinit/handlers/upstart_job.py b/cloudinit/handlers/upstart_job.py
index 1ca92d4..dc33876 100644
--- a/cloudinit/handlers/upstart_job.py
+++ b/cloudinit/handlers/upstart_job.py
@@ -97,7 +97,7 @@ def _has_suitable_upstart():
             else:
                 util.logexc(LOG, "dpkg --compare-versions failed [%s]",
                             e.exit_code)
-        except Exception as e:
+        except Exception:
             util.logexc(LOG, "dpkg --compare-versions failed")
         return False
     else:
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index f69c0ef..3ffde52 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -107,6 +107,21 @@ def is_bond(devname):
     return os.path.exists(sys_dev_path(devname, "bonding"))
 
 
+def is_renamed(devname):
+    """
+    /* interface name assignment types (sysfs name_assign_type attribute) */
+    #define NET_NAME_UNKNOWN	0	/* unknown origin (not exposed to user) */
+    #define NET_NAME_ENUM		1	/* enumerated by kernel */
+    #define NET_NAME_PREDICTABLE	2	/* predictably named by the kernel */
+    #define NET_NAME_USER		3	/* provided by user-space */
+    #define NET_NAME_RENAMED	4	/* renamed by user-space */
+    """
+    name_assign_type = read_sys_net_safe(devname, 'name_assign_type')
+    if name_assign_type and name_assign_type in ['3', '4']:
+        return True
+    return False
+
+
 def is_vlan(devname):
     uevent = str(read_sys_net_safe(devname, "uevent"))
     return 'DEVTYPE=vlan' in uevent.splitlines()
@@ -180,6 +195,17 @@ def find_fallback_nic(blacklist_drivers=None):
     if not blacklist_drivers:
         blacklist_drivers = []
 
+    if 'net.ifnames=0' in util.get_cmdline():
+        LOG.debug('Stable ifnames disabled by net.ifnames=0 in /proc/cmdline')
+    else:
+        unstable = [device for device in get_devicelist()
+                    if device != 'lo' and not is_renamed(device)]
+        if len(unstable):
+            LOG.debug('Found unstable nic names: %s; calling udevadm settle',
+                      unstable)
+            msg = 'Waiting for udev events to settle'
+            util.log_time(LOG.debug, msg, func=util.udevadm_settle)
+
     # get list of interfaces that could have connections
     invalid_interfaces = set(['lo'])
     potential_interfaces = set([device for device in get_devicelist()
@@ -295,7 +321,7 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
 
     def _version_2(netcfg):
         renames = []
-        for key, ent in netcfg.get('ethernets', {}).items():
+        for ent in netcfg.get('ethernets', {}).values():
             # only rename if configured to do so
             name = ent.get('set-name')
             if not name:
@@ -333,8 +359,12 @@ def interface_has_own_mac(ifname, strict=False):
       1: randomly generated   3: set using dev_set_mac_address"""
 
     assign_type = read_sys_net_int(ifname, "addr_assign_type")
-    if strict and assign_type is None:
-        raise ValueError("%s had no addr_assign_type.")
+    if assign_type is None:
+        # None is returned if this nic had no 'addr_assign_type' entry.
+        # if strict, raise an error, if not return True.
+        if strict:
+            raise ValueError("%s had no addr_assign_type." % ifname)
+        return True
     return assign_type in (0, 1, 3)
 
 
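Two behaviour changes land in net/__init__.py: find_fallback_nic() now waits for udev to finish renaming interfaces unless net.ifnames=0 is on the kernel command line, and interface_has_own_mac() treats a missing addr_assign_type as "own MAC" when not strict. The stability test keys off the kernel's name_assign_type attribute; a sketch with the sysfs read stubbed out:

    # name_assign_type per the kernel header quoted above: 3 = provided by
    # user-space, 4 = renamed by user-space; both mean udev is done renaming.
    def is_renamed_sketch(read_sys_net_safe, devname):
        name_assign_type = read_sys_net_safe(devname, 'name_assign_type')
        return bool(name_assign_type and name_assign_type in ('3', '4'))

    # Stub sysfs: eth0 still kernel-enumerated, ens3 renamed by udev.
    sysfs = {('eth0', 'name_assign_type'): '1',
             ('ens3', 'name_assign_type'): '4'}

    def read(dev, attr):
        return sysfs.get((dev, attr))

    unstable = [d for d in ('eth0', 'ens3', 'lo')
                if d != 'lo' and not is_renamed_sketch(read, d)]
    print(unstable)   # ['eth0'] -> the real code would call udevadm settle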
diff --git a/cloudinit/net/cmdline.py b/cloudinit/net/cmdline.py
index 9e9fe0f..f89a0f7 100755
--- a/cloudinit/net/cmdline.py
+++ b/cloudinit/net/cmdline.py
@@ -65,7 +65,7 @@ def _klibc_to_config_entry(content, mac_addrs=None):
         iface['mac_address'] = mac_addrs[name]
 
     # Handle both IPv4 and IPv6 values
-    for v, pre in (('ipv4', 'IPV4'), ('ipv6', 'IPV6')):
+    for pre in ('IPV4', 'IPV6'):
         # if no IPV4ADDR or IPV6ADDR, then go on.
         if pre + "ADDR" not in data:
             continue
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
index 087c0c0..12cf509 100644
--- a/cloudinit/net/dhcp.py
+++ b/cloudinit/net/dhcp.py
@@ -216,7 +216,7 @@ def networkd_get_option_from_leases(keyname, leases_d=None):
     if leases_d is None:
         leases_d = NETWORKD_LEASES_DIR
     leases = networkd_load_leases(leases_d=leases_d)
-    for ifindex, data in sorted(leases.items()):
+    for _ifindex, data in sorted(leases.items()):
         if data.get(keyname):
             return data[keyname]
     return None
diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py
index c6a71d1..bd20a36 100644
--- a/cloudinit/net/eni.py
+++ b/cloudinit/net/eni.py
@@ -10,9 +10,12 @@ from . import ParserError
 from . import renderer
 from .network_state import subnet_is_ipv6
 
+from cloudinit import log as logging
 from cloudinit import util
 
 
+LOG = logging.getLogger(__name__)
+
 NET_CONFIG_COMMANDS = [
     "pre-up", "up", "post-up", "down", "pre-down", "post-down",
 ]
@@ -61,7 +64,7 @@ def _iface_add_subnet(iface, subnet):
 
 
 # TODO: switch to valid_map for attrs
-def _iface_add_attrs(iface, index):
+def _iface_add_attrs(iface, index, ipv4_subnet_mtu):
     # If the index is non-zero, this is an alias interface. Alias interfaces
     # represent additional interface addresses, and should not have additional
     # attributes. (extra attributes here are almost always either incorrect,
@@ -100,6 +103,13 @@ def _iface_add_attrs(iface, index):
             value = 'on' if iface[key] else 'off'
         if not value or key in ignore_map:
             continue
+        if key == 'mtu' and ipv4_subnet_mtu:
+            if value != ipv4_subnet_mtu:
+                LOG.warning(
+                    "Network config: ignoring %s device-level mtu:%s because"
+                    " ipv4 subnet-level mtu:%s provided.",
+                    iface['name'], value, ipv4_subnet_mtu)
+            continue
         if key in multiline_keys:
             for v in value:
                 content.append("    {0} {1}".format(renames.get(key, key), v))
@@ -377,12 +387,15 @@ class Renderer(renderer.Renderer):
         subnets = iface.get('subnets', {})
         if subnets:
             for index, subnet in enumerate(subnets):
+                ipv4_subnet_mtu = None
                 iface['index'] = index
                 iface['mode'] = subnet['type']
                 iface['control'] = subnet.get('control', 'auto')
                 subnet_inet = 'inet'
                 if subnet_is_ipv6(subnet):
                     subnet_inet += '6'
+                else:
+                    ipv4_subnet_mtu = subnet.get('mtu')
                 iface['inet'] = subnet_inet
                 if subnet['type'].startswith('dhcp'):
                     iface['mode'] = 'dhcp'
@@ -397,7 +410,7 @@ class Renderer(renderer.Renderer):
                     _iface_start_entry(
                         iface, index, render_hwaddress=render_hwaddress) +
                     _iface_add_subnet(iface, subnet) +
-                    _iface_add_attrs(iface, index)
+                    _iface_add_attrs(iface, index, ipv4_subnet_mtu)
                 )
                 for route in subnet.get('routes', []):
                     lines.extend(self._render_route(route, indent="    "))
@@ -409,7 +422,8 @@ class Renderer(renderer.Renderer):
             if 'bond-master' in iface or 'bond-slaves' in iface:
                 lines.append("auto {name}".format(**iface))
             lines.append("iface {name} {inet} {mode}".format(**iface))
-            lines.extend(_iface_add_attrs(iface, index=0))
+            lines.extend(
+                _iface_add_attrs(iface, index=0, ipv4_subnet_mtu=None))
             sections.append(lines)
         return sections
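
The eni hunks above make an ipv4 subnet-level mtu take precedence over a
device-level mtu, logging a warning when the two disagree. The precedence rule
is small enough to sketch in isolation (illustrative, not the renderer's actual
helper):

    def effective_mtu(device_mtu, ipv4_subnet_mtu):
        """Return the mtu a renderer should emit; the subnet-level value wins."""
        if ipv4_subnet_mtu is not None:
            # The renderer warns here if device_mtu is set and differs.
            return ipv4_subnet_mtu
        return device_mtu

    assert effective_mtu(1500, 9000) == 9000   # subnet value wins
    assert effective_mtu(1500, None) == 1500   # fall back to device value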
 
diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py
index 6344348..4014363 100644
--- a/cloudinit/net/netplan.py
+++ b/cloudinit/net/netplan.py
@@ -34,7 +34,7 @@ def _get_params_dict_by_match(config, match):
                 if key.startswith(match))
 
 
-def _extract_addresses(config, entry):
+def _extract_addresses(config, entry, ifname):
     """This method parse a cloudinit.net.network_state dictionary (config) and
        maps netstate keys/values into a dictionary (entry) to represent
        netplan yaml.
@@ -124,6 +124,15 @@ def _extract_addresses(config, entry):
 
             addresses.append(addr)
 
+    if 'mtu' in config:
+        entry_mtu = entry.get('mtu')
+        if entry_mtu and config['mtu'] != entry_mtu:
+            LOG.warning(
+                "Network config: ignoring %s device-level mtu:%s because"
+                " ipv4 subnet-level mtu:%s provided.",
+                ifname, config['mtu'], entry_mtu)
+        else:
+            entry['mtu'] = config['mtu']
     if len(addresses) > 0:
         entry.update({'addresses': addresses})
     if len(routes) > 0:
@@ -262,10 +271,7 @@ class Renderer(renderer.Renderer):
                     else:
                         del eth['match']
                         del eth['set-name']
-                if 'mtu' in ifcfg:
-                    eth['mtu'] = ifcfg.get('mtu')
-
-                _extract_addresses(ifcfg, eth)
+                _extract_addresses(ifcfg, eth, ifname)
                 ethernets.update({ifname: eth})
 
             elif if_type == 'bond':
@@ -288,7 +294,7 @@ class Renderer(renderer.Renderer):
                 slave_interfaces = ifcfg.get('bond-slaves')
                 if slave_interfaces == 'none':
                     _extract_bond_slaves_by_name(interfaces, bond, ifname)
-                _extract_addresses(ifcfg, bond)
+                _extract_addresses(ifcfg, bond, ifname)
                 bonds.update({ifname: bond})
 
             elif if_type == 'bridge':
@@ -321,7 +327,7 @@ class Renderer(renderer.Renderer):
 
                 if len(br_config) > 0:
                     bridge.update({'parameters': br_config})
-                _extract_addresses(ifcfg, bridge)
+                _extract_addresses(ifcfg, bridge, ifname)
                 bridges.update({ifname: bridge})
 
             elif if_type == 'vlan':
@@ -333,7 +339,7 @@ class Renderer(renderer.Renderer):
                 macaddr = ifcfg.get('mac_address', None)
                 if macaddr is not None:
                     vlan['macaddress'] = macaddr.lower()
-                _extract_addresses(ifcfg, vlan)
+                _extract_addresses(ifcfg, vlan, ifname)
                 vlans.update({ifname: vlan})
 
         # inject global nameserver values under each all interface which
diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
index 6d63e5c..72c803e 100644
--- a/cloudinit/net/network_state.py
+++ b/cloudinit/net/network_state.py
@@ -7,6 +7,8 @@
 import copy
 import functools
 import logging
+import socket
+import struct
 
 import six
 
@@ -886,12 +888,9 @@ def net_prefix_to_ipv4_mask(prefix):
     This is the inverse of ipv4_mask_to_net_prefix.
         24 -> "255.255.255.0"
     Also supports input as a string."""
-
-    mask = [0, 0, 0, 0]
-    for i in list(range(0, int(prefix))):
-        idx = int(i / 8)
-        mask[idx] = mask[idx] + (1 << (7 - i % 8))
-    return ".".join([str(x) for x in mask])
+    mask = socket.inet_ntoa(
+        struct.pack(">I", (0xffffffff << (32 - int(prefix)) & 0xffffffff)))
+    return mask
 
 
 def ipv4_mask_to_net_prefix(mask):
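
The rewritten net_prefix_to_ipv4_mask above replaces the per-bit loop with
integer arithmetic: a /N mask is just the top N bits of a 32-bit word, packed
big-endian and rendered dotted-quad by inet_ntoa. A worked example of the same
computation:

    import socket
    import struct

    def prefix_to_mask(prefix):
        # 0xffffffff << (32 - N) sets the top N bits; the final
        # & 0xffffffff truncates the result back to 32 bits.
        word = (0xffffffff << (32 - int(prefix))) & 0xffffffff
        return socket.inet_ntoa(struct.pack('>I', word))

    assert prefix_to_mask(24) == '255.255.255.0'
    assert prefix_to_mask(22) == '255.255.252.0'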
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 39d89c4..3d71923 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -287,7 +287,6 @@ class Renderer(renderer.Renderer):
             if subnet_type == 'dhcp6':
                 iface_cfg['IPV6INIT'] = True
                 iface_cfg['DHCPV6C'] = True
-                iface_cfg['BOOTPROTO'] = 'dhcp'
             elif subnet_type in ['dhcp4', 'dhcp']:
                 iface_cfg['BOOTPROTO'] = 'dhcp'
             elif subnet_type == 'static':
@@ -305,6 +304,13 @@ class Renderer(renderer.Renderer):
                     mtu_key = 'IPV6_MTU'
                     iface_cfg['IPV6INIT'] = True
                 if 'mtu' in subnet:
+                    mtu_mismatch = bool(mtu_key in iface_cfg and
+                                        subnet['mtu'] != iface_cfg[mtu_key])
+                    if mtu_mismatch:
+                        LOG.warning(
+                            'Network config: ignoring %s device-level mtu:%s'
+                            ' because ipv4 subnet-level mtu:%s provided.',
+                            iface_cfg.name, iface_cfg[mtu_key], subnet['mtu'])
                     iface_cfg[mtu_key] = subnet['mtu']
             elif subnet_type == 'manual':
                 # If the subnet has an MTU setting, then ONBOOT=True
@@ -364,7 +370,7 @@ class Renderer(renderer.Renderer):
 
     @classmethod
     def _render_subnet_routes(cls, iface_cfg, route_cfg, subnets):
-        for i, subnet in enumerate(subnets, start=len(iface_cfg.children)):
+        for _, subnet in enumerate(subnets, start=len(iface_cfg.children)):
             for route in subnet.get('routes', []):
                 is_ipv6 = subnet.get('ipv6') or is_ipv6_addr(route['gateway'])
 
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 276556e..5c017d1 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -199,6 +199,7 @@ class TestGenerateFallbackConfig(CiTestCase):
         self.sysdir = self.tmp_dir() + '/'
         self.m_sys_path.return_value = self.sysdir
         self.addCleanup(sys_mock.stop)
+        self.add_patch('cloudinit.net.util.udevadm_settle', 'm_settle')
 
     def test_generate_fallback_finds_connected_eth_with_mac(self):
         """generate_fallback_config finds any connected device with a mac."""
diff --git a/cloudinit/netinfo.py b/cloudinit/netinfo.py
index 993b26c..9ff929c 100644
--- a/cloudinit/netinfo.py
+++ b/cloudinit/netinfo.py
@@ -8,9 +8,11 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
+from copy import copy, deepcopy
 import re
 
 from cloudinit import log as logging
+from cloudinit.net.network_state import net_prefix_to_ipv4_mask
 from cloudinit import util
 
 from cloudinit.simpletable import SimpleTable
@@ -18,18 +20,90 @@ from cloudinit.simpletable import SimpleTable
 LOG = logging.getLogger()
 
 
-def netdev_info(empty=""):
-    fields = ("hwaddr", "addr", "bcast", "mask")
-    (ifcfg_out, _err) = util.subp(["ifconfig", "-a"], rcs=[0, 1])
+DEFAULT_NETDEV_INFO = {
+    "ipv4": [],
+    "ipv6": [],
+    "hwaddr": "",
+    "up": False
+}
+
+
+def _netdev_info_iproute(ipaddr_out):
+    """
+    Get network device dicts from 'ip addr show' output.
+
+    @param ipaddr_out: Output string from 'ip addr show' command.
+
+    @returns: A dict of device info keyed by network device name containing
+              device configuration values.
+    @raise: TypeError if ipaddr_out isn't a string.
+    """
+    devs = {}
+    dev_name = None
+    for num, line in enumerate(ipaddr_out.splitlines()):
+        m = re.match(r'^\d+:\s(?P<dev>[^:]+):\s+<(?P<flags>\S+)>\s+.*', line)
+        if m:
+            dev_name = m.group('dev').lower().split('@')[0]
+            flags = m.group('flags').split(',')
+            devs[dev_name] = {
+                'ipv4': [], 'ipv6': [], 'hwaddr': '',
+                'up': bool('UP' in flags and 'LOWER_UP' in flags),
+            }
+        elif 'inet6' in line:
+            m = re.match(
+                r'\s+inet6\s(?P<ip>\S+)\sscope\s(?P<scope6>\S+).*', line)
+            if not m:
+                LOG.warning(
+                    'Could not parse ip addr show: (line:%d) %s', num, line)
+                continue
+            devs[dev_name]['ipv6'].append(m.groupdict())
+        elif 'inet' in line:
+            m = re.match(
+                r'\s+inet\s(?P<cidr4>\S+)(\sbrd\s(?P<bcast>\S+))?\sscope\s'
+                r'(?P<scope>\S+).*', line)
+            if not m:
+                LOG.warning(
+                    'Could not parse ip addr show: (line:%d) %s', num, line)
+                continue
+            match = m.groupdict()
+            cidr4 = match.pop('cidr4')
+            addr, _, prefix = cidr4.partition('/')
+            if not prefix:
+                prefix = '32'
+            devs[dev_name]['ipv4'].append({
+                'ip': addr,
+                'bcast': match['bcast'] if match['bcast'] else '',
+                'mask': net_prefix_to_ipv4_mask(prefix),
+                'scope': match['scope']})
+        elif 'link' in line:
+            m = re.match(
+                r'\s+link/(?P<link_type>\S+)\s(?P<hwaddr>\S+).*', line)
+            if not m:
+                LOG.warning(
+                    'Could not parse ip addr show: (line:%d) %s', num, line)
+                continue
+            if m.group('link_type') == 'ether':
+                devs[dev_name]['hwaddr'] = m.group('hwaddr')
+            else:
+                devs[dev_name]['hwaddr'] = ''
+        else:
+            continue
+    return devs
+
+
+def _netdev_info_ifconfig(ifconfig_data):
+    # Parse 'ifconfig -a' output into a dict of per-device info.
     devs = {}
-    for line in str(ifcfg_out).splitlines():
+    for line in ifconfig_data.splitlines():
         if len(line) == 0:
             continue
         if line[0] not in ("\t", " "):
             curdev = line.split()[0]
-            devs[curdev] = {"up": False}
-            for field in fields:
-                devs[curdev][field] = ""
+            # current ifconfig pops a ':' on the end of the device
+            if curdev.endswith(':'):
+                curdev = curdev[:-1]
+            if curdev not in devs:
+                devs[curdev] = deepcopy(DEFAULT_NETDEV_INFO)
         toks = line.lower().strip().split()
         if toks[0] == "up":
             devs[curdev]['up'] = True
@@ -39,59 +113,164 @@ def netdev_info(empty=""):
             if re.search(r"flags=\d+<up,", toks[1]):
                 devs[curdev]['up'] = True
 
-        fieldpost = ""
-        if toks[0] == "inet6":
-            fieldpost = "6"
-
         for i in range(len(toks)):
-            # older net-tools (ubuntu) show 'inet addr:xx.yy',
-            # newer (freebsd and fedora) show 'inet xx.yy'
-            # just skip this 'inet' entry. (LP: #1285185)
-            try:
-                if ((toks[i] in ("inet", "inet6") and
-                     toks[i + 1].startswith("addr:"))):
-                    continue
-            except IndexError:
-                pass
-
-            # Couple the different items we're interested in with the correct
-            # field since FreeBSD/CentOS/Fedora differ in the output.
-            ifconfigfields = {
-                "addr:": "addr", "inet": "addr",
-                "bcast:": "bcast", "broadcast": "bcast",
-                "mask:": "mask", "netmask": "mask",
-                "hwaddr": "hwaddr", "ether": "hwaddr",
-                "scope": "scope",
-            }
-            for origfield, field in ifconfigfields.items():
-                target = "%s%s" % (field, fieldpost)
-                if devs[curdev].get(target, ""):
-                    continue
-                if toks[i] == "%s" % origfield:
-                    try:
-                        devs[curdev][target] = toks[i + 1]
-                    except IndexError:
-                        pass
-                elif toks[i].startswith("%s" % origfield):
-                    devs[curdev][target] = toks[i][len(field) + 1:]
-
-    if empty != "":
-        for (_devname, dev) in devs.items():
-            for field in dev:
-                if dev[field] == "":
-                    dev[field] = empty
+            if toks[i] == "inet":  # Create new ipv4 addr entry
+                devs[curdev]['ipv4'].append(
+                    {'ip': toks[i + 1].lstrip("addr:")})
+            elif toks[i].startswith("bcast:"):
+                devs[curdev]['ipv4'][-1]['bcast'] = toks[i].lstrip("bcast:")
+            elif toks[i] == "broadcast":
+                devs[curdev]['ipv4'][-1]['bcast'] = toks[i + 1]
+            elif toks[i].startswith("mask:"):
+                devs[curdev]['ipv4'][-1]['mask'] = toks[i].lstrip("mask:")
+            elif toks[i] == "netmask":
+                devs[curdev]['ipv4'][-1]['mask'] = toks[i + 1]
+            elif toks[i] == "hwaddr" or toks[i] == "ether":
+                devs[curdev]['hwaddr'] = toks[i + 1]
+            elif toks[i] == "inet6":
+                if toks[i + 1] == "addr:":
+                    devs[curdev]['ipv6'].append({'ip': toks[i + 2]})
+                else:
+                    devs[curdev]['ipv6'].append({'ip': toks[i + 1]})
+            elif toks[i] == "prefixlen":  # Add prefix to current ipv6 value
+                addr6 = devs[curdev]['ipv6'][-1]['ip'] + "/" + toks[i + 1]
+                devs[curdev]['ipv6'][-1]['ip'] = addr6
+            elif toks[i].startswith("scope:"):
+                devs[curdev]['ipv6'][-1]['scope6'] = toks[i].lstrip("scope:")
+            elif toks[i] == "scopeid":
+                res = re.match(r'.*<(\S+)>', toks[i + 1])
+                if res:
+                    devs[curdev]['ipv6'][-1]['scope6'] = res.group(1)
+    return devs
+
+
+def netdev_info(empty=""):
+    devs = {}
+    if util.which('ip'):
+        # Try iproute first of all
+        (ipaddr_out, _err) = util.subp(["ip", "addr", "show"])
+        devs = _netdev_info_iproute(ipaddr_out)
+    elif util.which('ifconfig'):
+        # Fall back to net-tools if iproute2 is not present
+        (ifcfg_out, _err) = util.subp(["ifconfig", "-a"], rcs=[0, 1])
+        devs = _netdev_info_ifconfig(ifcfg_out)
+    else:
+        LOG.warning(
+            "Could not print networks: missing 'ip' and 'ifconfig' commands")
 
+    if empty == "":
+        return devs
+
+    recurse_types = (dict, tuple, list)
+
+    def fill(data, new_val="", empty_vals=("", b"")):
+        """Recursively replace 'empty_vals' in data (dict, tuple, list)
+           with new_val"""
+        if isinstance(data, dict):
+            myiter = data.items()
+        elif isinstance(data, (tuple, list)):
+            myiter = enumerate(data)
+        else:
+            raise TypeError("Unexpected input to fill")
+
+        for key, val in myiter:
+            if val in empty_vals:
+                data[key] = new_val
+            elif isinstance(val, recurse_types):
+                fill(val, new_val)
+
+    fill(devs, new_val=empty)
     return devs
 
 
-def route_info():
-    (route_out, _err) = util.subp(["netstat", "-rn"], rcs=[0, 1])
+def _netdev_route_info_iproute(iproute_data):
+    """
+    Get network route dicts from ip route info.
+
+    @param iproute_data: Output string from ip route command.
+
+    @returns: A dict containing ipv4 and ipv6 route entries as lists. Each
+              item in the list is a route dictionary representing destination,
+              gateway, flags, genmask and interface information.
+    """
+
+    routes = {}
+    routes['ipv4'] = []
+    routes['ipv6'] = []
+    entries = iproute_data.splitlines()
+    default_route_entry = {
+        'destination': '', 'flags': '', 'gateway': '', 'genmask': '',
+        'iface': '', 'metric': ''}
+    for line in entries:
+        entry = copy(default_route_entry)
+        if not line:
+            continue
+        toks = line.split()
+        flags = ['U']
+        if toks[0] == "default":
+            entry['destination'] = "0.0.0.0"
+            entry['genmask'] = "0.0.0.0"
+        else:
+            if '/' in toks[0]:
+                (addr, cidr) = toks[0].split("/")
+            else:
+                addr = toks[0]
+                cidr = '32'
+                flags.append("H")
+                entry['genmask'] = net_prefix_to_ipv4_mask(cidr)
+            entry['destination'] = addr
+            entry['genmask'] = net_prefix_to_ipv4_mask(cidr)
+            entry['gateway'] = "0.0.0.0"
+        for i in range(len(toks)):
+            if toks[i] == "via":
+                entry['gateway'] = toks[i + 1]
+                flags.insert(1, "G")
+            if toks[i] == "dev":
+                entry["iface"] = toks[i + 1]
+            if toks[i] == "metric":
+                entry['metric'] = toks[i + 1]
+        entry['flags'] = ''.join(flags)
+        routes['ipv4'].append(entry)
+    try:
+        (iproute_data6, _err6) = util.subp(
+            ["ip", "--oneline", "-6", "route", "list", "table", "all"],
+            rcs=[0, 1])
+    except util.ProcessExecutionError:
+        pass
+    else:
+        entries6 = iproute_data6.splitlines()
+        for line in entries6:
+            entry = {}
+            if not line:
+                continue
+            toks = line.split()
+            if toks[0] == "default":
+                entry['destination'] = "::/0"
+                entry['flags'] = "UG"
+            else:
+                entry['destination'] = toks[0]
+                entry['gateway'] = "::"
+                entry['flags'] = "U"
+            for i in range(len(toks)):
+                if toks[i] == "via":
+                    entry['gateway'] = toks[i + 1]
+                    entry['flags'] = "UG"
+                if toks[i] == "dev":
+                    entry["iface"] = toks[i + 1]
+                if toks[i] == "metric":
+                    entry['metric'] = toks[i + 1]
+                if toks[i] == "expires":
+                    entry['flags'] = entry['flags'] + 'e'
+            routes['ipv6'].append(entry)
+    return routes
+
 
+def _netdev_route_info_netstat(route_data):
     routes = {}
     routes['ipv4'] = []
     routes['ipv6'] = []
 
-    entries = route_out.splitlines()[1:]
+    entries = route_data.splitlines()
     for line in entries:
         if not line:
             continue
@@ -101,8 +280,8 @@ def route_info():
         #  default      10.65.0.1  UGS      0  34920 vtnet0
         #
         # Linux netstat shows 2 more:
-        #  Destination  Gateway    Genmask  Flags MSS Window irtt Iface
-        #  0.0.0.0      10.65.0.1  0.0.0.0  UG      0 0         0 eth0
+        #  Destination  Gateway    Genmask  Flags Metric Ref    Use Iface
+        #  0.0.0.0      10.65.0.1  0.0.0.0  UG    0      0        0 eth0
         if (len(toks) < 6 or toks[0] == "Kernel" or
                 toks[0] == "Destination" or toks[0] == "Internet" or
                 toks[0] == "Internet6" or toks[0] == "Routing"):
@@ -125,31 +304,57 @@ def route_info():
         routes['ipv4'].append(entry)
 
     try:
-        (route_out6, _err6) = util.subp(["netstat", "-A", "inet6", "-n"],
-                                        rcs=[0, 1])
+        (route_data6, _err6) = util.subp(
+            ["netstat", "-A", "inet6", "--route", "--numeric"], rcs=[0, 1])
     except util.ProcessExecutionError:
         pass
     else:
-        entries6 = route_out6.splitlines()[1:]
+        entries6 = route_data6.splitlines()
         for line in entries6:
             if not line:
                 continue
             toks = line.split()
-            if (len(toks) < 6 or toks[0] == "Kernel" or
+            if (len(toks) < 7 or toks[0] == "Kernel" or
+                    toks[0] == "Destination" or toks[0] == "Internet" or
                     toks[0] == "Proto" or toks[0] == "Active"):
                 continue
             entry = {
-                'proto': toks[0],
-                'recv-q': toks[1],
-                'send-q': toks[2],
-                'local address': toks[3],
-                'foreign address': toks[4],
-                'state': toks[5],
+                'destination': toks[0],
+                'gateway': toks[1],
+                'flags': toks[2],
+                'metric': toks[3],
+                'ref': toks[4],
+                'use': toks[5],
+                'iface': toks[6],
             }
+            # skip lo interface on ipv6
+            if entry['iface'] == "lo":
+                continue
+            # strip /128 from address if it's included
+            if entry['destination'].endswith('/128'):
+                entry['destination'] = re.sub(
+                    r'\/128$', '', entry['destination'])
             routes['ipv6'].append(entry)
     return routes
 
 
+def route_info():
+    routes = {}
+    if util.which('ip'):
+        # Try iproute first of all
+        (iproute_out, _err) = util.subp(["ip", "-o", "route", "list"])
+        routes = _netdev_route_info_iproute(iproute_out)
+    elif util.which('netstat'):
+        # Fall back to net-tools if iproute2 is not present
+        (route_out, _err) = util.subp(
+            ["netstat", "--route", "--numeric", "--extend"], rcs=[0, 1])
+        routes = _netdev_route_info_netstat(route_out)
+    else:
+        LOG.warning(
+            "Could not print routes: missing 'ip' and 'netstat' commands")
+    return routes
+
+
 def getgateway():
     try:
         routes = route_info()
@@ -164,23 +369,36 @@ def getgateway():
 
 def netdev_pformat():
     lines = []
+    empty = "."
     try:
-        netdev = netdev_info(empty=".")
-    except Exception:
-        lines.append(util.center("Net device info failed", '!', 80))
+        netdev = netdev_info(empty=empty)
+    except Exception as e:
+        lines.append(
+            util.center(
+                "Net device info failed ({error})".format(error=str(e)),
+                '!', 80))
     else:
+        if not netdev:
+            return '\n'
         fields = ['Device', 'Up', 'Address', 'Mask', 'Scope', 'Hw-Address']
         tbl = SimpleTable(fields)
-        for (dev, d) in sorted(netdev.items()):
-            tbl.add_row([dev, d["up"], d["addr"], d["mask"], ".", d["hwaddr"]])
-            if d.get('addr6'):
-                tbl.add_row([dev, d["up"],
-                             d["addr6"], ".", d.get("scope6"), d["hwaddr"]])
+        for (dev, data) in sorted(netdev.items()):
+            for addr in data.get('ipv4'):
+                tbl.add_row(
+                    (dev, data["up"], addr["ip"], addr["mask"],
+                     addr.get('scope', empty), data["hwaddr"]))
+            for addr in data.get('ipv6'):
+                tbl.add_row(
+                    (dev, data["up"], addr["ip"], empty, addr["scope6"],
+                     data["hwaddr"]))
+            if len(data.get('ipv6')) + len(data.get('ipv4')) == 0:
+                tbl.add_row((dev, data["up"], empty, empty, empty,
+                             data["hwaddr"]))
         netdev_s = tbl.get_string()
         max_len = len(max(netdev_s.splitlines(), key=len))
         header = util.center("Net device info", "+", max_len)
         lines.extend([header, netdev_s])
-    return "\n".join(lines)
+    return "\n".join(lines) + "\n"
 
 
 def route_pformat():
@@ -188,7 +406,10 @@ def route_pformat():
     try:
         routes = route_info()
     except Exception as e:
-        lines.append(util.center('Route info failed', '!', 80))
+        lines.append(
+            util.center(
+                'Route info failed ({error})'.format(error=str(e)),
+                '!', 80))
         util.logexc(LOG, "Route info failed: %s" % e)
     else:
         if routes.get('ipv4'):
@@ -205,20 +426,20 @@ def route_pformat():
             header = util.center("Route IPv4 info", "+", max_len)
             lines.extend([header, route_s])
         if routes.get('ipv6'):
-            fields_v6 = ['Route', 'Proto', 'Recv-Q', 'Send-Q',
-                         'Local Address', 'Foreign Address', 'State']
+            fields_v6 = ['Route', 'Destination', 'Gateway', 'Interface',
+                         'Flags']
             tbl_v6 = SimpleTable(fields_v6)
             for (n, r) in enumerate(routes.get('ipv6')):
                 route_id = str(n)
-                tbl_v6.add_row([route_id, r['proto'],
-                                r['recv-q'], r['send-q'],
-                                r['local address'], r['foreign address'],
-                                r['state']])
+                if r['iface'] == 'lo':
+                    continue
+                tbl_v6.add_row([route_id, r['destination'],
+                                r['gateway'], r['iface'], r['flags']])
             route_s = tbl_v6.get_string()
             max_len = len(max(route_s.splitlines(), key=len))
             header = util.center("Route IPv6 info", "+", max_len)
             lines.extend([header, route_s])
-    return "\n".join(lines)
+    return "\n".join(lines) + "\n"
 
 
 def debug_info(prefix='ci-info: '):
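
The new _netdev_info_iproute parser above keys devices off the numbered header
lines of 'ip addr show' and attaches each inet/inet6/link record to the most
recently seen device. A self-contained sketch of that two-level scan against a
canned sample (the real parser additionally handles inet6 and link/ lines and
converts prefixes to masks via net_prefix_to_ipv4_mask):

    import re

    SAMPLE = (
        '2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP\n'
        '    inet 10.0.0.5/24 brd 10.0.0.255 scope global eth0\n')

    for line in SAMPLE.splitlines():
        m = re.match(r'^\d+:\s(?P<dev>[^:]+):\s+<(?P<flags>\S+)>', line)
        if m:
            flags = m.group('flags').split(',')
            print(m.group('dev'), 'up:',
                  'UP' in flags and 'LOWER_UP' in flags)
            continue
        m = re.match(
            r'\s+inet\s(?P<cidr4>\S+)(\sbrd\s(?P<bcast>\S+))?\sscope\s'
            r'(?P<scope>\S+)', line)
        if m:
            addr, _, prefix = m.group('cidr4').partition('/')
            print('  ipv4 %s/%s scope %s' % (addr, prefix or '32',
                                             m.group('scope')))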
diff --git a/cloudinit/reporting/events.py b/cloudinit/reporting/events.py
index 4f62d2f..e5dfab3 100644
--- a/cloudinit/reporting/events.py
+++ b/cloudinit/reporting/events.py
@@ -192,7 +192,7 @@ class ReportEventStack(object):
 
     def _childrens_finish_info(self):
         for cand_result in (status.FAIL, status.WARN):
-            for name, (value, msg) in self.children.items():
+            for _name, (value, _msg) in self.children.items():
                 if value == cand_result:
                     return (value, self.message)
         return (self.result, self.message)
diff --git a/cloudinit/sources/DataSourceAliYun.py b/cloudinit/sources/DataSourceAliYun.py
index 22279d0..858e082 100644
--- a/cloudinit/sources/DataSourceAliYun.py
+++ b/cloudinit/sources/DataSourceAliYun.py
@@ -45,7 +45,7 @@ def _is_aliyun():
 
 def parse_public_keys(public_keys):
     keys = []
-    for key_id, key_body in public_keys.items():
+    for _key_id, key_body in public_keys.items():
         if isinstance(key_body, str):
             keys.append(key_body.strip())
         elif isinstance(key_body, list):
diff --git a/cloudinit/sources/DataSourceAltCloud.py b/cloudinit/sources/DataSourceAltCloud.py
index e1d0055..24fd65f 100644
--- a/cloudinit/sources/DataSourceAltCloud.py
+++ b/cloudinit/sources/DataSourceAltCloud.py
@@ -29,7 +29,6 @@ CLOUD_INFO_FILE = '/etc/sysconfig/cloud-info'
 
 # Shell command lists
 CMD_PROBE_FLOPPY = ['modprobe', 'floppy']
-CMD_UDEVADM_SETTLE = ['udevadm', 'settle', '--timeout=5']
 
 META_DATA_NOT_SUPPORTED = {
     'block-device-mapping': {},
@@ -185,26 +184,24 @@ class DataSourceAltCloud(sources.DataSource):
             cmd = CMD_PROBE_FLOPPY
             (cmd_out, _err) = util.subp(cmd)
             LOG.debug('Command: %s\nOutput: %s', ' '.join(cmd), cmd_out)
-        except ProcessExecutionError as _err:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), _err)
+        except ProcessExecutionError as e:
+            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
             return False
-        except OSError as _err:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), _err)
+        except OSError as e:
+            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
             return False
 
         floppy_dev = '/dev/fd0'
 
         # udevadm settle for floppy device
         try:
-            cmd = CMD_UDEVADM_SETTLE
-            cmd.append('--exit-if-exists=' + floppy_dev)
-            (cmd_out, _err) = util.subp(cmd)
+            (cmd_out, _err) = util.udevadm_settle(exists=floppy_dev, timeout=5)
             LOG.debug('udevadm settle output: %s', cmd_out)
-        except ProcessExecutionError as _err:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), _err)
+        except ProcessExecutionError as e:
+            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
             return False
-        except OSError as _err:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), _err)
+        except OSError as e:
+            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
             return False
 
         try:
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 0ee622e..7007d9e 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -48,6 +48,7 @@ DEFAULT_FS = 'ext4'
 # DMI chassis-asset-tag is set static for all azure instances
 AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77'
 REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds"
+REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready"
 IMDS_URL = "http://169.254.169.254/metadata/reprovisiondata";
 
 
@@ -107,31 +108,24 @@ def find_dev_from_busdev(camcontrol_out, busdev):
     return None
 
 
-def get_dev_storvsc_sysctl():
+def execute_or_debug(cmd, fail_ret=None):
     try:
-        sysctl_out, err = util.subp(['sysctl', 'dev.storvsc'])
+        return util.subp(cmd)[0]
     except util.ProcessExecutionError:
-        LOG.debug("Fail to execute sysctl dev.storvsc")
-        sysctl_out = ""
-    return sysctl_out
+        LOG.debug("Failed to execute: %s", ' '.join(cmd))
+        return fail_ret
+
+
+def get_dev_storvsc_sysctl():
+    return execute_or_debug(["sysctl", "dev.storvsc"], fail_ret="")
 
 
 def get_camcontrol_dev_bus():
-    try:
-        camcontrol_b_out, err = util.subp(['camcontrol', 'devlist', '-b'])
-    except util.ProcessExecutionError:
-        LOG.debug("Fail to execute camcontrol devlist -b")
-        return None
-    return camcontrol_b_out
+    return execute_or_debug(['camcontrol', 'devlist', '-b'])
 
 
 def get_camcontrol_dev():
-    try:
-        camcontrol_out, err = util.subp(['camcontrol', 'devlist'])
-    except util.ProcessExecutionError:
-        LOG.debug("Fail to execute camcontrol devlist")
-        return None
-    return camcontrol_out
+    return execute_or_debug(['camcontrol', 'devlist'])
 
 
 def get_resource_disk_on_freebsd(port_id):
@@ -214,6 +208,7 @@ BUILTIN_CLOUD_CONFIG = {
 }
 
 DS_CFG_PATH = ['datasource', DS_NAME]
+DS_CFG_KEY_PRESERVE_NTFS = 'never_destroy_ntfs'
 DEF_EPHEMERAL_LABEL = 'Temporary Storage'
 
 # The redacted password fails to meet password complexity requirements
@@ -400,14 +395,9 @@ class DataSourceAzure(sources.DataSource):
         if found == ddir:
             LOG.debug("using files cached in %s", ddir)
 
-        # azure / hyper-v provides random data here
-        # TODO. find the seed on FreeBSD platform
-        # now update ds_cfg to reflect contents pass in config
-        if not util.is_FreeBSD():
-            seed = util.load_file("/sys/firmware/acpi/tables/OEM0",
-                                  quiet=True, decode=False)
-            if seed:
-                self.metadata['random_seed'] = seed
+        seed = _get_random_seed()
+        if seed:
+            self.metadata['random_seed'] = seed
 
         user_ds_cfg = util.get_cfg_by_path(self.cfg, DS_CFG_PATH, {})
         self.ds_cfg = util.mergemanydict([user_ds_cfg, self.ds_cfg])
@@ -443,11 +433,12 @@ class DataSourceAzure(sources.DataSource):
             LOG.debug("negotiating already done for %s",
                       self.get_instance_id())
 
-    def _poll_imds(self, report_ready=True):
+    def _poll_imds(self):
         """Poll IMDS for the new provisioning data until we get a valid
         response. Then return the returned JSON object."""
         url = IMDS_URL + "?api-version=2017-04-02"
         headers = {"Metadata": "true"}
+        report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
         LOG.debug("Start polling IMDS")
 
         def exc_cb(msg, exception):
@@ -457,13 +448,17 @@ class DataSourceAzure(sources.DataSource):
             # call DHCP and setup the ephemeral network to acquire the new IP.
             return False
 
-        need_report = report_ready
         while True:
             try:
                 with EphemeralDHCPv4() as lease:
-                    if need_report:
+                    if report_ready:
+                        path = REPORTED_READY_MARKER_FILE
+                        LOG.info(
+                            "Creating a marker file to report ready: %s", path)
+                        util.write_file(path, "{pid}: {time}\n".format(
+                            pid=os.getpid(), time=time()))
                         self._report_ready(lease=lease)
-                        need_report = False
+                        report_ready = False
                     return readurl(url, timeout=1, headers=headers,
                                    exception_cb=exc_cb, infinite=True).contents
             except UrlError:
@@ -474,7 +469,7 @@ class DataSourceAzure(sources.DataSource):
            before we go into our polling loop."""
         try:
             get_metadata_from_fabric(None, lease['unknown-245'])
-        except Exception as exc:
+        except Exception:
             LOG.warning(
                 "Error communicating with Azure fabric; You may experience."
                 "connectivity issues.", exc_info=True)
@@ -492,13 +487,15 @@ class DataSourceAzure(sources.DataSource):
         jump back into the polling loop in order to retrieve the ovf_env."""
         if not ret:
             return False
-        (md, self.userdata_raw, cfg, files) = ret
+        (_md, self.userdata_raw, cfg, _files) = ret
         path = REPROVISION_MARKER_FILE
         if (cfg.get('PreprovisionedVm') is True or
                 os.path.isfile(path)):
             if not os.path.isfile(path):
-                LOG.info("Creating a marker file to poll imds")
-                util.write_file(path, "%s: %s\n" % (os.getpid(), time()))
+                LOG.info("Creating a marker file to poll imds: %s",
+                         path)
+                util.write_file(path, "{pid}: {time}\n".format(
+                    pid=os.getpid(), time=time()))
             return True
         return False
 
@@ -528,16 +525,19 @@ class DataSourceAzure(sources.DataSource):
                   self.ds_cfg['agent_command'])
         try:
             fabric_data = metadata_func()
-        except Exception as exc:
+        except Exception:
             LOG.warning(
                 "Error communicating with Azure fabric; You may experience."
                 "connectivity issues.", exc_info=True)
             return False
+        util.del_file(REPORTED_READY_MARKER_FILE)
         util.del_file(REPROVISION_MARKER_FILE)
         return fabric_data
 
     def activate(self, cfg, is_new_instance):
-        address_ephemeral_resize(is_new_instance=is_new_instance)
+        address_ephemeral_resize(is_new_instance=is_new_instance,
+                                 preserve_ntfs=self.ds_cfg.get(
+                                     DS_CFG_KEY_PRESERVE_NTFS, False))
         return
 
     @property
@@ -581,17 +581,29 @@ def _has_ntfs_filesystem(devpath):
     return os.path.realpath(devpath) in ntfs_devices
 
 
-def can_dev_be_reformatted(devpath):
-    """Determine if block device devpath is newly formatted ephemeral.
+def can_dev_be_reformatted(devpath, preserve_ntfs):
+    """Determine if the ephemeral drive at devpath should be reformatted.
 
-    A newly formatted disk will:
+    A fresh ephemeral disk is formatted by Azure and will:
       a.) have a partition table (dos or gpt)
       b.) have 1 partition that is ntfs formatted, or
           have 2 partitions with the second partition ntfs formatted.
           (larger instances with >2TB ephemeral disk have gpt, and will
            have a microsoft reserved partition as part 1.  LP: #1686514)
       c.) the ntfs partition will have no files other than possibly
-          'dataloss_warning_readme.txt'"""
+          'dataloss_warning_readme.txt'
+
+    User can indicate that NTFS should never be destroyed by setting
+    DS_CFG_KEY_PRESERVE_NTFS in dscfg.
+    If data is found on NTFS, user is warned to set DS_CFG_KEY_PRESERVE_NTFS
+    to make sure cloud-init does not accidentally wipe their data.
+    If cloud-init cannot mount the disk to check for data, destruction
+    will be allowed, unless the dscfg key is set."""
+    if preserve_ntfs:
+        msg = ('config says to never destroy NTFS (%s.%s), skipping checks' %
+               (".".join(DS_CFG_PATH), DS_CFG_KEY_PRESERVE_NTFS))
+        return False, msg
+
     if not os.path.exists(devpath):
         return False, 'device %s does not exist' % devpath
 
@@ -624,18 +636,27 @@ def can_dev_be_reformatted(devpath):
     bmsg = ('partition %s (%s) on device %s was ntfs formatted' %
             (cand_part, cand_path, devpath))
     try:
-        file_count = util.mount_cb(cand_path, count_files)
+        file_count = util.mount_cb(cand_path, count_files, mtype="ntfs",
+                                   update_env_for_mount={'LANG': 'C'})
     except util.MountFailedError as e:
+        if "mount: unknown filesystem type 'ntfs'" in str(e):
+            return True, (bmsg + ' but this system cannot mount NTFS,'
+                          ' assuming there are no important files.'
+                          ' Formatting allowed.')
         return False, bmsg + ' but mount of %s failed: %s' % (cand_part, e)
 
     if file_count != 0:
+        LOG.warning("it looks like you're using NTFS on the ephemeral disk, "
+                    'to ensure that filesystem does not get wiped, set '
+                    '%s.%s in config', '.'.join(DS_CFG_PATH),
+                    DS_CFG_KEY_PRESERVE_NTFS)
         return False, bmsg + ' but had %d files on it.' % file_count
 
     return True, bmsg + ' and had no important files. Safe for reformatting.'
 
 
 def address_ephemeral_resize(devpath=RESOURCE_DISK_PATH, maxwait=120,
-                             is_new_instance=False):
+                             is_new_instance=False, preserve_ntfs=False):
     # wait for ephemeral disk to come up
     naplen = .2
     missing = util.wait_for_files([devpath], maxwait=maxwait, naplen=naplen,
@@ -651,7 +672,7 @@ def address_ephemeral_resize(devpath=RESOURCE_DISK_PATH, maxwait=120,
     if is_new_instance:
         result, msg = (True, "First instance boot.")
     else:
-        result, msg = can_dev_be_reformatted(devpath)
+        result, msg = can_dev_be_reformatted(devpath, preserve_ntfs)
 
     LOG.debug("reformattable=%s: %s", result, msg)
     if not result:
@@ -965,6 +986,18 @@ def _check_freebsd_cdrom(cdrom_dev):
     return False
 
 
+def _get_random_seed():
+    """Return content random seed file if available, otherwise,
+       return None."""
+    # azure / hyper-v provides random data here
+    # TODO. find the seed on FreeBSD platform
+    # now update ds_cfg to reflect contents pass in config
+    if util.is_FreeBSD():
+        return None
+    return util.load_file("/sys/firmware/acpi/tables/OEM0",
+                          quiet=True, decode=False)
+
+
 def list_possible_azure_ds_devs():
     devlist = []
     if util.is_FreeBSD():
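
In the Azure hunks above, _poll_imds now derives report_ready from the absence
of REPORTED_READY_MARKER_FILE, writes the marker exactly once before reporting,
and _negotiate deletes both markers when negotiation completes. The idempotence
pattern, sketched with the marker path from this diff (helper names here are
illustrative, not the datasource's own):

    import os
    from time import time

    MARKER = '/var/lib/cloud/data/reported_ready'

    def should_report_ready(path=MARKER):
        # Only the first poll after reprovisioning reports ready; a poll
        # restarted by an unexpected reboot finds the marker and skips it.
        return not os.path.isfile(path)

    def write_reported_ready_marker(path=MARKER):
        # Record which process reported ready, and when.
        with open(path, 'w') as stream:
            stream.write('{pid}: {time}\n'.format(pid=os.getpid(),
                                                  time=time()))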
diff --git a/cloudinit/sources/DataSourceCloudStack.py b/cloudinit/sources/DataSourceCloudStack.py
index 0df545f..d4b758f 100644
--- a/cloudinit/sources/DataSourceCloudStack.py
+++ b/cloudinit/sources/DataSourceCloudStack.py
@@ -68,6 +68,10 @@ class DataSourceCloudStack(sources.DataSource):
 
     dsname = 'CloudStack'
 
+    # Setup read_url parameters per get_url_params.
+    url_max_wait = 120
+    url_timeout = 50
+
     def __init__(self, sys_cfg, distro, paths):
         sources.DataSource.__init__(self, sys_cfg, distro, paths)
         self.seed_dir = os.path.join(paths.seed_dir, 'cs')
@@ -80,33 +84,18 @@ class DataSourceCloudStack(sources.DataSource):
         self.metadata_address = "http://%s/"; % (self.vr_addr,)
         self.cfg = {}
 
-    def _get_url_settings(self):
-        mcfg = self.ds_cfg
-        max_wait = 120
-        try:
-            max_wait = int(mcfg.get("max_wait", max_wait))
-        except Exception:
-            util.logexc(LOG, "Failed to get max wait. using %s", max_wait)
+    def wait_for_metadata_service(self):
+        url_params = self.get_url_params()
 
-        if max_wait == 0:
+        if url_params.max_wait_seconds <= 0:
             return False
 
-        timeout = 50
-        try:
-            timeout = int(mcfg.get("timeout", timeout))
-        except Exception:
-            util.logexc(LOG, "Failed to get timeout, using %s", timeout)
-
-        return (max_wait, timeout)
-
-    def wait_for_metadata_service(self):
-        (max_wait, timeout) = self._get_url_settings()
-
         urls = [uhelp.combine_url(self.metadata_address,
                                   'latest/meta-data/instance-id')]
         start_time = time.time()
-        url = uhelp.wait_for_url(urls=urls, max_wait=max_wait,
-                                 timeout=timeout, status_cb=LOG.warn)
+        url = uhelp.wait_for_url(
+            urls=urls, max_wait=url_params.max_wait_seconds,
+            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
 
         if url:
             LOG.debug("Using metadata source: '%s'", url)
diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py
index c7b5fe5..4cb2897 100644
--- a/cloudinit/sources/DataSourceConfigDrive.py
+++ b/cloudinit/sources/DataSourceConfigDrive.py
@@ -43,7 +43,7 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
         self.version = None
         self.ec2_metadata = None
         self._network_config = None
-        self.network_json = None
+        self.network_json = sources.UNSET
         self.network_eni = None
         self.known_macs = None
         self.files = {}
@@ -69,7 +69,8 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
                 util.logexc(LOG, "Failed reading config drive from %s", sdir)
 
         if not found:
-            for dev in find_candidate_devs():
+            dslist = self.sys_cfg.get('datasource_list')
+            for dev in find_candidate_devs(dslist=dslist):
                 try:
                     # Set mtype if freebsd and turn off sync
                     if dev.startswith("/dev/cd"):
@@ -148,7 +149,7 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
     @property
     def network_config(self):
         if self._network_config is None:
-            if self.network_json is not None:
+            if self.network_json not in (None, sources.UNSET):
                 LOG.debug("network config provided via network_json")
                 self._network_config = openstack.convert_net_json(
                     self.network_json, known_macs=self.known_macs)
@@ -211,7 +212,7 @@ def write_injected_files(files):
                 util.logexc(LOG, "Failed writing file: %s", filename)
 
 
-def find_candidate_devs(probe_optical=True):
+def find_candidate_devs(probe_optical=True, dslist=None):
     """Return a list of devices that may contain the config drive.
 
     The returned list is sorted by search order where the first item has
@@ -227,6 +228,9 @@ def find_candidate_devs(probe_optical=True):
         * either vfat or iso9660 formatted
         * labeled with 'config-2' or 'CONFIG-2'
     """
+    if dslist is None:
+        dslist = []
+
     # query optical drive to get it in blkid cache for 2.6 kernels
     if probe_optical:
         for device in OPTICAL_DEVICES:
@@ -257,7 +261,8 @@ def find_candidate_devs(probe_optical=True):
     devices = [d for d in candidates
                if d in by_label or not util.is_partition(d)]
 
-    if devices:
+    LOG.debug("devices=%s dslist=%s", devices, dslist)
+    if devices and "IBMCloud" in dslist:
         # IBMCloud uses config-2 label, but limited to a single UUID.
         ibm_platform, ibm_path = get_ibm_platform()
         if ibm_path in devices:
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index 21e9ef8..968ab3f 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -27,8 +27,6 @@ SKIP_METADATA_URL_CODES = frozenset([uhelp.NOT_FOUND])
 STRICT_ID_PATH = ("datasource", "Ec2", "strict_id")
 STRICT_ID_DEFAULT = "warn"
 
-_unset = "_unset"
-
 
 class Platforms(object):
     # TODO Rename and move to cloudinit.cloud.CloudNames
@@ -59,15 +57,16 @@ class DataSourceEc2(sources.DataSource):
     # for extended metadata content. IPv6 support comes in 2016-09-02
     extended_metadata_versions = ['2016-09-02']
 
+    # Setup read_url parameters per get_url_params.
+    url_max_wait = 120
+    url_timeout = 50
+
     _cloud_platform = None
 
-    _network_config = _unset  # Used for caching calculated network config v1
+    _network_config = sources.UNSET  # Used to cache calculated network cfg v1
 
     # Whether we want to get network configuration from the metadata service.
-    get_network_metadata = False
-
-    # Track the discovered fallback nic for use in configuration generation.
-    _fallback_interface = None
+    perform_dhcp_setup = False
 
     def __init__(self, sys_cfg, distro, paths):
         super(DataSourceEc2, self).__init__(sys_cfg, distro, paths)
@@ -98,7 +97,7 @@ class DataSourceEc2(sources.DataSource):
         elif self.cloud_platform == Platforms.NO_EC2_METADATA:
             return False
 
-        if self.get_network_metadata:  # Setup networking in init-local stage.
+        if self.perform_dhcp_setup:  # Setup networking in init-local stage.
             if util.is_FreeBSD():
                 LOG.debug("FreeBSD doesn't support running dhclient with -sf")
                 return False
@@ -158,27 +157,11 @@ class DataSourceEc2(sources.DataSource):
         else:
             return self.metadata['instance-id']
 
-    def _get_url_settings(self):
-        mcfg = self.ds_cfg
-        max_wait = 120
-        try:
-            max_wait = int(mcfg.get("max_wait", max_wait))
-        except Exception:
-            util.logexc(LOG, "Failed to get max wait. using %s", max_wait)
-
-        timeout = 50
-        try:
-            timeout = max(0, int(mcfg.get("timeout", timeout)))
-        except Exception:
-            util.logexc(LOG, "Failed to get timeout, using %s", timeout)
-
-        return (max_wait, timeout)
-
     def wait_for_metadata_service(self):
         mcfg = self.ds_cfg
 
-        (max_wait, timeout) = self._get_url_settings()
-        if max_wait <= 0:
+        url_params = self.get_url_params()
+        if url_params.max_wait_seconds <= 0:
             return False
 
         # Remove addresses from the list that wont resolve.
@@ -205,7 +188,8 @@ class DataSourceEc2(sources.DataSource):
 
         start_time = time.time()
         url = uhelp.wait_for_url(
-            urls=urls, max_wait=max_wait, timeout=timeout, status_cb=LOG.warn)
+            urls=urls, max_wait=url_params.max_wait_seconds,
+            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
 
         if url:
             self.metadata_address = url2base[url]
@@ -310,11 +294,11 @@ class DataSourceEc2(sources.DataSource):
     @property
     def network_config(self):
         """Return a network config dict for rendering ENI or netplan files."""
-        if self._network_config != _unset:
+        if self._network_config != sources.UNSET:
             return self._network_config
 
         if self.metadata is None:
-            # this would happen if get_data hadn't been called. leave as _unset
+            # this would happen if get_data hadn't been called. leave as UNSET
             LOG.warning(
                 "Unexpected call to network_config when metadata is None.")
             return None
@@ -353,9 +337,7 @@ class DataSourceEc2(sources.DataSource):
                 self._fallback_interface = _legacy_fbnic
                 self.fallback_nic = None
             else:
-                self._fallback_interface = net.find_fallback_nic()
-                if self._fallback_interface is None:
-                    LOG.warning("Did not find a fallback interface on EC2.")
+                return super(DataSourceEc2, self).fallback_interface
         return self._fallback_interface
 
     def _crawl_metadata(self):
@@ -390,7 +372,7 @@ class DataSourceEc2Local(DataSourceEc2):
     metadata service. If the metadata service provides network configuration
     then render the network configuration for that instance based on metadata.
     """
-    get_network_metadata = True  # Get metadata network config if present
+    perform_dhcp_setup = True  # Use dhcp before querying metadata
 
     def get_data(self):
         supported_platforms = (Platforms.AWS,)
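
DataSourceEc2 above swaps its module-private '_unset' string for the shared
sources.UNSET sentinel when caching network_config, so an unpopulated cache can
be told apart from a config that is legitimately None. The sentinel-caching
pattern in miniature (illustrative; UNSET's actual definition lives in
cloudinit/sources/__init__.py, outside this excerpt):

    UNSET = "_unset"

    class CachedConfig(object):
        _network_config = UNSET

        @property
        def network_config(self):
            if self._network_config != UNSET:
                return self._network_config  # may legitimately be None
            self._network_config = self._compute_config()
            return self._network_config

        def _compute_config(self):
            return None  # stand-in for the metadata-derived config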
diff --git a/cloudinit/sources/DataSourceIBMCloud.py b/cloudinit/sources/DataSourceIBMCloud.py
index 02b3d56..01106ec 100644
--- a/cloudinit/sources/DataSourceIBMCloud.py
+++ b/cloudinit/sources/DataSourceIBMCloud.py
@@ -8,17 +8,11 @@ There are 2 different api exposed launch methods.
  * template: This is the legacy method of launching instances.
    When booting from an image template, the system boots first into
    a "provisioning" mode.  There, host <-> guest mechanisms are utilized
-   to execute code in the guest and provision it.
+   to execute code in the guest and configure it.  The configuration
+   includes configuring the system network and possibly installing
+   packages and other parts of the software stack.
 
-   Cloud-init will disable itself when it detects that it is in the
-   provisioning mode.  It detects this by the presence of
-   a file '/root/provisioningConfiguration.cfg'.
-
-   When provided with user-data, the "first boot" will contain a
-   ConfigDrive-like disk labeled with 'METADATA'.  If there is no user-data
-   provided, then there is no data-source.
-
-   Cloud-init never does any network configuration in this mode.
+   After the provisioning is finished, the system reboots.
 
  * os_code: Essentially "launch by OS Code" (Operating System Code).
    This is a more modern approach.  There is no specific "provisioning" boot.
@@ -30,11 +24,73 @@ There are 2 different api exposed launch methods.
    mean that 1 in 8^16 (~4 billion) Xen ConfigDrive systems will be
    incorrectly identified as IBMCloud.
 
+The combination of these 2 launch methods, with or without user-data,
+creates 6 boot scenarios.
+ A. os_code with user-data
+ B. os_code without user-data
+    Cloud-init is fully operational in this mode.
+
+    There is a block device attached with label 'config-2'.
+    As it differs from OpenStack's config-2, we have to differentiate.
+    We do so by requiring the UUID on the filesystem to be "9796-932E".
+
+    This disk will have the following files. Specifically note, there
+    is no versioned path to the meta-data, only 'latest':
+      openstack/latest/meta_data.json
+      openstack/latest/network_data.json
+      openstack/latest/user_data [optional]
+      openstack/latest/vendor_data.json
+
+    vendor_data.json as of 2018-04 looks like this:
+      {"cloud-init":"#!/bin/bash\necho 'root:$6$<snip>' | chpasswd -e"}
+
+    The only difference between A and B in this mode is the presence
+    of user_data on the config disk.
+
+ C. template, provisioning boot with user-data
+ D. template, provisioning boot without user-data.
+    With ds-identify cloud-init is fully disabled in this mode.
+    Without ds-identify, cloud-init's None datasource will be used.
+
+    This is currently identified by the presence of
+    /root/provisioningConfiguration.cfg . That file is placed into the
+    system before it is booted.
+
+    The difference between C and D is the presence of the METADATA disk
+    as described in E below.  There is no METADATA disk attached unless
+    user-data is provided.
+
+ E. template, post-provisioning boot with user-data.
+    Cloud-init is fully operational in this mode.
+
+    This is identified by a block device with filesystem label "METADATA".
+    The disk looks similar to a version-1 OpenStack config drive.  It will
+    have the following files:
+
+       openstack/latest/user_data
+       openstack/latest/meta_data.json
+       openstack/content/interfaces
+       meta.js
+
+    meta.js contains something similar to user_data.  cloud-init ignores it.
+    cloud-init ignores the 'interfaces' style file here.
+    In this mode, cloud-init has networking code disabled.  It relies
+    on the provisioning boot to have configured networking.
+
+ F. template, post-provisioning boot without user-data.
+    With ds-identify, cloud-init will be fully disabled.
+    Without ds-identify, cloud-init's None datasource will be used.
+
+    There is no information available to identify this scenario.
+
+    The user will be able to ssh in as root with their public keys that
+    have been installed into /root/.ssh/authorized_keys
+    during the provisioning stage.
+
 TODO:
  * is uuid (/sys/hypervisor/uuid) stable for life of an instance?
    it seems it is not the same as data's uuid in the os_code case
    but is in the template case.
-
 """
 import base64
 import json
@@ -138,8 +194,30 @@ def _is_xen():
     return os.path.exists("/proc/xen")
 
 
-def _is_ibm_provisioning():
-    return os.path.exists("/root/provisioningConfiguration.cfg")
+def _is_ibm_provisioning(
+        prov_cfg="/root/provisioningConfiguration.cfg",
+        inst_log="/root/swinstall.log",
+        boot_ref="/proc/1/environ"):
+    """Return boolean indicating if this boot is ibm provisioning boot."""
+    if os.path.exists(prov_cfg):
+        msg = "config '%s' exists." % prov_cfg
+        result = True
+        if os.path.exists(inst_log):
+            if os.path.exists(boot_ref):
+                result = (os.stat(inst_log).st_mtime >
+                          os.stat(boot_ref).st_mtime)
+                msg += (" log '%s' from %s boot." %
+                        (inst_log, "current" if result else "previous"))
+            else:
+                msg += (" log '%s' existed, but no reference file '%s'." %
+                        (inst_log, boot_ref))
+                result = False
+        else:
+            msg += " log '%s' did not exist." % inst_log
+    else:
+        result, msg = (False, "config '%s' did not exist." % prov_cfg)
+    LOG.debug("ibm_provisioning=%s: %s", result, msg)
+    return result
 
 
 def get_ibm_platform():
@@ -189,7 +267,7 @@ def get_ibm_platform():
         else:
             return (Platforms.TEMPLATE_LIVE_METADATA, metadata_path)
     elif _is_ibm_provisioning():
-            return (Platforms.TEMPLATE_PROVISIONING_NODATA, None)
+        return (Platforms.TEMPLATE_PROVISIONING_NODATA, None)
     return not_found
 
 
diff --git a/cloudinit/sources/DataSourceMAAS.py b/cloudinit/sources/DataSourceMAAS.py
index 6ac8863..bcb3854 100644
--- a/cloudinit/sources/DataSourceMAAS.py
+++ b/cloudinit/sources/DataSourceMAAS.py
@@ -198,13 +198,13 @@ def read_maas_seed_url(seed_url, read_file_or_url=None, timeout=None,
     If version is None, then <version>/ will not be used.
     """
     if read_file_or_url is None:
-        read_file_or_url = util.read_file_or_url
+        read_file_or_url = url_helper.read_file_or_url
 
     if seed_url.endswith("/"):
         seed_url = seed_url[:-1]
 
     md = {}
-    for path, dictname, binary, optional in DS_FIELDS:
+    for path, _dictname, binary, optional in DS_FIELDS:
         if version is None:
             url = "%s/%s" % (seed_url, path)
         else:
diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py
index 5d3a8dd..2daea59 100644
--- a/cloudinit/sources/DataSourceNoCloud.py
+++ b/cloudinit/sources/DataSourceNoCloud.py
@@ -78,7 +78,7 @@ class DataSourceNoCloud(sources.DataSource):
                 LOG.debug("Using seeded data from %s", path)
                 mydata = _merge_new_seed(mydata, seeded)
                 break
-            except ValueError as e:
+            except ValueError:
                 pass
 
         # If the datasource config had a 'seedfrom' entry, then that takes
@@ -117,7 +117,7 @@ class DataSourceNoCloud(sources.DataSource):
                     try:
                         seeded = util.mount_cb(dev, _pp2d_callback,
                                                pp2d_kwargs)
-                    except ValueError as e:
+                    except ValueError:
                         if dev in label_list:
                             LOG.warning("device %s with label=%s not a"
                                         "valid seed.", dev, label)
diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
index dc914a7..178ccb0 100644
--- a/cloudinit/sources/DataSourceOVF.py
+++ b/cloudinit/sources/DataSourceOVF.py
@@ -556,7 +556,7 @@ def search_file(dirpath, filename):
     if not dirpath or not filename:
         return None
 
-    for root, dirs, files in os.walk(dirpath):
+    for root, _dirs, files in os.walk(dirpath):
         if filename in files:
             return os.path.join(root, filename)
 
diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
index d4a4111..16c1078 100644
--- a/cloudinit/sources/DataSourceOpenNebula.py
+++ b/cloudinit/sources/DataSourceOpenNebula.py
@@ -378,7 +378,7 @@ def read_context_disk_dir(source_dir, asuser=None):
         if asuser is not None:
             try:
                 pwd.getpwnam(asuser)
-            except KeyError as e:
+            except KeyError:
                 raise BrokenContextDiskDir(
                     "configured user '{user}' does not exist".format(
                         user=asuser))
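The `except ValueError as e:` -> `except ValueError:` and `for root, dirs,
files` -> `for root, _dirs, files` changes in this and the surrounding files
are pyflakes 2.0.0 cleanups: unused names are either dropped or given a
leading underscore.  The convention, shown on a trivial walk:

    import os

    for root, _dirs, files in os.walk('/etc/cloud'):
        # '_dirs' is intentionally unused; the leading underscore keeps
        # pyflakes 2.0.0 from reporting it as an unused variable.
        print(root, files)
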
diff --git a/cloudinit/sources/DataSourceOpenStack.py b/cloudinit/sources/DataSourceOpenStack.py
index e55a763..365af96 100644
--- a/cloudinit/sources/DataSourceOpenStack.py
+++ b/cloudinit/sources/DataSourceOpenStack.py
@@ -7,6 +7,7 @@
 import time
 
 from cloudinit import log as logging
+from cloudinit.net.dhcp import EphemeralDHCPv4, NoDHCPLeaseError
 from cloudinit import sources
 from cloudinit import url_helper
 from cloudinit import util
@@ -22,51 +23,37 @@ DEFAULT_METADATA = {
     "instance-id": DEFAULT_IID,
 }
 
+# OpenStack DMI constants
+DMI_PRODUCT_NOVA = 'OpenStack Nova'
+DMI_PRODUCT_COMPUTE = 'OpenStack Compute'
+VALID_DMI_PRODUCT_NAMES = [DMI_PRODUCT_NOVA, DMI_PRODUCT_COMPUTE]
+DMI_ASSET_TAG_OPENTELEKOM = 'OpenTelekomCloud'
+VALID_DMI_ASSET_TAGS = [DMI_ASSET_TAG_OPENTELEKOM]
+
 
 class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
 
     dsname = "OpenStack"
 
+    _network_config = sources.UNSET  # Used to cache calculated network cfg v1
+
+    # Whether we want to get network configuration from the metadata service.
+    perform_dhcp_setup = False
+
     def __init__(self, sys_cfg, distro, paths):
         super(DataSourceOpenStack, self).__init__(sys_cfg, distro, paths)
         self.metadata_address = None
         self.ssl_details = util.fetch_ssl_details(self.paths)
         self.version = None
         self.files = {}
-        self.ec2_metadata = None
+        self.ec2_metadata = sources.UNSET
+        self.network_json = sources.UNSET
 
     def __str__(self):
         root = sources.DataSource.__str__(self)
         mstr = "%s [%s,ver=%s]" % (root, self.dsmode, self.version)
         return mstr
 
-    def _get_url_settings(self):
-        # TODO(harlowja): this is shared with ec2 datasource, we should just
-        # move it to a shared location instead...
-        # Note: the defaults here are different though.
-
-        # max_wait < 0 indicates do not wait
-        max_wait = -1
-        timeout = 10
-        retries = 5
-
-        try:
-            max_wait = int(self.ds_cfg.get("max_wait", max_wait))
-        except Exception:
-            util.logexc(LOG, "Failed to get max wait. using %s", max_wait)
-
-        try:
-            timeout = max(0, int(self.ds_cfg.get("timeout", timeout)))
-        except Exception:
-            util.logexc(LOG, "Failed to get timeout, using %s", timeout)
-
-        try:
-            retries = int(self.ds_cfg.get("retries", retries))
-        except Exception:
-            util.logexc(LOG, "Failed to get retries. using %s", retries)
-
-        return (max_wait, timeout, retries)
-
     def wait_for_metadata_service(self):
         urls = self.ds_cfg.get("metadata_urls", [DEF_MD_URL])
         filtered = [x for x in urls if util.is_resolvable_url(x)]
@@ -86,10 +73,11 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
             md_urls.append(md_url)
             url2base[md_url] = url
 
-        (max_wait, timeout, retries) = self._get_url_settings()
+        url_params = self.get_url_params()
         start_time = time.time()
-        avail_url = url_helper.wait_for_url(urls=md_urls, max_wait=max_wait,
-                                            timeout=timeout)
+        avail_url = url_helper.wait_for_url(
+            urls=md_urls, max_wait=url_params.max_wait_seconds,
+            timeout=url_params.timeout_seconds)
         if avail_url:
             LOG.debug("Using metadata source: '%s'", url2base[avail_url])
         else:
@@ -99,38 +87,66 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
         self.metadata_address = url2base.get(avail_url)
         return bool(avail_url)
 
-    def _get_data(self):
-        try:
-            if not self.wait_for_metadata_service():
-                return False
-        except IOError:
-            return False
+    def check_instance_id(self, sys_cfg):
+        # Quickly check (local only) whether self.instance_id is still valid.
+        return sources.instance_id_matches_system_uuid(self.get_instance_id())
 
-        (max_wait, timeout, retries) = self._get_url_settings()
+    @property
+    def network_config(self):
+        """Return a network config dict for rendering ENI or netplan files."""
+        if self._network_config != sources.UNSET:
+            return self._network_config
+
+        # RELEASE_BLOCKER: SRU to Xenial and Artful SRU should not provide
+        # network_config by default unless configured in /etc/cloud/cloud.cfg*.
+        # Patch Xenial and Artful before release to default to False.
+        if util.is_false(self.ds_cfg.get('apply_network_config', True)):
+            self._network_config = None
+            return self._network_config
+        if self.network_json == sources.UNSET:
+            # This happens if get_data hasn't been called; leave as UNSET.
+            LOG.warning(
+                'Unexpected call to network_config when network_json is None.')
+            return None
+
+        LOG.debug('network config provided via network_json')
+        self._network_config = openstack.convert_net_json(
+            self.network_json, known_macs=None)
+        return self._network_config
 
-        try:
-            results = util.log_time(LOG.debug,
-                                    'Crawl of openstack metadata service',
-                                    read_metadata_service,
-                                    args=[self.metadata_address],
-                                    kwargs={'ssl_details': self.ssl_details,
-                                            'retries': retries,
-                                            'timeout': timeout})
-        except openstack.NonReadable:
-            return False
-        except (openstack.BrokenMetadata, IOError):
-            util.logexc(LOG, "Broken metadata address %s",
-                        self.metadata_address)
+    def _get_data(self):
+        """Crawl metadata, parse and persist that data for this instance.
+
+        @return: True when metadata discovered indicates OpenStack datasource.
+            False when unable to contact metadata service or when metadata
+            format is invalid or disabled.
+        """
+        if not detect_openstack():
             return False
+        if self.perform_dhcp_setup:  # Setup networking in init-local stage.
+            try:
+                with EphemeralDHCPv4(self.fallback_interface):
+                    results = util.log_time(
+                        logfunc=LOG.debug, msg='Crawl of metadata service',
+                        func=self._crawl_metadata)
+            except (NoDHCPLeaseError, sources.InvalidMetaDataException) as e:
+                util.logexc(LOG, str(e))
+                return False
+        else:
+            try:
+                results = self._crawl_metadata()
+            except sources.InvalidMetaDataException as e:
+                util.logexc(LOG, str(e))
+                return False
 
         self.dsmode = self._determine_dsmode([results.get('dsmode')])
         if self.dsmode == sources.DSMODE_DISABLED:
             return False
-
         md = results.get('metadata', {})
         md = util.mergemanydict([md, DEFAULT_METADATA])
         self.metadata = md
         self.ec2_metadata = results.get('ec2-metadata')
+        self.network_json = results.get('networkdata')
         self.userdata_raw = results.get('userdata')
         self.version = results['version']
         self.files.update(results.get('files', {}))
@@ -145,9 +161,50 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
 
         return True
 
-    def check_instance_id(self, sys_cfg):
-        # quickly (local check only) if self.instance_id is still valid
-        return sources.instance_id_matches_system_uuid(self.get_instance_id())
+    def _crawl_metadata(self):
+        """Crawl metadata service when available.
+
+        @returns: Dictionary with all metadata discovered for this datasource.
+        @raise: InvalidMetaDataException on unreadable or broken
+            metadata.
+        """
+        try:
+            if not self.wait_for_metadata_service():
+                raise sources.InvalidMetaDataException(
+                    'No active metadata service found')
+        except IOError as e:
+            raise sources.InvalidMetaDataException(
+                'IOError contacting metadata service: {error}'.format(
+                    error=str(e)))
+
+        url_params = self.get_url_params()
+
+        try:
+            result = util.log_time(
+                LOG.debug, 'Crawl of openstack metadata service',
+                read_metadata_service, args=[self.metadata_address],
+                kwargs={'ssl_details': self.ssl_details,
+                        'retries': url_params.num_retries,
+                        'timeout': url_params.timeout_seconds})
+        except openstack.NonReadable as e:
+            raise sources.InvalidMetaDataException(str(e))
+        except (openstack.BrokenMetadata, IOError):
+            msg = 'Broken metadata address {addr}'.format(
+                addr=self.metadata_address)
+            raise sources.InvalidMetaDataException(msg)
+        return result
+
+
+class DataSourceOpenStackLocal(DataSourceOpenStack):
+    """Run in init-local using a dhcp discovery prior to metadata crawl.
+
+    In init-local, no network is available. This subclass sets up minimal
+    networking with dhclient on a viable nic so that it can talk to the
+    metadata service. If the metadata service provides network configuration
+    then render the network configuration for that instance based on metadata.
+    """
+
+    perform_dhcp_setup = True  # Get metadata network config if present
 
 
 def read_metadata_service(base_url, ssl_details=None,
@@ -157,8 +214,23 @@ def read_metadata_service(base_url, ssl_details=None,
     return reader.read_v2()
 
 
+def detect_openstack():
+    """Return True when a potential OpenStack platform is detected."""
+    if not util.is_x86():
+        return True  # Non-x86 CPUs don't reliably report DMI product names
+    product_name = util.read_dmi_data('system-product-name')
+    if product_name in VALID_DMI_PRODUCT_NAMES:
+        return True
+    elif util.read_dmi_data('chassis-asset-tag') in VALID_DMI_ASSET_TAGS:
+        return True
+    elif util.get_proc_env(1).get('product_name') == DMI_PRODUCT_NOVA:
+        return True
+    return False
+
+
 # Used to match classes to dependencies
 datasources = [
+    (DataSourceOpenStackLocal, (sources.DEP_FILESYSTEM,)),
     (DataSourceOpenStack, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
 ]
 
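detect_openstack() gates the whole datasource so that non-OpenStack platforms
skip the metadata probe (LP: #1776701).  Below is a hypothetical standalone
mirror of its x86 checks, with arguments standing in for what
util.read_dmi_data() and util.get_proc_env() would return:

    def looks_like_openstack(product_name, asset_tag, pid1_product_name):
        # DMI product names set by Nova on x86 guests.
        if product_name in ('OpenStack Nova', 'OpenStack Compute'):
            return True
        # Some clouds tag the chassis instead of the product name.
        if asset_tag == 'OpenTelekomCloud':
            return True
        # Fall back to PID 1's environment, where some platforms expose
        # the product name.
        return pid1_product_name == 'OpenStack Nova'
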
diff --git a/cloudinit/sources/DataSourceSmartOS.py b/cloudinit/sources/DataSourceSmartOS.py
index 86bfa5d..f92e8b5 100644
--- a/cloudinit/sources/DataSourceSmartOS.py
+++ b/cloudinit/sources/DataSourceSmartOS.py
@@ -1,4 +1,5 @@
 # Copyright (C) 2013 Canonical Ltd.
+# Copyright (c) 2018, Joyent, Inc.
 #
 # Author: Ben Howard <ben.howard@xxxxxxxxxxxxx>
 #
@@ -10,17 +11,19 @@
 #    SmartOS hosts use a serial console (/dev/ttyS1) on KVM Linux Guests
 #        The meta-data is transmitted via key/value pairs made by
 #        requests on the console. For example, to get the hostname, you
-#        would send "GET hostname" on /dev/ttyS1.
+#        would send "GET sdc:hostname" on /dev/ttyS1.
 #        For Linux Guests running in LX-Brand Zones on SmartOS hosts
 #        a socket (/native/.zonecontrol/metadata.sock) is used instead
 #        of a serial console.
 #
 #   Certain behavior is defined by the DataDictionary
-#       http://us-east.manta.joyent.com/jmc/public/mdata/datadict.html
+#       https://eng.joyent.com/mdata/datadict.html
 #       Comments with "@datadictionary" are snippets of the definition
 
 import base64
 import binascii
+import errno
+import fcntl
 import json
 import os
 import random
@@ -108,7 +111,7 @@ BUILTIN_CLOUD_CONFIG = {
                        'overwrite': False}
     },
     'fs_setup': [{'label': 'ephemeral0',
-                  'filesystem': 'ext3',
+                  'filesystem': 'ext4',
                   'device': 'ephemeral0'}],
 }
 
@@ -162,9 +165,8 @@ class DataSourceSmartOS(sources.DataSource):
 
     dsname = "Joyent"
 
-    _unset = "_unset"
-    smartos_type = _unset
-    md_client = _unset
+    smartos_type = sources.UNSET
+    md_client = sources.UNSET
 
     def __init__(self, sys_cfg, distro, paths):
         sources.DataSource.__init__(self, sys_cfg, distro, paths)
@@ -186,12 +188,12 @@ class DataSourceSmartOS(sources.DataSource):
         return "%s [client=%s]" % (root, self.md_client)
 
     def _init(self):
-        if self.smartos_type == self._unset:
+        if self.smartos_type == sources.UNSET:
             self.smartos_type = get_smartos_environ()
             if self.smartos_type is None:
                 self.md_client = None
 
-        if self.md_client == self._unset:
+        if self.md_client == sources.UNSET:
             self.md_client = jmc_client_factory(
                 smartos_type=self.smartos_type,
                 metadata_sockfile=self.ds_cfg['metadata_sockfile'],
@@ -229,6 +231,9 @@ class DataSourceSmartOS(sources.DataSource):
                       self.md_client)
             return False
 
+        # Open once for many requests, rather than once for each request
+        self.md_client.open_transport()
+
         for ci_noun, attribute in SMARTOS_ATTRIB_MAP.items():
             smartos_noun, strip = attribute
             md[ci_noun] = self.md_client.get(smartos_noun, strip=strip)
@@ -236,6 +241,8 @@ class DataSourceSmartOS(sources.DataSource):
         for ci_noun, smartos_noun in SMARTOS_ATTRIB_JSON.items():
             md[ci_noun] = self.md_client.get_json(smartos_noun)
 
+        self.md_client.close_transport()
+
         # @datadictionary: This key may contain a program that is written
         # to a file in the filesystem of the guest on each boot and then
         # executed. It may be of any format that would be considered
@@ -266,8 +273,14 @@ class DataSourceSmartOS(sources.DataSource):
         write_boot_content(u_data, u_data_f)
 
         # Handle the cloud-init regular meta
+
+        # The hostname may or may not be qualified with the local domain name.
+        # This follows section 3.14 of RFC 2132.
         if not md['local-hostname']:
-            md['local-hostname'] = md['instance-id']
+            if md['hostname']:
+                md['local-hostname'] = md['hostname']
+            else:
+                md['local-hostname'] = md['instance-id']
 
         ud = None
         if md['user-data']:
@@ -285,6 +298,7 @@ class DataSourceSmartOS(sources.DataSource):
         self.userdata_raw = ud
         self.vendordata_raw = md['vendor-data']
         self.network_data = md['network-data']
+        self.routes_data = md['routes']
 
         self._set_provisioned()
         return True
@@ -308,7 +322,8 @@ class DataSourceSmartOS(sources.DataSource):
                     convert_smartos_network_data(
                         network_data=self.network_data,
                         dns_servers=self.metadata['dns_servers'],
-                        dns_domain=self.metadata['dns_domain']))
+                        dns_domain=self.metadata['dns_domain'],
+                        routes=self.routes_data))
         return self._network_config
 
 
@@ -316,6 +331,10 @@ class JoyentMetadataFetchException(Exception):
     pass
 
 
+class JoyentMetadataTimeoutException(JoyentMetadataFetchException):
+    pass
+
+
 class JoyentMetadataClient(object):
     """
     A client implementing v2 of the Joyent Metadata Protocol Specification.
@@ -360,6 +379,47 @@ class JoyentMetadataClient(object):
         LOG.debug('Value "%s" found.', value)
         return value
 
+    def _readline(self):
+        """
+           Read a line a byte at a time until \n is encountered.  Return an
+           ASCII string with the trailing newline removed.
+
+           If a timeout (per-byte) is set and it expires, a
+           JoyentMetadataTimeoutException is raised.
+        """
+        response = []
+
+        def as_ascii():
+            return b''.join(response).decode('ascii')
+
+        msg = "Partial response: '%s'"
+        while True:
+            try:
+                byte = self.fp.read(1)
+                if len(byte) == 0:
+                    raise JoyentMetadataTimeoutException(msg % as_ascii())
+                if byte == b'\n':
+                    return as_ascii()
+                response.append(byte)
+            except OSError as exc:
+                if exc.errno == errno.EAGAIN:
+                    raise JoyentMetadataTimeoutException(msg % as_ascii())
+                raise
+
+    def _write(self, msg):
+        self.fp.write(msg.encode('ascii'))
+        self.fp.flush()
+
+    def _negotiate(self):
+        LOG.debug('Negotiating protocol V2')
+        self._write('NEGOTIATE V2\n')
+        response = self._readline()
+        LOG.debug('read "%s"', response)
+        if response != 'V2_OK':
+            raise JoyentMetadataFetchException(
+                'Invalid response "%s" to "NEGOTIATE V2"' % response)
+        LOG.debug('Negotiation complete')
+
     def request(self, rtype, param=None):
         request_id = '{0:08x}'.format(random.randint(0, 0xffffffff))
         message_body = ' '.join((request_id, rtype,))
@@ -374,18 +434,11 @@ class JoyentMetadataClient(object):
             self.open_transport()
             need_close = True
 
-        self.fp.write(msg.encode('ascii'))
-        self.fp.flush()
-
-        response = bytearray()
-        response.extend(self.fp.read(1))
-        while response[-1:] != b'\n':
-            response.extend(self.fp.read(1))
-
+        self._write(msg)
+        response = self._readline()
         if need_close:
             self.close_transport()
 
-        response = response.rstrip().decode('ascii')
         LOG.debug('Read "%s" from metadata transport.', response)
 
         if 'SUCCESS' not in response:
@@ -410,9 +463,9 @@ class JoyentMetadataClient(object):
 
     def list(self):
         result = self.request(rtype='KEYS')
-        if result:
-            result = result.split('\n')
-        return result
+        if not result:
+            return []
+        return result.split('\n')
 
     def put(self, key, val):
         param = b' '.join([base64.b64encode(i.encode())
@@ -450,6 +503,7 @@ class JoyentMetadataSocketClient(JoyentMetadataClient):
         sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
         sock.connect(self.socketpath)
         self.fp = sock.makefile('rwb')
+        self._negotiate()
 
     def exists(self):
         return os.path.exists(self.socketpath)
@@ -459,8 +513,9 @@ class JoyentMetadataSocketClient(JoyentMetadataClient):
 
 
 class JoyentMetadataSerialClient(JoyentMetadataClient):
-    def __init__(self, device, timeout=10, smartos_type=SMARTOS_ENV_KVM):
-        super(JoyentMetadataSerialClient, self).__init__(smartos_type)
+    def __init__(self, device, timeout=10, smartos_type=SMARTOS_ENV_KVM,
+                 fp=None):
+        super(JoyentMetadataSerialClient, self).__init__(smartos_type, fp)
         self.device = device
         self.timeout = timeout
 
@@ -468,10 +523,51 @@ class JoyentMetadataSerialClient(JoyentMetadataClient):
         return os.path.exists(self.device)
 
     def open_transport(self):
-        ser = serial.Serial(self.device, timeout=self.timeout)
-        if not ser.isOpen():
-            raise SystemError("Unable to open %s" % self.device)
-        self.fp = ser
+        if self.fp is None:
+            ser = serial.Serial(self.device, timeout=self.timeout)
+            if not ser.isOpen():
+                raise SystemError("Unable to open %s" % self.device)
+            self.fp = ser
+            fcntl.lockf(ser, fcntl.LOCK_EX)
+        self._flush()
+        self._negotiate()
+
+    def _flush(self):
+        LOG.debug('Flushing input')
+        # Read any pending data
+        timeout = self.fp.timeout
+        self.fp.timeout = 0.1
+        while True:
+            try:
+                self._readline()
+            except JoyentMetadataTimeoutException:
+                break
+        LOG.debug('Input empty')
+
+        # Send a newline and expect "invalid command".  Keep trying until
+        # successful.  Retry rather frequently so that the "Is the host
+        # metadata service running?" warning appears on the console soon
+        # after someone attaches in an effort to debug.
+        if timeout > 5:
+            self.fp.timeout = 5
+        else:
+            self.fp.timeout = timeout
+        while True:
+            LOG.debug('Writing newline, expecting "invalid command"')
+            self._write('\n')
+            try:
+                response = self._readline()
+                if response == 'invalid command':
+                    break
+                if response == 'FAILURE':
+                    LOG.debug('Got "FAILURE".  Retrying.')
+                    continue
+                LOG.warning('Unexpected response "%s" during flush', response)
+            except JoyentMetadataTimeoutException:
+                LOG.warning('Timeout while initializing metadata client. ' +
+                            'Is the host metadata service running?')
+        LOG.debug('Got "invalid command".  Flush complete.')
+        self.fp.timeout = timeout
 
     def __repr__(self):
         return "%s(device=%s, timeout=%s)" % (
@@ -650,7 +746,7 @@ def get_smartos_environ(uname_version=None, product_name=None):
     # report 'BrandZ virtual linux' as the kernel version
     if uname_version is None:
         uname_version = uname[3]
-    if uname_version.lower() == 'brandz virtual linux':
+    if uname_version == 'BrandZ virtual linux':
         return SMARTOS_ENV_LX_BRAND
 
     if product_name is None:
@@ -658,7 +754,7 @@ def get_smartos_environ(uname_version=None, product_name=None):
     else:
         system_type = product_name
 
-    if system_type and 'smartdc' in system_type.lower():
+    if system_type and system_type.startswith('SmartDC'):
         return SMARTOS_ENV_KVM
 
     return None
@@ -666,7 +762,8 @@ def get_smartos_environ(uname_version=None, product_name=None):
 
 # Convert SMARTOS 'sdc:nics' data to network_config yaml
 def convert_smartos_network_data(network_data=None,
-                                 dns_servers=None, dns_domain=None):
+                                 dns_servers=None, dns_domain=None,
+                                 routes=None):
     """Return a dictionary of network_config by parsing provided
        SMARTOS sdc:nics configuration data
 
@@ -684,6 +781,10 @@ def convert_smartos_network_data(network_data=None,
     keys are related to ip configuration.  For each ip in the 'ips' list
     we create a subnet entry under 'subnets' pairing the ip to a one in
     the 'gateways' list.
+
+    Each route in sdc:routes is mapped to a route on each interface.
+    The sdc:routes properties 'dst' and 'gateway' map to 'network' and
+    'gateway'.  The 'linklocal' sdc:routes property is ignored.
     """
 
     valid_keys = {
@@ -706,6 +807,10 @@ def convert_smartos_network_data(network_data=None,
             'scope',
             'type',
         ],
+        'route': [
+            'network',
+            'gateway',
+        ],
     }
 
     if dns_servers:
@@ -720,6 +825,9 @@ def convert_smartos_network_data(network_data=None,
     else:
         dns_domain = []
 
+    if not routes:
+        routes = []
+
     def is_valid_ipv4(addr):
         return '.' in addr
 
@@ -746,6 +854,7 @@ def convert_smartos_network_data(network_data=None,
             if ip == "dhcp":
                 subnet = {'type': 'dhcp4'}
             else:
+                routeents = []
                 subnet = dict((k, v) for k, v in nic.items()
                               if k in valid_keys['subnet'])
                 subnet.update({
@@ -767,6 +876,25 @@ def convert_smartos_network_data(network_data=None,
                             pgws[proto]['gw'] = gateways[0]
                             subnet.update({'gateway': pgws[proto]['gw']})
 
+                for route in routes:
+                    rcfg = dict((k, v) for k, v in route.items()
+                                if k in valid_keys['route'])
+                    # Linux uses the value of 'gateway' to determine
+                    # automatically if the route is a forward/next-hop
+                    # (non-local IP for gateway) or an interface/resolver
+                    # (local IP for gateway).  So we can ignore the
+                    # 'interface' attribute of sdc:routes, because SDC
+                    # guarantees that the gateway is a local IP for
+                    # "interface=true".
+                    #
+                    # Eventually we should be smart and compare "gateway"
+                    # to see if it's in the prefix.  We can then smartly
+                    # add or not-add this route.  But for now,
+                    # when in doubt, use brute force! Routes for everyone!
+                    rcfg.update({'network': route['dst']})
+                    routeents.append(rcfg)
+                    subnet.update({'routes': routeents})
+
             subnets.append(subnet)
         cfg.update({'subnets': subnets})
         config.append(cfg)
@@ -810,12 +938,14 @@ if __name__ == "__main__":
             keyname = SMARTOS_ATTRIB_JSON[key]
             data[key] = client.get_json(keyname)
         elif key == "network_config":
-            for depkey in ('network-data', 'dns_servers', 'dns_domain'):
+            for depkey in ('network-data', 'dns_servers', 'dns_domain',
+                           'routes'):
                 load_key(client, depkey, data)
             data[key] = convert_smartos_network_data(
                 network_data=data['network-data'],
                 dns_servers=data['dns_servers'],
-                dns_domain=data['dns_domain'])
+                dns_domain=data['dns_domain'],
+                routes=data['routes'])
         else:
             if key in SMARTOS_ATTRIB_MAP:
                 keyname, strip = SMARTOS_ATTRIB_MAP[key]
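The new routes plumbing maps each sdc:routes entry onto every static subnet.
Shape of the transformation, with made-up addresses:

    # One sdc:routes entry as delivered by the SmartOS metadata service:
    route = {'dst': '10.0.0.0/8', 'gateway': '192.168.1.1',
             'linklocal': False}

    # After convert_smartos_network_data(), each static subnet carries:
    #   {'routes': [{'network': '10.0.0.0/8', 'gateway': '192.168.1.1'}]}
    # 'dst' is renamed to 'network' and 'linklocal' is filtered out by
    # valid_keys['route'].
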
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index df0b374..90d7457 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -9,6 +9,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import abc
+from collections import namedtuple
 import copy
 import json
 import os
@@ -17,6 +18,7 @@ import six
 from cloudinit.atomic_helper import write_json
 from cloudinit import importer
 from cloudinit import log as logging
+from cloudinit import net
 from cloudinit import type_utils
 from cloudinit import user_data as ud
 from cloudinit import util
@@ -41,6 +43,8 @@ INSTANCE_JSON_FILE = 'instance-data.json'
 # Key which can provide a cloud's official product name to cloud-init
 METADATA_CLOUD_NAME_KEY = 'cloud-name'
 
+UNSET = "_unset"
+
 LOG = logging.getLogger(__name__)
 
 
@@ -48,6 +52,11 @@ class DataSourceNotFoundException(Exception):
     pass
 
 
+class InvalidMetaDataException(Exception):
+    """Raised when metadata is broken, unavailable or disabled."""
+    pass
+
+
 def process_base64_metadata(metadata, key_path=''):
     """Strip ci-b64 prefix and return metadata with base64-encoded-keys set."""
     md_copy = copy.deepcopy(metadata)
@@ -68,6 +77,10 @@ def process_base64_metadata(metadata, key_path=''):
     return md_copy
 
 
+URLParams = namedtuple(
+    'URLParams', ['max_wait_seconds', 'timeout_seconds', 'num_retries'])
+
+
 @six.add_metaclass(abc.ABCMeta)
 class DataSource(object):
 
@@ -81,6 +94,14 @@ class DataSource(object):
     # Cached cloud_name as determined by _get_cloud_name
     _cloud_name = None
 
+    # Track the discovered fallback nic for use in configuration generation.
+    _fallback_interface = None
+
+    # read_url_params
+    url_max_wait = -1   # max_wait < 0 means do not wait
+    url_timeout = 10    # timeout for each metadata url read attempt
+    url_retries = 5     # number of times to retry url upon 404
+
     def __init__(self, sys_cfg, distro, paths, ud_proc=None):
         self.sys_cfg = sys_cfg
         self.distro = distro
@@ -128,6 +149,14 @@ class DataSource(object):
                 'meta-data': self.metadata,
                 'user-data': self.get_userdata_raw(),
                 'vendor-data': self.get_vendordata_raw()}}
+        if hasattr(self, 'network_json'):
+            network_json = getattr(self, 'network_json')
+            if network_json != UNSET:
+                instance_data['ds']['network_json'] = network_json
+        if hasattr(self, 'ec2_metadata'):
+            ec2_metadata = getattr(self, 'ec2_metadata')
+            if ec2_metadata != UNSET:
+                instance_data['ds']['ec2_metadata'] = ec2_metadata
         instance_data.update(
             self._get_standardized_metadata())
         try:
@@ -149,6 +178,42 @@ class DataSource(object):
             'Subclasses of DataSource must implement _get_data which'
             ' sets self.metadata, vendordata_raw and userdata_raw.')
 
+    def get_url_params(self):
+        """Return the Datasource's prefered url_read parameters.
+
+        Subclasses may override url_max_wait, url_timeout, url_retries.
+
+        @return: A URLParams object with max_wait_seconds, timeout_seconds,
+            num_retries.
+        """
+        max_wait = self.url_max_wait
+        try:
+            max_wait = int(self.ds_cfg.get("max_wait", self.url_max_wait))
+        except ValueError:
+            util.logexc(
+                LOG, "Config max_wait '%s' is not an int, using default '%s'",
+                self.ds_cfg.get("max_wait"), max_wait)
+
+        timeout = self.url_timeout
+        try:
+            timeout = max(
+                0, int(self.ds_cfg.get("timeout", self.url_timeout)))
+        except ValueError:
+            timeout = self.url_timeout
+            util.logexc(
+                LOG, "Config timeout '%s' is not an int, using default '%s'",
+                self.ds_cfg.get('timeout'), timeout)
+
+        retries = self.url_retries
+        try:
+            retries = int(self.ds_cfg.get("retries", self.url_retries))
+        except Exception:
+            util.logexc(
+                LOG, "Config retries '%s' is not an int, using default '%s'",
+                self.ds_cfg.get('retries'), retries)
+
+        return URLParams(max_wait, timeout, retries)
+
     def get_userdata(self, apply_filter=False):
         if self.userdata is None:
             self.userdata = self.ud_proc.process(self.get_userdata_raw())
@@ -162,6 +227,17 @@ class DataSource(object):
         return self.vendordata
 
     @property
+    def fallback_interface(self):
+        """Determine the network interface used during local network config."""
+        if self._fallback_interface is None:
+            self._fallback_interface = net.find_fallback_nic()
+            if self._fallback_interface is None:
+                LOG.warning(
+                    "Did not find a fallback interface on %s.",
+                    self.cloud_name)
+        return self._fallback_interface
+
+    @property
     def cloud_name(self):
         """Return lowercase cloud name as determined by the datasource.
 
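get_url_params() consolidates the per-datasource _get_url_settings() copies
into one implementation with class-level defaults and ds_cfg overrides.  A
compressed sketch of the precedence (values here are illustrative):

    from collections import namedtuple

    URLParams = namedtuple(
        'URLParams', ['max_wait_seconds', 'timeout_seconds', 'num_retries'])

    def url_params(ds_cfg, max_wait=-1, timeout=10, retries=5):
        def as_int(key, default):
            # Config wins when it parses as an int; otherwise keep the
            # class-level default, as get_url_params() does above.
            try:
                return int(ds_cfg.get(key, default))
            except (TypeError, ValueError):
                return default
        return URLParams(as_int('max_wait', max_wait),
                         max(0, as_int('timeout', timeout)),
                         as_int('retries', retries))

    print(url_params({'timeout': '2'}))
    # URLParams(max_wait_seconds=-1, timeout_seconds=2, num_retries=5)
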
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index 90c12df..e5696b1 100644
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -14,6 +14,7 @@ from cloudinit import temp_utils
 from contextlib import contextmanager
 from xml.etree import ElementTree
 
+from cloudinit import url_helper
 from cloudinit import util
 
 LOG = logging.getLogger(__name__)
@@ -55,14 +56,14 @@ class AzureEndpointHttpClient(object):
         if secure:
             headers = self.headers.copy()
             headers.update(self.extra_secure_headers)
-        return util.read_file_or_url(url, headers=headers)
+        return url_helper.read_file_or_url(url, headers=headers)
 
     def post(self, url, data=None, extra_headers=None):
         headers = self.headers
         if extra_headers is not None:
             headers = self.headers.copy()
             headers.update(extra_headers)
-        return util.read_file_or_url(url, data=data, headers=headers)
+        return url_helper.read_file_or_url(url, data=data, headers=headers)
 
 
 class GoalState(object):
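read_file_or_url moves from cloudinit.util to cloudinit.url_helper here and
elsewhere in the snapshot.  Usage is unchanged; a small sketch:

    from cloudinit import url_helper

    # Accepts file:// paths as well as http(s) URLs; the response
    # object's .contents holds the body.
    resp = url_helper.read_file_or_url('file:///etc/hostname')
    print(resp.contents)
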
diff --git a/cloudinit/sources/helpers/digitalocean.py b/cloudinit/sources/helpers/digitalocean.py
index 693f8d5..0e7ccca 100644
--- a/cloudinit/sources/helpers/digitalocean.py
+++ b/cloudinit/sources/helpers/digitalocean.py
@@ -41,10 +41,9 @@ def assign_ipv4_link_local(nic=None):
                            "address")
 
     try:
-        (result, _err) = util.subp(ip_addr_cmd)
+        util.subp(ip_addr_cmd)
         LOG.debug("assigned ip4LL address '%s' to '%s'", addr, nic)
-
-        (result, _err) = util.subp(ip_link_cmd)
+        util.subp(ip_link_cmd)
         LOG.debug("brought device '%s' up", nic)
     except Exception:
         util.logexc(LOG, "ip4LL address assignment of '%s' to '%s' failed."
@@ -75,7 +74,7 @@ def del_ipv4_link_local(nic=None):
     ip_addr_cmd = ['ip', 'addr', 'flush', 'dev', nic]
 
     try:
-        (result, _err) = util.subp(ip_addr_cmd)
+        util.subp(ip_addr_cmd)
         LOG.debug("removed ip4LL addresses from %s", nic)
 
     except Exception as e:
diff --git a/cloudinit/sources/helpers/openstack.py b/cloudinit/sources/helpers/openstack.py
index 26f3168..a4cf066 100644
--- a/cloudinit/sources/helpers/openstack.py
+++ b/cloudinit/sources/helpers/openstack.py
@@ -638,7 +638,7 @@ def convert_net_json(network_json=None, known_macs=None):
             known_macs = net.get_interfaces_by_mac()
 
         # go through and fill out the link_id_info with names
-        for link_id, info in link_id_info.items():
+        for _link_id, info in link_id_info.items():
             if info.get('name'):
                 continue
             if info.get('mac') in known_macs:
diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py
index 2d8900e..3ef8c62 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_nic.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py
@@ -73,7 +73,7 @@ class NicConfigurator(object):
         The mac address(es) are in the lower case
         """
         cmd = ['ip', 'addr', 'show']
-        (output, err) = util.subp(cmd)
+        output, _err = util.subp(cmd)
         sections = re.split(r'\n\d+: ', '\n' + output)[1:]
 
         macPat = r'link/ether (([0-9A-Fa-f]{2}[:]){5}([0-9A-Fa-f]{2}))'
diff --git a/cloudinit/sources/helpers/vmware/imc/config_passwd.py b/cloudinit/sources/helpers/vmware/imc/config_passwd.py
index 75cfbaa..8c91fa4 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_passwd.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_passwd.py
@@ -56,10 +56,10 @@ class PasswordConfigurator(object):
         LOG.info('Expiring password.')
         for user in uidUserList:
             try:
-                out, err = util.subp(['passwd', '--expire', user])
+                util.subp(['passwd', '--expire', user])
             except util.ProcessExecutionError as e:
                 if os.path.exists('/usr/bin/chage'):
-                    out, e = util.subp(['chage', '-d', '0', user])
+                    util.subp(['chage', '-d', '0', user])
                 else:
                     LOG.warning('Failed to expire password for %s with error: '
                                 '%s', user, e)
diff --git a/cloudinit/sources/helpers/vmware/imc/guestcust_util.py b/cloudinit/sources/helpers/vmware/imc/guestcust_util.py
index 4407525..a590f32 100644
--- a/cloudinit/sources/helpers/vmware/imc/guestcust_util.py
+++ b/cloudinit/sources/helpers/vmware/imc/guestcust_util.py
@@ -91,7 +91,7 @@ def enable_nics(nics):
 
     for attempt in range(0, enableNicsWaitRetries):
         logger.debug("Trying to connect interfaces, attempt %d", attempt)
-        (out, err) = set_customization_status(
+        (out, _err) = set_customization_status(
             GuestCustStateEnum.GUESTCUST_STATE_RUNNING,
             GuestCustEventEnum.GUESTCUST_EVENT_ENABLE_NICS,
             nics)
@@ -104,7 +104,7 @@ def enable_nics(nics):
             return
 
         for count in range(0, enableNicsWaitCount):
-            (out, err) = set_customization_status(
+            (out, _err) = set_customization_status(
                 GuestCustStateEnum.GUESTCUST_STATE_RUNNING,
                 GuestCustEventEnum.GUESTCUST_EVENT_QUERY_NICS,
                 nics)
diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py
index e7fda22..d5bc98a 100644
--- a/cloudinit/sources/tests/test_init.py
+++ b/cloudinit/sources/tests/test_init.py
@@ -17,6 +17,7 @@ from cloudinit import util
 class DataSourceTestSubclassNet(DataSource):
 
     dsname = 'MyTestSubclass'
+    url_max_wait = 55
 
     def __init__(self, sys_cfg, distro, paths, custom_userdata=None):
         super(DataSourceTestSubclassNet, self).__init__(
@@ -70,8 +71,7 @@ class TestDataSource(CiTestCase):
         """Init uses DataSource.dsname for sourcing ds_cfg."""
         sys_cfg = {'datasource': {'MyTestSubclass': {'key2': False}}}
         distro = 'distrotest'  # generally should be a Distro object
-        paths = Paths({})
-        datasource = DataSourceTestSubclassNet(sys_cfg, distro, paths)
+        datasource = DataSourceTestSubclassNet(sys_cfg, distro, self.paths)
         self.assertEqual({'key2': False}, datasource.ds_cfg)
 
     def test_str_is_classname(self):
@@ -81,6 +81,91 @@ class TestDataSource(CiTestCase):
             'DataSourceTestSubclassNet',
             str(DataSourceTestSubclassNet('', '', self.paths)))
 
+    def test_datasource_get_url_params_defaults(self):
+        """get_url_params default url config settings for the datasource."""
+        params = self.datasource.get_url_params()
+        self.assertEqual(params.max_wait_seconds, self.datasource.url_max_wait)
+        self.assertEqual(params.timeout_seconds, self.datasource.url_timeout)
+        self.assertEqual(params.num_retries, self.datasource.url_retries)
+
+    def test_datasource_get_url_params_subclassed(self):
+        """Subclasses can override get_url_params defaults."""
+        sys_cfg = {'datasource': {'MyTestSubclass': {'key2': False}}}
+        distro = 'distrotest'  # generally should be a Distro object
+        datasource = DataSourceTestSubclassNet(sys_cfg, distro, self.paths)
+        expected = (datasource.url_max_wait, datasource.url_timeout,
+                    datasource.url_retries)
+        url_params = datasource.get_url_params()
+        self.assertNotEqual(self.datasource.get_url_params(), url_params)
+        self.assertEqual(expected, url_params)
+
+    def test_datasource_get_url_params_ds_config_override(self):
+        """Datasource configuration options can override url param defaults."""
+        sys_cfg = {
+            'datasource': {
+                'MyTestSubclass': {
+                    'max_wait': '1', 'timeout': '2', 'retries': '3'}}}
+        datasource = DataSourceTestSubclassNet(
+            sys_cfg, self.distro, self.paths)
+        expected = (1, 2, 3)
+        url_params = datasource.get_url_params()
+        self.assertNotEqual(
+            (datasource.url_max_wait, datasource.url_timeout,
+             datasource.url_retries),
+            url_params)
+        self.assertEqual(expected, url_params)
+
+    def test_datasource_get_url_params_is_zero_or_greater(self):
+        """get_url_params ignores timeouts with a value below 0."""
+        # Set an override that is below 0 which gets ignored.
+        sys_cfg = {'datasource': {'_undef': {'timeout': '-1'}}}
+        datasource = DataSource(sys_cfg, self.distro, self.paths)
+        (_max_wait, timeout, _retries) = datasource.get_url_params()
+        self.assertEqual(0, timeout)
+
+    def test_datasource_get_url_uses_defaults_on_errors(self):
+        """On invalid system config values for url_params defaults are used."""
+        # All invalid values should be logged
+        sys_cfg = {'datasource': {
+            '_undef': {
+                'max_wait': 'nope', 'timeout': 'bug', 'retries': 'nonint'}}}
+        datasource = DataSource(sys_cfg, self.distro, self.paths)
+        url_params = datasource.get_url_params()
+        expected = (datasource.url_max_wait, datasource.url_timeout,
+                    datasource.url_retries)
+        self.assertEqual(expected, url_params)
+        logs = self.logs.getvalue()
+        expected_logs = [
+            "Config max_wait 'nope' is not an int, using default '-1'",
+            "Config timeout 'bug' is not an int, using default '10'",
+            "Config retries 'nonint' is not an int, using default '5'",
+        ]
+        for log in expected_logs:
+            self.assertIn(log, logs)
+
+    @mock.patch('cloudinit.sources.net.find_fallback_nic')
+    def test_fallback_interface_is_discovered(self, m_get_fallback_nic):
+        """The fallback_interface is discovered via find_fallback_nic."""
+        m_get_fallback_nic.return_value = 'nic9'
+        self.assertEqual('nic9', self.datasource.fallback_interface)
+
+    @mock.patch('cloudinit.sources.net.find_fallback_nic')
+    def test_fallback_interface_logs_undiscovered(self, m_get_fallback_nic):
+        """Log a warning when fallback_interface can not discover the nic."""
+        self.datasource._cloud_name = 'MySupahCloud'
+        m_get_fallback_nic.return_value = None  # Couldn't discover nic
+        self.assertIsNone(self.datasource.fallback_interface)
+        self.assertEqual(
+            'WARNING: Did not find a fallback interface on MySupahCloud.\n',
+            self.logs.getvalue())
+
+    @mock.patch('cloudinit.sources.net.find_fallback_nic')
+    def test_wb_fallback_interface_is_cached(self, m_get_fallback_nic):
+        """The fallback_interface is cached and won't be rediscovered."""
+        self.datasource._fallback_interface = 'nic10'
+        self.assertEqual('nic10', self.datasource.fallback_interface)
+        m_get_fallback_nic.assert_not_called()
+
     def test__get_data_unimplemented(self):
         """Raise an error when _get_data is not implemented."""
         with self.assertRaises(NotImplementedError) as context_manager:
@@ -278,7 +363,7 @@ class TestDataSource(CiTestCase):
         base_args = get_args(DataSource.get_hostname)  # pylint: disable=W1505
         # Import all DataSource subclasses so we can inspect them.
         modules = util.find_modules(os.path.dirname(os.path.dirname(__file__)))
-        for loc, name in modules.items():
+        for _loc, name in modules.items():
             mod_locs, _ = importer.find_module(name, ['cloudinit.sources'], [])
             if mod_locs:
                 importer.import_module(mod_locs[0])
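The fallback_interface tests above mock net.find_fallback_nic; outside the
test harness the property behaves like a simple cache.  A minimal sketch (the
'eth9' value is made up):

    from unittest import mock

    from cloudinit.helpers import Paths
    from cloudinit.sources import DataSource

    ds = DataSource({}, None, Paths({}))
    with mock.patch('cloudinit.sources.net.find_fallback_nic',
                    return_value='eth9'):
        print(ds.fallback_interface)  # discovered once: 'eth9'
    print(ds.fallback_interface)      # served from the cache thereafter
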
diff --git a/cloudinit/ssh_util.py b/cloudinit/ssh_util.py
index 882517f..73c3177 100644
--- a/cloudinit/ssh_util.py
+++ b/cloudinit/ssh_util.py
@@ -279,24 +279,28 @@ class SshdConfigLine(object):
 
 
 def parse_ssh_config(fname):
+    if not os.path.isfile(fname):
+        return []
+    return parse_ssh_config_lines(util.load_file(fname).splitlines())
+
+
+def parse_ssh_config_lines(lines):
     # See: man sshd_config
     # The file contains keyword-argument pairs, one per line.
     # Lines starting with '#' and empty lines are interpreted as comments.
     # Note: key-words are case-insensitive and arguments are case-sensitive
-    lines = []
-    if not os.path.isfile(fname):
-        return lines
-    for line in util.load_file(fname).splitlines():
+    ret = []
+    for line in lines:
         line = line.strip()
         if not line or line.startswith("#"):
-            lines.append(SshdConfigLine(line))
+            ret.append(SshdConfigLine(line))
             continue
         try:
             key, val = line.split(None, 1)
         except ValueError:
             key, val = line.split('=', 1)
-        lines.append(SshdConfigLine(line, key, val))
-    return lines
+        ret.append(SshdConfigLine(line, key, val))
+    return ret
 
 
 def parse_ssh_config_map(fname):
@@ -310,4 +314,56 @@ def parse_ssh_config_map(fname):
         ret[line.key] = line.value
     return ret
 
+
+def update_ssh_config(updates, fname=DEF_SSHD_CFG):
+    """Read fname, and update if changes are necessary.
+
+    @param updates: dictionary of desired values {Option: value}
+    @return: boolean indicating if an update was done."""
+    lines = parse_ssh_config(fname)
+    changed = update_ssh_config_lines(lines=lines, updates=updates)
+    if changed:
+        util.write_file(
+            fname, "\n".join([str(l) for l in lines]) + "\n", copy_mode=True)
+    return len(changed) != 0
+
+
+def update_ssh_config_lines(lines, updates):
+    """Update the ssh config lines per updates.
+
+    @param lines: array of SshdConfigLine.  This array is updated in place.
+    @param updates: dictionary of desired values {Option: value}
+    @return: A list of keys in updates that were changed."""
+    found = set()
+    changed = []
+
+    # Keywords are case-insensitive and arguments are case-sensitive
+    casemap = dict([(k.lower(), k) for k in updates.keys()])
+
+    for (i, line) in enumerate(lines, start=1):
+        if not line.key:
+            continue
+        if line.key in casemap:
+            key = casemap[line.key]
+            value = updates[key]
+            found.add(key)
+            if line.value == value:
+                LOG.debug("line %d: option %s already set to %s",
+                          i, key, value)
+            else:
+                changed.append(key)
+                LOG.debug("line %d: option %s updated %s -> %s", i,
+                          key, line.value, value)
+                line.value = value
+
+    if len(found) != len(updates):
+        for key, value in updates.items():
+            if key in found:
+                continue
+            changed.append(key)
+            lines.append(SshdConfigLine('', key, value))
+            LOG.debug("line %d: option %s added with %s",
+                      len(lines), key, value)
+    return changed
+
 # vi: ts=4 expandtab
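update_ssh_config() is the entry point built on the two helpers above.  A
usage sketch against a scratch file (path and option are illustrative; the
default fname is DEF_SSHD_CFG):

    from cloudinit import ssh_util

    changed = ssh_util.update_ssh_config(
        {'PasswordAuthentication': 'no'}, fname='/tmp/sshd_config_test')
    # True when the file was rewritten: options already present are
    # edited in place, missing ones are appended at the end.
    print(changed)
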
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index bc4ebc8..286607b 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -362,16 +362,22 @@ class Init(object):
         self._store_vendordata()
 
     def setup_datasource(self):
-        if self.datasource is None:
-            raise RuntimeError("Datasource is None, cannot setup.")
-        self.datasource.setup(is_new_instance=self.is_new_instance())
+        with events.ReportEventStack("setup-datasource",
+                                     "setting up datasource",
+                                     parent=self.reporter):
+            if self.datasource is None:
+                raise RuntimeError("Datasource is None, cannot setup.")
+            self.datasource.setup(is_new_instance=self.is_new_instance())
 
     def activate_datasource(self):
-        if self.datasource is None:
-            raise RuntimeError("Datasource is None, cannot activate.")
-        self.datasource.activate(cfg=self.cfg,
-                                 is_new_instance=self.is_new_instance())
-        self._write_to_cache()
+        with events.ReportEventStack("activate-datasource",
+                                     "activating datasource",
+                                     parent=self.reporter):
+            if self.datasource is None:
+                raise RuntimeError("Datasource is None, cannot activate.")
+            self.datasource.activate(cfg=self.cfg,
+                                     is_new_instance=self.is_new_instance())
+            self._write_to_cache()
 
     def _store_userdata(self):
         raw_ud = self.datasource.get_userdata_raw()
@@ -691,7 +697,9 @@ class Modules(object):
         module_list = []
         if name not in self.cfg:
             return module_list
-        cfg_mods = self.cfg[name]
+        cfg_mods = self.cfg.get(name)
+        if not cfg_mods:
+            return module_list
         # Create 'module_list', an array of hashes
         # Where hash['mod'] = module name
         #       hash['freq'] = frequency
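The cfg.get(name) guard tolerates a stage key that is present but empty
(LP: #1770462), which YAML parses to None.  A minimal reproduction of the
case now handled:

    # A cloud.cfg containing a bare 'cloud_init_modules:' line parses to:
    cfg = {'cloud_init_modules': None}

    cfg_mods = cfg.get('cloud_init_modules')
    if not cfg_mods:
        module_list = []  # empty stage: run nothing instead of tracebacking
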
diff --git a/cloudinit/templater.py b/cloudinit/templater.py
index b3ea64e..7e7acb8 100644
--- a/cloudinit/templater.py
+++ b/cloudinit/templater.py
@@ -121,7 +121,11 @@ def detect_template(text):
 def render_from_file(fn, params):
     if not params:
         params = {}
-    template_type, renderer, content = detect_template(util.load_file(fn))
+    # jinja in python2 uses unicode internally.  All py2 str will be decoded.
+    # If it is given a str that has non-ascii then it will raise a
+    # UnicodeDecodeError.  So we explicitly convert to unicode type here.
+    template_type, renderer, content = detect_template(
+        util.load_file(fn, decode=False).decode('utf-8'))
     LOG.debug("Rendering content of '%s' using renderer %s", fn, template_type)
     return renderer(content, params)
 
@@ -132,14 +136,18 @@ def render_to_file(fn, outfn, params, mode=0o644):
 
 
 def render_string_to_file(content, outfn, params, mode=0o644):
+    """Render string (or py2 unicode) to file.
+    Warning: py2 str with non-ascii chars will cause UnicodeDecodeError."""
     contents = render_string(content, params)
     util.write_file(outfn, contents, mode=mode)
 
 
 def render_string(content, params):
+    """Render string (or py2 unicode).
+    Warning: py2 str with non-ascii chars will cause UnicodeDecodeError."""
     if not params:
         params = {}
-    template_type, renderer, content = detect_template(content)
+    _template_type, renderer, content = detect_template(content)
     return renderer(content, params)
 
 # vi: ts=4 expandtab
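The explicit decode in render_from_file() is part of the py2 unicode
hardening in this snapshot: python2 jinja works on unicode internally and
would otherwise attempt an implicit ascii decode of non-ascii templates.  The
pattern in isolation ('template.tmpl' is a placeholder path):

    # Read bytes, then decode explicitly; never rely on the implicit
    # ascii decode that raised UnicodeDecodeError on python2.
    with open('template.tmpl', 'rb') as f:
        content = f.read().decode('utf-8')
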
diff --git a/cloudinit/tests/helpers.py b/cloudinit/tests/helpers.py
index 999b1d7..5bfe7fa 100644
--- a/cloudinit/tests/helpers.py
+++ b/cloudinit/tests/helpers.py
@@ -3,11 +3,13 @@
 from __future__ import print_function
 
 import functools
+import httpretty
 import logging
 import os
 import shutil
 import sys
 import tempfile
+import time
 import unittest
 
 import mock
@@ -24,6 +26,8 @@ try:
 except ImportError:
     from ConfigParser import ConfigParser
 
+from cloudinit.config.schema import (
+    SchemaValidationError, validate_cloudconfig_schema)
 from cloudinit import helpers as ch
 from cloudinit import util
 
@@ -108,12 +112,12 @@ class TestCase(unittest2.TestCase):
         super(TestCase, self).setUp()
         self.reset_global_state()
 
-    def add_patch(self, target, attr, **kwargs):
+    def add_patch(self, target, attr, *args, **kwargs):
         """Patches specified target object and sets it as attr on test
         instance also schedules cleanup"""
         if 'autospec' not in kwargs:
             kwargs['autospec'] = True
-        m = mock.patch(target, **kwargs)
+        m = mock.patch(target, *args, **kwargs)
         p = m.start()
         self.addCleanup(m.stop)
         setattr(self, attr, p)
@@ -190,35 +194,11 @@ class ResourceUsingTestCase(CiTestCase):
         super(ResourceUsingTestCase, self).setUp()
         self.resource_path = None
 
-    def resourceLocation(self, subname=None):
-        if self.resource_path is None:
-            paths = [
-                os.path.join('tests', 'data'),
-                os.path.join('data'),
-                os.path.join(os.pardir, 'tests', 'data'),
-                os.path.join(os.pardir, 'data'),
-            ]
-            for p in paths:
-                if os.path.isdir(p):
-                    self.resource_path = p
-                    break
-        self.assertTrue((self.resource_path and
-                         os.path.isdir(self.resource_path)),
-                        msg="Unable to locate test resource data path!")
-        if not subname:
-            return self.resource_path
-        return os.path.join(self.resource_path, subname)
-
-    def readResource(self, name):
-        where = self.resourceLocation(name)
-        with open(where, 'r') as fh:
-            return fh.read()
-
     def getCloudPaths(self, ds=None):
         tmpdir = tempfile.mkdtemp()
         self.addCleanup(shutil.rmtree, tmpdir)
         cp = ch.Paths({'cloud_dir': tmpdir,
-                       'templates_dir': self.resourceLocation()},
+                       'templates_dir': resourceLocation()},
                       ds=ds)
         return cp
 
@@ -234,7 +214,7 @@ class FilesystemMockingTestCase(ResourceUsingTestCase):
         ResourceUsingTestCase.tearDown(self)
 
     def replicateTestRoot(self, example_root, target_root):
-        real_root = self.resourceLocation()
+        real_root = resourceLocation()
         real_root = os.path.join(real_root, 'roots', example_root)
         for (dir_path, _dirnames, filenames) in os.walk(real_root):
             real_path = dir_path
@@ -285,7 +265,8 @@ class FilesystemMockingTestCase(ResourceUsingTestCase):
             os.path: [('isfile', 1), ('exists', 1),
                       ('islink', 1), ('isdir', 1), ('lexists', 1)],
             os: [('listdir', 1), ('mkdir', 1),
-                 ('lstat', 1), ('symlink', 2)]
+                 ('lstat', 1), ('symlink', 2),
+                 ('stat', 1)]
         }
 
         if hasattr(os, 'scandir'):
@@ -323,19 +304,43 @@ class FilesystemMockingTestCase(ResourceUsingTestCase):
 class HttprettyTestCase(CiTestCase):
     # necessary as http_proxy gets in the way of httpretty
     # https://github.com/gabrielfalcao/HTTPretty/issues/122
+    # Also make sure that allow_net_connect is set to False.
+    # And make sure reset and enable/disable are done.
 
     def setUp(self):
         self.restore_proxy = os.environ.get('http_proxy')
         if self.restore_proxy is not None:
             del os.environ['http_proxy']
         super(HttprettyTestCase, self).setUp()
+        httpretty.HTTPretty.allow_net_connect = False
+        httpretty.reset()
+        httpretty.enable()
 
     def tearDown(self):
+        httpretty.disable()
+        httpretty.reset()
         if self.restore_proxy:
             os.environ['http_proxy'] = self.restore_proxy
         super(HttprettyTestCase, self).tearDown()
 
 
+class SchemaTestCaseMixin(unittest2.TestCase):
+
+    def assertSchemaValid(self, cfg, msg="Valid Schema failed validation."):
+        """Assert the config is valid per self.schema.
+
+        If there is only one top level key in the schema properties, then
+        the cfg will be put under that key."""
+        props = list(self.schema.get('properties'))
+        # put cfg under top level key if there is only one in the schema
+        if len(props) == 1:
+            cfg = {props[0]: cfg}
+        try:
+            validate_cloudconfig_schema(cfg, self.schema, strict=True)
+        except SchemaValidationError:
+            self.fail(msg)
+
+
 def populate_dir(path, files):
     if not os.path.exists(path):
         os.makedirs(path)
@@ -354,11 +359,20 @@ def populate_dir(path, files):
     return ret
 
 
+def populate_dir_with_ts(path, data):
+    """data is {'file': ('contents', mtime)}.  mtime relative to now."""
+    populate_dir(path, dict((k, v[0]) for k, v in data.items()))
+    btime = time.time()
+    for fpath, (_contents, mtime) in data.items():
+        ts = btime + mtime if mtime else btime
+        os.utime(os.path.sep.join((path, fpath)), (ts, ts))
+
+
 def dir2dict(startdir, prefix=None):
     flist = {}
     if prefix is None:
         prefix = startdir
-    for root, dirs, files in os.walk(startdir):
+    for root, _dirs, files in os.walk(startdir):
         for fname in files:
             fpath = os.path.join(root, fname)
             key = fpath[len(prefix):]
@@ -399,6 +413,18 @@ def wrap_and_call(prefix, mocks, func, *args, **kwargs):
             p.stop()
 
 
+def resourceLocation(subname=None):
+    path = os.path.join('tests', 'data')
+    if not subname:
+        return path
+    return os.path.join(path, subname)
+
+
+def readResource(name, mode='r'):
+    with open(resourceLocation(name), mode) as fh:
+        return fh.read()
+
+
 try:
     skipIf = unittest.skipIf
 except AttributeError:
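
The resource helpers above are now plain module functions, so sample data can
be loaded at import time without subclassing ResourceUsingTestCase. A minimal
sketch of the intended usage, assuming tests run from the source tree as the
existing suite does:

    from cloudinit.tests.helpers import CiTestCase, readResource

    # Loaded once at module import; resourceLocation() resolves relative
    # to tests/data in the source tree.
    SAMPLE_IFCONFIG = readResource("netinfo/old-ifconfig-output")

    class TestSample(CiTestCase):
        def test_resource_loaded(self):
            # readResource returns text by default; pass mode='rb' for bytes.
            self.assertTrue(SAMPLE_IFCONFIG)
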
diff --git a/cloudinit/tests/test_netinfo.py b/cloudinit/tests/test_netinfo.py
index 7dea2e4..d76e768 100644
--- a/cloudinit/tests/test_netinfo.py
+++ b/cloudinit/tests/test_netinfo.py
@@ -2,105 +2,166 @@
 
 """Tests netinfo module functions and classes."""
 
-from cloudinit.netinfo import netdev_pformat, route_pformat
-from cloudinit.tests.helpers import CiTestCase, mock
+from copy import copy
+
+from cloudinit.netinfo import netdev_info, netdev_pformat, route_pformat
+from cloudinit.tests.helpers import CiTestCase, mock, readResource
 
 
 # Example ifconfig and route output
-SAMPLE_IFCONFIG_OUT = """\
-enp0s25   Link encap:Ethernet  HWaddr 50:7b:9d:2c:af:91
-          inet addr:192.168.2.18  Bcast:192.168.2.255  Mask:255.255.255.0
-          inet6 addr: fe80::8107:2b92:867e:f8a6/64 Scope:Link
-          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
-          RX packets:8106427 errors:55 dropped:0 overruns:0 frame:37
-          TX packets:9339739 errors:0 dropped:0 overruns:0 carrier:0
-          collisions:0 txqueuelen:1000
-          RX bytes:4953721719 (4.9 GB)  TX bytes:7731890194 (7.7 GB)
-          Interrupt:20 Memory:e1200000-e1220000
-
-lo        Link encap:Local Loopback
-          inet addr:127.0.0.1  Mask:255.0.0.0
-          inet6 addr: ::1/128 Scope:Host
-          UP LOOPBACK RUNNING  MTU:65536  Metric:1
-          RX packets:579230851 errors:0 dropped:0 overruns:0 frame:0
-          TX packets:579230851 errors:0 dropped:0 overruns:0 carrier:0
-          collisions:0 txqueuelen:1
-"""
-
-SAMPLE_ROUTE_OUT = '\n'.join([
-    '0.0.0.0         192.168.2.1     0.0.0.0         UG        0 0          0'
-    ' enp0s25',
-    '0.0.0.0         192.168.2.1     0.0.0.0         UG        0 0          0'
-    ' wlp3s0',
-    '192.168.2.0     0.0.0.0         255.255.255.0   U         0 0          0'
-    ' enp0s25'])
-
-
-NETDEV_FORMATTED_OUT = '\n'.join([
-    '+++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++'
-    '++++++++++++++++++++',
-    '+---------+------+------------------------------+---------------+-------+'
-    '-------------------+',
-    '|  Device |  Up  |           Address            |      Mask     | Scope |'
-    '     Hw-Address    |',
-    '+---------+------+------------------------------+---------------+-------+'
-    '-------------------+',
-    '| enp0s25 | True |         192.168.2.18         | 255.255.255.0 |   .   |'
-    ' 50:7b:9d:2c:af:91 |',
-    '| enp0s25 | True | fe80::8107:2b92:867e:f8a6/64 |       .       |  link |'
-    ' 50:7b:9d:2c:af:91 |',
-    '|    lo   | True |          127.0.0.1           |   255.0.0.0   |   .   |'
-    '         .         |',
-    '|    lo   | True |           ::1/128            |       .       |  host |'
-    '         .         |',
-    '+---------+------+------------------------------+---------------+-------+'
-    '-------------------+'])
-
-ROUTE_FORMATTED_OUT = '\n'.join([
-    '+++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++'
-    '+++',
-    '+-------+-------------+-------------+---------------+-----------+-----'
-    '--+',
-    '| Route | Destination |   Gateway   |    Genmask    | Interface | Flags'
-    ' |',
-    '+-------+-------------+-------------+---------------+-----------+'
-    '-------+',
-    '|   0   |   0.0.0.0   | 192.168.2.1 |    0.0.0.0    |   wlp3s0  |'
-    '   UG  |',
-    '|   1   | 192.168.2.0 |   0.0.0.0   | 255.255.255.0 |  enp0s25  |'
-    '   U   |',
-    '+-------+-------------+-------------+---------------+-----------+'
-    '-------+',
-    '++++++++++++++++++++++++++++++++++++++++Route IPv6 info++++++++++'
-    '++++++++++++++++++++++++++++++',
-    '+-------+-------------+-------------+---------------+---------------+'
-    '-----------------+-------+',
-    '| Route |    Proto    |    Recv-Q   |     Send-Q    | Local Address |'
-    ' Foreign Address | State |',
-    '+-------+-------------+-------------+---------------+---------------+'
-    '-----------------+-------+',
-    '|   0   |   0.0.0.0   | 192.168.2.1 |    0.0.0.0    |       UG      |'
-    '        0        |   0   |',
-    '|   1   | 192.168.2.0 |   0.0.0.0   | 255.255.255.0 |       U       |'
-    '        0        |   0   |',
-    '+-------+-------------+-------------+---------------+---------------+'
-    '-----------------+-------+'])
+SAMPLE_OLD_IFCONFIG_OUT = readResource("netinfo/old-ifconfig-output")
+SAMPLE_NEW_IFCONFIG_OUT = readResource("netinfo/new-ifconfig-output")
+SAMPLE_IPADDRSHOW_OUT = readResource("netinfo/sample-ipaddrshow-output")
+SAMPLE_ROUTE_OUT_V4 = readResource("netinfo/sample-route-output-v4")
+SAMPLE_ROUTE_OUT_V6 = readResource("netinfo/sample-route-output-v6")
+SAMPLE_IPROUTE_OUT_V4 = readResource("netinfo/sample-iproute-output-v4")
+SAMPLE_IPROUTE_OUT_V6 = readResource("netinfo/sample-iproute-output-v6")
+NETDEV_FORMATTED_OUT = readResource("netinfo/netdev-formatted-output")
+ROUTE_FORMATTED_OUT = readResource("netinfo/route-formatted-output")
 
 
 class TestNetInfo(CiTestCase):
 
     maxDiff = None
+    with_logs = True
 
+    @mock.patch('cloudinit.netinfo.util.which')
     @mock.patch('cloudinit.netinfo.util.subp')
-    def test_netdev_pformat(self, m_subp):
-        """netdev_pformat properly rendering network device information."""
-        m_subp.return_value = (SAMPLE_IFCONFIG_OUT, '')
+    def test_netdev_old_nettools_pformat(self, m_subp, m_which):
+        """netdev_pformat properly rendering old nettools info."""
+        m_subp.return_value = (SAMPLE_OLD_IFCONFIG_OUT, '')
+        m_which.side_effect = lambda x: x if x == 'ifconfig' else None
         content = netdev_pformat()
         self.assertEqual(NETDEV_FORMATTED_OUT, content)
 
+    @mock.patch('cloudinit.netinfo.util.which')
+    @mock.patch('cloudinit.netinfo.util.subp')
+    def test_netdev_new_nettools_pformat(self, m_subp, m_which):
+        """netdev_pformat properly rendering netdev new nettools info."""
+        m_subp.return_value = (SAMPLE_NEW_IFCONFIG_OUT, '')
+        m_which.side_effect = lambda x: x if x == 'ifconfig' else None
+        content = netdev_pformat()
+        self.assertEqual(NETDEV_FORMATTED_OUT, content)
+
+    @mock.patch('cloudinit.netinfo.util.which')
+    @mock.patch('cloudinit.netinfo.util.subp')
+    def test_netdev_iproute_pformat(self, m_subp, m_which):
+        """netdev_pformat properly rendering ip route info."""
+        m_subp.return_value = (SAMPLE_IPADDRSHOW_OUT, '')
+        m_which.side_effect = lambda x: x if x == 'ip' else None
+        content = netdev_pformat()
+        new_output = copy(NETDEV_FORMATTED_OUT)
+        # ip route show describes global scopes on ipv4 addresses
+        # whereas ifconfig does not. Add proper global/host scope to output.
+        new_output = new_output.replace('|   .    | 50:7b', '| global | 50:7b')
+        new_output = new_output.replace(
+            '255.0.0.0   |   .    |', '255.0.0.0   |  host  |')
+        self.assertEqual(new_output, content)
+
+    @mock.patch('cloudinit.netinfo.util.which')
+    @mock.patch('cloudinit.netinfo.util.subp')
+    def test_netdev_warn_on_missing_commands(self, m_subp, m_which):
+        """netdev_pformat warns when missing both ip and 'netstat'."""
+        m_which.return_value = None  # Neither ip nor ifconfig found
+        content = netdev_pformat()
+        self.assertEqual('\n', content)
+        self.assertEqual(
+            "WARNING: Could not print networks: missing 'ip' and 'ifconfig'"
+            " commands\n",
+            self.logs.getvalue())
+        m_subp.assert_not_called()
+
+    @mock.patch('cloudinit.netinfo.util.which')
+    @mock.patch('cloudinit.netinfo.util.subp')
+    def test_netdev_info_nettools_down(self, m_subp, m_which):
+        """test netdev_info using nettools and down interfaces."""
+        m_subp.return_value = (
+            readResource("netinfo/new-ifconfig-output-down"), "")
+        m_which.side_effect = lambda x: x if x == 'ifconfig' else None
+        self.assertEqual(
+            {'eth0': {'ipv4': [], 'ipv6': [],
+                      'hwaddr': '00:16:3e:de:51:a6', 'up': False},
+             'lo': {'ipv4': [{'ip': '127.0.0.1', 'mask': '255.0.0.0'}],
+                    'ipv6': [{'ip': '::1/128', 'scope6': 'host'}],
+                    'hwaddr': '.', 'up': True}},
+            netdev_info("."))
+
+    @mock.patch('cloudinit.netinfo.util.which')
+    @mock.patch('cloudinit.netinfo.util.subp')
+    def test_netdev_info_iproute_down(self, m_subp, m_which):
+        """Test netdev_info with ip and down interfaces."""
+        m_subp.return_value = (
+            readResource("netinfo/sample-ipaddrshow-output-down"), "")
+        m_which.side_effect = lambda x: x if x == 'ip' else None
+        self.assertEqual(
+            {'lo': {'ipv4': [{'ip': '127.0.0.1', 'bcast': '.',
+                              'mask': '255.0.0.0', 'scope': 'host'}],
+                    'ipv6': [{'ip': '::1/128', 'scope6': 'host'}],
+                    'hwaddr': '.', 'up': True},
+             'eth0': {'ipv4': [], 'ipv6': [],
+                      'hwaddr': '00:16:3e:de:51:a6', 'up': False}},
+            netdev_info("."))
+
+    @mock.patch('cloudinit.netinfo.netdev_info')
+    def test_netdev_pformat_with_down(self, m_netdev_info):
+        """test netdev_pformat when netdev_info returns 'down' interfaces."""
+        m_netdev_info.return_value = (
+            {'lo': {'ipv4': [{'ip': '127.0.0.1', 'mask': '255.0.0.0',
+                              'scope': 'host'}],
+                    'ipv6': [{'ip': '::1/128', 'scope6': 'host'}],
+                    'hwaddr': '.', 'up': True},
+             'eth0': {'ipv4': [], 'ipv6': [],
+                      'hwaddr': '00:16:3e:de:51:a6', 'up': False}})
+        self.assertEqual(
+            readResource("netinfo/netdev-formatted-output-down"),
+            netdev_pformat())
+
+    @mock.patch('cloudinit.netinfo.util.which')
+    @mock.patch('cloudinit.netinfo.util.subp')
+    def test_route_nettools_pformat(self, m_subp, m_which):
+        """route_pformat properly rendering nettools route info."""
+
+        def subp_netstat_route_selector(*args, **kwargs):
+            if args[0] == ['netstat', '--route', '--numeric', '--extend']:
+                return (SAMPLE_ROUTE_OUT_V4, '')
+            if args[0] == ['netstat', '-A', 'inet6', '--route', '--numeric']:
+                return (SAMPLE_ROUTE_OUT_V6, '')
+            raise Exception('Unexpected subp call %s' % args[0])
+
+        m_subp.side_effect = subp_netstat_route_selector
+        m_which.side_effect = lambda x: x if x == 'netstat' else None
+        content = route_pformat()
+        self.assertEqual(ROUTE_FORMATTED_OUT, content)
+
+    @mock.patch('cloudinit.netinfo.util.which')
     @mock.patch('cloudinit.netinfo.util.subp')
-    def test_route_pformat(self, m_subp):
-        """netdev_pformat properly rendering network device information."""
-        m_subp.return_value = (SAMPLE_ROUTE_OUT, '')
+    def test_route_iproute_pformat(self, m_subp, m_which):
+        """route_pformat properly rendering ip route info."""
+
+        def subp_iproute_selector(*args, **kwargs):
+            if ['ip', '-o', 'route', 'list'] == args[0]:
+                return (SAMPLE_IPROUTE_OUT_V4, '')
+            v6cmd = ['ip', '--oneline', '-6', 'route', 'list', 'table', 'all']
+            if v6cmd == args[0]:
+                return (SAMPLE_IPROUTE_OUT_V6, '')
+            raise Exception('Unexpected subp call %s' % args[0])
+
+        m_subp.side_effect = subp_iproute_selector
+        m_which.side_effect = lambda x: x if x == 'ip' else None
         content = route_pformat()
         self.assertEqual(ROUTE_FORMATTED_OUT, content)
+
+    @mock.patch('cloudinit.netinfo.util.which')
+    @mock.patch('cloudinit.netinfo.util.subp')
+    def test_route_warn_on_missing_commands(self, m_subp, m_which):
+        """route_pformat warns when missing both ip and 'netstat'."""
+        m_which.return_value = None  # Neither ip nor netstat found
+        content = route_pformat()
+        self.assertEqual('\n', content)
+        self.assertEqual(
+            "WARNING: Could not print routes: missing 'ip' and 'netstat'"
+            " commands\n",
+            self.logs.getvalue())
+        m_subp.assert_not_called()
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/tests/test_url_helper.py b/cloudinit/tests/test_url_helper.py
index b778a3a..113249d 100644
--- a/cloudinit/tests/test_url_helper.py
+++ b/cloudinit/tests/test_url_helper.py
@@ -1,7 +1,10 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.url_helper import oauth_headers
+from cloudinit.url_helper import oauth_headers, read_file_or_url
 from cloudinit.tests.helpers import CiTestCase, mock, skipIf
+from cloudinit import util
+
+import httpretty
 
 
 try:
@@ -38,3 +41,26 @@ class TestOAuthHeaders(CiTestCase):
             'url', 'consumer_key', 'token_key', 'token_secret',
             'consumer_secret')
         self.assertEqual('url', return_value)
+
+
+class TestReadFileOrUrl(CiTestCase):
+    def test_read_file_or_url_str_from_file(self):
+        """Test that str(result.contents) on file is text version of contents.
+        It should not be "b'data'", but just "'data'" """
+        tmpf = self.tmp_path("myfile1")
+        data = b'This is my file content\n'
+        util.write_file(tmpf, data, omode="wb")
+        result = read_file_or_url("file://%s" % tmpf)
+        self.assertEqual(result.contents, data)
+        self.assertEqual(str(result), data.decode('utf-8'))
+
+    @httpretty.activate
+    def test_read_file_or_url_str_from_url(self):
+        """Test that str(result.contents) on url is text version of contents.
+        It should not be "b'data'", but just "'data'" """
+        url = 'http://hostname/path'
+        data = b'This is my url content\n'
+        httpretty.register_uri(httpretty.GET, url, data)
+        result = read_file_or_url(url)
+        self.assertEqual(result.contents, data)
+        self.assertEqual(str(result), data.decode('utf-8'))
diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
index 3f37dbb..17853fc 100644
--- a/cloudinit/tests/test_util.py
+++ b/cloudinit/tests/test_util.py
@@ -3,11 +3,12 @@
 """Tests for cloudinit.util"""
 
 import logging
-from textwrap import dedent
+import platform
 
 import cloudinit.util as util
 
 from cloudinit.tests.helpers import CiTestCase, mock
+from textwrap import dedent
 
 LOG = logging.getLogger(__name__)
 
@@ -16,6 +17,29 @@ MOUNT_INFO = [
     '153 68 254:0 / /home rw,relatime shared:101 - xfs /dev/sda2 rw,attr2'
 ]
 
+OS_RELEASE_SLES = dedent("""\
+    NAME="SLES"\n
+    VERSION="12-SP3"\n
+    VERSION_ID="12.3"\n
+    PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"\n
+    ID="sles"\nANSI_COLOR="0;32"\n
+    CPE_NAME="cpe:/o:suse:sles:12:sp3"\n
+""")
+
+OS_RELEASE_UBUNTU = dedent("""\
+    NAME="Ubuntu"\n
+    VERSION="16.04.3 LTS (Xenial Xerus)"\n
+    ID=ubuntu\n
+    ID_LIKE=debian\n
+    PRETTY_NAME="Ubuntu 16.04.3 LTS"\n
+    VERSION_ID="16.04"\n
+    HOME_URL="http://www.ubuntu.com/"\n
+    SUPPORT_URL="http://help.ubuntu.com/"\n
+    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"\n
+    VERSION_CODENAME=xenial\n
+    UBUNTU_CODENAME=xenial\n
+""")
+
 
 class FakeCloud(object):
 
@@ -135,7 +159,7 @@ class TestGetHostnameFqdn(CiTestCase):
     def test_get_hostname_fqdn_from_passes_metadata_only_to_cloud(self):
         """Calls to cloud.get_hostname pass the metadata_only parameter."""
         mycloud = FakeCloud('cloudhost', 'cloudhost.mycloud.com')
-        hostname, fqdn = util.get_hostname_fqdn(
+        _hn, _fqdn = util.get_hostname_fqdn(
             cfg={}, cloud=mycloud, metadata_only=True)
         self.assertEqual(
             [{'fqdn': True, 'metadata_only': True},
@@ -212,4 +236,105 @@ class TestBlkid(CiTestCase):
                                   capture=True, decode="replace")
 
 
+@mock.patch('cloudinit.util.subp')
+class TestUdevadmSettle(CiTestCase):
+    def test_with_no_params(self, m_subp):
+        """called with no parameters."""
+        util.udevadm_settle()
+        m_subp.assert_called_once_with(['udevadm', 'settle'])
+
+    def test_with_exists_and_not_exists(self, m_subp):
+        """with exists=file where file does not exist should invoke subp."""
+        mydev = self.tmp_path("mydev")
+        util.udevadm_settle(exists=mydev)
+        m_subp.assert_called_once_with(
+            ['udevadm', 'settle', '--exit-if-exists=%s' % mydev])
+
+    def test_with_exists_and_file_exists(self, m_subp):
+        """with exists=file where file does exist should not invoke subp."""
+        mydev = self.tmp_path("mydev")
+        util.write_file(mydev, "foo\n")
+        util.udevadm_settle(exists=mydev)
+        self.assertIsNone(m_subp.call_args)
+
+    def test_with_timeout_int(self, m_subp):
+        """timeout can be an integer."""
+        timeout = 9
+        util.udevadm_settle(timeout=timeout)
+        m_subp.assert_called_once_with(
+            ['udevadm', 'settle', '--timeout=%s' % timeout])
+
+    def test_with_timeout_string(self, m_subp):
+        """timeout can be a string."""
+        timeout = "555"
+        util.udevadm_settle(timeout=timeout)
+        m_subp.assert_called_once_with(
+            ['udevadm', 'settle', '--timeout=%s' % timeout])
+
+    def test_with_exists_and_timeout(self, m_subp):
+        """test call with both exists and timeout."""
+        mydev = self.tmp_path("mydev")
+        timeout = "3"
+        util.udevadm_settle(exists=mydev, timeout=timeout)
+        m_subp.assert_called_once_with(
+            ['udevadm', 'settle', '--exit-if-exists=%s' % mydev,
+             '--timeout=%s' % timeout])
+
+    def test_subp_exception_raises_to_caller(self, m_subp):
+        m_subp.side_effect = util.ProcessExecutionError("BOOM")
+        self.assertRaises(util.ProcessExecutionError, util.udevadm_settle)
+
+
+@mock.patch('os.path.exists')
+class TestGetLinuxDistro(CiTestCase):
+
+    @classmethod
+    def os_release_exists(cls, path):
+        """Side effect function"""
+        if path == '/etc/os-release':
+            return 1
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_distro_quoted_name(self, m_os_release, m_path_exists):
+        """Verify we get the correct name if the os-release file has
+        the distro name in quotes"""
+        m_os_release.return_value = OS_RELEASE_SLES
+        m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('sles', '12.3', platform.machine()), dist)
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_distro_bare_name(self, m_os_release, m_path_exists):
+        """Verify we get the correct name if the os-release file does not
+        have the distro name in quotes"""
+        m_os_release.return_value = OS_RELEASE_UBUNTU
+        m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('ubuntu', '16.04', platform.machine()), dist)
+
+    @mock.patch('platform.dist')
+    def test_get_linux_distro_no_data(self, m_platform_dist, m_path_exists):
+        """Verify we get no information if os-release does not exist"""
+        m_platform_dist.return_value = ('', '', '')
+        m_path_exists.return_value = 0
+        dist = util.get_linux_distro()
+        self.assertEqual(('', '', ''), dist)
+
+    @mock.patch('platform.dist')
+    def test_get_linux_distro_no_impl(self, m_platform_dist, m_path_exists):
+        """Verify we get an empty tuple when no information exists and
+        Exceptions are not propagated"""
+        m_platform_dist.side_effect = Exception()
+        m_path_exists.return_value = 0
+        dist = util.get_linux_distro()
+        self.assertEqual(('', '', ''), dist)
+
+    @mock.patch('platform.dist')
+    def test_get_linux_distro_plat_data(self, m_platform_dist, m_path_exists):
+        """Verify we get the correct platform information"""
+        m_platform_dist.return_value = ('foo', '1.1', 'aarch64')
+        m_path_exists.return_value = 0
+        dist = util.get_linux_distro()
+        self.assertEqual(('foo', '1.1', 'aarch64'), dist)
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_version.py b/cloudinit/tests/test_version.py
index d012f69..a96c2a4 100644
--- a/tests/unittests/test_version.py
+++ b/cloudinit/tests/test_version.py
@@ -3,6 +3,8 @@
 from cloudinit.tests.helpers import CiTestCase
 from cloudinit import version
 
+import mock
+
 
 class TestExportsFeatures(CiTestCase):
     def test_has_network_config_v1(self):
@@ -11,4 +13,19 @@ class TestExportsFeatures(CiTestCase):
     def test_has_network_config_v2(self):
         self.assertIn('NETWORK_CONFIG_V2', version.FEATURES)
 
+
+class TestVersionString(CiTestCase):
+    @mock.patch("cloudinit.version._PACKAGED_VERSION",
+                "17.2-3-gb05b9972-0ubuntu1")
+    def test_package_version_respected(self):
+        """If _PACKAGED_VERSION is filled in, then it should be returned."""
+        self.assertEqual("17.2-3-gb05b9972-0ubuntu1", version.version_string())
+
+    @mock.patch("cloudinit.version._PACKAGED_VERSION", "@@PACKAGED_VERSION@@")
+    @mock.patch("cloudinit.version.__VERSION__", "17.2")
+    def test_package_version_skipped(self):
+        """If _PACKAGED_VERSION is not modified, then return __VERSION__."""
+        self.assertEqual("17.2", version.version_string())
+
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 03a573a..8067979 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -15,6 +15,7 @@ import six
 import time
 
 from email.utils import parsedate
+from errno import ENOENT
 from functools import partial
 from itertools import count
 from requests import exceptions
@@ -80,6 +81,32 @@ def combine_url(base, *add_ons):
     return url
 
 
+def read_file_or_url(url, timeout=5, retries=10,
+                     headers=None, data=None, sec_between=1, ssl_details=None,
+                     headers_cb=None, exception_cb=None):
+    url = url.lstrip()
+    if url.startswith("/"):
+        url = "file://%s" % url
+    if url.lower().startswith("file://"):
+        if data:
+            LOG.warning("Unable to post data to file resource %s", url)
+        file_path = url[len("file://"):]
+        try:
+            with open(file_path, "rb") as fp:
+                contents = fp.read()
+        except IOError as e:
+            code = e.errno
+            if e.errno == ENOENT:
+                code = NOT_FOUND
+            raise UrlError(cause=e, code=code, headers=None, url=url)
+        return FileResponse(file_path, contents=contents)
+    else:
+        return readurl(url, timeout=timeout, retries=retries, headers=headers,
+                       headers_cb=headers_cb, data=data,
+                       sec_between=sec_between, ssl_details=ssl_details,
+                       exception_cb=exception_cb)
+
+
 # Made to have same accessors as UrlResponse so that the
 # read_file_or_url can return this or that object and the
 # 'user' of those objects will not need to know the difference.
@@ -96,7 +123,7 @@ class StringResponse(object):
         return True
 
     def __str__(self):
-        return self.contents
+        return self.contents.decode('utf-8')
 
 
 class FileResponse(StringResponse):
@@ -519,7 +546,7 @@ def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret,
         resource_owner_secret=token_secret,
         signature_method=oauth1.SIGNATURE_PLAINTEXT,
         timestamp=timestamp)
-    uri, signed_headers, body = client.sign(url)
+    _uri, signed_headers, _body = client.sign(url)
     return signed_headers
 
 # vi: ts=4 expandtab
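
Since read_file_or_url now lives in url_helper, callers get one entry point
for both local files and remote urls. A minimal sketch of the intended usage
(the path below is illustrative only):

    from cloudinit.url_helper import read_file_or_url

    # A bare path is treated as file://<path>; http(s) urls go through
    # readurl with the retry/timeout arguments. A missing file raises
    # UrlError with a NOT_FOUND code.
    resp = read_file_or_url("/tmp/example-user-data")
    if resp.ok():
        print(resp.contents[:16])   # raw bytes
        print(str(resp))            # utf-8 decoded text
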
diff --git a/cloudinit/user_data.py b/cloudinit/user_data.py
index cc55daf..ed83d2d 100644
--- a/cloudinit/user_data.py
+++ b/cloudinit/user_data.py
@@ -19,7 +19,7 @@ import six
 
 from cloudinit import handlers
 from cloudinit import log as logging
-from cloudinit.url_helper import UrlError
+from cloudinit.url_helper import read_file_or_url, UrlError
 from cloudinit import util
 
 LOG = logging.getLogger(__name__)
@@ -224,8 +224,8 @@ class UserDataProcessor(object):
                 content = util.load_file(include_once_fn)
             else:
                 try:
-                    resp = util.read_file_or_url(include_url,
-                                                 ssl_details=self.ssl_details)
+                    resp = read_file_or_url(include_url,
+                                            ssl_details=self.ssl_details)
                     if include_once_on and resp.ok():
                         util.write_file(include_once_fn, resp.contents,
                                         mode=0o600)
@@ -337,8 +337,10 @@ def is_skippable(part):
 
 # Converts a raw string into a mime message
 def convert_string(raw_data, content_type=NOT_MULTIPART_TYPE):
+    """convert a string (more likely bytes) or a message into
+    a mime message."""
     if not raw_data:
-        raw_data = ''
+        raw_data = b''
 
     def create_binmsg(data, content_type):
         maintype, subtype = content_type.split("/", 1)
@@ -346,15 +348,17 @@ def convert_string(raw_data, content_type=NOT_MULTIPART_TYPE):
         msg.set_payload(data)
         return msg
 
-    try:
-        data = util.decode_binary(util.decomp_gzip(raw_data))
-        if "mime-version:" in data[0:4096].lower():
-            msg = util.message_from_string(data)
-        else:
-            msg = create_binmsg(data, content_type)
-    except UnicodeDecodeError:
-        msg = create_binmsg(raw_data, content_type)
+    if isinstance(raw_data, six.text_type):
+        bdata = raw_data.encode('utf-8')
+    else:
+        bdata = raw_data
+    bdata = util.decomp_gzip(bdata, decode=False)
+    if b"mime-version:" in bdata[0:4096].lower():
+        msg = util.message_from_string(bdata.decode('utf-8'))
+    else:
+        msg = create_binmsg(bdata, content_type)
 
     return msg
 
+
 # vi: ts=4 expandtab
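
The convert_string rework normalizes text input to utf-8 bytes before gzip
detection and MIME parsing, which addresses the non-ASCII shell script case
in LP: #1768600. A minimal sketch of the behaviour:

    from cloudinit.user_data import convert_string

    # Text with non-ASCII content is encoded to utf-8 rather than
    # tripping a UnicodeDecodeError during decompression/inspection.
    msg = convert_string(u'#!/bin/sh\necho "héllo"\n',
                         content_type='text/x-shellscript')
    print(msg.get_content_type())   # text/x-shellscript
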
diff --git a/cloudinit/util.py b/cloudinit/util.py
index acdc0d8..6da9511 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -576,6 +576,39 @@ def get_cfg_option_int(yobj, key, default=0):
     return int(get_cfg_option_str(yobj, key, default=default))
 
 
+def get_linux_distro():
+    distro_name = ''
+    distro_version = ''
+    if os.path.exists('/etc/os-release'):
+        os_release = load_file('/etc/os-release')
+        for line in os_release.splitlines():
+            if line.strip().startswith('ID='):
+                distro_name = line.split('=')[-1]
+                distro_name = distro_name.replace('"', '')
+            if line.strip().startswith('VERSION_ID='):
+                # Let's hope for the best that distros stay consistent ;)
+                distro_version = line.split('=')[-1]
+                distro_version = distro_version.replace('"', '')
+    else:
+        dist = ('', '', '')
+        try:
+            # Will be removed in 3.7
+            dist = platform.dist()  # pylint: disable=W1505
+        except Exception:
+            pass
+        finally:
+            found = None
+            for entry in dist:
+                if entry:
+                    found = 1
+            if not found:
+                LOG.warning('Unable to determine distribution, template '
+                            'expansion may have unexpected results')
+        return dist
+
+    return (distro_name, distro_version, platform.machine())
+
+
 def system_info():
     info = {
         'platform': platform.platform(),
@@ -583,19 +616,19 @@ def system_info():
         'release': platform.release(),
         'python': platform.python_version(),
         'uname': platform.uname(),
-        'dist': platform.dist(),  # pylint: disable=W1505
+        'dist': get_linux_distro()
     }
     system = info['system'].lower()
     var = 'unknown'
     if system == "linux":
         linux_dist = info['dist'][0].lower()
-        if linux_dist in ('centos', 'fedora', 'debian'):
+        if linux_dist in ('centos', 'debian', 'fedora', 'rhel', 'suse'):
             var = linux_dist
         elif linux_dist in ('ubuntu', 'linuxmint', 'mint'):
             var = 'ubuntu'
         elif linux_dist == 'redhat':
             var = 'rhel'
-        elif linux_dist == 'suse':
+        elif linux_dist in ('opensuse', 'sles'):
             var = 'suse'
         else:
             var = 'linux'
@@ -857,37 +890,6 @@ def fetch_ssl_details(paths=None):
     return ssl_details
 
 
-def read_file_or_url(url, timeout=5, retries=10,
-                     headers=None, data=None, sec_between=1, ssl_details=None,
-                     headers_cb=None, exception_cb=None):
-    url = url.lstrip()
-    if url.startswith("/"):
-        url = "file://%s" % url
-    if url.lower().startswith("file://"):
-        if data:
-            LOG.warning("Unable to post data to file resource %s", url)
-        file_path = url[len("file://"):]
-        try:
-            contents = load_file(file_path, decode=False)
-        except IOError as e:
-            code = e.errno
-            if e.errno == ENOENT:
-                code = url_helper.NOT_FOUND
-            raise url_helper.UrlError(cause=e, code=code, headers=None,
-                                      url=url)
-        return url_helper.FileResponse(file_path, contents=contents)
-    else:
-        return url_helper.readurl(url,
-                                  timeout=timeout,
-                                  retries=retries,
-                                  headers=headers,
-                                  headers_cb=headers_cb,
-                                  data=data,
-                                  sec_between=sec_between,
-                                  ssl_details=ssl_details,
-                                  exception_cb=exception_cb)
-
-
 def load_yaml(blob, default=None, allowed=(dict,)):
     loaded = default
     blob = decode_binary(blob)
@@ -905,8 +907,20 @@ def load_yaml(blob, default=None, allowed=(dict,)):
                              " but got %s instead") %
                             (allowed, type_utils.obj_name(converted)))
         loaded = converted
-    except (yaml.YAMLError, TypeError, ValueError):
-        logexc(LOG, "Failed loading yaml blob")
+    except (yaml.YAMLError, TypeError, ValueError) as e:
+        msg = 'Failed loading yaml blob'
+        mark = None
+        if hasattr(e, 'context_mark') and getattr(e, 'context_mark'):
+            mark = getattr(e, 'context_mark')
+        elif hasattr(e, 'problem_mark') and getattr(e, 'problem_mark'):
+            mark = getattr(e, 'problem_mark')
+        if mark:
+            msg += (
+                '. Invalid format at line {line} column {col}: "{err}"'.format(
+                    line=mark.line + 1, col=mark.column + 1, err=e))
+        else:
+            msg += '. {err}'.format(err=e)
+        LOG.warning(msg)
     return loaded
 
 
@@ -925,12 +939,14 @@ def read_seeded(base="", ext="", timeout=5, retries=10, file_retries=0):
         ud_url = "%s%s%s" % (base, "user-data", ext)
         md_url = "%s%s%s" % (base, "meta-data", ext)
 
-    md_resp = read_file_or_url(md_url, timeout, retries, file_retries)
+    md_resp = url_helper.read_file_or_url(md_url, timeout, retries,
+                                          file_retries)
     md = None
     if md_resp.ok():
         md = load_yaml(decode_binary(md_resp.contents), default={})
 
-    ud_resp = read_file_or_url(ud_url, timeout, retries, file_retries)
+    ud_resp = url_helper.read_file_or_url(ud_url, timeout, retries,
+                                          file_retries)
     ud = None
     if ud_resp.ok():
         ud = ud_resp.contents
@@ -1154,7 +1170,9 @@ def gethostbyaddr(ip):
 
 def is_resolvable_url(url):
     """determine if this url is resolvable (existing or ip)."""
-    return is_resolvable(urlparse.urlparse(url).hostname)
+    return log_time(logfunc=LOG.debug, msg="Resolving URL: " + url,
+                    func=is_resolvable,
+                    args=(urlparse.urlparse(url).hostname,))
 
 
 def search_for_mirror(candidates):
@@ -1446,7 +1464,7 @@ def get_config_logfiles(cfg):
     for fmt in get_output_cfg(cfg, None):
         if not fmt:
             continue
-        match = re.match('(?P<type>\||>+)\s*(?P<target>.*)', fmt)
+        match = re.match(r'(?P<type>\||>+)\s*(?P<target>.*)', fmt)
         if not match:
             continue
         target = match.group('target')
@@ -1608,7 +1626,8 @@ def mounts():
     return mounted
 
 
-def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True):
+def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True,
+             update_env_for_mount=None):
     """
     Mount the device, call method 'callback' passing the directory
     in which it was mounted, then unmount.  Return whatever 'callback'
@@ -1670,7 +1689,7 @@ def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True):
                         mountcmd.extend(['-t', mtype])
                     mountcmd.append(device)
                     mountcmd.append(tmpd)
-                    subp(mountcmd)
+                    subp(mountcmd, update_env=update_env_for_mount)
                     umount = tmpd  # This forces it to be unmounted (when set)
                     mountpoint = tmpd
                     break
@@ -1857,9 +1876,55 @@ def subp_blob_in_tempfile(blob, *args, **kwargs):
         return subp(*args, **kwargs)
 
 
-def subp(args, data=None, rcs=None, env=None, capture=True, shell=False,
+def subp(args, data=None, rcs=None, env=None, capture=True,
+         combine_capture=False, shell=False,
          logstring=False, decode="replace", target=None, update_env=None,
          status_cb=None):
+    """Run a subprocess.
+
+    :param args: command to run in a list. [cmd, arg1, arg2...]
+    :param data: input to the command, made available on its stdin.
+    :param rcs:
+        a list of allowed return codes.  If subprocess exits with a value not
+        in this list, a ProcessExecutionError will be raised.  By default,
+        data is returned as a string.  See 'decode' parameter.
+    :param env: a dictionary for the command's environment.
+    :param capture:
+        boolean indicating if output should be captured.  If True, then stderr
+        and stdout will be returned.  If False, they will not be redirected.
+    :param combine_capture:
+        boolean indicating if stderr should be redirected to stdout. When True,
+        interleaved stderr and stdout will be returned as the first element of
+        a tuple; the second will be an empty string or bytes (per decode).
+        If combine_capture is True, output is captured regardless of
+        the value of capture.
+    :param shell: boolean indicating if this should be run with a shell.
+    :param logstring:
+        the command will be logged to DEBUG.  If it contains info that should
+        not be logged, then logstring will be logged instead.
+    :param decode:
+        if False, no decoding will be done and returned stdout and stderr will
+        be bytes.  Other allowed values are 'strict', 'ignore', and 'replace'.
+        These values are passed through to bytes().decode() as the 'errors'
+        parameter.  There is no support for decoding to other than utf-8.
+    :param target:
+        not supported, kwarg present only to make function signature similar
+        to curtin's subp.
+    :param update_env:
+        update the environment for this command with this dictionary.
+        this will not affect the current process's os.environ.
+    :param status_cb:
+        call this function with a single string argument before starting
+        and after finishing.
+
+    :return:
+        if not capturing, return is (None, None)
+        if capturing, stdout and stderr are returned.
+            if decode:
+                entries in tuple will be python2 unicode or python3 string
+            if not decode:
+                entries in tuple will be python2 string or python3 bytes
+    """
 
     # not supported in cloud-init (yet), for now kept in the call signature
     # to ease maintaining code shared between cloud-init and curtin
@@ -1885,7 +1950,8 @@ def subp(args, data=None, rcs=None, env=None, capture=True, shell=False,
         status_cb('Begin run command: {command}\n'.format(command=command))
     if not logstring:
         LOG.debug(("Running command %s with allowed return codes %s"
-                   " (shell=%s, capture=%s)"), args, rcs, shell, capture)
+                   " (shell=%s, capture=%s)"),
+                  args, rcs, shell, 'combine' if combine_capture else capture)
     else:
         LOG.debug(("Running hidden command to protect sensitive "
                    "input/output logstring: %s"), logstring)
@@ -1896,6 +1962,9 @@ def subp(args, data=None, rcs=None, env=None, capture=True, shell=False,
     if capture:
         stdout = subprocess.PIPE
         stderr = subprocess.PIPE
+    if combine_capture:
+        stdout = subprocess.PIPE
+        stderr = subprocess.STDOUT
     if data is None:
         # using devnull assures any reads get null, rather
         # than possibly waiting on input.
@@ -1934,10 +2003,11 @@ def subp(args, data=None, rcs=None, env=None, capture=True, shell=False,
             devnull_fp.close()
 
     # Just ensure blank instead of none.
-    if not out and capture:
-        out = b''
-    if not err and capture:
-        err = b''
+    if capture or combine_capture:
+        if not out:
+            out = b''
+        if not err:
+            err = b''
     if decode:
         def ldecode(data, m='utf-8'):
             if not isinstance(data, bytes):
@@ -2061,24 +2131,33 @@ def is_container():
     return False
 
 
-def get_proc_env(pid):
+def get_proc_env(pid, encoding='utf-8', errors='replace'):
     """
     Return the environment in a dict that a given process id was started with.
-    """
 
-    env = {}
-    fn = os.path.join("/proc/", str(pid), "environ")
+    @param encoding: if true, decode the contents with
+                     .decode(encoding, errors) and return text.
+                     if false, return binary (bytes).
+    @param errors:   only used if encoding is true."""
+    fn = os.path.join("/proc", str(pid), "environ")
+
     try:
-        contents = load_file(fn)
-        toks = contents.split("\x00")
-        for tok in toks:
-            if tok == "":
-                continue
-            (name, val) = tok.split("=", 1)
-            if name:
-                env[name] = val
+        contents = load_file(fn, decode=False)
     except (IOError, OSError):
-        pass
+        return {}
+
+    env = {}
+    null, equal = (b"\x00", b"=")
+    if encoding:
+        null, equal = ("\x00", "=")
+        contents = contents.decode(encoding, errors)
+
+    for tok in contents.split(null):
+        if not tok:
+            continue
+        (name, val) = tok.split(equal, 1)
+        if name:
+            env[name] = val
     return env
 
 
@@ -2214,7 +2293,7 @@ def parse_mtab(path):
 def find_freebsd_part(label_part):
     if label_part.startswith("/dev/label/"):
         target_label = label_part[5:]
-        (label_part, err) = subp(['glabel', 'status', '-s'])
+        (label_part, _err) = subp(['glabel', 'status', '-s'])
         for labels in label_part.split("\n"):
             items = labels.split()
             if len(items) > 0 and items[0].startswith(target_label):
@@ -2275,8 +2354,8 @@ def parse_mount(path):
     # the regex is a bit complex. to better understand this regex see:
     # https://regex101.com/r/2F6c1k/1
     # https://regex101.com/r/T2en7a/1
-    regex = r'^(/dev/[\S]+|.*zroot\S*?) on (/[\S]*) ' + \
-            '(?=(?:type)[\s]+([\S]+)|\(([^,]*))'
+    regex = (r'^(/dev/[\S]+|.*zroot\S*?) on (/[\S]*) '
+             r'(?=(?:type)[\s]+([\S]+)|\(([^,]*))')
     for line in mount_locs:
         m = re.search(regex, line)
         if not m:
@@ -2545,11 +2624,21 @@ def _call_dmidecode(key, dmidecode_path):
         if result.replace(".", "") == "":
             return ""
         return result
-    except (IOError, OSError) as _err:
-        LOG.debug('failed dmidecode cmd: %s\n%s', cmd, _err)
+    except (IOError, OSError) as e:
+        LOG.debug('failed dmidecode cmd: %s\n%s', cmd, e)
         return None
 
 
+def is_x86(uname_arch=None):
+    """Return True if platform is x86-based"""
+    if uname_arch is None:
+        uname_arch = os.uname()[4]
+    x86_arch_match = (
+        uname_arch == 'x86_64' or
+        (uname_arch[0] == 'i' and uname_arch[2:] == '86'))
+    return x86_arch_match
+
+
 def read_dmi_data(key):
     """
     Wrapper for reading DMI data.
@@ -2577,8 +2666,7 @@ def read_dmi_data(key):
 
     # running dmidecode can be problematic on some arches (LP: #1243287)
     uname_arch = os.uname()[4]
-    if not (uname_arch == "x86_64" or
-            (uname_arch.startswith("i") and uname_arch[2:] == "86") or
+    if not (is_x86(uname_arch) or
             uname_arch == 'aarch64' or
             uname_arch == 'amd64'):
         LOG.debug("dmidata is not supported on %s", uname_arch)
@@ -2727,4 +2815,19 @@ def mount_is_read_write(mount_point):
     mount_opts = result[-1].split(',')
     return mount_opts[0] == 'rw'
 
+
+def udevadm_settle(exists=None, timeout=None):
+    """Invoke udevadm settle with optional exists and timeout parameters"""
+    settle_cmd = ["udevadm", "settle"]
+    if exists:
+        # skip the settle if the requested path already exists
+        if os.path.exists(exists):
+            return
+        settle_cmd.extend(['--exit-if-exists=%s' % exists])
+    if timeout:
+        settle_cmd.extend(['--timeout=%s' % timeout])
+
+    return subp(settle_cmd)
+
+
 # vi: ts=4 expandtab
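
The new combine_capture flag on subp interleaves stderr into stdout, so
callers that want a single transcript no longer need shell redirection.
A minimal sketch:

    from cloudinit import util

    # With combine_capture=True stderr is redirected to stdout; err
    # comes back as an empty string (or bytes, if decode is False).
    out, err = util.subp(
        ['sh', '-c', 'echo to-stdout; echo to-stderr 1>&2'],
        combine_capture=True)
    assert 'to-stderr' in out and err == ''
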
diff --git a/cloudinit/version.py b/cloudinit/version.py
index ccd0f84..3b60fc4 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,8 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "18.2"
+__VERSION__ = "18.3"
+_PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
 
 FEATURES = [
     # supports network config version 1
@@ -15,6 +16,9 @@ FEATURES = [
 
 
 def version_string():
+    """Extract a version string from cloud-init."""
+    if not _PACKAGED_VERSION.startswith('@@'):
+        return _PACKAGED_VERSION
     return __VERSION__
 
 # vi: ts=4 expandtab
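
version_string now prefers the build-time value: packaging substitutes
_PACKAGED_VERSION in place of the @@PACKAGED_VERSION@@ template (the
debian/rules change noted in the changelog), while an unmodified source
tree falls back to __VERSION__. A small sketch of the fallback behaviour:

    from cloudinit import version

    # In an unpacked source tree _PACKAGED_VERSION still starts with
    # '@@', so the upstream version is returned.
    print(version.version_string())   # e.g. '18.3'
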
diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
index 3129d4e..5619de3 100644
--- a/config/cloud.cfg.tmpl
+++ b/config/cloud.cfg.tmpl
@@ -151,6 +151,8 @@ system_info:
      groups: [adm, audio, cdrom, dialout, dip, floppy, lxd, netdev, plugdev, sudo, video]
      sudo: ["ALL=(ALL) NOPASSWD:ALL"]
      shell: /bin/bash
+   # Automatically discover the best ntp_client
+   ntp_client: auto
    # Other config here will be given to the distro class and/or path classes
    paths:
       cloud_dir: /var/lib/cloud/
diff --git a/debian/changelog b/debian/changelog
index 57c4454..d96d12e 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,9 +1,96 @@
-cloud-init (18.2-4-g05926e48-0ubuntu1~17.10.3) UNRELEASED; urgency=medium
+cloud-init (18.3-0ubuntu1~17.10.1) artful-proposed; urgency=medium
 
   * debian/rules: update version.version_string to contain packaged version.
     (LP: #1770712)
-
- -- Scott Moser <smoser@xxxxxxxxxx>  Mon, 04 Jun 2018 10:14:17 -0400
+  * New upstream release. (LP: #1777912)
+    - release 18.3
+    - docs: represent sudo:false in docs for user_groups config module
+    - Explicitly prevent `sudo` access for user module [Jacob Bednarz]
+    - lxd: Delete default network and detach device if lxd-init created them.
+    - openstack: avoid unneeded metadata probe on non-openstack platforms
+    - stages: fix tracebacks if a module stage is undefined or empty
+      [Robert Schweikert]
+    - Be more safe on string/bytes when writing multipart user-data to disk.
+    - Fix get_proc_env for pids that have non-utf8 content in environment.
+    - tests: fix salt_minion integration test on bionic and later
+    - tests: provide human-readable integration test summary when --verbose
+    - tests: skip chrony integration tests on lxd running artful or older
+    - test: add optional --preserve-instance arg to integration tests
+    - netplan: fix mtu if provided by network config for all rendered types
+    - tests: remove pip install workarounds for pylxd, take upstream fix.
+    - subp: support combine_capture argument.
+    - tests: ordered tox dependencies for pylxd install
+    - util: add get_linux_distro function to replace platform.dist
+      [Robert Schweikert]
+    - pyflakes: fix unused variable references identified by pyflakes 2.0.0.
+    - Do not use the systemd_prefix macro, not available in this environment
+      [Robert Schweikert]
+    - doc: Add config info to ec2, openstack and cloudstack datasource docs
+    - Enable SmartOS network metadata to work with netplan via per-subnet
+      routes [Dan McDonald]
+    - openstack: Allow discovery in init-local using dhclient in a sandbox.
+    - tests: Avoid using https in httpretty, improve HttPretty test case.
+    - yaml_load/schema: Add invalid line and column nums to error message
+    - Azure: Ignore NTFS mount errors when checking ephemeral drive
+      [Paul Meyer]
+    - packages/brpm: Get proper dependencies for cmdline distro.
+    - packages: Make rpm spec files patch in package version like in debs.
+    - tools/run-container: replace tools/run-centos with more generic.
+    - Update version.version_string to contain packaged version.
+    - cc_mounts: Do not add devices to fstab that are already present.
+      [Lars Kellogg-Stedman]
+    - ds-identify: ensure that we have certain tokens in PATH.
+    - tests: enable Ubuntu Cosmic in integration tests [Joshua Powers]
+    - read_file_or_url: move to url_helper, fix bug in its FileResponse.
+    - cloud_tests: help pylint
+    - flake8: fix flake8 errors in previous commit.
+    - typos: Fix spelling mistakes in cc_mounts.py log messages [Stephen Ford]
+    - tests: restructure SSH and initial connections [Joshua Powers]
+    - ds-identify: recognize container-other as a container, test SmartOS.
+    - cloud-config.service: run After snap.seeded.service.
+    - tests: do not rely on host /proc/cmdline in test_net.py
+      [Lars Kellogg-Stedman]
+    - ds-identify: Remove dupe call to is_ds_enabled, improve debug message.
+    - SmartOS: fix get_interfaces for nics that do not have addr_assign_type.
+    - tests: fix package and ca_cert cloud_tests on bionic
+    - ds-identify: make shellcheck 0.4.6 happy with ds-identify.
+    - pycodestyle: Fix deprecated string literals, move away from flake8.
+    - azure: Add reported ready marker file. [Joshua Chan]
+    - tools: Support adding a release suffix through packages/bddeb.
+    - FreeBSD: Invoke growfs on ufs filesystems such that it does not prompt.
+      [Harm Weites]
+    - tools: Re-use the orig tarball in packages/bddeb if it is around.
+    - netinfo: fix netdev_pformat when a nic does not have an address assigned.
+    - collect-logs: add -v flag, write to stderr, limit journal to single boot.
+    - IBMCloud: Disable config-drive and nocloud only if IBMCloud is enabled.
+    - Add reporting events and log_time around early source of blocking time
+    - IBMCloud: recognize provisioning environment during debug boots.
+    - net: detect unstable network names and trigger a settle if needed
+    - IBMCloud: improve documentation in datasource.
+    - sysconfig: dhcp6 subnet type should not imply dhcpv4 [Vitaly Kuznetsov]
+    - packages/debian/control.in: add missing dependency on iproute2.
+    - DataSourceSmartOS: add locking of serial device. [Mike Gerdts]
+    - DataSourceSmartOS: sdc:hostname is ignored [Mike Gerdts]
+    - DataSourceSmartOS: list() should always return a list [Mike Gerdts]
+    - schema: in validation, raise ImportError if strict but no jsonschema.
+    - set_passwords: Add newline to end of sshd config, only restart if
+      updated.
+    - pylint: pay attention to unused variable warnings.
+    - doc: Add documentation for AliYun datasource. [Junjie Wang]
+    - Schema: do not warn on duplicate items in commands.
+    - net: Depend on iproute2's ip instead of net-tools ifconfig or route
+    - DataSourceSmartOS: fix hang when metadata service is down [Mike Gerdts]
+    - DataSourceSmartOS: change default fs on ephemeral disk from ext3 to
+      ext4. [Mike Gerdts]
+    - pycodestyle: Fix invalid escape sequences in string literals.
+    - Implement bash completion script for cloud-init command line
+    - tools: Fix make-tarball cli tool usage for development
+    - renderer: support unicode in render_from_file.
+    - Implement ntp client spec with auto support for distro selection
+    - Apport: add Brightbox, IBM, LXD, and OpenTelekomCloud to list of clouds.
+    - tests: fix ec2 integration network metadata validation
+
+ -- Chad Smith <chad.smith@xxxxxxxxxxxxx>  Wed, 20 Jun 2018 13:19:44 -0600
 
 cloud-init (18.2-4-g05926e48-0ubuntu1~17.10.2) artful-proposed; urgency=medium
 
diff --git a/doc/examples/cloud-config-disk-setup.txt b/doc/examples/cloud-config-disk-setup.txt
index dd91477..43a62a2 100644
--- a/doc/examples/cloud-config-disk-setup.txt
+++ b/doc/examples/cloud-config-disk-setup.txt
@@ -37,7 +37,7 @@ fs_setup:
 # Default disk definitions for SmartOS
 # ------------------------------------
 
-device_aliases: {'ephemeral0': '/dev/sdb'}
+device_aliases: {'ephemeral0': '/dev/vdb'}
 disk_setup:
     ephemeral0:
          table_type: mbr
@@ -46,7 +46,7 @@ disk_setup:
 
 fs_setup:
     - label: ephemeral0
-      filesystem: ext3
+      filesystem: ext4
       device: ephemeral0.0
 
 # Caveat for SmartOS: if ephemeral disk is not defined, then the disk will
diff --git a/doc/examples/cloud-config-user-groups.txt b/doc/examples/cloud-config-user-groups.txt
index 7bca24a..01ecad7 100644
--- a/doc/examples/cloud-config-user-groups.txt
+++ b/doc/examples/cloud-config-user-groups.txt
@@ -30,6 +30,11 @@ users:
     gecos: Magic Cloud App Daemon User
     inactive: true
     system: true
+  - name: fizzbuzz
+    sudo: False
+    ssh_authorized_keys:
+      - <ssh pub key 1>
+      - <ssh pub key 2>
   - snapuser: joe@xxxxxxxxxx
 
 # Valid Values:
@@ -71,13 +76,21 @@ users:
 #   no_log_init: When set to true, do not initialize lastlog and faillog database.
 #   ssh_import_id: Optional. Import SSH ids
 #   ssh_authorized_keys: Optional. [list] Add keys to user's authorized keys file
-#   sudo: Defaults to none. Set to the sudo string you want to use, i.e.
-#           ALL=(ALL) NOPASSWD:ALL. To add multiple rules, use the following
-#           format.
-#               sudo:
-#                   - ALL=(ALL) NOPASSWD:/bin/mysql
-#                   - ALL=(ALL) ALL
-#           Note: Please double check your syntax and make sure it is valid.
+#   sudo: Defaults to none. Accepts a sudo rule string, a list of sudo rule
+#         strings or False to explicitly deny sudo usage. Examples:
+#
+#         Allow a user unrestricted sudo access.
+#             sudo:  ALL=(ALL) NOPASSWD:ALL
+#
+#         Adding multiple sudo rule strings.
+#             sudo:
+#               - ALL=(ALL) NOPASSWD:/bin/mysql
+#               - ALL=(ALL) ALL
+#
+#         Prevent sudo access for a user.
+#             sudo: False
+#
+#         Note: Please double check your syntax and make sure it is valid.
 #               cloud-init does not parse/check the syntax of the sudo
 #               directive.
 #   system: Create the user as a system user. This means no home directory.
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index 7e2854d..30e57d8 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -17,6 +17,103 @@ own way) internally a datasource abstract class was created to allow for a
 single way to access the different cloud systems methods to provide this data
 through the typical usage of subclasses.
 
+
+instance-data
+-------------
+For reference, cloud-init stores all the metadata, vendordata and userdata
+provided by a cloud in a json blob at ``/run/cloud-init/instance-data.json``.
+While the json contains datasource-specific keys and names, cloud-init will
+maintain a minimal set of standardized keys that will remain stable on any
+cloud. Standardized instance-data keys will be present under a "v1" key.
+Any datasource metadata cloud-init consumes will all be present under the
+"ds" key.
+
+Below is an instance-data.json example from an OpenStack instance:
+
+.. sourcecode:: json
+
+  {
+   "base64-encoded-keys": [
+    "ds/meta-data/random_seed",
+    "ds/user-data"
+   ],
+   "ds": {
+    "ec2_metadata": {
+     "ami-id": "ami-0000032f",
+     "ami-launch-index": "0",
+     "ami-manifest-path": "FIXME",
+     "block-device-mapping": {
+      "ami": "vda",
+      "ephemeral0": "/dev/vdb",
+      "root": "/dev/vda"
+     },
+     "hostname": "xenial-test.novalocal",
+     "instance-action": "none",
+     "instance-id": "i-0006e030",
+     "instance-type": "m1.small",
+     "local-hostname": "xenial-test.novalocal",
+     "local-ipv4": "10.5.0.6",
+     "placement": {
+      "availability-zone": "None"
+     },
+     "public-hostname": "xenial-test.novalocal",
+     "public-ipv4": "10.245.162.145",
+     "reservation-id": "r-fxm623oa",
+     "security-groups": "default"
+    },
+    "meta-data": {
+     "availability_zone": null,
+     "devices": [],
+     "hostname": "xenial-test.novalocal",
+     "instance-id": "3e39d278-0644-4728-9479-678f9212d8f0",
+     "launch_index": 0,
+     "local-hostname": "xenial-test.novalocal",
+     "name": "xenial-test",
+     "project_id": "e0eb2d2538814...",
+     "random_seed": "A6yPN...",
+     "uuid": "3e39d278-0644-4728-9479-678f92..."
+    },
+    "network_json": {
+     "links": [
+      {
+       "ethernet_mac_address": "fa:16:3e:7d:74:9b",
+       "id": "tap9ca524d5-6e",
+       "mtu": 8958,
+       "type": "ovs",
+       "vif_id": "9ca524d5-6e5a-4809-936a-6901..."
+      }
+     ],
+     "networks": [
+      {
+       "id": "network0",
+       "link": "tap9ca524d5-6e",
+       "network_id": "c6adfc18-9753-42eb-b3ea-18b57e6b837f",
+       "type": "ipv4_dhcp"
+      }
+     ],
+     "services": [
+      {
+       "address": "10.10.160.2",
+       "type": "dns"
+      }
+     ]
+    },
+    "user-data": "I2Nsb3VkLWNvbmZpZ...",
+    "vendor-data": null
+   },
+   "v1": {
+    "availability-zone": null,
+    "cloud-name": "openstack",
+    "instance-id": "3e39d278-0644-4728-9479-678f9212d8f0",
+    "local-hostname": "xenial-test",
+    "region": null
+   }
+  }
+
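+On a booted instance the standardized values can be queried from this file;
+for example with ``jq`` (shown purely as an illustration, assuming jq is
+installed):
+
+.. sourcecode:: shell-session
+
+  $ jq '.v1."cloud-name"' /run/cloud-init/instance-data.json
+  "openstack"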
+
+
+Datasource API
+--------------
 The current interface that a datasource object must provide is the following:
 
 .. sourcecode:: python
@@ -80,6 +177,7 @@ Follow for more information.
 .. toctree::
    :maxdepth: 2
 
+   datasources/aliyun.rst
    datasources/altcloud.rst
    datasources/azure.rst
    datasources/cloudsigma.rst
diff --git a/doc/rtd/topics/datasources/aliyun.rst b/doc/rtd/topics/datasources/aliyun.rst
new file mode 100644
index 0000000..3f4f40c
--- /dev/null
+++ b/doc/rtd/topics/datasources/aliyun.rst
@@ -0,0 +1,74 @@
+.. _datasource_aliyun:
+
+Alibaba Cloud (AliYun)
+======================
+The ``AliYun`` datasource reads data from Alibaba Cloud ECS.  Support is
+present in cloud-init since 0.7.9.
+
+Metadata Service
+----------------
+The Alibaba Cloud metadata service is available at the well-known URL
+``http://100.100.100.200/``. For more information, see the Alibaba Cloud ECS
+documentation on `metadata
+<https://www.alibabacloud.com/help/zh/faq-detail/49122.htm>`__.
+
+Versions
+^^^^^^^^
+Like the EC2 metadata service, Alibaba Cloud's metadata service provides
+versioned data under specific paths.  As of April 2018, there are only
+``2016-01-01`` and ``latest`` versions.
+
+It is expected that the dated version will maintain a stable interface, but
+``latest`` may change content at a future date.
+
+Cloud-init uses the ``2016-01-01`` version.
+
+You can list the versions available to your instance with:
+
+.. code-block:: shell-session
+
+    $ curl http://100.100.100.200/
+    2016-01-01
+    latest
+
+Metadata
+^^^^^^^^
+Instance metadata can be queried at
+``http://100.100.100.200/2016-01-01/meta-data``
+
+.. code-block:: shell-session
+
+    $ curl http://100.100.100.200/2016-01-01/meta-data
+    dns-conf/
+    eipv4
+    hostname
+    image-id
+    instance-id
+    instance/
+    mac
+    network-type
+    network/
+    ntp-conf/
+    owner-account-id
+    private-ipv4
+    public-keys/
+    region-id
+    serial-number
+    source-address
+    sub-private-ipv4-list
+    vpc-cidr-block
+    vpc-id
+
+Userdata
+^^^^^^^^
+If provided, user-data will appear at
+``http://100.100.100.200/2016-01-01/user-data``.
+If no user-data is provided, this will return a 404.
+
+.. code-block:: shell-session
+
+    $ curl http://100.100.100.200/2016-01-01/user-data
+    #!/bin/sh
+    echo "Hello World."
+
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/cloudstack.rst b/doc/rtd/topics/datasources/cloudstack.rst
index 225093a..a3101ed 100644
--- a/doc/rtd/topics/datasources/cloudstack.rst
+++ b/doc/rtd/topics/datasources/cloudstack.rst
@@ -4,7 +4,9 @@ CloudStack
 ==========
 
 `Apache CloudStack`_ expose user-data, meta-data, user password and account
-sshkey thru the Virtual-Router. For more details on meta-data and user-data,
+sshkey through the Virtual-Router. The datasource obtains the VR address via
+DHCP lease information given to the instance.
+For more details on meta-data and user-data,
 refer the `CloudStack Administrator Guide`_. 
 
 URLs to access user-data and meta-data from the Virtual Machine. Here 10.1.1.1
@@ -18,14 +20,26 @@ is the Virtual Router IP:
 
 Configuration
 -------------
+The following configuration can be set for the datasource in system
+configuration (in `/etc/cloud/cloud.cfg` or `/etc/cloud/cloud.cfg.d/`).
 
-Apache CloudStack datasource can be configured as follows:
+The settings that may be configured are:
 
-.. code:: yaml
+ * **max_wait**:  the maximum amount of clock time in seconds that should be
+   spent searching metadata_urls.  A value less than zero will result in only
+   one request being made, to the first in the list. (default: 120)
+ * **timeout**: the timeout value provided to urlopen for each individual http
+   request.  This is used both when selecting a metadata_url and when crawling
+   the metadata service. (default: 50)
 
-    datasource:
-      CloudStack: {}
-      None: {}
+An example configuration with the default values is provided below:
+
+.. sourcecode:: yaml
+
+  datasource:
+   CloudStack:
+    max_wait: 120
+    timeout: 50
   datasource_list:
     - CloudStack
 
diff --git a/doc/rtd/topics/datasources/ec2.rst b/doc/rtd/topics/datasources/ec2.rst
index 3bc66e1..64c325d 100644
--- a/doc/rtd/topics/datasources/ec2.rst
+++ b/doc/rtd/topics/datasources/ec2.rst
@@ -60,4 +60,34 @@ To see which versions are supported from your cloud provider use the following U
     ...
     latest
 
+
+
+Configuration
+-------------
+The following configuration can be set for the datasource in system
+configuration (in `/etc/cloud/cloud.cfg` or `/etc/cloud/cloud.cfg.d/`).
+
+The settings that may be configured are:
+
+ * **metadata_urls**: This list of urls will be searched for an Ec2
+   metadata service. The first entry that successfully returns a 200 response
+   for <url>/<version>/meta-data/instance-id will be selected.
+   (default: ['http://169.254.169.254', 'http://instance-data:8773']).
+ * **max_wait**:  the maximum amount of clock time in seconds that should be
+   spent searching metadata_urls.  A value less than zero will result in only
+   one request being made, to the first in the list. (default: 120)
+ * **timeout**: the timeout value provided to urlopen for each individual http
+   request.  This is used both when selecting a metadata_url and when crawling
+   the metadata service. (default: 50)
+
+An example configuration with the default values is provided below:
+
+.. sourcecode:: yaml
+
+  datasource:
+   Ec2:
+    metadata_urls: ["http://169.254.169.254:80", "http://instance-data:8773"]
+    max_wait: 120
+    timeout: 50
+
 .. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/openstack.rst b/doc/rtd/topics/datasources/openstack.rst
index 43592de..421da08 100644
--- a/doc/rtd/topics/datasources/openstack.rst
+++ b/doc/rtd/topics/datasources/openstack.rst
@@ -7,6 +7,21 @@ This datasource supports reading data from the
 `OpenStack Metadata Service
 <https://docs.openstack.org/nova/latest/admin/networking-nova.html#metadata-service>`_.
 
+Discovery
+---------
+To determine whether a platform may be OpenStack, cloud-init checks the
+following environment attributes (a manual check is shown after the list):
+
+ * Maybe OpenStack if:
+
+   * **non-x86 cpu architecture**: because DMI data is buggy on some
+     architectures
+ * Is OpenStack **if x86 architecture and ANY** of the following:
+
+   * **/proc/1/environ**: Nova-lxd contains *product_name=OpenStack Nova*
+   * **DMI product_name**: Either *Openstack Nova* or *OpenStack Compute*
+   * **DMI chassis_asset_tag** is *OpenTelekomCloud*
+
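+For example, on an x86 instance the DMI product name that cloud-init
+inspects can be checked manually (the sysfs path and output are shown for
+illustration):
+
+.. sourcecode:: shell-session
+
+  $ cat /sys/class/dmi/id/product_name
+  OpenStack Nova
+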
+
 Configuration
 -------------
 The following configuration can be set for the datasource in system
@@ -25,18 +40,22 @@ The settings that may be configured are:
    the metadata service. (default: 10)
  * **retries**: The number of retries that should be done for an http request.
    This value is used only after metadata_url is selected. (default: 5)
+ * **apply_network_config**: A boolean specifying whether to configure the
+   network for the instance based on network_data.json provided by the
+   metadata service. When False, only configure DHCP on the primary NIC for
+   this instance. (default: True)
 
-An example configuration with the default values is provided as example below:
+An example configuration with the default values is provided below:
 
 .. sourcecode:: yaml
 
-  #cloud-config
   datasource:
    OpenStack:
    metadata_urls: ["http://169.254.169.254"]
     max_wait: -1
     timeout: 10
     retries: 5
+    apply_network_config: True
 
 
 Vendor Data
diff --git a/doc/rtd/topics/network-config-format-v1.rst b/doc/rtd/topics/network-config-format-v1.rst
index 2f8ab54..3b0148c 100644
--- a/doc/rtd/topics/network-config-format-v1.rst
+++ b/doc/rtd/topics/network-config-format-v1.rst
@@ -130,6 +130,18 @@ the bond interfaces.
 The ``bond_interfaces`` key accepts a list of network device ``name`` values
 from the configuration.  This list may be empty.
 
+**mtu**: *<MTU SizeBytes>*
+
+The MTU key represents a device's Maximum Transmission Unit, the largest size
+packet or frame, specified in octets (eight-bit bytes), that can be sent in a
+packet- or frame-based network.  Specifying ``mtu`` is optional.
+
+.. note::
+
+  The possible supported values of a device's MTU are not available at
+  configuration time.  It's possible to specify a value too large or too
+  small for a device, and it may be ignored by the device.
+
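+A minimal sketch (device names and bonding parameters here are illustrative)
+showing ``mtu`` set on a bond:
+
+**Bond MTU Example**::
+
+   network:
+     version: 1
+     config:
+       - type: bond
+         name: bond0
+         bond_interfaces:
+           - eth0
+           - eth1
+         params:
+           bond-mode: active-backup
+         mtu: 9000
+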
 **params**:  *<Dictionary of key: value bonding parameter pairs>*
 
 The ``params`` key in a bond holds a dictionary of bonding parameters.
@@ -268,6 +280,21 @@ Type ``vlan`` requires the following keys:
 - ``vlan_link``: Specify the underlying link via its ``name``.
 - ``vlan_id``: Specify the VLAN numeric id.
 
+The following optional keys are supported:
+
+**mtu**: *<MTU SizeBytes>*
+
+The MTU key represents a device's Maximum Transmission Unit, the largest size
+packet or frame, specified in octets (eight-bit bytes), that can be sent in a
+packet- or frame-based network.  Specifying ``mtu`` is optional.
+
+.. note::
+
+  The possible supported values of a device's MTU are not available at
+  configuration time.  It's possible to specify a value too large or too
+  small for a device, and it may be ignored by the device.
+
+
 **VLAN Example**::
 
    network:
diff --git a/doc/rtd/topics/network-config-format-v2.rst b/doc/rtd/topics/network-config-format-v2.rst
index 335d236..ea370ef 100644
--- a/doc/rtd/topics/network-config-format-v2.rst
+++ b/doc/rtd/topics/network-config-format-v2.rst
@@ -174,6 +174,12 @@ recognized by ``inet_pton(3)``
 Example for IPv4: ``gateway4: 172.16.0.1``
 Example for IPv6: ``gateway6: 2001:4::1``
 
+**mtu**: *<MTU SizeBytes>*
+
+The MTU key represents a device's Maximum Transmission Unit, the largest size
+packet or frame, specified in octets (eight-bit bytes), that can be sent in a
+packet- or frame-based network.  Specifying ``mtu`` is optional.
+
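+Example (the value is illustrative): ``mtu: 9000``
+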
 **nameservers**: *<(mapping)>*
 
 Set DNS servers and search domains, for manual address configuration. There
diff --git a/doc/rtd/topics/tests.rst b/doc/rtd/topics/tests.rst
index cac4a6e..b83bd89 100644
--- a/doc/rtd/topics/tests.rst
+++ b/doc/rtd/topics/tests.rst
@@ -58,7 +58,8 @@ explaining how to run one or the other independently.
     $ tox -e citest -- run --verbose \
         --os-name stretch --os-name xenial \
         --deb cloud-init_0.7.8~my_patch_all.deb \
-        --preserve-data --data-dir ~/collection
+        --preserve-data --data-dir ~/collection \
+        --preserve-instance
 
 The above command will do the following:
 
@@ -76,6 +77,10 @@ The above command will do the following:
 * ``--preserve-data`` always preserve collected data, do not remove data
   after successful test run
 
+* ``--preserve-instance`` do not destroy the instance after the test, to
+  allow debugging of the stopped instance during integration test
+  development. By default, test instances are destroyed after the test
+  completes.
+
 * ``--data-dir ~/collection`` write collected data into `~/collection`,
   rather than using a temporary directory
 
diff --git a/integration-requirements.txt b/integration-requirements.txt
index df3a73e..e5bb5b2 100644
--- a/integration-requirements.txt
+++ b/integration-requirements.txt
@@ -13,7 +13,7 @@ paramiko==2.4.0
 
 # lxd backend
 # 04/03/2018: enables use of lxd 3.0
-git+https://github.com/lxc/pylxd.git@1a85a12a23401de6e96b1aeaf59ecbff2e88f49d
+git+https://github.com/lxc/pylxd.git@4b8ab1802f9aee4eb29cf7b119dae0aa47150779
 
 
 # finds latest image information
diff --git a/packages/bddeb b/packages/bddeb
index 4f2e2dd..95602a0 100755
--- a/packages/bddeb
+++ b/packages/bddeb
@@ -1,11 +1,14 @@
 #!/usr/bin/env python3
 
 import argparse
+import csv
 import json
 import os
 import shutil
 import sys
 
+UNRELEASED = "UNRELEASED"
+
 
 def find_root():
     # expected path is in <top_dir>/packages/
@@ -28,6 +31,24 @@ if "avoid-pep8-E402-import-not-top-of-file":
 DEBUILD_ARGS = ["-S", "-d"]
 
 
+def get_release_suffix(release):
+    """Given ubuntu release (xenial), return a suffix for package (~16.04.1)"""
+    csv_path = "/usr/share/distro-info/ubuntu.csv"
+    rels = {}
+    # fields are version, codename, series, created, release, eol, eol-server
+    if os.path.exists(csv_path):
+        with open(csv_path, "r") as fp:
+            # version has "16.04 LTS" or "16.10", so drop "LTS" portion.
+            rels = {row['series']: row['version'].replace(' LTS', '')
+                    for row in csv.DictReader(fp)}
+    if release in rels:
+        return "~%s.1" % rels[release]
+    elif release != UNRELEASED:
+        print("missing distro-info-data package, unable to give "
+              "per-release suffix.\n")
+    return ""
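+# Illustrative results (actual values depend on the distro-info-data
+# package being installed):
+#   get_release_suffix("xenial")      => "~16.04.1"
+#   get_release_suffix("UNRELEASED")  => ""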
+
+
 def run_helper(helper, args=None, strip=True):
     if args is None:
         args = []
@@ -117,7 +138,7 @@ def get_parser():
 
     parser.add_argument("--release", dest="release",
                         help=("build with changelog referencing RELEASE"),
-                        default="UNRELEASED")
+                        default=UNRELEASED)
 
     for ent in DEBUILD_ARGS:
         parser.add_argument(ent, dest="debuild_args", action='append_const',
@@ -148,7 +169,10 @@ def main():
     if args.verbose:
         capture = False
 
-    templ_data = {'debian_release': args.release}
+    templ_data = {
+        'debian_release': args.release,
+        'release_suffix': get_release_suffix(args.release)}
+
     with temp_utils.tempdir() as tdir:
 
         # output like 0.7.6-1022-g36e92d3
@@ -157,10 +181,18 @@ def main():
         # This is really only a temporary archive
         # since we will extract it then add in the debian
         # folder, then re-archive it for debian happiness
-        print("Creating a temporary tarball using the 'make-tarball' helper")
         tarball = "cloud-init_%s.orig.tar.gz" % ver_data['version_long']
         tarball_fp = util.abs_join(tdir, tarball)
-        run_helper('make-tarball', ['--long', '--output=' + tarball_fp])
+        path = None
+        for pd in ("./", "../", "../dl/"):
+            if os.path.exists(pd + tarball):
+                path = pd + tarball
+                print("Using existing tarball %s" % path)
+                shutil.copy(path, tarball_fp)
+                break
+        if path is None:
+            print("Creating a temp tarball using the 'make-tarball' helper")
+            run_helper('make-tarball', ['--long', '--output=' + tarball_fp])
 
         print("Extracting temporary tarball %r" % (tarball))
         cmd = ['tar', '-xvzf', tarball_fp, '-C', tdir]
diff --git a/packages/brpm b/packages/brpm
index 3439cf3..a154ef2 100755
--- a/packages/brpm
+++ b/packages/brpm
@@ -42,13 +42,13 @@ def run_helper(helper, args=None, strip=True):
     return stdout
 
 
-def read_dependencies(requirements_file='requirements.txt'):
+def read_dependencies(distro, requirements_file='requirements.txt'):
     """Returns the Python package depedencies from requirements.txt files.
 
     @returns a tuple of (requirements, test_requirements)
     """
     pkg_deps = run_helper(
-        'read-dependencies', args=['--distro', 'redhat']).splitlines()
+        'read-dependencies', args=['--distro', distro]).splitlines()
     test_deps = run_helper(
         'read-dependencies', args=[
             '--requirements-file', 'test-requirements.txt',
@@ -83,7 +83,7 @@ def generate_spec_contents(args, version_data, tmpl_fn, top_dir, arc_fn):
         rpm_upstream_version = version_data['version']
     subs['rpm_upstream_version'] = rpm_upstream_version
 
-    deps, test_deps = read_dependencies()
+    deps, test_deps = read_dependencies(distro=args.distro)
     subs['buildrequires'] = deps + test_deps
     subs['requires'] = deps
 
diff --git a/packages/debian/changelog.in b/packages/debian/changelog.in
index bdf8d56..930322f 100644
--- a/packages/debian/changelog.in
+++ b/packages/debian/changelog.in
@@ -1,5 +1,5 @@
 ## template:basic
-cloud-init (${version_long}-1~bddeb) ${debian_release}; urgency=low
+cloud-init (${version_long}-1~bddeb${release_suffix}) ${debian_release}; urgency=low
 
   * build
 
diff --git a/packages/debian/control.in b/packages/debian/control.in
index 46da6df..e9ed64f 100644
--- a/packages/debian/control.in
+++ b/packages/debian/control.in
@@ -11,6 +11,7 @@ Package: cloud-init
 Architecture: all
 Depends: ${misc:Depends},
          ${${python}:Depends},
+         iproute2,
          isc-dhcp-client
 Recommends: eatmydata, sudo, software-properties-common, gdisk
 XB-Python-Version: ${python:Versions}
diff --git a/packages/debian/rules.in b/packages/debian/rules.in
index 4aa907e..e542c7f 100755
--- a/packages/debian/rules.in
+++ b/packages/debian/rules.in
@@ -3,6 +3,7 @@
 INIT_SYSTEM ?= systemd
 export PYBUILD_INSTALL_ARGS=--init-system=$(INIT_SYSTEM)
 PYVER ?= python${pyver}
+DEB_VERSION := $(shell dpkg-parsechangelog --show-field=Version)
 
 %:
 	dh $@ --with $(PYVER),systemd --buildsystem pybuild
@@ -14,6 +15,7 @@ override_dh_install:
 	cp tools/21-cloudinit.conf debian/cloud-init/etc/rsyslog.d/21-cloudinit.conf
 	install -D ./tools/Z99-cloud-locale-test.sh debian/cloud-init/etc/profile.d/Z99-cloud-locale-test.sh
 	install -D ./tools/Z99-cloudinit-warnings.sh debian/cloud-init/etc/profile.d/Z99-cloudinit-warnings.sh
+	flist=$$(find $(CURDIR)/debian/ -type f -name version.py) && sed -i 's,@@PACKAGED_VERSION@@,$(DEB_VERSION),' $${flist:-did-not-find-version-py-for-replacement}
 
 override_dh_auto_test:
 ifeq (,$(findstring nocheck,$(DEB_BUILD_OPTIONS)))
diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
index 6ab0d20..a3a6d1e 100644
--- a/packages/redhat/cloud-init.spec.in
+++ b/packages/redhat/cloud-init.spec.in
@@ -115,6 +115,13 @@ rm -rf $RPM_BUILD_ROOT%{python_sitelib}/tests
 mkdir -p $RPM_BUILD_ROOT/%{_sharedstatedir}/cloud
 mkdir -p $RPM_BUILD_ROOT/%{_libexecdir}/%{name}
 
+# patch in the full version to version.py
+version_pys=$(cd "$RPM_BUILD_ROOT" && find . -name version.py -type f)
+[ -n "$version_pys" ] ||
+   { echo "failed to find 'version.py' to patch with version." 1>&2; exit 1; }
+( cd "$RPM_BUILD_ROOT" &&
+  sed -i "s,@@PACKAGED_VERSION@@,%{version}-%{release}," $version_pys )
+
 %clean
 rm -rf $RPM_BUILD_ROOT
 
@@ -197,6 +204,7 @@ fi
 %dir                    %{_sysconfdir}/cloud/templates
 %config(noreplace)      %{_sysconfdir}/cloud/templates/*
 %config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf
+%{_sysconfdir}/bash_completion.d/cloud-init
 
 %{_libexecdir}/%{name}
 %dir %{_sharedstatedir}/cloud
diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
index 86e18b1..e781d74 100644
--- a/packages/suse/cloud-init.spec.in
+++ b/packages/suse/cloud-init.spec.in
@@ -5,7 +5,7 @@
 # Or: http://www.rpm.org/max-rpm/ch-rpm-inside.html
 
 Name:           cloud-init
-Version:        {{version}}
+Version:        {{rpm_upstream_version}}
 Release:        1{{subrelease}}%{?dist}
 Summary:        Cloud instance init scripts
 
@@ -16,22 +16,13 @@ URL:            http://launchpad.net/cloud-init
 Source0:        {{archive_name}}
 BuildRoot:      %{_tmppath}/%{name}-%{version}-build
 
-%if 0%{?suse_version} && 0%{?suse_version} <= 1110
-%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
-%else
 BuildArch:      noarch
-%endif
+
 
 {% for r in buildrequires %}
 BuildRequires:        {{r}}
 {% endfor %}
 
-%if 0%{?suse_version} && 0%{?suse_version} <= 1210
-  %define initsys sysvinit
-%else
-  %define initsys systemd
-%endif
-
 # Install pypi 'dynamic' requirements
 {% for r in requires %}
 Requires:       {{r}}
@@ -39,7 +30,7 @@ Requires:       {{r}}
 
 # Custom patches
 {% for p in patches %}
-Patch{{loop.index0}: {{p}}
+Patch{{loop.index0}}: {{p}}
 {% endfor %}
 
 %description
@@ -63,35 +54,21 @@ end for
 %{__python} setup.py install \
             --skip-build --root=%{buildroot} --prefix=%{_prefix} \
             --record-rpm=INSTALLED_FILES --install-lib=%{python_sitelib} \
-            --init-system=%{initsys}
+            --init-system=systemd
+
+# Move udev rules
+mkdir -p %{buildroot}/usr/lib/udev/rules.d/
+mv %{buildroot}/lib/udev/rules.d/* %{buildroot}/usr/lib/udev/rules.d/
 
 # Remove non-SUSE templates
 rm %{buildroot}/%{_sysconfdir}/cloud/templates/*.debian.*
 rm %{buildroot}/%{_sysconfdir}/cloud/templates/*.redhat.*
 rm %{buildroot}/%{_sysconfdir}/cloud/templates/*.ubuntu.*
 
-# Remove cloud-init tests
-rm -r %{buildroot}/%{python_sitelib}/tests
-
-# Move sysvinit scripts to the correct place and create symbolic links
-%if %{initsys} == sysvinit
-   mkdir -p %{buildroot}/%{_initddir}
-   mv %{buildroot}%{_sysconfdir}/rc.d/init.d/* %{buildroot}%{_initddir}/
-   rmdir %{buildroot}%{_sysconfdir}/rc.d/init.d
-   rmdir %{buildroot}%{_sysconfdir}/rc.d
-
-   mkdir -p %{buildroot}/%{_sbindir}
-   pushd %{buildroot}/%{_initddir}
-   for file in * ; do
-      ln -s %{_initddir}/${file} %{buildroot}/%{_sbindir}/rc${file}
-   done
-   popd
-%endif
-
 # Move documentation
 mkdir -p %{buildroot}/%{_defaultdocdir}
 mv %{buildroot}/usr/share/doc/cloud-init %{buildroot}/%{_defaultdocdir}
-for doc in TODO LICENSE ChangeLog requirements.txt; do
+for doc in LICENSE ChangeLog requirements.txt; do
    cp ${doc} %{buildroot}/%{_defaultdocdir}/cloud-init
 done
 
@@ -102,29 +79,35 @@ done
 
 mkdir -p %{buildroot}/var/lib/cloud
 
+# patch in the full version to version.py
+version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
+[ -n "$version_pys" ] ||
+   { echo "failed to find 'version.py' to patch with version." 1>&2; exit 1; }
+( cd "%{buildroot}" &&
+  sed -i "s,@@PACKAGED_VERSION@@,%{version}-%{release}," $version_pys )
+
 %postun
 %insserv_cleanup
 
 %files
 
-# Sysvinit scripts
-%if %{initsys} == sysvinit
-   %attr(0755, root, root) %{_initddir}/cloud-config
-   %attr(0755, root, root) %{_initddir}/cloud-final
-   %attr(0755, root, root) %{_initddir}/cloud-init-local
-   %attr(0755, root, root) %{_initddir}/cloud-init
-
-   %{_sbindir}/rccloud-*
-%endif
-
 # Program binaries
 %{_bindir}/cloud-init*
 
+# systemd files
+/usr/lib/systemd/system-generators/*
+/usr/lib/systemd/system/*
+
 # There doesn't seem to be an agreed upon place for these
 # although it appears the standard says /usr/lib but rpmbuild
 # will try /usr/lib64 ??
 /usr/lib/%{name}/uncloud-init
 /usr/lib/%{name}/write-ssh-key-fingerprints
+/usr/lib/%{name}/ds-identify
+
+# udev rules
+/usr/lib/udev/rules.d/66-azure-ephemeral.rules
+
 
 # Docs
 %doc %{_defaultdocdir}/cloud-init/*
@@ -136,6 +119,10 @@ mkdir -p %{buildroot}/var/lib/cloud
 %config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/README
 %dir               %{_sysconfdir}/cloud/templates
 %config(noreplace) %{_sysconfdir}/cloud/templates/*
+%{_sysconfdir}/bash_completion.d/cloud-init
+
+%{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient
+%{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager
 
 # Python code is here...
 %{python_sitelib}/*
diff --git a/setup.py b/setup.py
index bc3f52a..5ed8eae 100755
--- a/setup.py
+++ b/setup.py
@@ -25,7 +25,7 @@ from distutils.errors import DistutilsArgError
 import subprocess
 
 RENDERED_TMPD_PREFIX = "RENDERED_TEMPD"
-
+VARIANT = None
 
 def is_f(p):
     return os.path.isfile(p)
@@ -114,10 +114,20 @@ def render_tmpl(template):
     atexit.register(shutil.rmtree, tmpd)
     bname = os.path.basename(template).rstrip(tmpl_ext)
     fpath = os.path.join(tmpd, bname)
-    tiny_p([sys.executable, './tools/render-cloudcfg', template, fpath])
+    if VARIANT:
+        tiny_p([sys.executable, './tools/render-cloudcfg', '--variant',
+            VARIANT, template, fpath])
+    else:
+        tiny_p([sys.executable, './tools/render-cloudcfg', template, fpath])
     # return path relative to setup.py
     return os.path.join(os.path.basename(tmpd), bname)
 
+# User can set the variant for template rendering
+if '--distro' in sys.argv:
+    idx = sys.argv.index('--distro')
+    VARIANT = sys.argv[idx+1]
+    del sys.argv[idx+1]
+    sys.argv.remove('--distro')
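+# An illustrative invocation (the variant name is only an example):
+#   python3 setup.py install --distro suse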
 
 INITSYS_FILES = {
     'sysvinit': [f for f in glob('sysvinit/redhat/*') if is_f(f)],
@@ -228,6 +238,7 @@ if not in_virtualenv():
         INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k]
 
 data_files = [
+    (ETC + '/bash_completion.d', ['bash_completion/cloud-init']),
     (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]),
     (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')),
     (ETC + '/cloud/templates', glob('templates/*')),
@@ -259,7 +270,7 @@ requirements = read_requires()
 setuptools.setup(
     name='cloud-init',
     version=get_version(),
-    description='EC2 initialisation magic',
+    description='Cloud instance initialisation magic',
     author='Scott Moser',
     author_email='scott.moser@xxxxxxxxxxxxx',
     url='http://launchpad.net/cloud-init/',
@@ -276,4 +287,5 @@ setuptools.setup(
     }
 )
 
+
 # vi: ts=4 expandtab
diff --git a/systemd/cloud-config.service.tmpl b/systemd/cloud-config.service.tmpl
index bdee3ce..9d928ca 100644
--- a/systemd/cloud-config.service.tmpl
+++ b/systemd/cloud-config.service.tmpl
@@ -2,6 +2,7 @@
 [Unit]
 Description=Apply the settings specified in cloud-config
 After=network-online.target cloud-config.target
+After=snapd.seeded.service
 Wants=network-online.target cloud-config.target
 
 [Service]
diff --git a/templates/chrony.conf.debian.tmpl b/templates/chrony.conf.debian.tmpl
new file mode 100644
index 0000000..661bf04
--- /dev/null
+++ b/templates/chrony.conf.debian.tmpl
@@ -0,0 +1,39 @@
+## template:jinja
+# Welcome to the chrony configuration file. See chrony.conf(5) for more
+# information about usable directives.
+{% if pools %}# pools
+{% endif %}
+{% for pool in pools -%}
+pool {{pool}} iburst
+{% endfor %}
+{%- if servers %}# servers
+{% endif %}
+{% for server in servers -%}
+server {{server}} iburst
+{% endfor %}
+
+# This directive specifies the location of the file containing ID/key pairs
+# for NTP authentication.
+keyfile /etc/chrony/chrony.keys
+
+# This directive specifies the file into which chronyd will store the rate
+# information.
+driftfile /var/lib/chrony/chrony.drift
+
+# Uncomment the following line to turn logging on.
+#log tracking measurements statistics
+
+# Log files location.
+logdir /var/log/chrony
+
+# Stop bad estimates upsetting machine clock.
+maxupdateskew 100.0
+
+# This directive enables kernel synchronisation (every 11 minutes) of the
+# real-time clock. Note that it can’t be used along with the 'rtcfile' directive.
+rtcsync
+
+# Step the system clock instead of slewing it if the adjustment is larger than
+# one second, but only in the first three clock updates.
+makestep 1 3
+
diff --git a/templates/chrony.conf.fedora.tmpl b/templates/chrony.conf.fedora.tmpl
new file mode 100644
index 0000000..8551f79
--- /dev/null
+++ b/templates/chrony.conf.fedora.tmpl
@@ -0,0 +1,48 @@
+## template:jinja
+# Use public servers from the pool.ntp.org project.
+# Please consider joining the pool (http://www.pool.ntp.org/join.html).
+{% if pools %}# pools
+{% endif %}
+{% for pool in pools -%}
+pool {{pool}} iburst
+{% endfor %}
+{%- if servers %}# servers
+{% endif %}
+{% for server in servers -%}
+server {{server}} iburst
+{% endfor %}
+
+# Record the rate at which the system clock gains/loses time.
+driftfile /var/lib/chrony/drift
+
+# Allow the system clock to be stepped in the first three updates
+# if its offset is larger than 1 second.
+makestep 1.0 3
+
+# Enable kernel synchronization of the real-time clock (RTC).
+rtcsync
+
+# Enable hardware timestamping on all interfaces that support it.
+#hwtimestamp *
+
+# Increase the minimum number of selectable sources required to adjust
+# the system clock.
+#minsources 2
+
+# Allow NTP client access from local network.
+#allow 192.168.0.0/16
+
+# Serve time even if not synchronized to a time source.
+#local stratum 10
+
+# Specify file containing keys for NTP authentication.
+#keyfile /etc/chrony.keys
+
+# Get TAI-UTC offset and leap seconds from the system tz database.
+leapsectz right/UTC
+
+# Specify directory for log files.
+logdir /var/log/chrony
+
+# Select which information is logged.
+#log measurements statistics tracking
diff --git a/templates/chrony.conf.opensuse.tmpl b/templates/chrony.conf.opensuse.tmpl
new file mode 100644
index 0000000..a3d3e0e
--- /dev/null
+++ b/templates/chrony.conf.opensuse.tmpl
@@ -0,0 +1,38 @@
+## template:jinja
+# Use public servers from the pool.ntp.org project.
+# Please consider joining the pool (http://www.pool.ntp.org/join.html).
+{% if pools %}# pools
+{% endif %}
+{% for pool in pools -%}
+pool {{pool}} iburst
+{% endfor %}
+{%- if servers %}# servers
+{% endif %}
+{% for server in servers -%}
+server {{server}} iburst
+{% endfor %}
+
+# Record the rate at which the system clock gains/loses time.
+driftfile /var/lib/chrony/drift
+
+# In the first three updates step the system clock instead of slewing it
+# if the adjustment is larger than 1 second.
+makestep 1.0 3
+
+# Enable kernel synchronization of the real-time clock (RTC).
+rtcsync
+
+# Allow NTP client access from local network.
+#allow 192.168/16
+
+# Serve time even if not synchronized to any NTP server.
+#local stratum 10
+
+# Specify file containing keys for NTP authentication.
+#keyfile /etc/chrony.keys
+
+# Specify directory for log files.
+logdir /var/log/chrony
+
+# Select which information is logged.
+#log measurements statistics tracking
diff --git a/templates/chrony.conf.rhel.tmpl b/templates/chrony.conf.rhel.tmpl
new file mode 100644
index 0000000..5b3542e
--- /dev/null
+++ b/templates/chrony.conf.rhel.tmpl
@@ -0,0 +1,45 @@
+## template:jinja
+# Use public servers from the pool.ntp.org project.
+# Please consider joining the pool (http://www.pool.ntp.org/join.html).
+{% if pools %}# pools
+{% endif %}
+{% for pool in pools -%}
+pool {{pool}} iburst
+{% endfor %}
+{%- if servers %}# servers
+{% endif %}
+{% for server in servers -%}
+server {{server}} iburst
+{% endfor %}
+
+# Record the rate at which the system clock gains/loses time.
+driftfile /var/lib/chrony/drift
+
+# Allow the system clock to be stepped in the first three updates
+# if its offset is larger than 1 second.
+makestep 1.0 3
+
+# Enable kernel synchronization of the real-time clock (RTC).
+rtcsync
+
+# Enable hardware timestamping on all interfaces that support it.
+#hwtimestamp *
+
+# Increase the minimum number of selectable sources required to adjust
+# the system clock.
+#minsources 2
+
+# Allow NTP client access from local network.
+#allow 192.168.0.0/16
+
+# Serve time even if not synchronized to a time source.
+#local stratum 10
+
+# Specify file containing keys for NTP authentication.
+#keyfile /etc/chrony.keys
+
+# Specify directory for log files.
+logdir /var/log/chrony
+
+# Select which information is logged.
+#log measurements statistics tracking
diff --git a/templates/chrony.conf.sles.tmpl b/templates/chrony.conf.sles.tmpl
new file mode 100644
index 0000000..a3d3e0e
--- /dev/null
+++ b/templates/chrony.conf.sles.tmpl
@@ -0,0 +1,38 @@
+## template:jinja
+# Use public servers from the pool.ntp.org project.
+# Please consider joining the pool (http://www.pool.ntp.org/join.html).
+{% if pools %}# pools
+{% endif %}
+{% for pool in pools -%}
+pool {{pool}} iburst
+{% endfor %}
+{%- if servers %}# servers
+{% endif %}
+{% for server in servers -%}
+server {{server}} iburst
+{% endfor %}
+
+# Record the rate at which the system clock gains/loses time.
+driftfile /var/lib/chrony/drift
+
+# In the first three updates step the system clock instead of slewing it
+# if the adjustment is larger than 1 second.
+makestep 1.0 3
+
+# Enable kernel synchronization of the real-time clock (RTC).
+rtcsync
+
+# Allow NTP client access from local network.
+#allow 192.168/16
+
+# Serve time even if not synchronized to any NTP server.
+#local stratum 10
+
+# Specify file containing keys for NTP authentication.
+#keyfile /etc/chrony.keys
+
+# Specify directory for log files.
+logdir /var/log/chrony
+
+# Select which information is logged.
+#log measurements statistics tracking
diff --git a/templates/chrony.conf.ubuntu.tmpl b/templates/chrony.conf.ubuntu.tmpl
new file mode 100644
index 0000000..50a6f51
--- /dev/null
+++ b/templates/chrony.conf.ubuntu.tmpl
@@ -0,0 +1,42 @@
+## template:jinja
+# Welcome to the chrony configuration file. See chrony.conf(5) for more
+# information about usable directives.
+
+# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
+# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
+# more information.
+{% if pools %}# pools
+{% endif %}
+{% for pool in pools -%}
+pool {{pool}} iburst
+{% endfor %}
+{%- if servers %}# servers
+{% endif %}
+{% for server in servers -%}
+server {{server}} iburst
+{% endfor %}
+
+# This directive specifies the location of the file containing ID/key pairs
+# for NTP authentication.
+keyfile /etc/chrony/chrony.keys
+
+# This directive specifies the file into which chronyd will store the rate
+# information.
+driftfile /var/lib/chrony/chrony.drift
+
+# Uncomment the following line to turn logging on.
+#log tracking measurements statistics
+
+# Log files location.
+logdir /var/log/chrony
+
+# Stop bad estimates upsetting machine clock.
+maxupdateskew 100.0
+
+# This directive enables kernel synchronisation (every 11 minutes) of the
+# real-time clock. Note that it can’t be used along with the 'rtcfile' directive.
+rtcsync
+
+# Step the system clock instead of slewing it if the adjustment is larger than
+# one second, but only in the first three clock updates.
+makestep 1 3
diff --git a/tests/cloud_tests/args.py b/tests/cloud_tests/args.py
index c6c1877..ab34549 100644
--- a/tests/cloud_tests/args.py
+++ b/tests/cloud_tests/args.py
@@ -62,6 +62,9 @@ ARG_SETS = {
         (('-d', '--data-dir'),
          {'help': 'directory to store test data in',
           'action': 'store', 'metavar': 'DIR', 'required': False}),
+        (('--preserve-instance',),
+         {'help': 'do not destroy the instance under test',
+          'action': 'store_true', 'default': False, 'required': False}),
         (('--preserve-data',),
          {'help': 'do not remove collected data after successful run',
           'action': 'store_true', 'default': False, 'required': False}),),
diff --git a/tests/cloud_tests/bddeb.py b/tests/cloud_tests/bddeb.py
index b9cfcfa..f04d0cd 100644
--- a/tests/cloud_tests/bddeb.py
+++ b/tests/cloud_tests/bddeb.py
@@ -113,7 +113,7 @@ def bddeb(args):
     @return_value: fail count
     """
     LOG.info('preparing to build cloud-init deb')
-    (res, failed) = run_stage('build deb', [partial(setup_build, args)])
+    _res, failed = run_stage('build deb', [partial(setup_build, args)])
     return failed
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/collect.py b/tests/cloud_tests/collect.py
index d4f9135..75b5061 100644
--- a/tests/cloud_tests/collect.py
+++ b/tests/cloud_tests/collect.py
@@ -25,7 +25,8 @@ def collect_script(instance, base_dir, script, script_name):
         script.encode(), rcs=False,
         description='collect: {}'.format(script_name))
     if err:
-        LOG.debug("collect script %s had stderr: %s", script_name, err)
+        LOG.debug("collect script %s exited '%s' and had stderr: %s",
+                  script_name, err, exit)
     if not isinstance(out, bytes):
         raise util.PlatformError(
             "Collection of '%s' returned type %s, expected bytes: %s" %
@@ -41,7 +42,7 @@ def collect_console(instance, base_dir):
     @param base_dir: directory to write console log to
     """
     logfile = os.path.join(base_dir, 'console.log')
-    LOG.debug('getting console log for %s to %s', instance, logfile)
+    LOG.debug('getting console log for %s to %s', instance.name, logfile)
     try:
         data = instance.console_log()
     except NotImplementedError as e:
@@ -92,7 +93,8 @@ def collect_test_data(args, snapshot, os_name, test_name):
     # create test instance
     component = PlatformComponent(
         partial(platforms.get_instance, snapshot, user_data,
-                block=True, start=False, use_desc=test_name))
+                block=True, start=False, use_desc=test_name),
+        preserve_instance=args.preserve_instance)
 
     LOG.info('collecting test data for test: %s', test_name)
     with component as instance:
diff --git a/tests/cloud_tests/platforms/instances.py b/tests/cloud_tests/platforms/instances.py
index 3bad021..95bc3b1 100644
--- a/tests/cloud_tests/platforms/instances.py
+++ b/tests/cloud_tests/platforms/instances.py
@@ -87,7 +87,12 @@ class Instance(TargetBase):
             self._ssh_client = None
 
     def _ssh_connect(self):
-        """Connect via SSH."""
+        """Connect via SSH.
+
+        Attempt to SSH to the client on the specific IP and port. If it
+        fails in some manner, then retry 2 more times for a total of 3
+        attempts, sleeping a few seconds between attempts.
+        """
         if self._ssh_client:
             return self._ssh_client
 
@@ -98,21 +103,22 @@ class Instance(TargetBase):
         client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
         private_key = paramiko.RSAKey.from_private_key_file(self.ssh_key_file)
 
-        retries = 30
+        retries = 3
         while retries:
             try:
                 client.connect(username=self.ssh_username,
                                hostname=self.ssh_ip, port=self.ssh_port,
-                               pkey=private_key, banner_timeout=30)
+                               pkey=private_key)
                 self._ssh_client = client
                 return client
             except (ConnectionRefusedError, AuthenticationException,
                     BadHostKeyException, ConnectionResetError, SSHException,
-                    OSError) as e:
+                    OSError):
                 retries -= 1
-                time.sleep(10)
+                LOG.debug('Retrying ssh connection on connect failure')
+                time.sleep(3)
 
-        ssh_cmd = 'Failed ssh connection to %s@%s:%s after 300 seconds' % (
+        ssh_cmd = 'Failed ssh connection to %s@%s:%s after 3 retries' % (
             self.ssh_username, self.ssh_ip, self.ssh_port
         )
         raise util.InTargetExecuteError(b'', b'', 1, ssh_cmd, 'ssh')
@@ -128,18 +134,31 @@ class Instance(TargetBase):
             return ' '.join(l for l in test.strip().splitlines()
                             if not l.lstrip().startswith('#'))
 
-        time = self.config['boot_timeout']
+        boot_timeout = self.config['boot_timeout']
         tests = [self.config['system_ready_script']]
         if wait_for_cloud_init:
             tests.append(self.config['cloud_init_ready_script'])
 
         formatted_tests = ' && '.join(clean_test(t) for t in tests)
         cmd = ('i=0; while [ $i -lt {time} ] && i=$(($i+1)); do {test} && '
-               'exit 0; sleep 1; done; exit 1').format(time=time,
+               'exit 0; sleep 1; done; exit 1').format(time=boot_timeout,
                                                        test=formatted_tests)
 
-        if self.execute(cmd, rcs=(0, 1))[-1] != 0:
-            raise OSError('timeout: after {}s system not started'.format(time))
-
+        end_time = time.time() + boot_timeout
+        while True:
+            try:
+                return_code = self.execute(
+                    cmd, rcs=(0, 1), description='wait for instance start'
+                )[-1]
+                if return_code == 0:
+                    break
+            except util.InTargetExecuteError:
+                LOG.warning("failed to connect via SSH")
+
+            if time.time() < end_time:
+                time.sleep(3)
+            else:
+                raise util.PlatformError('ssh', 'after %ss instance is not '
+                                         'reachable' % boot_timeout)
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/platforms/lxd/instance.py b/tests/cloud_tests/platforms/lxd/instance.py
index 0d957bc..d396519 100644
--- a/tests/cloud_tests/platforms/lxd/instance.py
+++ b/tests/cloud_tests/platforms/lxd/instance.py
@@ -152,9 +152,8 @@ class LXDInstance(Instance):
                 return fp.read()
 
         try:
-            stdout, stderr = subp(
-                ['lxc', 'console', '--show-log', self.name], decode=False)
-            return stdout
+            return subp(['lxc', 'console', '--show-log', self.name],
+                        decode=False)[0]
         except ProcessExecutionError as e:
             raise PlatformError(
                 "console log",
@@ -209,16 +208,15 @@ def _has_proper_console_support():
     if 'console' not in info.get('api_extensions', []):
         reason = "LXD server does not support console api extension"
     else:
-        dver = info.get('environment', {}).get('driver_version', "")
+        dver = str(info.get('environment', {}).get('driver_version', ""))
         if dver.startswith("2.") or dver.startswith("1."):
             reason = "LXD Driver version not 3.x+ (%s)" % dver
         else:
             try:
-                stdout, stderr = subp(['lxc', 'console', '--help'],
-                                      decode=False)
+                stdout = subp(['lxc', 'console', '--help'], decode=False)[0]
                 if not (b'console' in stdout and b'log' in stdout):
                     reason = "no '--log' in lxc console --help"
-            except ProcessExecutionError as e:
+            except ProcessExecutionError:
                 reason = "no 'console' command in lxc client"
 
     if reason:
diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml
index c7dcbe8..defae02 100644
--- a/tests/cloud_tests/releases.yaml
+++ b/tests/cloud_tests/releases.yaml
@@ -129,6 +129,22 @@ features:
 
 releases:
     # UBUNTU =================================================================
+    cosmic:
+        # EOL: Jul 2019
+        default:
+            enabled: true
+            release: cosmic
+            version: 18.10
+            os: ubuntu
+            feature_groups:
+                - base
+                - debian_base
+                - ubuntu_specific
+        lxd:
+            sstreams_server: https://cloud-images.ubuntu.com/daily
+            alias: cosmic
+            setup_overrides: null
+            override_templates: false
     bionic:
         # EOL: Apr 2023
         default:
diff --git a/tests/cloud_tests/setup_image.py b/tests/cloud_tests/setup_image.py
index 6d24211..4e19570 100644
--- a/tests/cloud_tests/setup_image.py
+++ b/tests/cloud_tests/setup_image.py
@@ -25,10 +25,9 @@ def installed_package_version(image, package, ensure_installed=True):
     else:
         raise NotImplementedError
 
-    msg = 'query version for package: {}'.format(package)
-    (out, err, exit) = image.execute(
-        cmd, description=msg, rcs=(0,) if ensure_installed else range(0, 256))
-    return out.strip()
+    return image.execute(
+        cmd, description='query version for package: {}'.format(package),
+        rcs=(0,) if ensure_installed else range(0, 256))[0].strip()
 
 
 def install_deb(args, image):
@@ -54,7 +53,7 @@ def install_deb(args, image):
          remote_path], description=msg)
     # check installed deb version matches package
     fmt = ['-W', "--showformat=${Version}"]
-    (out, err, exit) = image.execute(['dpkg-deb'] + fmt + [remote_path])
+    out = image.execute(['dpkg-deb'] + fmt + [remote_path])[0]
     expected_version = out.strip()
     found_version = installed_package_version(image, 'cloud-init')
     if expected_version != found_version:
@@ -85,7 +84,7 @@ def install_rpm(args, image):
     image.execute(['rpm', '-U', remote_path], description=msg)
 
     fmt = ['--queryformat', '"%{VERSION}"']
-    (out, err, exit) = image.execute(['rpm', '-q'] + fmt + [remote_path])
+    (out, _err, _exit) = image.execute(['rpm', '-q'] + fmt + [remote_path])
     expected_version = out.strip()
     found_version = installed_package_version(image, 'cloud-init')
     if expected_version != found_version:
diff --git a/tests/cloud_tests/stage.py b/tests/cloud_tests/stage.py
index 74a7d46..d64a1dc 100644
--- a/tests/cloud_tests/stage.py
+++ b/tests/cloud_tests/stage.py
@@ -12,9 +12,15 @@ from tests.cloud_tests import LOG
 class PlatformComponent(object):
     """Context manager to safely handle platform components."""
 
-    def __init__(self, get_func):
-        """Store get_<platform component> function as partial with no args."""
+    def __init__(self, get_func, preserve_instance=False):
+        """Store get_<platform component> function as partial with no args.
+
+        @param get_func: Callable returning an instance from the platform.
+        @param preserve_instance: Boolean, when True, do not destroy instance
+            after test. Used for test development.
+        """
         self.get_func = get_func
+        self.preserve_instance = preserve_instance
 
     def __enter__(self):
         """Create instance of platform component."""
@@ -24,7 +30,10 @@ class PlatformComponent(object):
     def __exit__(self, etype, value, trace):
         """Destroy instance."""
         if self.instance is not None:
-            self.instance.destroy()
+            if self.preserve_instance:
+                LOG.info('Preserving test instance %s', self.instance.name)
+            else:
+                self.instance.destroy()
 
 
 def run_single(name, call):
diff --git a/tests/cloud_tests/testcases.yaml b/tests/cloud_tests/testcases.yaml
index a3e2990..a16d1dd 100644
--- a/tests/cloud_tests/testcases.yaml
+++ b/tests/cloud_tests/testcases.yaml
@@ -24,9 +24,9 @@ base_test_data:
         status.json: |
             #!/bin/sh
             cat /run/cloud-init/status.json
-        cloud-init-version: |
+        package-versions: |
             #!/bin/sh
-            dpkg-query -W -f='${Version}' cloud-init
+            dpkg-query --show
         system.journal.gz: |
             #!/bin/sh
             [ -d /run/systemd ] || { echo "not systemd."; exit 0; }
diff --git a/tests/cloud_tests/testcases/base.py b/tests/cloud_tests/testcases/base.py
index 324c7c9..696db8d 100644
--- a/tests/cloud_tests/testcases/base.py
+++ b/tests/cloud_tests/testcases/base.py
@@ -31,6 +31,27 @@ class CloudTestCase(unittest.TestCase):
     def is_distro(self, distro_name):
         return self.os_cfg['os'] == distro_name
 
+    def assertPackageInstalled(self, name, version=None):
+        """Check dpkg-query --show output for matching package name.
+
+        @param name: package base name
+        @param version: string representing a package version or part of a
+            version.
+        """
+        pkg_out = self.get_data_file('package-versions')
+        pkg_match = re.search(
+            r'^%s\t(?P<version>.*)$' % name, pkg_out, re.MULTILINE)
+        if pkg_match:
+            installed_version = pkg_match.group('version')
+            if not version:
+                return  # Success
+            if installed_version.startswith(version):
+                return  # Success
+            raise AssertionError(
+                'Expected package version %s-%s not found. Found %s' %
+                (name, version, installed_version))
+        raise AssertionError('Package not installed: %s' % name)
+
     def os_version_cmp(self, cmp_version):
         """Compare the version of the test to comparison_version.
 
@@ -149,21 +170,22 @@ class CloudTestCase(unittest.TestCase):
         self.assertEqual(
             ['ds/user-data'], instance_data['base64-encoded-keys'])
         ds = instance_data.get('ds', {})
-        macs = ds.get('network', {}).get('interfaces', {}).get('macs', {})
+        v1_data = instance_data.get('v1', {})
+        metadata = ds.get('meta-data', {})
+        macs = metadata.get(
+            'network', {}).get('interfaces', {}).get('macs', {})
         if not macs:
             raise AssertionError('No network data from EC2 meta-data')
         # Check meta-data items we depend on
         expected_net_keys = [
             'public-ipv4s', 'ipv4-associations', 'local-hostname',
             'public-hostname']
-        for mac, mac_data in macs.items():
+        for mac_data in macs.values():
             for key in expected_net_keys:
                 self.assertIn(key, mac_data)
         self.assertIsNotNone(
-            ds.get('placement', {}).get('availability-zone'),
+            metadata.get('placement', {}).get('availability-zone'),
             'Could not determine EC2 Availability zone placement')
-        ds = instance_data.get('ds', {})
-        v1_data = instance_data.get('v1', {})
         self.assertIsNotNone(
             v1_data['availability-zone'], 'expected ec2 availability-zone')
         self.assertEqual('aws', v1_data['cloud-name'])
@@ -234,7 +256,7 @@ class CloudTestCase(unittest.TestCase):
             'found unexpected kvm availability-zone %s' %
             v1_data['availability-zone'])
         self.assertIsNotNone(
-            re.match('[\da-f]{8}(-[\da-f]{4}){3}-[\da-f]{12}',
+            re.match(r'[\da-f]{8}(-[\da-f]{4}){3}-[\da-f]{12}',
                      v1_data['instance-id']),
             'kvm instance-id is not a UUID: %s' % v1_data['instance-id'])
         self.assertIn('ubuntu', v1_data['local-hostname'])
diff --git a/tests/cloud_tests/testcases/examples/including_user_groups.py b/tests/cloud_tests/testcases/examples/including_user_groups.py
index 93b7a82..4067348 100644
--- a/tests/cloud_tests/testcases/examples/including_user_groups.py
+++ b/tests/cloud_tests/testcases/examples/including_user_groups.py
@@ -42,7 +42,7 @@ class TestUserGroups(base.CloudTestCase):
 
     def test_user_root_in_secret(self):
         """Test root user is in 'secret' group."""
-        user, _, groups = self.get_data_file('root_groups').partition(":")
+        _user, _, groups = self.get_data_file('root_groups').partition(":")
         self.assertIn("secret", groups.split(),
                       msg="User root is not in group 'secret'")
 
diff --git a/tests/cloud_tests/testcases/modules/byobu.py b/tests/cloud_tests/testcases/modules/byobu.py
index 005ca01..74d0529 100644
--- a/tests/cloud_tests/testcases/modules/byobu.py
+++ b/tests/cloud_tests/testcases/modules/byobu.py
@@ -9,8 +9,7 @@ class TestByobu(base.CloudTestCase):
 
     def test_byobu_installed(self):
         """Test byobu installed."""
-        out = self.get_data_file('byobu_installed')
-        self.assertIn('/usr/bin/byobu', out)
+        self.assertPackageInstalled('byobu')
 
     def test_byobu_profile_enabled(self):
         """Test byobu profile.d file exists."""
diff --git a/tests/cloud_tests/testcases/modules/byobu.yaml b/tests/cloud_tests/testcases/modules/byobu.yaml
index a9aa1f3..d002a61 100644
--- a/tests/cloud_tests/testcases/modules/byobu.yaml
+++ b/tests/cloud_tests/testcases/modules/byobu.yaml
@@ -7,9 +7,6 @@ cloud_config: |
   #cloud-config
   byobu_by_default: enable
 collect_scripts:
-  byobu_installed: |
-    #!/bin/bash
-    which byobu
   byobu_profile_enabled: |
     #!/bin/bash
     ls /etc/profile.d/Z97-byobu.sh
diff --git a/tests/cloud_tests/testcases/modules/ca_certs.py b/tests/cloud_tests/testcases/modules/ca_certs.py
index e75f041..6b56f63 100644
--- a/tests/cloud_tests/testcases/modules/ca_certs.py
+++ b/tests/cloud_tests/testcases/modules/ca_certs.py
@@ -7,10 +7,23 @@ from tests.cloud_tests.testcases import base
 class TestCaCerts(base.CloudTestCase):
     """Test ca certs module."""
 
-    def test_cert_count(self):
-        """Test the count is proper."""
-        out = self.get_data_file('cert_count')
-        self.assertEqual(5, int(out))
+    def test_certs_updated(self):
+        """Test certs have been updated in /etc/ssl/certs."""
+        out = self.get_data_file('cert_links')
+        # Bionic update-ca-certificates creates fewer links (Debian #895075)
+        unlinked_files = []
+        links = {}
+        for cert_line in out.splitlines():
+            if '->' in cert_line:
+                fname, _sep, link = cert_line.split()
+                links[fname] = link
+            else:
+                unlinked_files.append(cert_line)
+        self.assertEqual(['ca-certificates.crt'], unlinked_files)
+        self.assertEqual('cloud-init-ca-certs.pem', links['a535c1f3.0'])
+        self.assertEqual(
+            '/usr/share/ca-certificates/cloud-init-ca-certs.crt',
+            links['cloud-init-ca-certs.pem'])
 
     def test_cert_installed(self):
         """Test line from our cert exists."""
diff --git a/tests/cloud_tests/testcases/modules/ca_certs.yaml b/tests/cloud_tests/testcases/modules/ca_certs.yaml
index d939f43..2cd9155 100644
--- a/tests/cloud_tests/testcases/modules/ca_certs.yaml
+++ b/tests/cloud_tests/testcases/modules/ca_certs.yaml
@@ -43,9 +43,13 @@ cloud_config: |
         DiH5uEqBXExjrj0FslxcVKdVj5glVcSmkLwZKbEU1OKwleT/iXFhvooWhQ==
         -----END CERTIFICATE-----
 collect_scripts:
-  cert_count: |
+  cert_links: |
     #!/bin/bash
-    ls -l /etc/ssl/certs | wc -l
+    # links printed <filename> -> <link target>
+    # non-links printed <filename>
+    for file in `ls /etc/ssl/certs`; do
+        [ -h /etc/ssl/certs/$file ] && echo -n $file ' -> ' && readlink /etc/ssl/certs/$file || echo $file;
+    done
   cert: |
     #!/bin/bash
     md5sum /etc/ssl/certs/ca-certificates.crt
diff --git a/tests/cloud_tests/testcases/modules/ntp.py b/tests/cloud_tests/testcases/modules/ntp.py
index b50e52f..c63cc15 100644
--- a/tests/cloud_tests/testcases/modules/ntp.py
+++ b/tests/cloud_tests/testcases/modules/ntp.py
@@ -9,15 +9,14 @@ class TestNtp(base.CloudTestCase):
 
     def test_ntp_installed(self):
         """Test ntp installed"""
-        out = self.get_data_file('ntp_installed')
-        self.assertEqual(0, int(out))
+        self.assertPackageInstalled('ntp')
 
     def test_ntp_dist_entries(self):
         """Test dist config file is empty"""
         out = self.get_data_file('ntp_conf_dist_empty')
         self.assertEqual(0, int(out))
 
-    def test_ntp_entires(self):
+    def test_ntp_entries(self):
         """Test config entries"""
         out = self.get_data_file('ntp_conf_pool_list')
         self.assertIn('pool.ntp.org iburst', out)
diff --git a/tests/cloud_tests/testcases/modules/ntp.yaml b/tests/cloud_tests/testcases/modules/ntp.yaml
index 2530d72..7ea0707 100644
--- a/tests/cloud_tests/testcases/modules/ntp.yaml
+++ b/tests/cloud_tests/testcases/modules/ntp.yaml
@@ -4,6 +4,7 @@
 cloud_config: |
   #cloud-config
   ntp:
+    ntp_client: ntp
     pools: []
     servers: []
 collect_scripts:
diff --git a/tests/cloud_tests/testcases/modules/ntp_chrony.py b/tests/cloud_tests/testcases/modules/ntp_chrony.py
new file mode 100644
index 0000000..7d34177
--- /dev/null
+++ b/tests/cloud_tests/testcases/modules/ntp_chrony.py
@@ -0,0 +1,26 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""cloud-init Integration Test Verify Script."""
+import unittest
+
+from tests.cloud_tests.testcases import base
+
+
+class TestNtpChrony(base.CloudTestCase):
+    """Test ntp module with chrony client"""
+
+    def setUp(self):
+        """Skip this suite of tests on lxd and artful or older."""
+        if self.platform == 'lxd':
+            if self.is_distro('ubuntu') and self.os_version_cmp('artful') <= 0:
+                raise unittest.SkipTest(
+                    'No support for chrony on containers <= artful.'
+                    ' LP: #1589780')
+        return super(TestNtpChrony, self).setUp()
+
+    def test_chrony_entries(self):
+        """Test chrony config entries"""
+        out = self.get_data_file('chrony_conf')
+        self.assertIn('.pool.ntp.org', out)
+
+# vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/modules/ntp_chrony.yaml b/tests/cloud_tests/testcases/modules/ntp_chrony.yaml
new file mode 100644
index 0000000..120735e
--- /dev/null
+++ b/tests/cloud_tests/testcases/modules/ntp_chrony.yaml
@@ -0,0 +1,17 @@
+#
+# ntp enabled, chrony selected, check conf file
+# as chrony won't start in a container
+#
+cloud_config: |
+  #cloud-config
+  ntp:
+    enabled: true
+    ntp_client: chrony
+collect_scripts:
+  chrony_conf: |
+    #!/bin/sh
+    set -- /etc/chrony.conf /etc/chrony/chrony.conf
+    for p in "$@"; do
+        [ -e "$p" ] && { cat "$p"; exit; }
+    done
+# vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/modules/ntp_pools.yaml b/tests/cloud_tests/testcases/modules/ntp_pools.yaml
index d490b22..60fa0fd 100644
--- a/tests/cloud_tests/testcases/modules/ntp_pools.yaml
+++ b/tests/cloud_tests/testcases/modules/ntp_pools.yaml
@@ -9,6 +9,7 @@ required_features:
 cloud_config: |
   #cloud-config
   ntp:
+    ntp_client: ntp
     pools:
         - 0.cloud-init.mypool
         - 1.cloud-init.mypool
diff --git a/tests/cloud_tests/testcases/modules/ntp_servers.yaml b/tests/cloud_tests/testcases/modules/ntp_servers.yaml
index 6b13b70..ee63667 100644
--- a/tests/cloud_tests/testcases/modules/ntp_servers.yaml
+++ b/tests/cloud_tests/testcases/modules/ntp_servers.yaml
@@ -6,6 +6,7 @@ required_features:
 cloud_config: |
   #cloud-config
   ntp:
+    ntp_client: ntp
     servers:
         - 172.16.15.14
         - 172.16.17.18
diff --git a/tests/cloud_tests/testcases/modules/ntp_timesyncd.py b/tests/cloud_tests/testcases/modules/ntp_timesyncd.py
new file mode 100644
index 0000000..eca750b
--- /dev/null
+++ b/tests/cloud_tests/testcases/modules/ntp_timesyncd.py
@@ -0,0 +1,15 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""cloud-init Integration Test Verify Script."""
+from tests.cloud_tests.testcases import base
+
+
+class TestNtpTimesyncd(base.CloudTestCase):
+    """Test ntp module with systemd-timesyncd client"""
+
+    def test_timesyncd_entries(self):
+        """Test timesyncd config entries"""
+        out = self.get_data_file('timesyncd_conf')
+        self.assertIn('.pool.ntp.org', out)
+
+# vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/modules/ntp_timesyncd.yaml b/tests/cloud_tests/testcases/modules/ntp_timesyncd.yaml
new file mode 100644
index 0000000..ee47a74
--- /dev/null
+++ b/tests/cloud_tests/testcases/modules/ntp_timesyncd.yaml
@@ -0,0 +1,15 @@
+#
+# ntp enabled, systemd-timesyncd selected, check conf file
+# as systemd-timesyncd won't start in a container
+#
+cloud_config: |
+  #cloud-config
+  ntp:
+    enabled: true
+    ntp_client: systemd-timesyncd
+collect_scripts:
+  timesyncd_conf: |
+    #!/bin/sh
+    cat /etc/systemd/timesyncd.conf.d/cloud-init.conf
+
+# vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/modules/package_update_upgrade_install.py b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.py
index a92dec2..fecad76 100644
--- a/tests/cloud_tests/testcases/modules/package_update_upgrade_install.py
+++ b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.py
@@ -7,15 +7,13 @@ from tests.cloud_tests.testcases import base
 class TestPackageInstallUpdateUpgrade(base.CloudTestCase):
     """Test package install update upgrade module."""
 
-    def test_installed_htop(self):
-        """Test htop got installed."""
-        out = self.get_data_file('dpkg_htop')
-        self.assertEqual(1, int(out))
+    def test_installed_sl(self):
+        """Test sl got installed."""
+        self.assertPackageInstalled('sl')
 
     def test_installed_tree(self):
         """Test tree got installed."""
-        out = self.get_data_file('dpkg_tree')
-        self.assertEqual(1, int(out))
+        self.assertPackageInstalled('tree')
 
     def test_apt_history(self):
         """Test apt history for update command."""
@@ -23,13 +21,13 @@ class TestPackageInstallUpdateUpgrade(base.CloudTestCase):
         self.assertIn(
             'Commandline: /usr/bin/apt-get --option=Dpkg::Options'
             '::=--force-confold --option=Dpkg::options::=--force-unsafe-io '
-            '--assume-yes --quiet install htop tree', out)
+            '--assume-yes --quiet install sl tree', out)
 
     def test_cloud_init_output(self):
         """Test cloud-init-output for install & upgrade stuff."""
         out = self.get_data_file('cloud-init-output.log')
         self.assertIn('Setting up tree (', out)
-        self.assertIn('Setting up htop (', out)
+        self.assertIn('Setting up sl (', out)
         self.assertIn('Reading package lists...', out)
         self.assertIn('Building dependency tree...', out)
         self.assertIn('Reading state information...', out)
diff --git a/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml
index 71d24b8..dd79e43 100644
--- a/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml
+++ b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml
@@ -15,7 +15,7 @@ required_features:
 cloud_config: |
   #cloud-config
   packages:
-    - htop
+    - sl
     - tree
   package_update: true
   package_upgrade: true
@@ -23,11 +23,8 @@ collect_scripts:
   apt_history_cmdline: |
     #!/bin/bash
     grep ^Commandline: /var/log/apt/history.log
-  dpkg_htop: |
+  dpkg_show: |
     #!/bin/bash
-    dpkg -l | grep htop | wc -l
-  dpkg_tree: |
-    #!/bin/bash
-    dpkg -l | grep tree | wc -l
+    dpkg-query --show
 
 # vi: ts=4 expandtab
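
The apt_history_cmdline script keeps only the Commandline: entries from apt's history log, which is what lets test_apt_history assert the exact apt-get invocation including the new sl package. A small sketch of the equivalent extraction in Python (log format as written by apt):

    def apt_commandlines(history_log_text):
        """Return the command lines recorded in apt's history.log."""
        return [line.split(':', 1)[1].strip()
                for line in history_log_text.splitlines()
                if line.startswith('Commandline:')]

    # e.g. with open('/var/log/apt/history.log') as f:
    #          print(apt_commandlines(f.read()))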
diff --git a/tests/cloud_tests/testcases/modules/salt_minion.py b/tests/cloud_tests/testcases/modules/salt_minion.py
index 70917a4..fc9688e 100644
--- a/tests/cloud_tests/testcases/modules/salt_minion.py
+++ b/tests/cloud_tests/testcases/modules/salt_minion.py
@@ -33,7 +33,6 @@ class Test(base.CloudTestCase):
 
     def test_minion_installed(self):
         """Test if the salt-minion package is installed"""
-        out = self.get_data_file('minion_installed')
-        self.assertEqual(1, int(out))
+        self.assertPackageInstalled('salt-minion')
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/modules/salt_minion.yaml b/tests/cloud_tests/testcases/modules/salt_minion.yaml
index f20b976..9227147 100644
--- a/tests/cloud_tests/testcases/modules/salt_minion.yaml
+++ b/tests/cloud_tests/testcases/modules/salt_minion.yaml
@@ -28,15 +28,22 @@ collect_scripts:
     cat /etc/salt/minion_id
   minion.pem: |
     #!/bin/bash
-    cat /etc/salt/pki/minion/minion.pem
+    PRIV_KEYFILE=/etc/salt/pki/minion/minion.pem
+    if [ ! -f $PRIV_KEYFILE ]; then
+        # Bionic and later automatically moves /etc/salt/pki/minion/*
+        PRIV_KEYFILE=/var/lib/salt/pki/minion/minion.pem
+    fi
+    cat $PRIV_KEYFILE
   minion.pub: |
     #!/bin/bash
-    cat /etc/salt/pki/minion/minion.pub
+    PUB_KEYFILE=/etc/salt/pki/minion/minion.pub
+    if [ ! -f $PUB_KEYFILE ]; then
+        # Bionic and later automatically moves /etc/salt/pki/minion/*
+        PUB_KEYFILE=/var/lib/salt/pki/minion/minion.pub
+    fi
+    cat $PUB_KEYFILE
   grains: |
     #!/bin/bash
     cat /etc/salt/grains
-  minion_installed: |
-    #!/bin/bash
-    dpkg -l | grep salt-minion | grep ii | wc -l
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/modules/user_groups.py b/tests/cloud_tests/testcases/modules/user_groups.py
index 93b7a82..4067348 100644
--- a/tests/cloud_tests/testcases/modules/user_groups.py
+++ b/tests/cloud_tests/testcases/modules/user_groups.py
@@ -42,7 +42,7 @@ class TestUserGroups(base.CloudTestCase):
 
     def test_user_root_in_secret(self):
         """Test root user is in 'secret' group."""
-        user, _, groups = self.get_data_file('root_groups').partition(":")
+        _user, _, groups = self.get_data_file('root_groups').partition(":")
         self.assertIn("secret", groups.split(),
                       msg="User root is not in group 'secret'")
 
diff --git a/tests/cloud_tests/util.py b/tests/cloud_tests/util.py
index 3dd4996..06f7d86 100644
--- a/tests/cloud_tests/util.py
+++ b/tests/cloud_tests/util.py
@@ -358,7 +358,7 @@ class TargetBase(object):
         # when sh is invoked with '-c', then the first argument is "$0"
         # which is commonly understood as the "program name".
         # 'read_data' is the program name, and 'remote_path' is '$1'
-        stdout, stderr, rc = self._execute(
+        stdout, _stderr, rc = self._execute(
             ["sh", "-c", 'exec cat "$1"', 'read_data', remote_path])
         if rc != 0:
             raise RuntimeError("Failed to read file '%s'" % remote_path)
diff --git a/tests/cloud_tests/verify.py b/tests/cloud_tests/verify.py
index 5a68a48..bfb2744 100644
--- a/tests/cloud_tests/verify.py
+++ b/tests/cloud_tests/verify.py
@@ -56,6 +56,51 @@ def verify_data(data_dir, platform, os_name, tests):
     return res
 
 
+def format_test_failures(test_result):
+    """Return a human-readable printable format of test failures."""
+    if not test_result['failures']:
+        return ''
+    failure_hdr = '    test failures:'
+    failure_fmt = '    * {module}.{class}.{function}\n          {error}'
+    output = []
+    for failure in test_result['failures']:
+        if not output:
+            output = [failure_hdr]
+        output.append(failure_fmt.format(**failure))
+    return '\n'.join(output)
+
+
+def format_results(res):
+    """Return human-readable results as a string"""
+    platform_hdr = 'Platform: {platform}'
+    distro_hdr = '  Distro: {distro}'
+    distro_summary_fmt = (
+        '    test modules passed:{passed} tests failed:{failed}')
+    output = ['']
+    counts = {}
+    for platform, platform_data in res.items():
+        output.append(platform_hdr.format(platform=platform))
+        counts[platform] = {}
+        for distro, distro_data in platform_data.items():
+            distro_failure_output = []
+            output.append(distro_hdr.format(distro=distro))
+            counts[platform][distro] = {'passed': 0, 'failed': 0}
+            for _, test_result in distro_data.items():
+                if test_result['passed']:
+                    counts[platform][distro]['passed'] += 1
+                else:
+                    counts[platform][distro]['failed'] += len(
+                        test_result['failures'])
+                    failure_output = format_test_failures(test_result)
+                    if failure_output:
+                        distro_failure_output.append(failure_output)
+            output.append(
+                distro_summary_fmt.format(**counts[platform][distro]))
+            if distro_failure_output:
+                output.extend(distro_failure_output)
+    return '\n'.join(output)
+
+
 def verify(args):
     """Verify test data.
 
@@ -90,7 +135,7 @@ def verify(args):
             failed += len(fail_list)
 
     # dump results
-    LOG.debug('verify results: %s', res)
+    LOG.debug('\n---- Verify summarized results:\n%s', format_results(res))
     if args.result:
         util.merge_results({'verify': res}, args.result)
 
diff --git a/tests/data/netinfo/netdev-formatted-output b/tests/data/netinfo/netdev-formatted-output
new file mode 100644
index 0000000..283ab4a
--- /dev/null
+++ b/tests/data/netinfo/netdev-formatted-output
@@ -0,0 +1,10 @@
++++++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++++++
++---------+------+------------------------------+---------------+--------+-------------------+
+|  Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
++---------+------+------------------------------+---------------+--------+-------------------+
+| enp0s25 | True |         192.168.2.18         | 255.255.255.0 |   .    | 50:7b:9d:2c:af:91 |
+| enp0s25 | True | fe80::7777:2222:1111:eeee/64 |       .       | global | 50:7b:9d:2c:af:91 |
+| enp0s25 | True | fe80::8107:2b92:867e:f8a6/64 |       .       |  link  | 50:7b:9d:2c:af:91 |
+|    lo   | True |          127.0.0.1           |   255.0.0.0   |   .    |         .         |
+|    lo   | True |           ::1/128            |       .       |  host  |         .         |
++---------+------+------------------------------+---------------+--------+-------------------+
diff --git a/tests/data/netinfo/netdev-formatted-output-down b/tests/data/netinfo/netdev-formatted-output-down
new file mode 100644
index 0000000..038dfb4
--- /dev/null
+++ b/tests/data/netinfo/netdev-formatted-output-down
@@ -0,0 +1,8 @@
++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++
++--------+-------+-----------+-----------+-------+-------------------+
+| Device |   Up  |  Address  |    Mask   | Scope |     Hw-Address    |
++--------+-------+-----------+-----------+-------+-------------------+
+|  eth0  | False |     .     |     .     |   .   | 00:16:3e:de:51:a6 |
+|   lo   |  True | 127.0.0.1 | 255.0.0.0 |  host |         .         |
+|   lo   |  True |  ::1/128  |     .     |  host |         .         |
++--------+-------+-----------+-----------+-------+-------------------+
diff --git a/tests/data/netinfo/new-ifconfig-output b/tests/data/netinfo/new-ifconfig-output
new file mode 100644
index 0000000..83d4ad1
--- /dev/null
+++ b/tests/data/netinfo/new-ifconfig-output
@@ -0,0 +1,18 @@
+enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
+        inet 192.168.2.18  netmask 255.255.255.0  broadcast 192.168.2.255
+        inet6 fe80::7777:2222:1111:eeee  prefixlen 64  scopeid 0x30<global>
+        inet6 fe80::8107:2b92:867e:f8a6  prefixlen 64  scopeid 0x20<link>
+        ether 50:7b:9d:2c:af:91  txqueuelen 1000  (Ethernet)
+        RX packets 3017  bytes 10601563 (10.1 MiB)
+        RX errors 0  dropped 39  overruns 0  frame 0
+        TX packets 2627  bytes 196976 (192.3 KiB)
+        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
+
+lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
+        inet 127.0.0.1  netmask 255.0.0.0
+        inet6 ::1  prefixlen 128  scopeid 0x10<host>
+        loop  txqueuelen 1  (Local Loopback)
+        RX packets 0  bytes 0 (0.0 B)
+        RX errors 0  dropped 0  overruns 0  frame 0
+        TX packets 0  bytes 0 (0.0 B)
+        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
diff --git a/tests/data/netinfo/new-ifconfig-output-down b/tests/data/netinfo/new-ifconfig-output-down
new file mode 100644
index 0000000..5d12e35
--- /dev/null
+++ b/tests/data/netinfo/new-ifconfig-output-down
@@ -0,0 +1,15 @@
+eth0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
+        ether 00:16:3e:de:51:a6  txqueuelen 1000  (Ethernet)
+        RX packets 126229  bytes 158139342 (158.1 MB)
+        RX errors 0  dropped 0  overruns 0  frame 0
+        TX packets 59317  bytes 4839008 (4.8 MB)
+        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
+
+lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
+        inet 127.0.0.1  netmask 255.0.0.0
+        inet6 ::1  prefixlen 128  scopeid 0x10<host>
+        loop  txqueuelen 1000  (Local Loopback)
+        RX packets 260  bytes 20092 (20.0 KB)
+        RX errors 0  dropped 0  overruns 0  frame 0
+        TX packets 260  bytes 20092 (20.0 KB)
+        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
diff --git a/tests/data/netinfo/old-ifconfig-output b/tests/data/netinfo/old-ifconfig-output
new file mode 100644
index 0000000..e01f763
--- /dev/null
+++ b/tests/data/netinfo/old-ifconfig-output
@@ -0,0 +1,18 @@
+enp0s25   Link encap:Ethernet  HWaddr 50:7b:9d:2c:af:91
+          inet addr:192.168.2.18  Bcast:192.168.2.255  Mask:255.255.255.0
+          inet6 addr: fe80::7777:2222:1111:eeee/64 Scope:Global
+          inet6 addr: fe80::8107:2b92:867e:f8a6/64 Scope:Link
+          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
+          RX packets:8106427 errors:55 dropped:0 overruns:0 frame:37
+          TX packets:9339739 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:1000
+          RX bytes:4953721719 (4.9 GB)  TX bytes:7731890194 (7.7 GB)
+          Interrupt:20 Memory:e1200000-e1220000
+
+lo        Link encap:Local Loopback
+          inet addr:127.0.0.1  Mask:255.0.0.0
+          inet6 addr: ::1/128 Scope:Host
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
+          RX packets:579230851 errors:0 dropped:0 overruns:0 frame:0
+          TX packets:579230851 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:1
diff --git a/tests/data/netinfo/route-formatted-output b/tests/data/netinfo/route-formatted-output
new file mode 100644
index 0000000..9d2c5dd
--- /dev/null
+++ b/tests/data/netinfo/route-formatted-output
@@ -0,0 +1,22 @@
++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++
++-------+-------------+-------------+---------------+-----------+-------+
+| Route | Destination |   Gateway   |    Genmask    | Interface | Flags |
++-------+-------------+-------------+---------------+-----------+-------+
+|   0   |   0.0.0.0   | 192.168.2.1 |    0.0.0.0    |  enp0s25  |   UG  |
+|   1   |   0.0.0.0   | 192.168.2.1 |    0.0.0.0    |   wlp3s0  |   UG  |
+|   2   | 192.168.2.0 |   0.0.0.0   | 255.255.255.0 |  enp0s25  |   U   |
++-------+-------------+-------------+---------------+-----------+-------+
++++++++++++++++++++++++++++++++++++Route IPv6 info+++++++++++++++++++++++++++++++++++
++-------+---------------------------+---------------------------+-----------+-------+
+| Route |        Destination        |          Gateway          | Interface | Flags |
++-------+---------------------------+---------------------------+-----------+-------+
+|   0   |  2a00:abcd:82ae:cd33::657 |             ::            |  enp0s25  |   Ue  |
+|   1   |  2a00:abcd:82ae:cd33::/64 |             ::            |  enp0s25  |   U   |
+|   2   |  2a00:abcd:82ae:cd33::/56 | fe80::32ee:54de:cd43:b4e1 |  enp0s25  |   UG  |
+|   3   |     fd81:123f:654::657    |             ::            |  enp0s25  |   U   |
+|   4   |     fd81:123f:654::/64    |             ::            |  enp0s25  |   U   |
+|   5   |     fd81:123f:654::/48    | fe80::32ee:54de:cd43:b4e1 |  enp0s25  |   UG  |
+|   6   | fe80::abcd:ef12:bc34:da21 |             ::            |  enp0s25  |   U   |
+|   7   |         fe80::/64         |             ::            |  enp0s25  |   U   |
+|   8   |            ::/0           | fe80::32ee:54de:cd43:b4e1 |  enp0s25  |   UG  |
++-------+---------------------------+---------------------------+-----------+-------+
diff --git a/tests/data/netinfo/sample-ipaddrshow-output b/tests/data/netinfo/sample-ipaddrshow-output
new file mode 100644
index 0000000..b2fa267
--- /dev/null
+++ b/tests/data/netinfo/sample-ipaddrshow-output
@@ -0,0 +1,13 @@
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
+    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
+2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
+    link/ether 50:7b:9d:2c:af:91 brd ff:ff:ff:ff:ff:ff
+    inet 192.168.2.18/24 brd 192.168.2.255 scope global dynamic enp0s25
+       valid_lft 84174sec preferred_lft 84174sec
+    inet6 fe80::7777:2222:1111:eeee/64 scope global
+       valid_lft forever preferred_lft forever
+    inet6 fe80::8107:2b92:867e:f8a6/64 scope link
+       valid_lft forever preferred_lft forever
+
diff --git a/tests/data/netinfo/sample-ipaddrshow-output-down b/tests/data/netinfo/sample-ipaddrshow-output-down
new file mode 100644
index 0000000..cb516d6
--- /dev/null
+++ b/tests/data/netinfo/sample-ipaddrshow-output-down
@@ -0,0 +1,8 @@
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+    inet6 ::1/128 scope host
+       valid_lft forever preferred_lft forever
+44: eth0@if45: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
+    link/ether 00:16:3e:de:51:a6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
diff --git a/tests/data/netinfo/sample-iproute-output-v4 b/tests/data/netinfo/sample-iproute-output-v4
new file mode 100644
index 0000000..904cb03
--- /dev/null
+++ b/tests/data/netinfo/sample-iproute-output-v4
@@ -0,0 +1,3 @@
+default via 192.168.2.1 dev enp0s25 proto static metric 100
+default via 192.168.2.1 dev wlp3s0 proto static metric 150
+192.168.2.0/24 dev enp0s25 proto kernel scope link src 192.168.2.18 metric 100
diff --git a/tests/data/netinfo/sample-iproute-output-v6 b/tests/data/netinfo/sample-iproute-output-v6
new file mode 100644
index 0000000..12bb1c1
--- /dev/null
+++ b/tests/data/netinfo/sample-iproute-output-v6
@@ -0,0 +1,11 @@
+2a00:abcd:82ae:cd33::657 dev enp0s25 proto kernel metric 256 expires 2334sec pref medium
+2a00:abcd:82ae:cd33::/64 dev enp0s25 proto ra metric 100 pref medium
+2a00:abcd:82ae:cd33::/56 via fe80::32ee:54de:cd43:b4e1 dev enp0s25 proto ra metric 100 pref medium
+fd81:123f:654::657 dev enp0s25 proto kernel metric 256 pref medium
+fd81:123f:654::/64 dev enp0s25 proto ra metric 100 pref medium
+fd81:123f:654::/48 via fe80::32ee:54de:cd43:b4e1 dev enp0s25 proto ra metric 100 pref medium
+fe80::abcd:ef12:bc34:da21 dev enp0s25 proto static metric 100 pref medium
+fe80::/64 dev enp0s25 proto kernel metric 256 pref medium
+default via fe80::32ee:54de:cd43:b4e1 dev enp0s25 proto static metric 100 pref medium
+local ::1 dev lo  table local  proto none  metric 0  pref medium
+local 2600:1f16:b80:ad00:90a:c915:bca6:5ff2 dev lo  table local  proto none  metric 0  pref medium
diff --git a/tests/data/netinfo/sample-route-output-v4 b/tests/data/netinfo/sample-route-output-v4
new file mode 100644
index 0000000..ecc31d9
--- /dev/null
+++ b/tests/data/netinfo/sample-route-output-v4
@@ -0,0 +1,5 @@
+Kernel IP routing table
+Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
+0.0.0.0         192.168.2.1     0.0.0.0         UG        100 0          0 enp0s25
+0.0.0.0         192.168.2.1     0.0.0.0         UG        150 0          0 wlp3s0
+192.168.2.0     0.0.0.0         255.255.255.0   U         100 0          0 enp0s25
diff --git a/tests/data/netinfo/sample-route-output-v6 b/tests/data/netinfo/sample-route-output-v6
new file mode 100644
index 0000000..4712b73
--- /dev/null
+++ b/tests/data/netinfo/sample-route-output-v6
@@ -0,0 +1,13 @@
+Kernel IPv6 routing table
+Destination                     Next Hop                   Flag Met Re  Use If
+2a00:abcd:82ae:cd33::657/128    ::                         Ue   256 1     0 enp0s25
+2a00:abcd:82ae:cd33::/64        ::                         U    100 1     0 enp0s25
+2a00:abcd:82ae:cd33::/56        fe80::32ee:54de:cd43:b4e1  UG   100 1     0 enp0s25
+fd81:123f:654::657/128          ::                         U    256 1     0 enp0s25
+fd81:123f:654::/64              ::                         U    100 1     0 enp0s25
+fd81:123f:654::/48              fe80::32ee:54de:cd43:b4e1  UG   100 1     0 enp0s25
+fe80::abcd:ef12:bc34:da21/128   ::                         U    100 1     2 enp0s25
+fe80::/64                       ::                         U    256 1 16880 enp0s25
+::/0                            fe80::32ee:54de:cd43:b4e1  UG   100 1     0 enp0s25
+::/0                            ::                         !n   -1  1424956 lo
+::1/128                         ::                         Un   0   4 26289 lo
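
These netinfo fixtures come in pairs: raw command output (old/new ifconfig, ip addr show, ip route, route -n/-6) alongside the tables cloud-init is expected to render from them, so the formatting tests can feed canned output through the parser and compare against a golden file. A bare-bones sketch of that golden-file pattern, using only the stdlib (paths assumed relative to the tests directory):

    import os

    NETINFO_DATA = os.path.join('tests', 'data', 'netinfo')

    def load_fixture(name):
        """Read one of the canned netinfo fixtures."""
        with open(os.path.join(NETINFO_DATA, name)) as stream:
            return stream.read()

    # A test then mocks the command runner to return, say,
    # load_fixture('new-ifconfig-output') and asserts the rendered table
    # equals load_fixture('netdev-formatted-output').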
diff --git a/tests/unittests/test__init__.py b/tests/unittests/test__init__.py
index 25878d7..739bbeb 100644
--- a/tests/unittests/test__init__.py
+++ b/tests/unittests/test__init__.py
@@ -182,7 +182,7 @@ class TestCmdlineUrl(CiTestCase):
         self.assertEqual(
             ('url', 'http://example.com'), main.parse_cmdline_url(cmdline))
 
-    @mock.patch('cloudinit.cmd.main.util.read_file_or_url')
+    @mock.patch('cloudinit.cmd.main.url_helper.read_file_or_url')
     def test_invalid_content(self, m_read):
         key = "cloud-config-url"
         url = 'http://example.com/foo'
@@ -196,7 +196,7 @@ class TestCmdlineUrl(CiTestCase):
         self.assertIn(url, msg)
         self.assertFalse(os.path.exists(fpath))
 
-    @mock.patch('cloudinit.cmd.main.util.read_file_or_url')
+    @mock.patch('cloudinit.cmd.main.url_helper.read_file_or_url')
     def test_valid_content(self, m_read):
         url = "http://example.com/foo";
         payload = b"#cloud-config\nmydata: foo\nbar: wark\n"
@@ -210,18 +210,18 @@ class TestCmdlineUrl(CiTestCase):
         self.assertEqual(logging.INFO, lvl)
         self.assertIn(url, msg)
 
-    @mock.patch('cloudinit.cmd.main.util.read_file_or_url')
+    @mock.patch('cloudinit.cmd.main.url_helper.read_file_or_url')
     def test_no_key_found(self, m_read):
         cmdline = "ro mykey=http://example.com/foo root=foo"
         fpath = self.tmp_path("ccpath")
-        lvl, msg = main.attempt_cmdline_url(
+        lvl, _msg = main.attempt_cmdline_url(
             fpath, network=True, cmdline=cmdline)
 
         m_read.assert_not_called()
         self.assertFalse(os.path.exists(fpath))
         self.assertEqual(logging.DEBUG, lvl)
 
-    @mock.patch('cloudinit.cmd.main.util.read_file_or_url')
+    @mock.patch('cloudinit.cmd.main.url_helper.read_file_or_url')
     def test_exception_warns(self, m_read):
         url = "http://example.com/foo";
         cmdline = "ro cloud-config-url=%s root=LABEL=bar" % url
diff --git a/tests/unittests/test_data.py b/tests/unittests/test_data.py
index 275b16d..3efe7ad 100644
--- a/tests/unittests/test_data.py
+++ b/tests/unittests/test_data.py
@@ -524,7 +524,17 @@ c: 4
         self.assertEqual(cfg.get('password'), 'gocubs')
         self.assertEqual(cfg.get('locale'), 'chicago')
 
-    @httpretty.activate
+
+class TestConsumeUserDataHttp(TestConsumeUserData, helpers.HttprettyTestCase):
+
+    def setUp(self):
+        TestConsumeUserData.setUp(self)
+        helpers.HttprettyTestCase.setUp(self)
+
+    def tearDown(self):
+        TestConsumeUserData.tearDown(self)
+        helpers.HttprettyTestCase.tearDown(self)
+
     @mock.patch('cloudinit.url_helper.time.sleep')
     def test_include(self, mock_sleep):
         """Test #include."""
@@ -543,7 +553,6 @@ c: 4
         cc = util.load_yaml(cc_contents)
         self.assertTrue(cc.get('included'))
 
-    @httpretty.activate
     @mock.patch('cloudinit.url_helper.time.sleep')
     def test_include_bad_url(self, mock_sleep):
         """Test #include with a bad URL."""
@@ -597,8 +606,10 @@ class TestUDProcess(helpers.ResourceUsingTestCase):
 
 
 class TestConvertString(helpers.TestCase):
+
     def test_handles_binary_non_utf8_decodable(self):
-        blob = b'\x32\x99'
+        """Printable unicode (not utf8-decodable) is safely converted."""
+        blob = b'#!/bin/bash\necho \xc3\x84\n'
         msg = ud.convert_string(blob)
         self.assertEqual(blob, msg.get_payload(decode=True))
 
@@ -612,6 +623,13 @@ class TestConvertString(helpers.TestCase):
         msg = ud.convert_string(text)
         self.assertEqual(text, msg.get_payload(decode=False))
 
+    def test_handle_mime_parts(self):
+        """Mime parts are properly returned as a mime message."""
+        message = MIMEBase("text", "plain")
+        message.set_payload("Just text")
+        msg = ud.convert_string(str(message))
+        self.assertEqual("Just text", msg.get_payload(decode=False))
+
 
 class TestFetchBaseConfig(helpers.TestCase):
     def test_only_builtin_gets_builtin(self):
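
TestConsumeUserDataHttp above inherits from two TestCase bases and chains setUp/tearDown to each explicitly; with Python's method resolution order, an inherited setUp would come from the first base only, so HttprettyTestCase's socket setup would silently never run. The pitfall in miniature:

    import unittest

    class BaseA(unittest.TestCase):
        def setUp(self):
            self.a_ready = True

    class BaseB(unittest.TestCase):
        def setUp(self):
            self.b_ready = True

    class Combined(BaseA, BaseB):
        def setUp(self):
            # Without these explicit calls only BaseA.setUp (first in the
            # MRO) would run and b_ready would never be set.
            BaseA.setUp(self)
            BaseB.setUp(self)

        def test_both_ran(self):
            self.assertTrue(self.a_ready and self.b_ready)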
diff --git a/tests/unittests/test_datasource/test_aliyun.py b/tests/unittests/test_datasource/test_aliyun.py
index 4fa9616..1e77842 100644
--- a/tests/unittests/test_datasource/test_aliyun.py
+++ b/tests/unittests/test_datasource/test_aliyun.py
@@ -130,7 +130,6 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase):
                          self.ds.get_hostname())
 
     @mock.patch("cloudinit.sources.DataSourceAliYun._is_aliyun")
-    @httpretty.activate
     def test_with_mock_server(self, m_is_aliyun):
         m_is_aliyun.return_value = True
         self.regist_default_server()
@@ -143,7 +142,6 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase):
         self._test_host_name()
 
     @mock.patch("cloudinit.sources.DataSourceAliYun._is_aliyun")
-    @httpretty.activate
     def test_returns_false_when_not_on_aliyun(self, m_is_aliyun):
         """If is_aliyun returns false, then get_data should return False."""
         m_is_aliyun.return_value = False
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 3e8b791..e82716e 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -1,10 +1,10 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit import helpers
-from cloudinit.util import b64e, decode_binary, load_file, write_file
 from cloudinit.sources import DataSourceAzure as dsaz
-from cloudinit.util import find_freebsd_part
-from cloudinit.util import get_path_dev_freebsd
+from cloudinit.util import (b64e, decode_binary, load_file, write_file,
+                            find_freebsd_part, get_path_dev_freebsd,
+                            MountFailedError)
 from cloudinit.version import version_string as vs
 from cloudinit.tests.helpers import (CiTestCase, TestCase, populate_dir, mock,
                                      ExitStack, PY26, SkipTest)
@@ -95,6 +95,8 @@ class TestAzureDataSource(CiTestCase):
         self.patches = ExitStack()
         self.addCleanup(self.patches.close)
 
+        self.patches.enter_context(mock.patch.object(dsaz, '_get_random_seed'))
+
         super(TestAzureDataSource, self).setUp()
 
     def apply_patches(self, patches):
@@ -214,7 +216,7 @@ scbus-1 on xpt0 bus 0
                 self.assertIn(tag, x)
 
         def tags_equal(x, y):
-            for x_tag, x_val in x.items():
+            for x_val in x.values():
                 y_val = y.get(x_val.tag)
                 self.assertEqual(x_val.text, y_val.text)
 
@@ -335,6 +337,18 @@ fdescfs            /dev/fd          fdescfs rw              0 0
         self.assertTrue(ret)
         self.assertEqual(data['agent_invoked'], '_COMMAND')
 
+    def test_sys_cfg_set_never_destroy_ntfs(self):
+        sys_cfg = {'datasource': {'Azure': {
+            'never_destroy_ntfs': 'user-supplied-value'}}}
+        data = {'ovfcontent': construct_valid_ovf_env(data={}),
+                'sys_cfg': sys_cfg}
+
+        dsrc = self._get_ds(data)
+        ret = self._get_and_setup(dsrc)
+        self.assertTrue(ret)
+        self.assertEqual(dsrc.ds_cfg.get(dsaz.DS_CFG_KEY_PRESERVE_NTFS),
+                         'user-supplied-value')
+
     def test_username_used(self):
         odata = {'HostName': "myhost", 'UserName': "myuser"}
         data = {'ovfcontent': construct_valid_ovf_env(data=odata)}
@@ -676,6 +690,8 @@ class TestAzureBounce(CiTestCase):
                               mock.MagicMock(return_value={})))
         self.patches.enter_context(
             mock.patch.object(dsaz.util, 'which', lambda x: True))
+        self.patches.enter_context(
+            mock.patch.object(dsaz, '_get_random_seed'))
 
         def _dmi_mocks(key):
             if key == 'system-uuid':
@@ -957,7 +973,9 @@ class TestCanDevBeReformatted(CiTestCase):
             # return sorted by partition number
             return sorted(ret, key=lambda d: d[0])
 
-        def mount_cb(device, callback):
+        def mount_cb(device, callback, mtype, update_env_for_mount):
+            self.assertEqual('ntfs', mtype)
+            self.assertEqual('C', update_env_for_mount.get('LANG'))
             p = self.tmp_dir()
             for f in bypath.get(device).get('files', []):
                 write_file(os.path.join(p, f), content=f)
@@ -988,14 +1006,16 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda2': {'num': 2},
                     '/dev/sda3': {'num': 3},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("3 or more", msg.lower())
 
     def test_no_partitions_is_false(self):
         """A disk with no partitions can not be formatted."""
         self.patchup({'/dev/sda': {}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("not partitioned", msg.lower())
 
@@ -1007,7 +1027,8 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda1': {'num': 1},
                     '/dev/sda2': {'num': 2, 'fs': 'ext4', 'files': []},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("not ntfs", msg.lower())
 
@@ -1020,7 +1041,8 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda2': {'num': 2, 'fs': 'ntfs',
                                   'files': ['secret.txt']},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("files on it", msg.lower())
 
@@ -1032,7 +1054,8 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda1': {'num': 1},
                     '/dev/sda2': {'num': 2, 'fs': 'ntfs', 'files': []},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertTrue(value)
         self.assertIn("safe for", msg.lower())
 
@@ -1043,7 +1066,8 @@ class TestCanDevBeReformatted(CiTestCase):
                 'partitions': {
                     '/dev/sda1': {'num': 1, 'fs': 'zfs'},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("not ntfs", msg.lower())
 
@@ -1055,9 +1079,14 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda1': {'num': 1, 'fs': 'ntfs',
                                   'files': ['file1.txt', 'file2.exe']},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
-        self.assertFalse(value)
-        self.assertIn("files on it", msg.lower())
+        with mock.patch.object(dsaz.LOG, 'warning') as warning:
+            value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                     preserve_ntfs=False)
+            wmsg = warning.call_args[0][0]
+            self.assertIn("looks like you're using NTFS on the ephemeral disk",
+                          wmsg)
+            self.assertFalse(value)
+            self.assertIn("files on it", msg.lower())
 
     def test_one_partition_ntfs_empty_is_true(self):
         """1 mountable ntfs partition and no files can be formatted."""
@@ -1066,7 +1095,8 @@ class TestCanDevBeReformatted(CiTestCase):
                 'partitions': {
                     '/dev/sda1': {'num': 1, 'fs': 'ntfs', 'files': []}
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertTrue(value)
         self.assertIn("safe for", msg.lower())
 
@@ -1078,7 +1108,8 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda1': {'num': 1, 'fs': 'ntfs',
                                   'files': ['dataloss_warning_readme.txt']}
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertTrue(value)
         self.assertIn("safe for", msg.lower())
 
@@ -1093,7 +1124,8 @@ class TestCanDevBeReformatted(CiTestCase):
                         'num': 1, 'fs': 'ntfs', 'files': [self.warning_file],
                         'realpath': '/dev/sdb1'}
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted(epath)
+        value, msg = dsaz.can_dev_be_reformatted(epath,
+                                                 preserve_ntfs=False)
         self.assertTrue(value)
         self.assertIn("safe for", msg.lower())
 
@@ -1112,10 +1144,49 @@ class TestCanDevBeReformatted(CiTestCase):
                     epath + '-part3': {'num': 3, 'fs': 'ext',
                                        'realpath': '/dev/sdb3'}
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted(epath)
+        value, msg = dsaz.can_dev_be_reformatted(epath,
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("3 or more", msg.lower())
 
+    def test_ntfs_mount_errors_true(self):
+        """can_dev_be_reformatted does not fail if NTFS is unknown fstype."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1, 'fs': 'ntfs', 'files': []}
+                }}})
+
+        err = ("Unexpected error while running command.\n",
+               "Command: ['mount', '-o', 'ro,sync', '-t', 'auto', ",
+               "'/dev/sda1', '/fake-tmp/dir']\n"
+               "Exit code: 32\n"
+               "Reason: -\n"
+               "Stdout: -\n"
+               "Stderr: mount: unknown filesystem type 'ntfs'")
+        self.m_mount_cb.side_effect = MountFailedError(
+            'Failed mounting %s to %s due to: %s' %
+            ('/dev/sda', '/fake-tmp/dir', err))
+
+        value, msg = dsaz.can_dev_be_reformatted('/dev/sda',
+                                                 preserve_ntfs=False)
+        self.assertTrue(value)
+        self.assertIn('cannot mount NTFS, assuming', msg)
+
+    def test_never_destroy_ntfs_config_false(self):
+        """Normally formattable situation with never_destroy_ntfs set."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1, 'fs': 'ntfs',
+                                  'files': ['dataloss_warning_readme.txt']}
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=True)
+        self.assertFalse(value)
+        self.assertIn("config says to never destroy NTFS "
+                      "(datasource.Azure.never_destroy_ntfs)", msg)
+
 
 class TestAzureNetExists(CiTestCase):
 
@@ -1125,19 +1196,9 @@ class TestAzureNetExists(CiTestCase):
         self.assertTrue(hasattr(dsaz, "DataSourceAzureNet"))
 
 
-@mock.patch('cloudinit.sources.DataSourceAzure.util.subp')
-@mock.patch.object(dsaz, 'get_hostname')
-@mock.patch.object(dsaz, 'set_hostname')
-class TestAzureDataSourcePreprovisioning(CiTestCase):
-
-    def setUp(self):
-        super(TestAzureDataSourcePreprovisioning, self).setUp()
-        tmp = self.tmp_dir()
-        self.waagent_d = self.tmp_path('/var/lib/waagent', tmp)
-        self.paths = helpers.Paths({'cloud_dir': tmp})
-        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+class TestPreprovisioningReadAzureOvfFlag(CiTestCase):
 
-    def test_read_azure_ovf_with_true_flag(self, *args):
+    def test_read_azure_ovf_with_true_flag(self):
         """The read_azure_ovf method should set the PreprovisionedVM
            cfg flag if the proper setting is present."""
         content = construct_valid_ovf_env(
@@ -1146,7 +1207,7 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         cfg = ret[2]
         self.assertTrue(cfg['PreprovisionedVm'])
 
-    def test_read_azure_ovf_with_false_flag(self, *args):
+    def test_read_azure_ovf_with_false_flag(self):
         """The read_azure_ovf method should set the PreprovisionedVM
            cfg flag to false if the proper setting is false."""
         content = construct_valid_ovf_env(
@@ -1155,7 +1216,7 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         cfg = ret[2]
         self.assertFalse(cfg['PreprovisionedVm'])
 
-    def test_read_azure_ovf_without_flag(self, *args):
+    def test_read_azure_ovf_without_flag(self):
         """The read_azure_ovf method should not set the
            PreprovisionedVM cfg flag."""
         content = construct_valid_ovf_env()
@@ -1163,12 +1224,121 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         cfg = ret[2]
         self.assertFalse(cfg['PreprovisionedVm'])
 
-    @mock.patch('cloudinit.sources.DataSourceAzure.util.is_FreeBSD')
-    @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
-    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
-    @mock.patch('requests.Session.request')
+
+@mock.patch('os.path.isfile')
+class TestPreprovisioningShouldReprovision(CiTestCase):
+
+    def setUp(self):
+        super(TestPreprovisioningShouldReprovision, self).setUp()
+        tmp = self.tmp_dir()
+        self.waagent_d = self.tmp_path('/var/lib/waagent', tmp)
+        self.paths = helpers.Paths({'cloud_dir': tmp})
+        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
+    def test__should_reprovision_with_true_cfg(self, write_f, isfile):
+        """The _should_reprovision method should return true with config
+           flag present."""
+        isfile.return_value = False
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        self.assertTrue(dsa._should_reprovision(
+            (None, None, {'PreprovisionedVm': True}, None)))
+
+    def test__should_reprovision_with_file_existing(self, isfile):
+        """The _should_reprovision method should return True if the sentinal
+           exists."""
+        isfile.return_value = True
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        self.assertTrue(dsa._should_reprovision(
+            (None, None, {'preprovisionedvm': False}, None)))
+
+    def test__should_reprovision_returns_false(self, isfile):
+        """The _should_reprovision method should return False
+           if config and sentinel are not present."""
+        isfile.return_value = False
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        self.assertFalse(dsa._should_reprovision((None, None, {}, None)))
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.DataSourceAzure._poll_imds')
+    def test_reprovision_calls__poll_imds(self, _poll_imds, isfile):
+        """_reprovision will poll IMDS."""
+        isfile.return_value = False
+        hostname = "myhost"
+        username = "myuser"
+        odata = {'HostName': hostname, 'UserName': username}
+        _poll_imds.return_value = construct_valid_ovf_env(data=odata)
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        dsa._reprovision()
+        _poll_imds.assert_called_with()
+
+
+@mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
+@mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+@mock.patch('requests.Session.request')
+@mock.patch(
+    'cloudinit.sources.DataSourceAzure.DataSourceAzure._report_ready')
+class TestPreprovisioningPollIMDS(CiTestCase):
+
+    def setUp(self):
+        super(TestPreprovisioningPollIMDS, self).setUp()
+        self.tmp = self.tmp_dir()
+        self.waagent_d = self.tmp_path('/var/lib/waagent', self.tmp)
+        self.paths = helpers.Paths({'cloud_dir': self.tmp})
+        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
+    def test_poll_imds_calls_report_ready(self, write_f, report_ready_func,
+                                          fake_resp, m_dhcp, m_net):
+        """The poll_imds will call report_ready after creating marker file."""
+        report_marker = self.tmp_path('report_marker', self.tmp)
+        lease = {
+            'interface': 'eth9', 'fixed-address': '192.168.2.9',
+            'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
+            'unknown-245': '624c3620'}
+        m_dhcp.return_value = [lease]
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        mock_path = (
+            'cloudinit.sources.DataSourceAzure.REPORTED_READY_MARKER_FILE')
+        with mock.patch(mock_path, report_marker):
+            dsa._poll_imds()
+        self.assertEqual(report_ready_func.call_count, 1)
+        report_ready_func.assert_called_with(lease=lease)
+
+    def test_poll_imds_report_ready_false(self, report_ready_func,
+                                          fake_resp, m_dhcp, m_net):
+        """The poll_imds should not call reporting ready
+           when flag is false"""
+        report_marker = self.tmp_path('report_marker', self.tmp)
+        write_file(report_marker, content='dont run report_ready :)')
+        m_dhcp.return_value = [{
+            'interface': 'eth9', 'fixed-address': '192.168.2.9',
+            'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
+            'unknown-245': '624c3620'}]
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        mock_path = (
+            'cloudinit.sources.DataSourceAzure.REPORTED_READY_MARKER_FILE')
+        with mock.patch(mock_path, report_marker):
+            dsa._poll_imds()
+        self.assertEqual(report_ready_func.call_count, 0)
+
+
+@mock.patch('cloudinit.sources.DataSourceAzure.util.subp')
+@mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
+@mock.patch('cloudinit.sources.DataSourceAzure.util.is_FreeBSD')
+@mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
+@mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+@mock.patch('requests.Session.request')
+class TestAzureDataSourcePreprovisioning(CiTestCase):
+
+    def setUp(self):
+        super(TestAzureDataSourcePreprovisioning, self).setUp()
+        tmp = self.tmp_dir()
+        self.waagent_d = self.tmp_path('/var/lib/waagent', tmp)
+        self.paths = helpers.Paths({'cloud_dir': tmp})
+        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+
     def test_poll_imds_returns_ovf_env(self, fake_resp, m_dhcp, m_net,
-                                       m_is_bsd, *args):
+                                       m_is_bsd, write_f, subp):
         """The _poll_imds method should return the ovf_env.xml."""
         m_is_bsd.return_value = False
         m_dhcp.return_value = [{
@@ -1194,12 +1364,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
             prefix_or_mask='255.255.255.0', router='192.168.2.1')
         self.assertEqual(m_net.call_count, 1)
 
-    @mock.patch('cloudinit.sources.DataSourceAzure.util.is_FreeBSD')
-    @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
-    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
-    @mock.patch('requests.Session.request')
     def test__reprovision_calls__poll_imds(self, fake_resp, m_dhcp, m_net,
-                                           m_is_bsd, *args):
+                                           m_is_bsd, write_f, subp):
         """The _reprovision method should call poll IMDS."""
         m_is_bsd.return_value = False
         m_dhcp.return_value = [{
@@ -1216,7 +1382,7 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         fake_resp.return_value = mock.MagicMock(status_code=200, text=content,
                                                 content=content)
         dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
-        md, ud, cfg, d = dsa._reprovision()
+        md, _ud, cfg, _d = dsa._reprovision()
         self.assertEqual(md['local-hostname'], hostname)
         self.assertEqual(cfg['system_info']['default_user']['name'], username)
         self.assertEqual(fake_resp.call_args_list,
@@ -1231,32 +1397,5 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
             prefix_or_mask='255.255.255.0', router='192.168.2.1')
         self.assertEqual(m_net.call_count, 1)
 
-    @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
-    @mock.patch('os.path.isfile')
-    def test__should_reprovision_with_true_cfg(self, isfile, write_f, *args):
-        """The _should_reprovision method should return true with config
-           flag present."""
-        isfile.return_value = False
-        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
-        self.assertTrue(dsa._should_reprovision(
-            (None, None, {'PreprovisionedVm': True}, None)))
-
-    @mock.patch('os.path.isfile')
-    def test__should_reprovision_with_file_existing(self, isfile, *args):
-        """The _should_reprovision method should return True if the sentinal
-           exists."""
-        isfile.return_value = True
-        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
-        self.assertTrue(dsa._should_reprovision(
-            (None, None, {'preprovisionedvm': False}, None)))
-
-    @mock.patch('os.path.isfile')
-    def test__should_reprovision_returns_false(self, isfile, *args):
-        """The _should_reprovision method should return False
-           if config and sentinal are not present."""
-        isfile.return_value = False
-        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
-        self.assertFalse(dsa._should_reprovision((None, None, {}, None)))
-
 
 # vi: ts=4 expandtab
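
The mock argument orders in the refactored Azure tests follow mock.patch's stacking rule: the decorator nearest the function supplies the first mock, and a class-level decorator wraps each test outermost, so its mock arrives last. That is why test_poll_imds_calls_report_ready receives (write_f, report_ready_func, fake_resp, m_dhcp, m_net) in that order. A tiny self-contained demonstration (using unittest.mock here; the tree itself imports mock via its test helpers):

    import os
    import unittest
    from unittest import mock

    @mock.patch('os.path.isfile')            # outermost: mock passed last
    class TestPatchOrder(unittest.TestCase):

        @mock.patch('os.path.exists')        # innermost: mock passed first
        def test_argument_order(self, m_exists, m_isfile):
            self.assertIs(os.path.exists, m_exists)
            self.assertIs(os.path.isfile, m_isfile)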
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index b42b073..af9d3e1 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -195,7 +195,7 @@ class TestAzureEndpointHttpClient(CiTestCase):
         self.addCleanup(patches.close)
 
         self.read_file_or_url = patches.enter_context(
-            mock.patch.object(azure_helper.util, 'read_file_or_url'))
+            mock.patch.object(azure_helper.url_helper, 'read_file_or_url'))
 
     def test_non_secure_get(self):
         client = azure_helper.AzureEndpointHttpClient(mock.MagicMock())
diff --git a/tests/unittests/test_datasource/test_common.py b/tests/unittests/test_datasource/test_common.py
index ec33388..0d35dc2 100644
--- a/tests/unittests/test_datasource/test_common.py
+++ b/tests/unittests/test_datasource/test_common.py
@@ -40,6 +40,7 @@ DEFAULT_LOCAL = [
     OVF.DataSourceOVF,
     SmartOS.DataSourceSmartOS,
     Ec2.DataSourceEc2Local,
+    OpenStack.DataSourceOpenStackLocal,
 ]
 
 DEFAULT_NETWORK = [
diff --git a/tests/unittests/test_datasource/test_ec2.py b/tests/unittests/test_datasource/test_ec2.py
index dff8b1e..497e761 100644
--- a/tests/unittests/test_datasource/test_ec2.py
+++ b/tests/unittests/test_datasource/test_ec2.py
@@ -191,7 +191,6 @@ def register_mock_metaserver(base_url, data):
             register(base_url, 'not found', status=404)
 
     def myreg(*argc, **kwargs):
-        # print("register_url(%s, %s)" % (argc, kwargs))
         return httpretty.register_uri(httpretty.GET, *argc, **kwargs)
 
     register_helper(myreg, base_url, data)
@@ -236,7 +235,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
                 return_value=platform_data)
 
         if md:
-            httpretty.HTTPretty.allow_net_connect = False
             all_versions = (
                 [ds.min_metadata_version] + ds.extended_metadata_versions)
             for version in all_versions:
@@ -255,7 +253,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
                         register_mock_metaserver(instance_id_url, None)
         return ds
 
-    @httpretty.activate
     def test_network_config_property_returns_version_1_network_data(self):
         """network_config property returns network version 1 for metadata.
 
@@ -288,7 +285,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
                     m_get_mac.return_value = mac1
                     self.assertEqual(expected, ds.network_config)
 
-    @httpretty.activate
     def test_network_config_property_set_dhcp4_on_private_ipv4(self):
         """network_config property configures dhcp4 on private ipv4 nics.
 
@@ -330,7 +326,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds._network_config = {'cached': 'data'}
         self.assertEqual({'cached': 'data'}, ds.network_config)
 
-    @httpretty.activate
     @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
     def test_network_config_cached_property_refreshed_on_upgrade(self, m_dhcp):
         """Refresh the network_config Ec2 cache if network key is absent.
@@ -364,7 +359,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
              'type': 'physical'}]}
         self.assertEqual(expected, ds.network_config)
 
-    @httpretty.activate
     def test_ec2_get_instance_id_refreshes_identity_on_upgrade(self):
         """get_instance-id gets DataSourceEc2Local.identity if not present.
 
@@ -397,7 +391,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds.metadata = DEFAULT_METADATA
         self.assertEqual('my-identity-id', ds.get_instance_id())
 
-    @httpretty.activate
     @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
     def test_valid_platform_with_strict_true(self, m_dhcp):
         """Valid platform data should return true with strict_id true."""
@@ -409,7 +402,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         self.assertTrue(ret)
         self.assertEqual(0, m_dhcp.call_count)
 
-    @httpretty.activate
     def test_valid_platform_with_strict_false(self):
         """Valid platform data should return true with strict_id false."""
         ds = self._setup_ds(
@@ -419,7 +411,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ret = ds.get_data()
         self.assertTrue(ret)
 
-    @httpretty.activate
     def test_unknown_platform_with_strict_true(self):
         """Unknown platform data with strict_id true should return False."""
         uuid = 'ab439480-72bf-11d3-91fc-b8aded755F9a'
@@ -430,7 +421,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ret = ds.get_data()
         self.assertFalse(ret)
 
-    @httpretty.activate
     def test_unknown_platform_with_strict_false(self):
         """Unknown platform data with strict_id false should return True."""
         uuid = 'ab439480-72bf-11d3-91fc-b8aded755F9a'
@@ -462,7 +452,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
                     ' not {0}'.format(platform_name))
                 self.assertIn(message, self.logs.getvalue())
 
-    @httpretty.activate
     @mock.patch('cloudinit.sources.DataSourceEc2.util.is_FreeBSD')
     def test_ec2_local_returns_false_on_bsd(self, m_is_freebsd):
         """DataSourceEc2Local returns False on BSD.
@@ -481,7 +470,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
             "FreeBSD doesn't support running dhclient with -sf",
             self.logs.getvalue())
 
-    @httpretty.activate
     @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
     @mock.patch('cloudinit.net.find_fallback_nic')
     @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
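
The dropped @httpretty.activate decorators above are redundant once TestEc2 inherits from test_helpers.HttprettyTestCase, which enables httpretty around every test; that also explains the removed allow_net_connect = False line, since the base class can own that setting too. A sketch of what such a base class plausibly does; the real one lives in cloudinit/tests/helpers.py and may differ in detail:

    import unittest

    import httpretty

    class HttprettyTestCaseSketch(unittest.TestCase):
        """Enable httpretty socket interception for every test."""

        def setUp(self):
            super(HttprettyTestCaseSketch, self).setUp()
            httpretty.HTTPretty.allow_net_connect = False
            httpretty.reset()
            httpretty.enable()
            self.addCleanup(httpretty.disable)
            self.addCleanup(httpretty.reset)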
diff --git a/tests/unittests/test_datasource/test_gce.py b/tests/unittests/test_datasource/test_gce.py
index eb3cec4..41176c6 100644
--- a/tests/unittests/test_datasource/test_gce.py
+++ b/tests/unittests/test_datasource/test_gce.py
@@ -78,7 +78,6 @@ def _set_mock_metadata(gce_meta=None):
             return (404, headers, '')
 
     # reset is needed. https://github.com/gabrielfalcao/HTTPretty/issues/316
-    httpretty.reset()
     httpretty.register_uri(httpretty.GET, MD_URL_RE, body=_request_callback)
 
 
diff --git a/tests/unittests/test_datasource/test_ibmcloud.py b/tests/unittests/test_datasource/test_ibmcloud.py
index 621cfe4..e639ae4 100644
--- a/tests/unittests/test_datasource/test_ibmcloud.py
+++ b/tests/unittests/test_datasource/test_ibmcloud.py
@@ -259,4 +259,54 @@ class TestReadMD(test_helpers.CiTestCase):
                          ret['metadata'])
 
 
+class TestIsIBMProvisioning(test_helpers.FilesystemMockingTestCase):
+    """Test the _is_ibm_provisioning method."""
+    inst_log = "/root/swinstall.log"
+    prov_cfg = "/root/provisioningConfiguration.cfg"
+    boot_ref = "/proc/1/environ"
+    with_logs = True
+
+    def _call_with_root(self, rootd):
+        self.reRoot(rootd)
+        return ibm._is_ibm_provisioning()
+
+    def test_no_config(self):
+        """No provisioning config means not provisioning."""
+        self.assertFalse(self._call_with_root(self.tmp_dir()))
+
+    def test_config_only(self):
+        """A provisioning config without a log means provisioning."""
+        rootd = self.tmp_dir()
+        test_helpers.populate_dir(rootd, {self.prov_cfg: "key=value"})
+        self.assertTrue(self._call_with_root(rootd))
+
+    def test_config_with_old_log(self):
+        """A config with a log from previous boot is not provisioning."""
+        rootd = self.tmp_dir()
+        data = {self.prov_cfg: ("key=value\nkey2=val2\n", -10),
+                self.inst_log: ("log data\n", -30),
+                self.boot_ref: ("PWD=/", 0)}
+        test_helpers.populate_dir_with_ts(rootd, data)
+        self.assertFalse(self._call_with_root(rootd=rootd))
+        self.assertIn("from previous boot", self.logs.getvalue())
+
+    def test_config_with_new_log(self):
+        """A config with a log from this boot is provisioning."""
+        rootd = self.tmp_dir()
+        data = {self.prov_cfg: ("key=value\nkey2=val2\n", -10),
+                self.inst_log: ("log data\n", 30),
+                self.boot_ref: ("PWD=/", 0)}
+        test_helpers.populate_dir_with_ts(rootd, data)
+        self.assertTrue(self._call_with_root(rootd=rootd))
+        self.assertIn("from current boot", self.logs.getvalue())
+
+    def test_config_and_log_no_reference(self):
+        """If the config and log existed, but no reference, assume not."""
+        rootd = self.tmp_dir()
+        test_helpers.populate_dir(
+            rootd, {self.prov_cfg: "key=value", self.inst_log: "log data\n"})
+        self.assertFalse(self._call_with_root(rootd=rootd))
+        self.assertIn("no reference file", self.logs.getvalue())
+
+
 # vi: ts=4 expandtab
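The TestIsIBMProvisioning cases above pin down the provisioning heuristic: a
config file alone means provisioning, a config plus an install log older than
the boot reference means a previous boot, and a log newer than the reference
means this boot. A condensed sketch of the check they imply (assumed shape
and argument names; the real _is_ibm_provisioning takes no arguments and
reads fixed paths):

    import os

    def is_ibm_provisioning(prov_cfg, inst_log, boot_ref):
        if not os.path.exists(prov_cfg):
            return False      # no provisioning config: not provisioning
        if not os.path.exists(inst_log):
            return True       # config only: provisioning in progress
        if not os.path.exists(boot_ref):
            return False      # no reference file: assume not provisioning
        # Install log written after this boot started: still provisioning.
        return os.path.getmtime(inst_log) >= os.path.getmtime(boot_ref)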
diff --git a/tests/unittests/test_datasource/test_maas.py b/tests/unittests/test_datasource/test_maas.py
index 6e4031c..c84d067 100644
--- a/tests/unittests/test_datasource/test_maas.py
+++ b/tests/unittests/test_datasource/test_maas.py
@@ -53,7 +53,7 @@ class TestMAASDataSource(CiTestCase):
         my_d = os.path.join(self.tmp, "valid_extra")
         populate_dir(my_d, data)
 
-        ud, md, vd = DataSourceMAAS.read_maas_seed_dir(my_d)
+        ud, md, _vd = DataSourceMAAS.read_maas_seed_dir(my_d)
 
         self.assertEqual(userdata, ud)
         for key in ('instance-id', 'local-hostname'):
@@ -149,7 +149,7 @@ class TestMAASDataSource(CiTestCase):
             'meta-data/local-hostname': 'test-hostname',
             'meta-data/vendor-data': yaml.safe_dump(expected_vd).encode(),
         }
-        ud, md, vd = self.mock_read_maas_seed_url(
+        _ud, md, vd = self.mock_read_maas_seed_url(
             valid, "http://example.com/foo";)
 
         self.assertEqual(valid['meta-data/instance-id'], md['instance-id'])
diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py
index 70d50de..cdbd1e1 100644
--- a/tests/unittests/test_datasource/test_nocloud.py
+++ b/tests/unittests/test_datasource/test_nocloud.py
@@ -51,9 +51,6 @@ class TestNoCloudDataSource(CiTestCase):
         class PsuedoException(Exception):
             pass
 
-        def my_find_devs_with(*args, **kwargs):
-            raise PsuedoException
-
         self.mocks.enter_context(
             mock.patch.object(util, 'find_devs_with',
                               side_effect=PsuedoException))
diff --git a/tests/unittests/test_datasource/test_openstack.py b/tests/unittests/test_datasource/test_openstack.py
index 42c3155..585acc3 100644
--- a/tests/unittests/test_datasource/test_openstack.py
+++ b/tests/unittests/test_datasource/test_openstack.py
@@ -16,7 +16,7 @@ from six import StringIO
 
 from cloudinit import helpers
 from cloudinit import settings
-from cloudinit.sources import convert_vendordata
+from cloudinit.sources import convert_vendordata, UNSET
 from cloudinit.sources import DataSourceOpenStack as ds
 from cloudinit.sources.helpers import openstack
 from cloudinit import util
@@ -69,6 +69,8 @@ EC2_VERSIONS = [
     'latest',
 ]
 
+MOCK_PATH = 'cloudinit.sources.DataSourceOpenStack.'
+
 
 # TODO _register_uris should leverage test_ec2.register_mock_metaserver.
 def _register_uris(version, ec2_files, ec2_meta, os_files):
@@ -129,13 +131,14 @@ def _read_metadata_service():
 
 
 class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
+
+    with_logs = True
     VERSION = 'latest'
 
     def setUp(self):
         super(TestOpenStackDataSource, self).setUp()
         self.tmp = self.tmp_dir()
 
-    @hp.activate
     def test_successful(self):
         _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
         f = _read_metadata_service()
@@ -157,7 +160,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual('b0fa911b-69d4-4476-bbe2-1c92bff6535c',
                          metadata.get('instance-id'))
 
-    @hp.activate
     def test_no_ec2(self):
         _register_uris(self.VERSION, {}, {}, OS_FILES)
         f = _read_metadata_service()
@@ -168,7 +170,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual({}, f.get('ec2-metadata'))
         self.assertEqual(2, f.get('version'))
 
-    @hp.activate
     def test_bad_metadata(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -177,7 +178,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         _register_uris(self.VERSION, {}, {}, os_files)
         self.assertRaises(openstack.NonReadable, _read_metadata_service)
 
-    @hp.activate
     def test_bad_uuid(self):
         os_files = copy.deepcopy(OS_FILES)
         os_meta = copy.deepcopy(OSTACK_META)
@@ -188,7 +188,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         _register_uris(self.VERSION, {}, {}, os_files)
         self.assertRaises(openstack.BrokenMetadata, _read_metadata_service)
 
-    @hp.activate
     def test_userdata_empty(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -201,7 +200,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual(CONTENT_1, f['files']['/etc/bar/bar.cfg'])
         self.assertFalse(f.get('userdata'))
 
-    @hp.activate
     def test_vendordata_empty(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -213,7 +211,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual(CONTENT_1, f['files']['/etc/bar/bar.cfg'])
         self.assertFalse(f.get('vendordata'))
 
-    @hp.activate
     def test_vendordata_invalid(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -222,7 +219,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         _register_uris(self.VERSION, {}, {}, os_files)
         self.assertRaises(openstack.BrokenMetadata, _read_metadata_service)
 
-    @hp.activate
     def test_metadata_invalid(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -231,14 +227,16 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         _register_uris(self.VERSION, {}, {}, os_files)
         self.assertRaises(openstack.BrokenMetadata, _read_metadata_service)
 
-    @hp.activate
-    def test_datasource(self):
+    @test_helpers.mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_datasource(self, m_dhcp):
         _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
-        ds_os = ds.DataSourceOpenStack(settings.CFG_BUILTIN,
-                                       None,
-                                       helpers.Paths({'run_dir': self.tmp}))
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
         self.assertIsNone(ds_os.version)
-        found = ds_os.get_data()
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os.get_data()
         self.assertTrue(found)
         self.assertEqual(2, ds_os.version)
         md = dict(ds_os.metadata)
@@ -250,8 +248,40 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual(2, len(ds_os.files))
         self.assertEqual(VENDOR_DATA, ds_os.vendordata_pure)
         self.assertIsNone(ds_os.vendordata_raw)
+        m_dhcp.assert_not_called()
 
     @hp.activate
+    @test_helpers.mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
+    @test_helpers.mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_local_datasource(self, m_dhcp, m_net):
+        """OpenStackLocal calls EphemeralDHCPNetwork and gets instance data."""
+        _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
+        ds_os_local = ds.DataSourceOpenStackLocal(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        ds_os_local._fallback_interface = 'eth9'  # Monkey patch for dhcp
+        m_dhcp.return_value = [{
+            'interface': 'eth9', 'fixed-address': '192.168.2.9',
+            'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
+            'broadcast-address': '192.168.2.255'}]
+
+        self.assertIsNone(ds_os_local.version)
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os_local.get_data()
+        self.assertTrue(found)
+        self.assertEqual(2, ds_os_local.version)
+        md = dict(ds_os_local.metadata)
+        md.pop('instance-id', None)
+        md.pop('local-hostname', None)
+        self.assertEqual(OSTACK_META, md)
+        self.assertEqual(EC2_META, ds_os_local.ec2_metadata)
+        self.assertEqual(USER_DATA, ds_os_local.userdata_raw)
+        self.assertEqual(2, len(ds_os_local.files))
+        self.assertEqual(VENDOR_DATA, ds_os_local.vendordata_pure)
+        self.assertIsNone(ds_os_local.vendordata_raw)
+        m_dhcp.assert_called_with('eth9')
+
     def test_bad_datasource_meta(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -262,11 +292,17 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
                                        None,
                                        helpers.Paths({'run_dir': self.tmp}))
         self.assertIsNone(ds_os.version)
-        found = ds_os.get_data()
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os.get_data()
         self.assertFalse(found)
         self.assertIsNone(ds_os.version)
+        self.assertIn(
+            'InvalidMetaDataException: Broken metadata address'
+            ' http://169.254.169.25',
+            self.logs.getvalue())
 
-    @hp.activate
     def test_no_datasource(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -281,11 +317,53 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
             'timeout': 0,
         }
         self.assertIsNone(ds_os.version)
-        found = ds_os.get_data()
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os.get_data()
         self.assertFalse(found)
         self.assertIsNone(ds_os.version)
 
-    @hp.activate
+    def test_network_config_disabled_by_datasource_config(self):
+        """The network_config can be disabled from datasource config."""
+        mock_path = MOCK_PATH + 'openstack.convert_net_json'
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        ds_os.ds_cfg = {'apply_network_config': False}
+        sample_json = {'links': [{'ethernet_mac_address': 'mymac'}],
+                       'networks': [], 'services': []}
+        ds_os.network_json = sample_json  # Ignore this content from metadata
+        with test_helpers.mock.patch(mock_path) as m_convert_json:
+            self.assertIsNone(ds_os.network_config)
+        m_convert_json.assert_not_called()
+
+    def test_network_config_from_network_json(self):
+        """The datasource gets network_config from network_data.json."""
+        mock_path = MOCK_PATH + 'openstack.convert_net_json'
+        example_cfg = {'version': 1, 'config': []}
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        sample_json = {'links': [{'ethernet_mac_address': 'mymac'}],
+                       'networks': [], 'services': []}
+        ds_os.network_json = sample_json
+        with test_helpers.mock.patch(mock_path) as m_convert_json:
+            m_convert_json.return_value = example_cfg
+            self.assertEqual(example_cfg, ds_os.network_config)
+        self.assertIn(
+            'network config provided via network_json', self.logs.getvalue())
+        m_convert_json.assert_called_with(sample_json, known_macs=None)
+
+    def test_network_config_cached(self):
+        """The datasource caches the network_config property."""
+        mock_path = MOCK_PATH + 'openstack.convert_net_json'
+        example_cfg = {'version': 1, 'config': []}
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        ds_os._network_config = example_cfg
+        with test_helpers.mock.patch(mock_path) as m_convert_json:
+            self.assertEqual(example_cfg, ds_os.network_config)
+        m_convert_json.assert_not_called()
+
     def test_disabled_datasource(self):
         os_files = copy.deepcopy(OS_FILES)
         os_meta = copy.deepcopy(OSTACK_META)
@@ -304,10 +382,42 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
             'timeout': 0,
         }
         self.assertIsNone(ds_os.version)
-        found = ds_os.get_data()
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os.get_data()
         self.assertFalse(found)
         self.assertIsNone(ds_os.version)
 
+    @hp.activate
+    def test_wb__crawl_metadata_does_not_persist(self):
+        """_crawl_metadata returns current metadata and does not cache."""
+        _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        crawled_data = ds_os._crawl_metadata()
+        self.assertEqual(UNSET, ds_os.ec2_metadata)
+        self.assertIsNone(ds_os.userdata_raw)
+        self.assertEqual(0, len(ds_os.files))
+        self.assertIsNone(ds_os.vendordata_raw)
+        self.assertEqual(
+            ['dsmode', 'ec2-metadata', 'files', 'metadata', 'networkdata',
+             'userdata', 'vendordata', 'version'],
+            sorted(crawled_data.keys()))
+        self.assertEqual('local', crawled_data['dsmode'])
+        self.assertEqual(EC2_META, crawled_data['ec2-metadata'])
+        self.assertEqual(2, len(crawled_data['files']))
+        md = copy.deepcopy(crawled_data['metadata'])
+        md.pop('instance-id')
+        md.pop('local-hostname')
+        self.assertEqual(OSTACK_META, md)
+        self.assertEqual(
+            json.loads(OS_FILES['openstack/latest/network_data.json']),
+            crawled_data['networkdata'])
+        self.assertEqual(USER_DATA, crawled_data['userdata'])
+        self.assertEqual(VENDOR_DATA, crawled_data['vendordata'])
+        self.assertEqual(2, crawled_data['version'])
+
 
 class TestVendorDataLoading(test_helpers.TestCase):
     def cvj(self, data):
@@ -339,4 +449,89 @@ class TestVendorDataLoading(test_helpers.TestCase):
         data = {'foo': 'bar', 'cloud-init': ['VD_1', 'VD_2']}
         self.assertEqual(self.cvj(data), data['cloud-init'])
 
+
+@test_helpers.mock.patch(MOCK_PATH + 'util.is_x86')
+class TestDetectOpenStack(test_helpers.CiTestCase):
+
+    def test_detect_openstack_non_intel_x86(self, m_is_x86):
+        """Return True on non-intel platforms because dmi isn't conclusive."""
+        m_is_x86.return_value = False
+        self.assertTrue(
+            ds.detect_openstack(), 'Expected detect_openstack == True')
+
+    @test_helpers.mock.patch(MOCK_PATH + 'util.get_proc_env')
+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
+    def test_not_detect_openstack_intel_x86_ec2(self, m_dmi, m_proc_env,
+                                                m_is_x86):
+        """Return False on EC2 platforms."""
+        m_is_x86.return_value = True
+        # No product_name in proc/1/environ
+        m_proc_env.return_value = {'HOME': '/'}
+
+        def fake_dmi_read(dmi_key):
+            if dmi_key == 'system-product-name':
+                return 'HVM domU'  # Nothing 'openstackish' on EC2
+            if dmi_key == 'chassis-asset-tag':
+                return ''  # Empty string on EC2
+            assert False, 'Unexpected dmi read of %s' % dmi_key
+
+        m_dmi.side_effect = fake_dmi_read
+        self.assertFalse(
+            ds.detect_openstack(), 'Expected detect_openstack == False on EC2')
+        m_proc_env.assert_called_with(1)
+
+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
+    def test_detect_openstack_intel_product_name_compute(self, m_dmi,
+                                                         m_is_x86):
+        """Return True on OpenStack compute and nova instances."""
+        m_is_x86.return_value = True
+        openstack_product_names = ['OpenStack Nova', 'OpenStack Compute']
+
+        for product_name in openstack_product_names:
+            m_dmi.return_value = product_name
+            self.assertTrue(
+                ds.detect_openstack(),
+                'Failed to detect_openstack for %s' % product_name)
+
+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
+    def test_detect_openstack_opentelekomcloud_chassis_asset_tag(self, m_dmi,
+                                                                 m_is_x86):
+        """Return True on OpenStack reporting OpenTelekomCloud asset-tag."""
+        m_is_x86.return_value = True
+
+        def fake_dmi_read(dmi_key):
+            if dmi_key == 'system-product-name':
+                return 'HVM domU'  # Nothing 'openstackish' on OpenTelekomCloud
+            if dmi_key == 'chassis-asset-tag':
+                return 'OpenTelekomCloud'
+            assert False, 'Unexpected dmi read of %s' % dmi_key
+
+        m_dmi.side_effect = fake_dmi_read
+        self.assertTrue(
+            ds.detect_openstack(),
+            'Expected detect_openstack == True on OpenTelekomCloud')
+
+    @test_helpers.mock.patch(MOCK_PATH + 'util.get_proc_env')
+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
+    def test_detect_openstack_by_proc_1_environ(self, m_dmi, m_proc_env,
+                                                m_is_x86):
+        """Return True when nova product_name specified in /proc/1/environ."""
+        m_is_x86.return_value = True
+        # Nova product_name in proc/1/environ
+        m_proc_env.return_value = {
+            'HOME': '/', 'product_name': 'OpenStack Nova'}
+
+        def fake_dmi_read(dmi_key):
+            if dmi_key == 'system-product-name':
+                return 'HVM domU'  # Nothing 'openstackish'
+            if dmi_key == 'chassis-asset-tag':
+                return ''  # Nothing 'openstackish'
+            assert False, 'Unexpected dmi read of %s' % dmi_key
+
+        m_dmi.side_effect = fake_dmi_read
+        self.assertTrue(
+            ds.detect_openstack(),
+            'Expected detect_openstack == True via /proc/1/environ')
+        m_proc_env.assert_called_with(1)
+
+
 # vi: ts=4 expandtab
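The new TestDetectOpenStack cases above outline the detection order that lets
the datasource skip its metadata probe on foreign platforms. A condensed
sketch of the logic they imply (assumed shape, not the shipped
detect_openstack):

    from cloudinit import util

    def detect_openstack():
        if not util.is_x86():
            return True  # DMI is inconclusive off x86, so stay enabled
        if util.read_dmi_data('system-product-name') in (
                'OpenStack Nova', 'OpenStack Compute'):
            return True
        if util.read_dmi_data('chassis-asset-tag') == 'OpenTelekomCloud':
            return True
        # Last resort: PID 1's environment may carry a nova product_name.
        if util.get_proc_env(1).get('product_name') == 'OpenStack Nova':
            return True
        return False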
diff --git a/tests/unittests/test_datasource/test_scaleway.py b/tests/unittests/test_datasource/test_scaleway.py
index 8dec06b..e4e9bb2 100644
--- a/tests/unittests/test_datasource/test_scaleway.py
+++ b/tests/unittests/test_datasource/test_scaleway.py
@@ -176,7 +176,6 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.vendordata_url = \
             DataSourceScaleway.BUILTIN_DS_CONFIG['vendordata_url']
 
-    @httpretty.activate
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
     @mock.patch('cloudinit.util.get_cmdline')
@@ -212,7 +211,6 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.assertIsNone(self.datasource.region)
         self.assertEqual(sleep.call_count, 0)
 
-    @httpretty.activate
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
     @mock.patch('cloudinit.util.get_cmdline')
@@ -236,7 +234,6 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.assertIsNone(self.datasource.get_vendordata_raw())
         self.assertEqual(sleep.call_count, 0)
 
-    @httpretty.activate
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
     @mock.patch('cloudinit.util.get_cmdline')
diff --git a/tests/unittests/test_datasource/test_smartos.py b/tests/unittests/test_datasource/test_smartos.py
index 88bae5f..dca0b3d 100644
--- a/tests/unittests/test_datasource/test_smartos.py
+++ b/tests/unittests/test_datasource/test_smartos.py
@@ -1,4 +1,5 @@
 # Copyright (C) 2013 Canonical Ltd.
+# Copyright (c) 2018, Joyent, Inc.
 #
 # Author: Ben Howard <ben.howard@xxxxxxxxxxxxx>
 #
@@ -15,23 +16,27 @@ from __future__ import print_function
 
 from binascii import crc32
 import json
+import multiprocessing
 import os
 import os.path
 import re
 import shutil
+import signal
 import stat
 import tempfile
+import unittest2
 import uuid
 
 from cloudinit import serial
 from cloudinit.sources import DataSourceSmartOS
 from cloudinit.sources.DataSourceSmartOS import (
-    convert_smartos_network_data as convert_net)
+    convert_smartos_network_data as convert_net,
+    SMARTOS_ENV_KVM, SERIAL_DEVICE, get_smartos_environ)
 
 import six
 
 from cloudinit import helpers as c_helpers
-from cloudinit.util import b64e
+from cloudinit.util import (b64e, subp)
 
 from cloudinit.tests.helpers import mock, FilesystemMockingTestCase, TestCase
 
@@ -318,12 +323,19 @@ MOCK_RETURNS = {
 
 DMI_DATA_RETURN = 'smartdc'
 
+# Useful for calculating the length of a frame body.  A SUCCESS body is this
+# prefix plus the payload, or one character shorter when a SUCCESS carries
+# no payload.  See Section 4.3 of https://eng.joyent.com/mdata/protocol.html.
+SUCCESS_LEN = len('0123abcd SUCCESS ')
+NOTFOUND_LEN = len('0123abcd NOTFOUND')
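+# Worked example: len('0123abcd SUCCESS ') == len('0123abcd NOTFOUND') == 17,
+# so a SUCCESS body carrying a 4-character payload is SUCCESS_LEN + 4 bytes,
+# while a SUCCESS with no payload (and no trailing space) is SUCCESS_LEN - 1.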
+
 
 class PsuedoJoyentClient(object):
     def __init__(self, data=None):
         if data is None:
             data = MOCK_RETURNS.copy()
         self.data = data
+        self._is_open = False
         return
 
     def get(self, key, default=None, strip=False):
@@ -344,6 +356,14 @@ class PsuedoJoyentClient(object):
     def exists(self):
         return True
 
+    def open_transport(self):
+        assert(not self._is_open)
+        self._is_open = True
+
+    def close_transport(self):
+        assert(self._is_open)
+        self._is_open = False
+
 
 class TestSmartOSDataSource(FilesystemMockingTestCase):
     def setUp(self):
@@ -421,6 +441,34 @@ class TestSmartOSDataSource(FilesystemMockingTestCase):
         self.assertEqual(MOCK_RETURNS['hostname'],
                          dsrc.metadata['local-hostname'])
 
+    def test_hostname_wins_over_sdc_hostname(self):
+        my_returns = MOCK_RETURNS.copy()
+        my_returns['sdc:hostname'] = 'sdc-' + my_returns['hostname']
+        dsrc = self._get_ds(mockdata=my_returns)
+        ret = dsrc.get_data()
+        self.assertTrue(ret)
+        self.assertEqual(my_returns['hostname'],
+                         dsrc.metadata['local-hostname'])
+
+    def test_sdc_hostname_if_no_hostname(self):
+        my_returns = MOCK_RETURNS.copy()
+        my_returns['sdc:hostname'] = 'sdc-' + my_returns['hostname']
+        del my_returns['hostname']
+        dsrc = self._get_ds(mockdata=my_returns)
+        ret = dsrc.get_data()
+        self.assertTrue(ret)
+        self.assertEqual(my_returns['sdc:hostname'],
+                         dsrc.metadata['local-hostname'])
+
+    def test_sdc_uuid_if_no_hostname_or_sdc_hostname(self):
+        my_returns = MOCK_RETURNS.copy()
+        del my_returns['hostname']
+        dsrc = self._get_ds(mockdata=my_returns)
+        ret = dsrc.get_data()
+        self.assertTrue(ret)
+        self.assertEqual(my_returns['sdc:uuid'],
+                         dsrc.metadata['local-hostname'])
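+
+    # Together, the three cases above pin the local-hostname precedence:
+    # 'hostname' wins, then 'sdc:hostname', then 'sdc:uuid' as the fallback.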
+
     def test_userdata(self):
         dsrc = self._get_ds(mockdata=MOCK_RETURNS)
         ret = dsrc.get_data()
@@ -592,8 +640,46 @@ class TestSmartOSDataSource(FilesystemMockingTestCase):
                          mydscfg['disk_aliases']['FOO'])
 
 
+class ShortReader(object):
+    """Implements a 'read' interface for bytes provided.
+    much like io.BytesIO but the 'endbyte' acts as if EOF.
+    When it is reached a short will be returned."""
+    def __init__(self, initial_bytes, endbyte=b'\0'):
+        self.data = initial_bytes
+        self.index = 0
+        self.len = len(self.data)
+        self.endbyte = endbyte
+
+    @property
+    def emptied(self):
+        return self.index >= self.len
+
+    def read(self, size=-1):
+        """Read size bytes but not past a null."""
+        if size == 0 or self.index >= self.len:
+            return b''
+
+        rsize = size
+        if size < 0 or size + self.index > self.len:
+            rsize = self.len - self.index
+
+        next_null = self.data.find(self.endbyte, self.index, rsize)
+        if next_null >= 0:
+            rsize = next_null - self.index + 1
+        i = self.index
+        self.index += rsize
+        ret = self.data[i:i + rsize]
+        if len(ret) and ret[-1:] == self.endbyte:
+            ret = ret[:-1]
+        return ret
+
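+# Example of the short-read behavior (a sketch): for ShortReader(b'ab\0cd'),
+# the first read(10) stops at the null and returns b'ab', the next read(10)
+# returns b'cd', and the reader then reports emptied == True.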
+
 class TestJoyentMetadataClient(FilesystemMockingTestCase):
 
+    invalid = b'invalid command\n'
+    failure = b'FAILURE\n'
+    v2_ok = b'V2_OK\n'
+
     def setUp(self):
         super(TestJoyentMetadataClient, self).setUp()
 
@@ -603,7 +689,7 @@ class TestJoyentMetadataClient(FilesystemMockingTestCase):
         self.response_parts = {
             'command': 'SUCCESS',
             'crc': 'b5a9ff00',
-            'length': 17 + len(b64e(self.metadata_value)),
+            'length': SUCCESS_LEN + len(b64e(self.metadata_value)),
             'payload': b64e(self.metadata_value),
             'request_id': '{0:08x}'.format(self.request_id),
         }
@@ -636,6 +722,11 @@ class TestJoyentMetadataClient(FilesystemMockingTestCase):
         return DataSourceSmartOS.JoyentMetadataClient(
             fp=self.serial, smartos_type=DataSourceSmartOS.SMARTOS_ENV_KVM)
 
+    def _get_serial_client(self):
+        self.serial.timeout = 1
+        return DataSourceSmartOS.JoyentMetadataSerialClient(None,
+                                                            fp=self.serial)
+
     def assertEndsWith(self, haystack, prefix):
         self.assertTrue(haystack.endswith(prefix),
                         "{0} does not end with '{1}'".format(
@@ -646,12 +737,14 @@ class TestJoyentMetadataClient(FilesystemMockingTestCase):
                         "{0} does not start with '{1}'".format(
                             repr(haystack), prefix))
 
+    def assertNoMoreSideEffects(self, obj):
+        self.assertRaises(StopIteration, obj)
+
     def test_get_metadata_writes_a_single_line(self):
         client = self._get_client()
         client.get('some_key')
         self.assertEqual(1, self.serial.write.call_count)
         written_line = self.serial.write.call_args[0][0]
-        print(type(written_line))
         self.assertEndsWith(written_line.decode('ascii'),
                             b'\n'.decode('ascii'))
         self.assertEqual(1, written_line.count(b'\n'))
@@ -732,11 +825,73 @@ class TestJoyentMetadataClient(FilesystemMockingTestCase):
     def test_get_metadata_returns_None_if_value_not_found(self):
         self.response_parts['payload'] = ''
         self.response_parts['command'] = 'NOTFOUND'
-        self.response_parts['length'] = 17
+        self.response_parts['length'] = NOTFOUND_LEN
         client = self._get_client()
         client._checksum = lambda _: self.response_parts['crc']
         self.assertIsNone(client.get('some_key'))
 
+    def test_negotiate(self):
+        client = self._get_client()
+        reader = ShortReader(self.v2_ok)
+        client.fp.read.side_effect = reader.read
+        client._negotiate()
+        self.assertTrue(reader.emptied)
+
+    def test_negotiate_short_response(self):
+        client = self._get_client()
+        # chopped '\n' from v2_ok.
+        reader = ShortReader(self.v2_ok[:-1] + b'\0')
+        client.fp.read.side_effect = reader.read
+        self.assertRaises(DataSourceSmartOS.JoyentMetadataTimeoutException,
+                          client._negotiate)
+        self.assertTrue(reader.emptied)
+
+    def test_negotiate_bad_response(self):
+        client = self._get_client()
+        reader = ShortReader(b'garbage\n' + self.v2_ok)
+        client.fp.read.side_effect = reader.read
+        self.assertRaises(DataSourceSmartOS.JoyentMetadataFetchException,
+                          client._negotiate)
+        self.assertEqual(self.v2_ok, client.fp.read())
+
+    def test_serial_open_transport(self):
+        client = self._get_serial_client()
+        reader = ShortReader(b'garbage\0' + self.invalid + self.v2_ok)
+        client.fp.read.side_effect = reader.read
+        client.open_transport()
+        self.assertTrue(reader.emptied)
+
+    def test_flush_failure(self):
+        client = self._get_serial_client()
+        reader = ShortReader(b'garbage' + b'\0' + self.failure +
+                             self.invalid + self.v2_ok)
+        client.fp.read.side_effect = reader.read
+        client.open_transport()
+        self.assertTrue(reader.emptied)
+
+    def test_flush_many_timeouts(self):
+        client = self._get_serial_client()
+        reader = ShortReader(b'\0' * 100 + self.invalid + self.v2_ok)
+        client.fp.read.side_effect = reader.read
+        client.open_transport()
+        self.assertTrue(reader.emptied)
+
+    def test_list_metadata_returns_list(self):
+        parts = ['foo', 'bar']
+        value = b64e('\n'.join(parts))
+        self.response_parts['payload'] = value
+        self.response_parts['crc'] = '40873553'
+        self.response_parts['length'] = SUCCESS_LEN + len(value)
+        client = self._get_client()
+        self.assertEqual(client.list(), parts)
+
+    def test_list_metadata_returns_empty_list_if_no_customer_metadata(self):
+        del self.response_parts['payload']
+        self.response_parts['length'] = SUCCESS_LEN - 1
+        self.response_parts['crc'] = '14e563ba'
+        client = self._get_client()
+        self.assertEqual(client.list(), [])
+
 
 class TestNetworkConversion(TestCase):
     def test_convert_simple(self):
@@ -872,4 +1027,89 @@ class TestNetworkConversion(TestCase):
         found = convert_net(SDC_NICS_SINGLE_GATEWAY)
         self.assertEqual(expected, found)
 
+    def test_routes_on_all_nics(self):
+        routes = [
+            {'linklocal': False, 'dst': '3.0.0.0/8', 'gateway': '8.12.42.3'},
+            {'linklocal': False, 'dst': '4.0.0.0/8', 'gateway': '10.210.1.4'}]
+        expected = {
+            'version': 1,
+            'config': [
+                {'mac_address': '90:b8:d0:d8:82:b4', 'mtu': 1500,
+                 'name': 'net0', 'type': 'physical',
+                 'subnets': [{'address': '8.12.42.26/24',
+                              'gateway': '8.12.42.1', 'type': 'static',
+                              'routes': [{'network': '3.0.0.0/8',
+                                          'gateway': '8.12.42.3'},
+                                         {'network': '4.0.0.0/8',
+                                          'gateway': '10.210.1.4'}]}]},
+                {'mac_address': '90:b8:d0:0a:51:31', 'mtu': 1500,
+                 'name': 'net1', 'type': 'physical',
+                 'subnets': [{'address': '10.210.1.27/24', 'type': 'static',
+                              'routes': [{'network': '3.0.0.0/8',
+                                          'gateway': '8.12.42.3'},
+                                         {'network': '4.0.0.0/8',
+                                          'gateway': '10.210.1.4'}]}]}]}
+        found = convert_net(SDC_NICS_SINGLE_GATEWAY, routes=routes)
+        self.maxDiff = None
+        self.assertEqual(expected, found)
+
+
+@unittest2.skipUnless(get_smartos_environ() == SMARTOS_ENV_KVM,
+                      "Only supported on KVM and bhyve guests under SmartOS")
+@unittest2.skipUnless(os.access(SERIAL_DEVICE, os.W_OK),
+                      "Requires write access to " + SERIAL_DEVICE)
+class TestSerialConcurrency(TestCase):
+    """
+       This class tests locking on an actual serial port, and as such can only
+       be run in a kvm or bhyve guest running on a SmartOS host.  A test run on
+       a metadata socket will not be valid because a metadata socket ensures
+       there is only one session over a connection.  In contrast, in the
+       absence of proper locking, multiple processes opening the same serial
+       port can corrupt each other's exchanges with the metadata server.
+    """
+    def setUp(self):
+        self.mdata_proc = multiprocessing.Process(target=self.start_mdata_loop)
+        self.mdata_proc.start()
+        super(TestSerialConcurrency, self).setUp()
+
+    def tearDown(self):
+        # os.kill() rather than mdata_proc.terminate() to avoid console spam.
+        os.kill(self.mdata_proc.pid, signal.SIGKILL)
+        self.mdata_proc.join()
+        super(TestSerialConcurrency, self).tearDown()
+
+    def start_mdata_loop(self):
+        """
+           The mdata-get command is repeatedly run in a separate process so
+           that it may try to race with metadata operations performed in the
+           main test process.  Use of mdata-get is better than two processes
+           using the protocol implementation in DataSourceSmartOS because we
+           are testing to be sure that cloud-init and mdata-get respect each
+           other's locks.
+        """
+        rcs = list(range(0, 256))
+        while True:
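+            # rcs lists every possible exit code, so subp will not raise on
+            # a transient mdata-get failure; the assertions happen in the
+            # main test process.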
+            subp(['mdata-get', 'sdc:routes'], rcs=rcs)
+
+    def test_all_keys(self):
+        self.assertIsNotNone(self.mdata_proc.pid)
+        ds = DataSourceSmartOS
+        keys = [tup[0] for tup in ds.SMARTOS_ATTRIB_MAP.values()]
+        keys.extend(ds.SMARTOS_ATTRIB_JSON.values())
+
+        client = ds.jmc_client_factory()
+        self.assertIsNotNone(client)
+
+        # The behavior that we are testing for was observed with mdata-get
+        # running 10 times at roughly the same time as cloud-init fetched
+        # each key once.  cloud-init would regularly see failures before
+        # making it through all keys once.
+        for _ in range(0, 3):
+            for key in keys:
+                # We don't care about the return value, just that it doesn't
+                # throw any exceptions.
+                client.get(key)
+
+        self.assertIsNone(self.mdata_proc.exitcode)
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_distros/test_create_users.py b/tests/unittests/test_distros/test_create_users.py
index 5670904..07176ca 100644
--- a/tests/unittests/test_distros/test_create_users.py
+++ b/tests/unittests/test_distros/test_create_users.py
@@ -145,4 +145,12 @@ class TestCreateUser(TestCase):
             mock.call(['passwd', '-l', user])]
         self.assertEqual(m_subp.call_args_list, expected)
 
+    def test_explicit_sudo_false(self, m_subp, m_is_snappy):
+        user = 'foouser'
+        self.dist.create_user(user, sudo=False)
+        self.assertEqual(
+            m_subp.call_args_list,
+            [self._useradd2call([user, '-m']),
+             mock.call(['passwd', '-l', user])])
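+        # With sudo=False, create_user must not write any sudoers entry;
+        # only the useradd and passwd -l invocations above are expected.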
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_distros/test_netconfig.py b/tests/unittests/test_distros/test_netconfig.py
index 1c2e45f..7765e40 100644
--- a/tests/unittests/test_distros/test_netconfig.py
+++ b/tests/unittests/test_distros/test_netconfig.py
@@ -189,6 +189,12 @@ hn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
         status: active
 """
 
+    def setUp(self):
+        super(TestNetCfgDistro, self).setUp()
+        self.add_patch('cloudinit.util.system_is_snappy', 'm_snappy')
+        self.add_patch('cloudinit.util.system_info', 'm_sysinfo')
+        self.m_sysinfo.return_value = {'dist': ('Distro', '99.1', 'Codename')}
+
     def _get_distro(self, dname, renderers=None):
         cls = distros.fetch(dname)
         cfg = settings.CFG_BUILTIN
diff --git a/tests/unittests/test_distros/test_user_data_normalize.py b/tests/unittests/test_distros/test_user_data_normalize.py
index 0fa9cdb..fa4b6cf 100644
--- a/tests/unittests/test_distros/test_user_data_normalize.py
+++ b/tests/unittests/test_distros/test_user_data_normalize.py
@@ -22,6 +22,12 @@ bcfg = {
 
 class TestUGNormalize(TestCase):
 
+    def setUp(self):
+        super(TestUGNormalize, self).setUp()
+        self.add_patch('cloudinit.util.system_is_snappy', 'm_snappy')
+        self.add_patch('cloudinit.util.system_info', 'm_sysinfo')
+        self.m_sysinfo.return_value = {'dist': ('Distro', '99.1', 'Codename')}
+
     def _make_distro(self, dtype, def_user=None):
         cfg = dict(settings.CFG_BUILTIN)
         cfg['system_info']['distro'] = dtype
diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py
index 5364398..64d9f9f 100644
--- a/tests/unittests/test_ds_identify.py
+++ b/tests/unittests/test_ds_identify.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+from collections import namedtuple
 import copy
 import os
 from uuid import uuid4
@@ -7,9 +8,10 @@ from uuid import uuid4
 from cloudinit import safeyaml
 from cloudinit import util
 from cloudinit.tests.helpers import (
-    CiTestCase, dir2dict, populate_dir)
+    CiTestCase, dir2dict, populate_dir, populate_dir_with_ts)
 
-from cloudinit.sources import DataSourceIBMCloud as dsibm
+from cloudinit.sources import DataSourceIBMCloud as ds_ibm
+from cloudinit.sources import DataSourceSmartOS as ds_smartos
 
 UNAME_MYSYS = ("Linux bart 4.4.0-62-generic #83-Ubuntu "
                "SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 GNU/Linux")
@@ -66,19 +68,28 @@ P_SYS_VENDOR = "sys/class/dmi/id/sys_vendor"
 P_SEED_DIR = "var/lib/cloud/seed"
 P_DSID_CFG = "etc/cloud/ds-identify.cfg"
 
-IBM_PROVISIONING_CHECK_PATH = "/root/provisioningConfiguration.cfg"
 IBM_CONFIG_UUID = "9796-932E"
 
+MOCK_VIRT_IS_CONTAINER_OTHER = {'name': 'detect_virt',
+                                'RET': 'container-other', 'ret': 0}
 MOCK_VIRT_IS_KVM = {'name': 'detect_virt', 'RET': 'kvm', 'ret': 0}
 MOCK_VIRT_IS_VMWARE = {'name': 'detect_virt', 'RET': 'vmware', 'ret': 0}
+# Currently, SmartOS hypervisor "bhyve" is unknown to systemd-detect-virt.
+MOCK_VIRT_IS_VM_OTHER = {'name': 'detect_virt', 'RET': 'vm-other', 'ret': 0}
 MOCK_VIRT_IS_XEN = {'name': 'detect_virt', 'RET': 'xen', 'ret': 0}
 MOCK_UNAME_IS_PPC64 = {'name': 'uname', 'out': UNAME_PPC64EL, 'ret': 0}
 
+shell_true = 0
+shell_false = 1
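+# Shell exit-status convention: 0 means success/true, non-zero means false.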
 
-class TestDsIdentify(CiTestCase):
+CallReturn = namedtuple('CallReturn',
+                        ['rc', 'stdout', 'stderr', 'cfg', 'files'])
+
+
+class DsIdentifyBase(CiTestCase):
     dsid_path = os.path.realpath('tools/ds-identify')
 
-    def call(self, rootd=None, mocks=None, args=None, files=None,
+    def call(self, rootd=None, mocks=None, func="main", args=None, files=None,
              policy_dmi=DI_DEFAULT_POLICY,
              policy_no_dmi=DI_DEFAULT_POLICY_NO_DMI,
              ec2_strict_id=DI_EC2_STRICT_ID_DEFAULT):
@@ -135,7 +146,7 @@ class TestDsIdentify(CiTestCase):
                 mocklines.append(write_mock(d))
 
         endlines = [
-            'main %s' % ' '.join(['"%s"' % s for s in args])
+            func + ' ' + ' '.join(['"%s"' % s for s in args])
         ]
 
         with open(wrap, "w") as fp:
@@ -159,12 +170,14 @@ class TestDsIdentify(CiTestCase):
                 cfg = {"_INVALID_YAML": contents,
                        "_EXCEPTION": str(e)}
 
-        return rc, out, err, cfg, dir2dict(rootd)
+        return CallReturn(rc, out, err, cfg, dir2dict(rootd))
 
     def _call_via_dict(self, data, rootd=None, **kwargs):
         # return output of self.call with a dict input like VALID_CFG[item]
         xwargs = {'rootd': rootd}
-        for k in ('mocks', 'args', 'policy_dmi', 'policy_no_dmi', 'files'):
+        passthrough = ('mocks', 'func', 'args', 'policy_dmi',
+                       'policy_no_dmi', 'files')
+        for k in passthrough:
             if k in data:
                 xwargs[k] = data[k]
             if k in kwargs:
@@ -178,18 +191,21 @@ class TestDsIdentify(CiTestCase):
             data, RC_FOUND, dslist=[data.get('ds'), DS_NONE])
 
     def _check_via_dict(self, data, rc, dslist=None, **kwargs):
-        found_rc, out, err, cfg, files = self._call_via_dict(data, **kwargs)
+        ret = self._call_via_dict(data, **kwargs)
         good = False
         try:
-            self.assertEqual(rc, found_rc)
+            self.assertEqual(rc, ret.rc)
             if dslist is not None:
-                self.assertEqual(dslist, cfg['datasource_list'])
+                self.assertEqual(dslist, ret.cfg['datasource_list'])
             good = True
         finally:
             if not good:
-                _print_run_output(rc, out, err, cfg, files)
-        return rc, out, err, cfg, files
+                _print_run_output(ret.rc, ret.stdout, ret.stderr, ret.cfg,
+                                  ret.files)
+        return ret
 
+
+class TestDsIdentify(DsIdentifyBase):
     def test_wb_print_variables(self):
         """_print_info reports an array of discovered variables to stderr."""
         data = VALID_CFG['Azure-dmi-detection']
@@ -237,20 +253,50 @@ class TestDsIdentify(CiTestCase):
     def test_config_drive(self):
         """ConfigDrive datasource has a disk with LABEL=config-2."""
         self._test_ds_found('ConfigDrive')
-        return
 
     def test_config_drive_upper(self):
         """ConfigDrive datasource has a disk with LABEL=CONFIG-2."""
         self._test_ds_found('ConfigDriveUpper')
         return
 
+    def test_config_drive_seed(self):
+        """Config Drive seed directory."""
+        self._test_ds_found('ConfigDrive-seed')
+
+    def test_config_drive_interacts_with_ibmcloud_config_disk(self):
+        """Verify ConfigDrive interaction with IBMCloud.
+
+        If ConfigDrive is enabled and not IBMCloud, then ConfigDrive
+        should claim the ibmcloud 'config-2' disk.
+        If IBMCloud is enabled, then ConfigDrive should skip."""
+        data = copy.deepcopy(VALID_CFG['IBMCloud-config-2'])
+        files = data.get('files', {})
+        if not files:
+            data['files'] = files
+        cfgpath = 'etc/cloud/cloud.cfg.d/99_networklayer_common.cfg'
+
+        # With a list including IBMCloud, config drive should not be found.
+        files[cfgpath] = 'datasource_list: [ ConfigDrive, IBMCloud ]\n'
+        ret = self._check_via_dict(data, shell_true)
+        self.assertEqual(
+            ret.cfg.get('datasource_list'), ['IBMCloud', 'None'])
+
+        # But if IBMCloud is not enabled, config drive should claim this.
+        files[cfgpath] = 'datasource_list: [ ConfigDrive, NoCloud ]\n'
+        ret = self._check_via_dict(data, shell_true)
+        self.assertEqual(
+            ret.cfg.get('datasource_list'), ['ConfigDrive', 'None'])
+
     def test_ibmcloud_template_userdata_in_provisioning(self):
         """Template provisioned with user-data during provisioning stage.
 
         Template provisioning with user-data has METADATA disk,
         datasource should return not found."""
         data = copy.deepcopy(VALID_CFG['IBMCloud-metadata'])
-        data['files'] = {IBM_PROVISIONING_CHECK_PATH: 'xxx'}
+        # change the 'is_ibm_provisioning' mock to return shell_true (0).
+        isprov_m = [m for m in data['mocks']
+                    if m["name"] == "is_ibm_provisioning"][0]
+        isprov_m['ret'] = shell_true
         return self._check_via_dict(data, RC_NOT_FOUND)
 
     def test_ibmcloud_template_userdata(self):
@@ -265,7 +311,8 @@ class TestDsIdentify(CiTestCase):
 
         no disks attached.  Datasource should return not found."""
         data = copy.deepcopy(VALID_CFG['IBMCloud-nodisks'])
-        data['files'] = {IBM_PROVISIONING_CHECK_PATH: 'xxx'}
+        data['mocks'].append(
+            {'name': 'is_ibm_provisioning', 'ret': shell_true})
         return self._check_via_dict(data, RC_NOT_FOUND)
 
     def test_ibmcloud_template_no_userdata(self):
@@ -290,11 +337,42 @@ class TestDsIdentify(CiTestCase):
                 break
         if not offset:
             raise ValueError("Expected to find 'blkid' mock, but did not.")
-        data['mocks'][offset]['out'] = d['out'].replace(dsibm.IBM_CONFIG_UUID,
+        data['mocks'][offset]['out'] = d['out'].replace(ds_ibm.IBM_CONFIG_UUID,
                                                         "DEAD-BEEF")
         self._check_via_dict(
             data, rc=RC_FOUND, dslist=['ConfigDrive', DS_NONE])
 
+    def test_ibmcloud_with_nocloud_seed(self):
+        """NoCloud seed should be preferred over IBMCloud.
+
+        A nocloud seed should be preferred over IBMCloud even if enabled.
+        Ubuntu 16.04 images have <vlc>/seed/nocloud-net. LP: #1766401."""
+        data = copy.deepcopy(VALID_CFG['IBMCloud-config-2'])
+        files = data.get('files', {})
+        if not files:
+            data['files'] = files
+        files.update(VALID_CFG['NoCloud-seed']['files'])
+        ret = self._check_via_dict(data, shell_true)
+        self.assertEqual(
+            ['NoCloud', 'IBMCloud', 'None'],
+            ret.cfg.get('datasource_list'))
+
+    def test_ibmcloud_with_configdrive_seed(self):
+        """ConfigDrive seed should be preferred over IBMCloud.
+
+        A ConfigDrive seed should be preferred over IBMCloud even if enabled.
+        Ubuntu 16.04 images have an fstab entry that mounts the
+        METADATA disk into <vlc>/seed/config_drive. LP: #1766401."""
+        data = copy.deepcopy(VALID_CFG['IBMCloud-config-2'])
+        files = data.get('files', {})
+        if not files:
+            data['files'] = files
+        files.update(VALID_CFG['ConfigDrive-seed']['files'])
+        ret = self._check_via_dict(data, shell_true)
+        self.assertEqual(
+            ['ConfigDrive', 'IBMCloud', 'None'],
+            ret.cfg.get('datasource_list'))
+
     def test_policy_disabled(self):
         """A Builtin policy of 'disabled' should return not found.
 
@@ -445,6 +523,80 @@ class TestDsIdentify(CiTestCase):
         """Hetzner cloud is identified in sys_vendor."""
         self._test_ds_found('Hetzner')
 
+    def test_smartos_bhyve(self):
+        """SmartOS cloud identified by SmartDC in dmi."""
+        self._test_ds_found('SmartOS-bhyve')
+
+    def test_smartos_lxbrand(self):
+        """SmartOS cloud identified on lxbrand container."""
+        self._test_ds_found('SmartOS-lxbrand')
+
+    def test_smartos_lxbrand_requires_socket(self):
+        """SmartOS cloud should not be identified if no socket file."""
+        mycfg = copy.deepcopy(VALID_CFG['SmartOS-lxbrand'])
+        del mycfg['files'][ds_smartos.METADATA_SOCKFILE]
+        self._check_via_dict(mycfg, rc=RC_NOT_FOUND, policy_dmi="disabled")
+
+    def test_path_env_gets_set_from_main(self):
+        """PATH environment should always have some tokens when main is run.
+
+        We explicitly call main as we want to ensure it updates PATH."""
+        cust = copy.deepcopy(VALID_CFG['NoCloud'])
+        rootd = self.tmp_dir()
+        mpp = 'main-printpath'
+        pre = "MYPATH="
+        cust['files'][mpp] = (
+            'PATH="/mycust/path"; main; r=$?; echo ' + pre + '$PATH; exit $r;')
+        ret = self._check_via_dict(
+            cust, RC_FOUND,
+            func=".", args=[os.path.join(rootd, mpp)], rootd=rootd)
+        line = [l for l in ret.stdout.splitlines() if l.startswith(pre)][0]
+        toks = line.replace(pre, "").split(":")
+        expected = ["/sbin", "/bin", "/usr/sbin", "/usr/bin", "/mycust/path"]
+        self.assertEqual(expected, [p for p in expected if p in toks],
+                         "path did not have expected tokens")
+
+
+class TestIsIBMProvisioning(DsIdentifyBase):
+    """Test the is_ibm_provisioning method in ds-identify."""
+
+    inst_log = "/root/swinstall.log"
+    prov_cfg = "/root/provisioningConfiguration.cfg"
+    boot_ref = "/proc/1/environ"
+    funcname = "is_ibm_provisioning"
+
+    def test_no_config(self):
+        """No provisioning config means not provisioning."""
+        ret = self.call(files={}, func=self.funcname)
+        self.assertEqual(shell_false, ret.rc)
+
+    def test_config_only(self):
+        """A provisioning config without a log means provisioning."""
+        ret = self.call(files={self.prov_cfg: "key=value"}, func=self.funcname)
+        self.assertEqual(shell_true, ret.rc)
+
+    def test_config_with_old_log(self):
+        """A config with a log from previous boot is not provisioning."""
+        rootd = self.tmp_dir()
+        data = {self.prov_cfg: ("key=value\nkey2=val2\n", -10),
+                self.inst_log: ("log data\n", -30),
+                self.boot_ref: ("PWD=/", 0)}
+        populate_dir_with_ts(rootd, data)
+        ret = self.call(rootd=rootd, func=self.funcname)
+        self.assertEqual(shell_false, ret.rc)
+        self.assertIn("from previous boot", ret.stderr)
+
+    def test_config_with_new_log(self):
+        """A config with a log from this boot is provisioning."""
+        rootd = self.tmp_dir()
+        data = {self.prov_cfg: ("key=value\nkey2=val2\n", -10),
+                self.inst_log: ("log data\n", 30),
+                self.boot_ref: ("PWD=/", 0)}
+        populate_dir_with_ts(rootd, data)
+        ret = self.call(rootd=rootd, func=self.funcname)
+        self.assertEqual(shell_true, ret.rc)
+        self.assertIn("from current boot", ret.stderr)
+
 
 def blkid_out(disks=None):
     """Convert a list of disk dictionaries into blkid content."""
@@ -631,6 +783,12 @@ VALID_CFG = {
              },
         ],
     },
+    'ConfigDrive-seed': {
+        'ds': 'ConfigDrive',
+        'files': {
+            os.path.join(P_SEED_DIR, 'config_drive', 'openstack',
+                         'latest', 'meta_data.json'): 'md\n'},
+    },
     'Hetzner': {
         'ds': 'Hetzner',
         'files': {P_SYS_VENDOR: 'Hetzner\n'},
@@ -639,6 +797,7 @@ VALID_CFG = {
         'ds': 'IBMCloud',
         'mocks': [
             MOCK_VIRT_IS_XEN,
+            {'name': 'is_ibm_provisioning', 'ret': shell_false},
             {'name': 'blkid', 'ret': 0,
              'out': blkid_out(
                  [{'DEVNAME': 'xvda1', 'TYPE': 'vfat', 'PARTUUID': uuid4()},
@@ -652,12 +811,13 @@ VALID_CFG = {
         'ds': 'IBMCloud',
         'mocks': [
             MOCK_VIRT_IS_XEN,
+            {'name': 'is_ibm_provisioning', 'ret': shell_false},
             {'name': 'blkid', 'ret': 0,
              'out': blkid_out(
                  [{'DEVNAME': 'xvda1', 'TYPE': 'ext3', 'PARTUUID': uuid4(),
                    'UUID': uuid4(), 'LABEL': 'cloudimg-bootfs'},
                   {'DEVNAME': 'xvdb', 'TYPE': 'vfat', 'LABEL': 'config-2',
-                   'UUID': dsibm.IBM_CONFIG_UUID},
+                   'UUID': ds_ibm.IBM_CONFIG_UUID},
                   {'DEVNAME': 'xvda2', 'TYPE': 'ext4',
                    'LABEL': 'cloudimg-rootfs', 'PARTUUID': uuid4(),
                    'UUID': uuid4()},
@@ -669,6 +829,7 @@ VALID_CFG = {
         'ds': 'IBMCloud',
         'mocks': [
             MOCK_VIRT_IS_XEN,
+            {'name': 'is_ibm_provisioning', 'ret': shell_false},
             {'name': 'blkid', 'ret': 0,
              'out': blkid_out(
                  [{'DEVNAME': 'xvda1', 'TYPE': 'vfat', 'PARTUUID': uuid4()},
@@ -677,6 +838,32 @@ VALID_CFG = {
              },
         ],
     },
+    'SmartOS-bhyve': {
+        'ds': 'SmartOS',
+        'mocks': [
+            MOCK_VIRT_IS_VM_OTHER,
+            {'name': 'blkid', 'ret': 0,
+             'out': blkid_out(
+                 [{'DEVNAME': 'vda1', 'TYPE': 'ext4',
+                   'PARTUUID': '49ec635a-01'},
+                  {'DEVNAME': 'vda2', 'TYPE': 'swap',
+                   'LABEL': 'cloudimg-swap', 'PARTUUID': '49ec635a-02'}]),
+             },
+        ],
+        'files': {P_PRODUCT_NAME: 'SmartDC HVM\n'},
+    },
+    'SmartOS-lxbrand': {
+        'ds': 'SmartOS',
+        'mocks': [
+            MOCK_VIRT_IS_CONTAINER_OTHER,
+            {'name': 'uname', 'ret': 0,
+             'out': ("Linux d43da87a-daca-60e8-e6d4-d2ed372662a3 4.3.0 "
+                     "BrandZ virtual linux x86_64 GNU/Linux")},
+            {'name': 'blkid', 'ret': 2, 'out': ''},
+        ],
+        'files': {ds_smartos.METADATA_SOCKFILE: 'would be a socket\n'},
+    },
+
 }
 
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_ec2_util.py b/tests/unittests/test_ec2_util.py
index af78997..3f50f57 100644
--- a/tests/unittests/test_ec2_util.py
+++ b/tests/unittests/test_ec2_util.py
@@ -11,7 +11,6 @@ from cloudinit import url_helper as uh
 class TestEc2Util(helpers.HttprettyTestCase):
     VERSION = 'latest'
 
-    @hp.activate
     def test_userdata_fetch(self):
         hp.register_uri(hp.GET,
                         'http://169.254.169.254/%s/user-data' % (self.VERSION),
@@ -20,7 +19,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         userdata = eu.get_instance_userdata(self.VERSION)
         self.assertEqual('stuff', userdata.decode('utf-8'))
 
-    @hp.activate
     def test_userdata_fetch_fail_not_found(self):
         hp.register_uri(hp.GET,
                         'http://169.254.169.254/%s/user-data' % (self.VERSION),
@@ -28,7 +26,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         userdata = eu.get_instance_userdata(self.VERSION, retries=0)
         self.assertEqual('', userdata)
 
-    @hp.activate
     def test_userdata_fetch_fail_server_dead(self):
         hp.register_uri(hp.GET,
                         'http://169.254.169.254/%s/user-data' % (self.VERSION),
@@ -36,7 +33,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         userdata = eu.get_instance_userdata(self.VERSION, retries=0)
         self.assertEqual('', userdata)
 
-    @hp.activate
     def test_userdata_fetch_fail_server_not_found(self):
         hp.register_uri(hp.GET,
                         'http://169.254.169.254/%s/user-data' % (self.VERSION),
@@ -44,7 +40,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         userdata = eu.get_instance_userdata(self.VERSION)
         self.assertEqual('', userdata)
 
-    @hp.activate
     def test_metadata_fetch_no_keys(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
@@ -62,7 +57,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         self.assertEqual(md['instance-id'], '123')
         self.assertEqual(md['ami-launch-index'], '1')
 
-    @hp.activate
     def test_metadata_fetch_key(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
@@ -83,7 +77,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         self.assertEqual(md['instance-id'], '123')
         self.assertEqual(1, len(md['public-keys']))
 
-    @hp.activate
     def test_metadata_fetch_with_2_keys(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
@@ -108,7 +101,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         self.assertEqual(md['instance-id'], '123')
         self.assertEqual(2, len(md['public-keys']))
 
-    @hp.activate
     def test_metadata_fetch_bdm(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
@@ -140,7 +132,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         self.assertEqual(bdm['ami'], 'sdb')
         self.assertEqual(bdm['ephemeral0'], 'sdc')
 
-    @hp.activate
     def test_metadata_no_security_credentials(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
diff --git a/tests/unittests/test_filters/test_launch_index.py b/tests/unittests/test_filters/test_launch_index.py
index 6364d38..e1a5d2c 100644
--- a/tests/unittests/test_filters/test_launch_index.py
+++ b/tests/unittests/test_filters/test_launch_index.py
@@ -55,7 +55,7 @@ class TestLaunchFilter(helpers.ResourceUsingTestCase):
         return True
 
     def testMultiEmailIndex(self):
-        test_data = self.readResource('filter_cloud_multipart_2.email')
+        test_data = helpers.readResource('filter_cloud_multipart_2.email')
         ud_proc = ud.UserDataProcessor(self.getCloudPaths())
         message = ud_proc.process(test_data)
         self.assertTrue(count_messages(message) > 0)
@@ -70,7 +70,7 @@ class TestLaunchFilter(helpers.ResourceUsingTestCase):
         self.assertCounts(message, expected_counts)
 
     def testHeaderEmailIndex(self):
-        test_data = self.readResource('filter_cloud_multipart_header.email')
+        test_data = helpers.readResource('filter_cloud_multipart_header.email')
         ud_proc = ud.UserDataProcessor(self.getCloudPaths())
         message = ud_proc.process(test_data)
         self.assertTrue(count_messages(message) > 0)
@@ -85,7 +85,7 @@ class TestLaunchFilter(helpers.ResourceUsingTestCase):
         self.assertCounts(message, expected_counts)
 
     def testConfigEmailIndex(self):
-        test_data = self.readResource('filter_cloud_multipart_1.email')
+        test_data = helpers.readResource('filter_cloud_multipart_1.email')
         ud_proc = ud.UserDataProcessor(self.getCloudPaths())
         message = ud_proc.process(test_data)
         self.assertTrue(count_messages(message) > 0)
@@ -99,7 +99,7 @@ class TestLaunchFilter(helpers.ResourceUsingTestCase):
         self.assertCounts(message, expected_counts)
 
     def testNoneIndex(self):
-        test_data = self.readResource('filter_cloud_multipart.yaml')
+        test_data = helpers.readResource('filter_cloud_multipart.yaml')
         ud_proc = ud.UserDataProcessor(self.getCloudPaths())
         message = ud_proc.process(test_data)
         start_count = count_messages(message)
@@ -108,7 +108,7 @@ class TestLaunchFilter(helpers.ResourceUsingTestCase):
         self.assertTrue(self.equivalentMessage(message, filtered_message))
 
     def testIndexes(self):
-        test_data = self.readResource('filter_cloud_multipart.yaml')
+        test_data = helpers.readResource('filter_cloud_multipart.yaml')
         ud_proc = ud.UserDataProcessor(self.getCloudPaths())
         message = ud_proc.process(test_data)
         start_count = count_messages(message)
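
Note on the hunks above: the tests switch from the inherited
self.readResource to a module-level helpers.readResource, so the helper no
longer requires a ResourceUsingTestCase instance. A hedged sketch of such a
function (the actual cloudinit.tests.helpers implementation may resolve the
data directory differently):

    import os

    _DATA_DIR = os.path.join(
        os.path.dirname(os.path.abspath(__file__)), 'data')

    def readResource(name, mode='r'):
        """Return the contents of the named file under the test data dir."""
        with open(os.path.join(_DATA_DIR, name), mode) as fh:
            return fh.read()
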
diff --git a/tests/unittests/test_handler/test_handler_apt_conf_v1.py b/tests/unittests/test_handler/test_handler_apt_conf_v1.py
index 83f962a..6a4b03e 100644
--- a/tests/unittests/test_handler/test_handler_apt_conf_v1.py
+++ b/tests/unittests/test_handler/test_handler_apt_conf_v1.py
@@ -12,10 +12,6 @@ import shutil
 import tempfile
 
 
-def load_tfile_or_url(*args, **kwargs):
-    return(util.decode_binary(util.read_file_or_url(*args, **kwargs).contents))
-
-
 class TestAptProxyConfig(TestCase):
     def setUp(self):
         super(TestAptProxyConfig, self).setUp()
@@ -36,7 +32,7 @@ class TestAptProxyConfig(TestCase):
         self.assertTrue(os.path.isfile(self.pfile))
         self.assertFalse(os.path.isfile(self.cfile))
 
-        contents = load_tfile_or_url(self.pfile)
+        contents = util.load_file(self.pfile)
         self.assertTrue(self._search_apt_config(contents, "http", "myproxy"))
 
     def test_apt_http_proxy_written(self):
@@ -46,7 +42,7 @@ class TestAptProxyConfig(TestCase):
         self.assertTrue(os.path.isfile(self.pfile))
         self.assertFalse(os.path.isfile(self.cfile))
 
-        contents = load_tfile_or_url(self.pfile)
+        contents = util.load_file(self.pfile)
         self.assertTrue(self._search_apt_config(contents, "http", "myproxy"))
 
     def test_apt_all_proxy_written(self):
@@ -64,7 +60,7 @@ class TestAptProxyConfig(TestCase):
         self.assertTrue(os.path.isfile(self.pfile))
         self.assertFalse(os.path.isfile(self.cfile))
 
-        contents = load_tfile_or_url(self.pfile)
+        contents = util.load_file(self.pfile)
 
         for ptype, pval in values.items():
             self.assertTrue(self._search_apt_config(contents, ptype, pval))
@@ -80,7 +76,7 @@ class TestAptProxyConfig(TestCase):
         cc_apt_configure.apply_apt_config({'proxy': "foo"},
                                           self.pfile, self.cfile)
         self.assertTrue(os.path.isfile(self.pfile))
-        contents = load_tfile_or_url(self.pfile)
+        contents = util.load_file(self.pfile)
         self.assertTrue(self._search_apt_config(contents, "http", "foo"))
 
     def test_config_written(self):
@@ -92,14 +88,14 @@ class TestAptProxyConfig(TestCase):
         self.assertTrue(os.path.isfile(self.cfile))
         self.assertFalse(os.path.isfile(self.pfile))
 
-        self.assertEqual(load_tfile_or_url(self.cfile), payload)
+        self.assertEqual(util.load_file(self.cfile), payload)
 
     def test_config_replaced(self):
         util.write_file(self.pfile, "content doesnt matter")
         cc_apt_configure.apply_apt_config({'conf': "foo"},
                                           self.pfile, self.cfile)
         self.assertTrue(os.path.isfile(self.cfile))
-        self.assertEqual(load_tfile_or_url(self.cfile), "foo")
+        self.assertEqual(util.load_file(self.cfile), "foo")
 
     def test_config_deleted(self):
         # if no 'conf' is provided, delete any previously written file
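
Note on the hunks above (and the identical ones in the next three files):
the local load_tfile_or_url wrapper only decoded read_file_or_url().contents,
which is redundant for plain filesystem paths because util.load_file already
returns decoded text by default. Sketch of the behavior these tests rely on,
assuming util.load_file's decode keyword:

    from cloudinit import util

    text = util.load_file('/etc/hostname')                # str
    raw = util.load_file('/etc/hostname', decode=False)   # bytes
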
diff --git a/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py b/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py
index d2b96f0..23bd6e1 100644
--- a/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py
+++ b/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py
@@ -64,13 +64,6 @@ deb-src http://archive.ubuntu.com/ubuntu/ fakerelease main restricted
 """)
 
 
-def load_tfile_or_url(*args, **kwargs):
-    """load_tfile_or_url
-    load file and return content after decoding
-    """
-    return util.decode_binary(util.read_file_or_url(*args, **kwargs).contents)
-
-
 class TestAptSourceConfigSourceList(t_help.FilesystemMockingTestCase):
     """TestAptSourceConfigSourceList
     Main Class to test sources list rendering
diff --git a/tests/unittests/test_handler/test_handler_apt_source_v1.py b/tests/unittests/test_handler/test_handler_apt_source_v1.py
index 46ca4ce..a3132fb 100644
--- a/tests/unittests/test_handler/test_handler_apt_source_v1.py
+++ b/tests/unittests/test_handler/test_handler_apt_source_v1.py
@@ -39,13 +39,6 @@ S0ORP6HXET3+jC8BMG4tBWCTK/XEZw==
 ADD_APT_REPO_MATCH = r"^[\w-]+:\w"
 
 
-def load_tfile_or_url(*args, **kwargs):
-    """load_tfile_or_url
-    load file and return content after decoding
-    """
-    return util.decode_binary(util.read_file_or_url(*args, **kwargs).contents)
-
-
 class FakeDistro(object):
     """Fake Distro helper object"""
     def update_package_sources(self):
@@ -125,7 +118,7 @@ class TestAptSourceConfig(TestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile_or_url(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://archive.ubuntu.com/ubuntu";,
                                    "karmic-backports",
@@ -157,13 +150,13 @@ class TestAptSourceConfig(TestCase):
         self.apt_src_basic(self.aptlistfile, cfg)
 
         # extra verify on two extra files of this test
-        contents = load_tfile_or_url(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://archive.ubuntu.com/ubuntu";,
                                    "precise-backports",
                                    "main universe multiverse restricted"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile_or_url(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://archive.ubuntu.com/ubuntu";,
                                    "lucid-backports",
@@ -220,7 +213,7 @@ class TestAptSourceConfig(TestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile_or_url(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "multiverse"),
@@ -241,12 +234,12 @@ class TestAptSourceConfig(TestCase):
 
         # extra verify on two extra files of this test
         params = self._get_default_params()
-        contents = load_tfile_or_url(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "main"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile_or_url(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "universe"),
@@ -296,7 +289,7 @@ class TestAptSourceConfig(TestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile_or_url(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
@@ -336,14 +329,14 @@ class TestAptSourceConfig(TestCase):
                 'filename': self.aptlistfile3}
 
         self.apt_src_keyid(self.aptlistfile, [cfg1, cfg2, cfg3], 3)
-        contents = load_tfile_or_url(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
                                     'cloud-init-test/ubuntu'),
                                    "xenial", "universe"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile_or_url(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
@@ -375,7 +368,7 @@ class TestAptSourceConfig(TestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile_or_url(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
diff --git a/tests/unittests/test_handler/test_handler_apt_source_v3.py b/tests/unittests/test_handler/test_handler_apt_source_v3.py
index 7bb1b7c..7a64c23 100644
--- a/tests/unittests/test_handler/test_handler_apt_source_v3.py
+++ b/tests/unittests/test_handler/test_handler_apt_source_v3.py
@@ -49,13 +49,6 @@ ADD_APT_REPO_MATCH = r"^[\w-]+:\w"
 TARGET = None
 
 
-def load_tfile(*args, **kwargs):
-    """load_tfile_or_url
-    load file and return content after decoding
-    """
-    return util.decode_binary(util.read_file_or_url(*args, **kwargs).contents)
-
-
 class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
     """TestAptSourceConfig
     Main Class to test apt configs
@@ -119,7 +112,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://test.ubuntu.com/ubuntu";,
                                    "karmic-backports",
@@ -151,13 +144,13 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
         self._apt_src_basic(self.aptlistfile, cfg)
 
         # extra verify on two extra files of this test
-        contents = load_tfile(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://test.ubuntu.com/ubuntu";,
                                    "precise-backports",
                                    "main universe multiverse restricted"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://test.ubuntu.com/ubuntu";,
                                    "lucid-backports",
@@ -174,7 +167,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "multiverse"),
@@ -201,12 +194,12 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         # extra verify on two extra files of this test
         params = self._get_default_params()
-        contents = load_tfile(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "main"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "universe"),
@@ -240,7 +233,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
@@ -277,14 +270,14 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
                                    'keyid': "03683F77"}}
 
         self._apt_src_keyid(self.aptlistfile, cfg, 3)
-        contents = load_tfile(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
                                     'cloud-init-test/ubuntu'),
                                    "xenial", "universe"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
@@ -310,7 +303,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         self.assertTrue(os.path.isfile(self.aptlistfile))
 
-        contents = load_tfile(self.aptlistfile)
+        contents = util.load_file(self.aptlistfile)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
@@ -528,7 +521,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         expected = sorted([npre + suff for opre, npre, suff in files])
         # create files
-        for (opre, npre, suff) in files:
+        for (opre, _npre, suff) in files:
             fpath = os.path.join(apt_lists_d, opre + suff)
             util.write_file(fpath, content=fpath)
 
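
Note on the final hunk above: renaming the unused loop variable to _npre
marks it as intentionally ignored, which keeps unused-variable lint checks
quiet without changing behavior:

    files = [('old-', 'new-', '.list')]
    for (opre, _npre, suff) in files:   # _npre deliberately unused here
        print(opre + suff)
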
diff --git a/tests/unittests/test_handler/test_handler_bootcmd.py b/tests/unittests/test_handler/test_handler_bootcmd.py
index 29fc25e..b137526 100644
--- a/tests/unittests/test_handler/test_handler_bootcmd.py
+++ b/tests/unittests/test_handler/test_handler_bootcmd.py
@@ -1,9 +1,10 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.config import cc_bootcmd
+from cloudinit.config.cc_bootcmd import handle, schema
 from cloudinit.sources import DataSourceNone
 from cloudinit import (distros, helpers, cloud, util)
-from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJsonSchema
+from cloudinit.tests.helpers import (
+    CiTestCase, mock, SchemaTestCaseMixin, skipUnlessJsonSchema)
 
 import logging
 import tempfile
@@ -50,7 +51,7 @@ class TestBootcmd(CiTestCase):
         """When the provided config doesn't contain bootcmd, skip it."""
         cfg = {}
         mycloud = self._get_cloud('ubuntu')
-        cc_bootcmd.handle('notimportant', cfg, mycloud, LOG, None)
+        handle('notimportant', cfg, mycloud, LOG, None)
         self.assertIn(
             "Skipping module named notimportant, no 'bootcmd' key",
             self.logs.getvalue())
@@ -60,7 +61,7 @@ class TestBootcmd(CiTestCase):
         invalid_config = {'bootcmd': 1}
         cc = self._get_cloud('ubuntu')
         with self.assertRaises(TypeError) as context_manager:
-            cc_bootcmd.handle('cc_bootcmd', invalid_config, cc, LOG, [])
+            handle('cc_bootcmd', invalid_config, cc, LOG, [])
         self.assertIn('Failed to shellify bootcmd', self.logs.getvalue())
         self.assertEqual(
             "Input to shellify was type 'int'. Expected list or tuple.",
@@ -76,7 +77,7 @@ class TestBootcmd(CiTestCase):
         invalid_config = {'bootcmd': 1}
         cc = self._get_cloud('ubuntu')
         with self.assertRaises(TypeError):
-            cc_bootcmd.handle('cc_bootcmd', invalid_config, cc, LOG, [])
+            handle('cc_bootcmd', invalid_config, cc, LOG, [])
         self.assertIn(
             'Invalid config:\nbootcmd: 1 is not of type \'array\'',
             self.logs.getvalue())
@@ -93,7 +94,7 @@ class TestBootcmd(CiTestCase):
             'bootcmd': ['ls /', 20, ['wget', 'http://stuff/blah'], {'a': 'n'}]}
         cc = self._get_cloud('ubuntu')
         with self.assertRaises(TypeError) as context_manager:
-            cc_bootcmd.handle('cc_bootcmd', invalid_config, cc, LOG, [])
+            handle('cc_bootcmd', invalid_config, cc, LOG, [])
         expected_warnings = [
             'bootcmd.1: 20 is not valid under any of the given schemas',
             'bootcmd.3: {\'a\': \'n\'} is not valid under any of the given'
@@ -117,7 +118,7 @@ class TestBootcmd(CiTestCase):
             'echo {0} $INSTANCE_ID > {1}'.format(my_id, out_file)]}
 
         with mock.patch(self._etmpfile_path, FakeExtendedTempFile):
-            cc_bootcmd.handle('cc_bootcmd', valid_config, cc, LOG, [])
+            handle('cc_bootcmd', valid_config, cc, LOG, [])
         self.assertEqual(my_id + ' iid-datasource-none\n',
                          util.load_file(out_file))
 
@@ -128,7 +129,7 @@ class TestBootcmd(CiTestCase):
 
         with mock.patch(self._etmpfile_path, FakeExtendedTempFile):
             with self.assertRaises(util.ProcessExecutionError) as ctxt_manager:
-                cc_bootcmd.handle('does-not-matter', valid_config, cc, LOG, [])
+                handle('does-not-matter', valid_config, cc, LOG, [])
         self.assertIn(
             'Unexpected error while running command.\n'
             "Command: ['/bin/sh',",
@@ -138,4 +139,21 @@ class TestBootcmd(CiTestCase):
             self.logs.getvalue())
 
 
+@skipUnlessJsonSchema()
+class TestSchema(CiTestCase, SchemaTestCaseMixin):
+    """Directly test schema rather than through handle."""
+
+    schema = schema
+
+    def test_duplicates_are_fine_array_array(self):
+        """Duplicated commands array/array entries are allowed."""
+        self.assertSchemaValid(
+            ["byebye", "byebye"], 'command entries can be duplicate')
+
+    def test_duplicates_are_fine_array_string(self):
+        """Duplicated commands array/string entries are allowed."""
+        self.assertSchemaValid(
+            ["echo bye", "echo bye"], "command entries can be duplicate.")
+
+
 # vi: ts=4 expandtab
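
Note on the new TestSchema class above: it leans on
SchemaTestCaseMixin.assertSchemaValid. A hedged sketch of that contract
(the real mixin in cloudinit.tests.helpers may differ): wrap the fragment
under the schema's single top-level property and fail the test if strict
validation raises.

    from cloudinit.config.schema import (
        SchemaValidationError, validate_cloudconfig_schema)

    class SchemaTestCaseMixin(object):
        schema = None  # subclasses assign the config module's schema

        def assertSchemaValid(self, config,
                              msg="Valid schema failed validation."):
            key = list(self.schema['properties'].keys())[0]
            try:
                validate_cloudconfig_schema(
                    {key: config}, self.schema, strict=True)
            except SchemaValidationError:
                self.fail(msg)
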
diff --git a/tests/unittests/test_handler/test_handler_chef.py b/tests/unittests/test_handler/test_handler_chef.py
index 0136a93..f4bbd66 100644
--- a/tests/unittests/test_handler/test_handler_chef.py
+++ b/tests/unittests/test_handler/test_handler_chef.py
@@ -14,19 +14,27 @@ from cloudinit.sources import DataSourceNone
 from cloudinit import util
 
 from cloudinit.tests.helpers import (
-    CiTestCase, FilesystemMockingTestCase, mock, skipIf)
+    HttprettyTestCase, FilesystemMockingTestCase, mock, skipIf)
 
 LOG = logging.getLogger(__name__)
 
 CLIENT_TEMPL = os.path.sep.join(["templates", "chef_client.rb.tmpl"])
 
+# This is adjusted to use http because using https causes issues in
+# some openssl/httpretty combinations.
+#   https://github.com/gabrielfalcao/HTTPretty/issues/242
+# We saw this issue on openSUSE 42.3 with
+#    httpretty=0.8.8-7.1 ndg-httpsclient=0.4.0-3.2 pyOpenSSL=16.0.0-4.1
+OMNIBUS_URL_HTTP = cc_chef.OMNIBUS_URL.replace("https:", "http:")
 
-class TestInstallChefOmnibus(CiTestCase):
+
+class TestInstallChefOmnibus(HttprettyTestCase):
 
     def setUp(self):
+        super(TestInstallChefOmnibus, self).setUp()
         self.new_root = self.tmp_dir()
 
-    @httpretty.activate
+    @mock.patch("cloudinit.config.cc_chef.OMNIBUS_URL", OMNIBUS_URL_HTTP)
     def test_install_chef_from_omnibus_runs_chef_url_content(self):
         """install_chef_from_omnibus runs downloaded OMNIBUS_URL as script."""
         chef_outfile = self.tmp_path('chef.out', self.new_root)
@@ -65,7 +73,7 @@ class TestInstallChefOmnibus(CiTestCase):
             expected_subp_kwargs,
             m_subp_blob.call_args_list[0][1])
 
-    @httpretty.activate
+    @mock.patch("cloudinit.config.cc_chef.OMNIBUS_URL", OMNIBUS_URL_HTTP)
     @mock.patch('cloudinit.config.cc_chef.util.subp_blob_in_tempfile')
     def test_install_chef_from_omnibus_has_omnibus_version(self, m_subp_blob):
         """install_chef_from_omnibus provides version arg to OMNIBUS_URL."""
diff --git a/tests/unittests/test_handler/test_handler_lxd.py b/tests/unittests/test_handler/test_handler_lxd.py
index a205498..4dd7e09 100644
--- a/tests/unittests/test_handler/test_handler_lxd.py
+++ b/tests/unittests/test_handler/test_handler_lxd.py
@@ -33,12 +33,16 @@ class TestLxd(t_help.CiTestCase):
         cc = cloud.Cloud(ds, paths, {}, d, None)
         return cc
 
+    @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
-    def test_lxd_init(self, mock_util):
+    def test_lxd_init(self, mock_util, m_maybe_clean):
         cc = self._get_cloud('ubuntu')
         mock_util.which.return_value = True
+        m_maybe_clean.return_value = None
         cc_lxd.handle('cc_lxd', self.lxd_cfg, cc, self.logger, [])
         self.assertTrue(mock_util.which.called)
+        # no bridge config, so maybe_cleanup should not be called.
+        self.assertFalse(m_maybe_clean.called)
         init_call = mock_util.subp.call_args_list[0][0][0]
         self.assertEqual(init_call,
                          ['lxd', 'init', '--auto',
@@ -46,32 +50,39 @@ class TestLxd(t_help.CiTestCase):
                           '--storage-backend=zfs',
                           '--storage-pool=poolname'])
 
+    @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
-    def test_lxd_install(self, mock_util):
+    def test_lxd_install(self, mock_util, m_maybe_clean):
         cc = self._get_cloud('ubuntu')
         cc.distro = mock.MagicMock()
         mock_util.which.return_value = None
         cc_lxd.handle('cc_lxd', self.lxd_cfg, cc, self.logger, [])
         self.assertNotIn('WARN', self.logs.getvalue())
         self.assertTrue(cc.distro.install_packages.called)
+        cc_lxd.handle('cc_lxd', self.lxd_cfg, cc, self.logger, [])
+        self.assertFalse(m_maybe_clean.called)
         install_pkg = cc.distro.install_packages.call_args_list[0][0][0]
         self.assertEqual(sorted(install_pkg), ['lxd', 'zfs'])
 
+    @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
-    def test_no_init_does_nothing(self, mock_util):
+    def test_no_init_does_nothing(self, mock_util, m_maybe_clean):
         cc = self._get_cloud('ubuntu')
         cc.distro = mock.MagicMock()
         cc_lxd.handle('cc_lxd', {'lxd': {}}, cc, self.logger, [])
         self.assertFalse(cc.distro.install_packages.called)
         self.assertFalse(mock_util.subp.called)
+        self.assertFalse(m_maybe_clean.called)
 
+    @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
-    def test_no_lxd_does_nothing(self, mock_util):
+    def test_no_lxd_does_nothing(self, mock_util, m_maybe_clean):
         cc = self._get_cloud('ubuntu')
         cc.distro = mock.MagicMock()
         cc_lxd.handle('cc_lxd', {'package_update': True}, cc, self.logger, [])
         self.assertFalse(cc.distro.install_packages.called)
         self.assertFalse(mock_util.subp.called)
+        self.assertFalse(m_maybe_clean.called)
 
     def test_lxd_debconf_new_full(self):
         data = {"mode": "new",
@@ -147,14 +158,13 @@ class TestLxd(t_help.CiTestCase):
                 "domain": "lxd"}
         self.assertEqual(
             cc_lxd.bridge_to_cmd(data),
-            (["lxc", "network", "create", "testbr0",
+            (["network", "create", "testbr0",
               "ipv4.address=10.0.8.1/24", "ipv4.nat=true",
               "ipv4.dhcp.ranges=10.0.8.2-10.0.8.254",
               "ipv6.address=fd98:9e0:3744::1/64",
-              "ipv6.nat=true", "dns.domain=lxd",
-              "--force-local"],
-             ["lxc", "network", "attach-profile",
-              "testbr0", "default", "eth0", "--force-local"]))
+              "ipv6.nat=true", "dns.domain=lxd"],
+             ["network", "attach-profile",
+              "testbr0", "default", "eth0"]))
 
     def test_lxd_cmd_new_partial(self):
         data = {"mode": "new",
@@ -163,19 +173,18 @@ class TestLxd(t_help.CiTestCase):
                 "ipv6_nat": "true"}
         self.assertEqual(
             cc_lxd.bridge_to_cmd(data),
-            (["lxc", "network", "create", "lxdbr0", "ipv4.address=none",
-              "ipv6.address=fd98:9e0:3744::1/64", "ipv6.nat=true",
-              "--force-local"],
-             ["lxc", "network", "attach-profile",
-              "lxdbr0", "default", "eth0", "--force-local"]))
+            (["network", "create", "lxdbr0", "ipv4.address=none",
+              "ipv6.address=fd98:9e0:3744::1/64", "ipv6.nat=true"],
+             ["network", "attach-profile",
+              "lxdbr0", "default", "eth0"]))
 
     def test_lxd_cmd_existing(self):
         data = {"mode": "existing",
                 "name": "testbr0"}
         self.assertEqual(
             cc_lxd.bridge_to_cmd(data),
-            (None, ["lxc", "network", "attach-profile",
-                    "testbr0", "default", "eth0", "--force-local"]))
+            (None, ["network", "attach-profile",
+                    "testbr0", "default", "eth0"]))
 
     def test_lxd_cmd_none(self):
         data = {"mode": "none"}
@@ -183,4 +192,43 @@ class TestLxd(t_help.CiTestCase):
             cc_lxd.bridge_to_cmd(data),
             (None, None))
 
+
+class TestLxdMaybeCleanupDefault(t_help.CiTestCase):
+    """Test the implementation of maybe_cleanup_default."""
+
+    defnet = cc_lxd._DEFAULT_NETWORK_NAME
+
+    @mock.patch("cloudinit.config.cc_lxd._lxc")
+    def test_network_other_than_default_not_deleted(self, m_lxc):
+        """deletion or removal should only occur if bridge is default."""
+        cc_lxd.maybe_cleanup_default(
+            net_name="lxdbr1", did_init=True, create=True, attach=True)
+        m_lxc.assert_not_called()
+
+    @mock.patch("cloudinit.config.cc_lxd._lxc")
+    def test_did_init_false_does_not_delete(self, m_lxc):
+        """deletion or removal should only occur if did_init is True."""
+        cc_lxd.maybe_cleanup_default(
+            net_name=self.defnet, did_init=False, create=True, attach=True)
+        m_lxc.assert_not_called()
+
+    @mock.patch("cloudinit.config.cc_lxd._lxc")
+    def test_network_deleted_if_create_true(self, m_lxc):
+        """deletion of network should occur if create is True."""
+        cc_lxd.maybe_cleanup_default(
+            net_name=self.defnet, did_init=True, create=True, attach=False)
+        m_lxc.assert_called_once_with(["network", "delete", self.defnet])
+
+    @mock.patch("cloudinit.config.cc_lxd._lxc")
+    def test_device_removed_if_attach_true(self, m_lxc):
+        """deletion of network should occur if create is True."""
+        nic_name = "my_nic"
+        profile = "my_profile"
+        cc_lxd.maybe_cleanup_default(
+            net_name=self.defnet, did_init=True, create=False, attach=True,
+            profile=profile, nic_name=nic_name)
+        m_lxc.assert_called_once_with(
+            ["profile", "device", "remove", profile, nic_name])
+
+
 # vi: ts=4 expandtab
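
Note on the bridge_to_cmd expectations above: the commands lose their
leading "lxc" and trailing "--force-local" because both are presumably now
supplied by the cc_lxd._lxc() helper that the cleanup tests mock. A sketch
of such a wrapper, under that assumption:

    from cloudinit import util

    def _lxc(cmd):
        """Run an lxc subcommand with the standard local-only flags."""
        env = {'LC_ALL': 'C'}
        util.subp(['lxc'] + list(cmd) + ['--force-local'], update_env=env)
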
diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py
index fe492d4..8fea6c2 100644
--- a/tests/unittests/test_handler/test_handler_mounts.py
+++ b/tests/unittests/test_handler/test_handler_mounts.py
@@ -1,8 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import os.path
-import shutil
-import tempfile
 
 from cloudinit.config import cc_mounts
 
@@ -18,8 +16,7 @@ class TestSanitizeDevname(test_helpers.FilesystemMockingTestCase):
 
     def setUp(self):
         super(TestSanitizeDevname, self).setUp()
-        self.new_root = tempfile.mkdtemp()
-        self.addCleanup(shutil.rmtree, self.new_root)
+        self.new_root = self.tmp_dir()
         self.patchOS(self.new_root)
 
     def _touch(self, path):
@@ -134,4 +131,103 @@ class TestSanitizeDevname(test_helpers.FilesystemMockingTestCase):
             cc_mounts.sanitize_devname(
                 'ephemeral0.1', lambda x: disk_path, mock.Mock()))
 
+
+class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
+
+    swap_path = '/dev/sdb1'
+
+    def setUp(self):
+        super(TestFstabHandling, self).setUp()
+        self.new_root = self.tmp_dir()
+        self.patchOS(self.new_root)
+
+        self.fstab_path = os.path.join(self.new_root, 'etc/fstab')
+        self._makedirs('/etc')
+
+        self.add_patch('cloudinit.config.cc_mounts.FSTAB_PATH',
+                       'mock_fstab_path',
+                       self.fstab_path,
+                       autospec=False)
+
+        self.add_patch('cloudinit.config.cc_mounts._is_block_device',
+                       'mock_is_block_device',
+                       return_value=True)
+
+        self.add_patch('cloudinit.config.cc_mounts.util.subp',
+                       'mock_util_subp')
+
+        self.mock_cloud = mock.Mock()
+        self.mock_log = mock.Mock()
+        self.mock_cloud.device_name_to_device = self.device_name_to_device
+
+    def _makedirs(self, directory):
+        directory = os.path.join(self.new_root, directory.lstrip('/'))
+        if not os.path.exists(directory):
+            os.makedirs(directory)
+
+    def device_name_to_device(self, path):
+        if path == 'swap':
+            return self.swap_path
+        else:
+            dev = None
+
+        return dev
+
+    def test_fstab_no_swap_device(self):
+        '''Ensure that cloud-init adds a discovered swap partition
+        to /etc/fstab.'''
+
+        fstab_original_content = ''
+        fstab_expected_content = (
+            '%s\tnone\tswap\tsw,comment=cloudconfig\t'
+            '0\t0\n' % (self.swap_path,)
+        )
+
+        with open(cc_mounts.FSTAB_PATH, 'w') as fd:
+            fd.write(fstab_original_content)
+
+        cc_mounts.handle(None, {}, self.mock_cloud, self.mock_log, [])
+
+        with open(cc_mounts.FSTAB_PATH, 'r') as fd:
+            fstab_new_content = fd.read()
+            self.assertEqual(fstab_expected_content, fstab_new_content)
+
+    def test_fstab_same_swap_device_already_configured(self):
+        '''Ensure that cloud-init will not add a swap device if the same
+        device already exists in /etc/fstab.'''
+
+        fstab_original_content = '%s swap swap defaults 0 0\n' % (
+            self.swap_path,)
+        fstab_expected_content = fstab_original_content
+
+        with open(cc_mounts.FSTAB_PATH, 'w') as fd:
+            fd.write(fstab_original_content)
+
+        cc_mounts.handle(None, {}, self.mock_cloud, self.mock_log, [])
+
+        with open(cc_mounts.FSTAB_PATH, 'r') as fd:
+            fstab_new_content = fd.read()
+            self.assertEqual(fstab_expected_content, fstab_new_content)
+
+    def test_fstab_alternate_swap_device_already_configured(self):
+        '''Ensure that cloud-init will add a discovered swap device to
+        /etc/fstab even when there exists a swap definition on another
+        device.'''
+
+        fstab_original_content = '/dev/sdc1 swap swap defaults 0 0\n'
+        fstab_expected_content = (
+            fstab_original_content +
+            '%s\tnone\tswap\tsw,comment=cloudconfig\t'
+            '0\t0\n' % (self.swap_path,)
+        )
+
+        with open(cc_mounts.FSTAB_PATH, 'w') as fd:
+            fd.write(fstab_original_content)
+
+        cc_mounts.handle(None, {}, self.mock_cloud, self.mock_log, [])
+
+        with open(cc_mounts.FSTAB_PATH, 'r') as fd:
+            fstab_new_content = fd.read()
+            self.assertEqual(fstab_expected_content, fstab_new_content)
+
 # vi: ts=4 expandtab
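
Note on the three fstab tests above: they all assert the same rendered
entry shape. For reference, the swap line cc_mounts is expected to append,
built by hand here (illustrative only):

    swap_path = '/dev/sdb1'
    entry = '\t'.join(
        [swap_path, 'none', 'swap', 'sw,comment=cloudconfig', '0', '0'])
    # -> '/dev/sdb1\tnone\tswap\tsw,comment=cloudconfig\t0\t0'
    # The comment=cloudconfig option marks entries cloud-init owns.
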
diff --git a/tests/unittests/test_handler/test_handler_ntp.py b/tests/unittests/test_handler/test_handler_ntp.py
index 695897c..6fe3659 100644
--- a/tests/unittests/test_handler/test_handler_ntp.py
+++ b/tests/unittests/test_handler/test_handler_ntp.py
@@ -4,20 +4,21 @@ from cloudinit.config import cc_ntp
 from cloudinit.sources import DataSourceNone
 from cloudinit import (distros, helpers, cloud, util)
 from cloudinit.tests.helpers import (
-    FilesystemMockingTestCase, mock, skipUnlessJsonSchema)
+    CiTestCase, FilesystemMockingTestCase, mock, skipUnlessJsonSchema)
 
 
+import copy
 import os
 from os.path import dirname
 import shutil
 
-NTP_TEMPLATE = b"""\
+NTP_TEMPLATE = """\
 ## template: jinja
 servers {{servers}}
 pools {{pools}}
 """
 
-TIMESYNCD_TEMPLATE = b"""\
+TIMESYNCD_TEMPLATE = """\
 ## template:jinja
 [Time]
 {% if servers or pools -%}
@@ -32,56 +33,88 @@ class TestNtp(FilesystemMockingTestCase):
 
     def setUp(self):
         super(TestNtp, self).setUp()
-        self.subp = util.subp
         self.new_root = self.tmp_dir()
+        self.add_patch('cloudinit.util.system_is_snappy', 'm_snappy')
+        self.m_snappy.return_value = False
+        self.add_patch('cloudinit.util.system_info', 'm_sysinfo')
+        self.m_sysinfo.return_value = {'dist': ('Distro', '99.1', 'Codename')}
 
-    def _get_cloud(self, distro):
-        self.patchUtils(self.new_root)
+    def _get_cloud(self, distro, sys_cfg=None):
+        self.new_root = self.reRoot(root=self.new_root)
         paths = helpers.Paths({'templates_dir': self.new_root})
         cls = distros.fetch(distro)
-        mydist = cls(distro, {}, paths)
-        myds = DataSourceNone.DataSourceNone({}, mydist, paths)
-        return cloud.Cloud(myds, paths, {}, mydist, None)
+        if not sys_cfg:
+            sys_cfg = {}
+        mydist = cls(distro, sys_cfg, paths)
+        myds = DataSourceNone.DataSourceNone(sys_cfg, mydist, paths)
+        return cloud.Cloud(myds, paths, sys_cfg, mydist, None)
+
+    def _get_template_path(self, template_name, distro, basepath=None):
+        # ntp.conf.{distro} -> ntp.conf.debian.tmpl
+        template_fn = '{0}.tmpl'.format(
+            template_name.replace('{distro}', distro))
+        if not basepath:
+            basepath = self.new_root
+        path = os.path.join(basepath, template_fn)
+        return path
+
+    def _generate_template(self, template=None):
+        if not template:
+            template = NTP_TEMPLATE
+        confpath = os.path.join(self.new_root, 'client.conf')
+        template_fn = os.path.join(self.new_root, 'client.conf.tmpl')
+        util.write_file(template_fn, content=template)
+        return (confpath, template_fn)
+
+    def _mock_ntp_client_config(self, client=None, distro=None):
+        if not client:
+            client = 'ntp'
+        if not distro:
+            distro = 'ubuntu'
+        dcfg = cc_ntp.distro_ntp_client_configs(distro)
+        if client == 'systemd-timesyncd':
+            template = TIMESYNCD_TEMPLATE
+        else:
+            template = NTP_TEMPLATE
+        (confpath, _template_fn) = self._generate_template(template=template)
+        ntpconfig = copy.deepcopy(dcfg[client])
+        ntpconfig['confpath'] = confpath
+        ntpconfig['template_name'] = os.path.basename(confpath)
+        return ntpconfig
 
     @mock.patch("cloudinit.config.cc_ntp.util")
     def test_ntp_install(self, mock_util):
-        """ntp_install installs via install_func when check_exe is absent."""
+        """ntp_install_client runs install_func when check_exe is absent."""
         mock_util.which.return_value = None  # check_exe not found.
         install_func = mock.MagicMock()
-        cc_ntp.install_ntp(install_func, packages=['ntpx'], check_exe='ntpdx')
-
+        cc_ntp.install_ntp_client(install_func,
+                                  packages=['ntpx'], check_exe='ntpdx')
         mock_util.which.assert_called_with('ntpdx')
         install_func.assert_called_once_with(['ntpx'])
 
     @mock.patch("cloudinit.config.cc_ntp.util")
     def test_ntp_install_not_needed(self, mock_util):
-        """ntp_install doesn't attempt install when check_exe is found."""
-        mock_util.which.return_value = ["/usr/sbin/ntpd"]  # check_exe found.
+        """ntp_install_client doesn't install when check_exe is found."""
+        client = 'chrony'
+        mock_util.which.return_value = [client]  # check_exe found.
         install_func = mock.MagicMock()
-        cc_ntp.install_ntp(install_func, packages=['ntp'], check_exe='ntpd')
+        cc_ntp.install_ntp_client(install_func, packages=[client],
+                                  check_exe=client)
         install_func.assert_not_called()
 
     @mock.patch("cloudinit.config.cc_ntp.util")
     def test_ntp_install_no_op_with_empty_pkg_list(self, mock_util):
-        """ntp_install calls install_func with empty list"""
+        """ntp_install_client runs install_func with empty list"""
         mock_util.which.return_value = None  # check_exe not found
         install_func = mock.MagicMock()
-        cc_ntp.install_ntp(install_func, packages=[], check_exe='timesyncd')
+        cc_ntp.install_ntp_client(install_func, packages=[],
+                                  check_exe='timesyncd')
         install_func.assert_called_once_with([])
 
-    def test_ntp_rename_ntp_conf(self):
-        """When NTP_CONF exists, rename_ntp moves it."""
-        ntpconf = self.tmp_path("ntp.conf", self.new_root)
-        util.write_file(ntpconf, "")
-        with mock.patch("cloudinit.config.cc_ntp.NTP_CONF", ntpconf):
-            cc_ntp.rename_ntp_conf()
-        self.assertFalse(os.path.exists(ntpconf))
-        self.assertTrue(os.path.exists("{0}.dist".format(ntpconf)))
-
     @mock.patch("cloudinit.config.cc_ntp.util")
     def test_reload_ntp_defaults(self, mock_util):
         """Test service is restarted/reloaded (defaults)"""
-        service = 'ntp'
+        service = 'ntp_service_name'
         cmd = ['service', service, 'restart']
         cc_ntp.reload_ntp(service)
         mock_util.subp.assert_called_with(cmd, capture=True)
@@ -89,193 +122,169 @@ class TestNtp(FilesystemMockingTestCase):
     @mock.patch("cloudinit.config.cc_ntp.util")
     def test_reload_ntp_systemd(self, mock_util):
         """Test service is restarted/reloaded (systemd)"""
-        service = 'ntp'
-        cmd = ['systemctl', 'reload-or-restart', service]
+        service = 'ntp_service_name'
         cc_ntp.reload_ntp(service, systemd=True)
-        mock_util.subp.assert_called_with(cmd, capture=True)
-
-    @mock.patch("cloudinit.config.cc_ntp.util")
-    def test_reload_ntp_systemd_timesycnd(self, mock_util):
-        """Test service is restarted/reloaded (systemd/timesyncd)"""
-        service = 'systemd-timesycnd'
         cmd = ['systemctl', 'reload-or-restart', service]
-        cc_ntp.reload_ntp(service, systemd=True)
         mock_util.subp.assert_called_with(cmd, capture=True)
 
+    def test_ntp_rename_ntp_conf(self):
+        """When NTP_CONF exists, rename_ntp moves it."""
+        ntpconf = self.tmp_path("ntp.conf", self.new_root)
+        util.write_file(ntpconf, "")
+        cc_ntp.rename_ntp_conf(confpath=ntpconf)
+        self.assertFalse(os.path.exists(ntpconf))
+        self.assertTrue(os.path.exists("{0}.dist".format(ntpconf)))
+
     def test_ntp_rename_ntp_conf_skip_missing(self):
         """When NTP_CONF doesn't exist rename_ntp doesn't create a file."""
         ntpconf = self.tmp_path("ntp.conf", self.new_root)
         self.assertFalse(os.path.exists(ntpconf))
-        with mock.patch("cloudinit.config.cc_ntp.NTP_CONF", ntpconf):
-            cc_ntp.rename_ntp_conf()
+        cc_ntp.rename_ntp_conf(confpath=ntpconf)
         self.assertFalse(os.path.exists("{0}.dist".format(ntpconf)))
         self.assertFalse(os.path.exists(ntpconf))
 
-    def test_write_ntp_config_template_from_ntp_conf_tmpl_with_servers(self):
-        """write_ntp_config_template reads content from ntp.conf.tmpl.
-
-        It reads ntp.conf.tmpl if present and renders the value from servers
-        key. When no pools key is defined, template is rendered using an empty
-        list for pools.
-        """
-        distro = 'ubuntu'
-        cfg = {
-            'servers': ['192.168.2.1', '192.168.2.2']
-        }
-        mycloud = self._get_cloud(distro)
-        ntp_conf = self.tmp_path("ntp.conf", self.new_root)  # Doesn't exist
-        # Create ntp.conf.tmpl
-        with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
-            stream.write(NTP_TEMPLATE)
-        with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.write_ntp_config_template(cfg, mycloud, ntp_conf)
-        content = util.read_file_or_url('file://' + ntp_conf).contents
+    def test_write_ntp_config_template_uses_ntp_conf_distro_no_servers(self):
+        """write_ntp_config_template reads from $client.conf.distro.tmpl"""
+        servers = []
+        pools = ['10.0.0.1', '10.0.0.2']
+        (confpath, template_fn) = self._generate_template()
+        mock_path = 'cloudinit.config.cc_ntp.temp_utils._TMPDIR'
+        with mock.patch(mock_path, self.new_root):
+            cc_ntp.write_ntp_config_template('ubuntu',
+                                             servers=servers, pools=pools,
+                                             path=confpath,
+                                             template_fn=template_fn,
+                                             template=None)
         self.assertEqual(
-            "servers ['192.168.2.1', '192.168.2.2']\npools []\n",
-            content.decode())
+            "servers []\npools ['10.0.0.1', '10.0.0.2']\n",
+            util.load_file(confpath))
 
-    def test_write_ntp_config_template_uses_ntp_conf_distro_no_servers(self):
-        """write_ntp_config_template reads content from ntp.conf.distro.tmpl.
+    def test_write_ntp_config_template_defaults_pools_w_empty_lists(self):
+        """write_ntp_config_template defaults pools servers upon empty config.
 
-        It reads ntp.conf.<distro>.tmpl before attempting ntp.conf.tmpl. It
-        renders the value from the keys servers and pools. When no
-        servers value is present, template is rendered using an empty list.
+        When both pools and servers are empty, the default NR_POOL_SERVERS
+        pool names get configured.
         """
         distro = 'ubuntu'
-        cfg = {
-            'pools': ['10.0.0.1', '10.0.0.2']
-        }
-        mycloud = self._get_cloud(distro)
-        ntp_conf = self.tmp_path('ntp.conf', self.new_root)  # Doesn't exist
-        # Create ntp.conf.tmpl which isn't read
-        with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
-            stream.write(b'NOT READ: ntp.conf.<distro>.tmpl is primary')
-        # Create ntp.conf.tmpl.<distro>
-        with open('{0}.{1}.tmpl'.format(ntp_conf, distro), 'wb') as stream:
-            stream.write(NTP_TEMPLATE)
-        with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.write_ntp_config_template(cfg, mycloud, ntp_conf)
-        content = util.read_file_or_url('file://' + ntp_conf).contents
+        pools = cc_ntp.generate_server_names(distro)
+        servers = []
+        (confpath, template_fn) = self._generate_template()
+        mock_path = 'cloudinit.config.cc_ntp.temp_utils._TMPDIR'
+        with mock.patch(mock_path, self.new_root):
+            cc_ntp.write_ntp_config_template(distro,
+                                             servers=servers, pools=pools,
+                                             path=confpath,
+                                             template_fn=template_fn,
+                                             template=None)
         self.assertEqual(
-            "servers []\npools ['10.0.0.1', '10.0.0.2']\n",
-            content.decode())
+            "servers []\npools {0}\n".format(pools),
+            util.load_file(confpath))
 
-    def test_write_ntp_config_template_defaults_pools_when_empty_lists(self):
-        """write_ntp_config_template defaults pools servers upon empty config.
+    def test_defaults_pools_empty_lists_sles(self):
+        """write_ntp_config_template defaults opensuse pools upon empty config.
 
         When both pools and servers are empty, the default NR_POOL_SERVERS
         pool names get configured.
         """
-        distro = 'ubuntu'
-        mycloud = self._get_cloud(distro)
-        ntp_conf = self.tmp_path('ntp.conf', self.new_root)  # Doesn't exist
-        # Create ntp.conf.tmpl
-        with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
-            stream.write(NTP_TEMPLATE)
-        with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.write_ntp_config_template({}, mycloud, ntp_conf)
-        content = util.read_file_or_url('file://' + ntp_conf).contents
-        default_pools = [
-            "{0}.{1}.pool.ntp.org".format(x, distro)
-            for x in range(0, cc_ntp.NR_POOL_SERVERS)]
+        distro = 'sles'
+        default_pools = cc_ntp.generate_server_names(distro)
+        (confpath, template_fn) = self._generate_template()
+
+        cc_ntp.write_ntp_config_template(distro,
+                                         servers=[], pools=[],
+                                         path=confpath,
+                                         template_fn=template_fn,
+                                         template=None)
+        for pool in default_pools:
+            self.assertIn('opensuse', pool)
         self.assertEqual(
             "servers []\npools {0}\n".format(default_pools),
-            content.decode())
+            util.load_file(confpath))
         self.assertIn(
             "Adding distro default ntp pool servers: {0}".format(
                 ",".join(default_pools)),
             self.logs.getvalue())
 
-    @mock.patch("cloudinit.config.cc_ntp.ntp_installable")
-    def test_ntp_handler_mocked_template(self, m_ntp_install):
-        """Test ntp handler renders ubuntu ntp.conf template."""
-        pools = ['0.mycompany.pool.ntp.org', '3.mycompany.pool.ntp.org']
-        servers = ['192.168.23.3', '192.168.23.4']
-        cfg = {
-            'ntp': {
-                'pools': pools,
-                'servers': servers
-            }
-        }
-        mycloud = self._get_cloud('ubuntu')
-        ntp_conf = self.tmp_path('ntp.conf', self.new_root)  # Doesn't exist
-        m_ntp_install.return_value = True
-
-        # Create ntp.conf.tmpl
-        with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
-            stream.write(NTP_TEMPLATE)
-        with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            with mock.patch.object(util, 'which', return_value=None):
-                cc_ntp.handle('notimportant', cfg, mycloud, None, None)
-
-        content = util.read_file_or_url('file://' + ntp_conf).contents
-        self.assertEqual(
-            'servers {0}\npools {1}\n'.format(servers, pools),
-            content.decode())
-
-    @mock.patch("cloudinit.config.cc_ntp.util")
-    def test_ntp_handler_mocked_template_snappy(self, m_util):
-        """Test ntp handler renders timesycnd.conf template on snappy."""
+    def test_timesyncd_template(self):
+        """Test timesycnd template is correct"""
         pools = ['0.mycompany.pool.ntp.org', '3.mycompany.pool.ntp.org']
         servers = ['192.168.23.3', '192.168.23.4']
-        cfg = {
-            'ntp': {
-                'pools': pools,
-                'servers': servers
-            }
-        }
-        mycloud = self._get_cloud('ubuntu')
-        m_util.system_is_snappy.return_value = True
-
-        # Create timesyncd.conf.tmpl
-        tsyncd_conf = self.tmp_path("timesyncd.conf", self.new_root)
-        template = '{0}.tmpl'.format(tsyncd_conf)
-        with open(template, 'wb') as stream:
-            stream.write(TIMESYNCD_TEMPLATE)
-
-        with mock.patch('cloudinit.config.cc_ntp.TIMESYNCD_CONF', tsyncd_conf):
-            cc_ntp.handle('notimportant', cfg, mycloud, None, None)
-
-        content = util.read_file_or_url('file://' + tsyncd_conf).contents
+        (confpath, template_fn) = self._generate_template(
+            template=TIMESYNCD_TEMPLATE)
+        cc_ntp.write_ntp_config_template('ubuntu',
+                                         servers=servers, pools=pools,
+                                         path=confpath,
+                                         template_fn=template_fn,
+                                         template=None)
         self.assertEqual(
             "[Time]\nNTP=%s %s \n" % (" ".join(servers), " ".join(pools)),
-            content.decode())
-
-    def test_ntp_handler_real_distro_templates(self):
-        """Test ntp handler renders the shipped distro ntp.conf templates."""
+            util.load_file(confpath))
+
+    def test_distro_ntp_client_configs(self):
+        """Test we have updated ntp client configs on different distros"""
+        delta = copy.deepcopy(cc_ntp.DISTRO_CLIENT_CONFIG)
+        base = copy.deepcopy(cc_ntp.NTP_CLIENT_CONFIG)
+        # confirm no-delta distros match the base config
+        for distro in cc_ntp.distros:
+            if distro not in delta:
+                result = cc_ntp.distro_ntp_client_configs(distro)
+                self.assertEqual(base, result)
+        # for distros with delta, ensure the merged config values match
+        # what is set in the delta
+        for distro in delta.keys():
+            result = cc_ntp.distro_ntp_client_configs(distro)
+            for client in delta[distro].keys():
+                for key in delta[distro][client].keys():
+                    self.assertEqual(delta[distro][client][key],
+                                     result[client][key])
+
+    def test_ntp_handler_real_distro_ntp_templates(self):
+        """Test ntp handler renders the shipped distro ntp client templates."""
         pools = ['0.mycompany.pool.ntp.org', '3.mycompany.pool.ntp.org']
         servers = ['192.168.23.3', '192.168.23.4']
-        cfg = {
-            'ntp': {
-                'pools': pools,
-                'servers': servers
-            }
-        }
-        ntp_conf = self.tmp_path('ntp.conf', self.new_root)  # Doesn't exist
-        for distro in ('debian', 'ubuntu', 'fedora', 'rhel', 'sles'):
-            mycloud = self._get_cloud(distro)
-            root_dir = dirname(dirname(os.path.realpath(util.__file__)))
-            tmpl_file = os.path.join(
-                '{0}/templates/ntp.conf.{1}.tmpl'.format(root_dir, distro))
-            # Create a copy in our tmp_dir
-            shutil.copy(
-                tmpl_file,
-                os.path.join(self.new_root, 'ntp.conf.%s.tmpl' % distro))
-            with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-                with mock.patch.object(util, 'which', return_value=[True]):
-                    cc_ntp.handle('notimportant', cfg, mycloud, None, None)
-
-            content = util.read_file_or_url('file://' + ntp_conf).contents
-            expected_servers = '\n'.join([
-                'server {0} iburst'.format(server) for server in servers])
-            self.assertIn(
-                expected_servers, content.decode(),
-                'failed to render ntp.conf for distro:{0}'.format(distro))
-            expected_pools = '\n'.join([
-                'pool {0} iburst'.format(pool) for pool in pools])
-            self.assertIn(
-                expected_pools, content.decode(),
-                'failed to render ntp.conf for distro:{0}'.format(distro))
+        for client in ['ntp', 'systemd-timesyncd', 'chrony']:
+            for distro in cc_ntp.distros:
+                distro_cfg = cc_ntp.distro_ntp_client_configs(distro)
+                ntpclient = distro_cfg[client]
+                confpath = (
+                    os.path.join(self.new_root, ntpclient.get('confpath')[1:]))
+                template = ntpclient.get('template_name')
+                # find sourcetree template file
+                root_dir = (
+                    dirname(dirname(os.path.realpath(util.__file__))) +
+                    '/templates')
+                source_fn = self._get_template_path(template, distro,
+                                                    basepath=root_dir)
+                template_fn = self._get_template_path(template, distro)
+                # don't fail if cloud-init doesn't have a template for
+                # a distro,client pair
+                if not os.path.exists(source_fn):
+                    continue
+                # Create a copy in our tmp_dir
+                shutil.copy(source_fn, template_fn)
+                cc_ntp.write_ntp_config_template(distro, servers=servers,
+                                                 pools=pools, path=confpath,
+                                                 template_fn=template_fn)
+                content = util.load_file(confpath)
+                if client in ['ntp', 'chrony']:
+                    expected_servers = '\n'.join([
+                        'server {0} iburst'.format(srv) for srv in servers])
+                    print('distro=%s client=%s' % (distro, client))
+                    self.assertIn(expected_servers, content,
+                                  ('failed to render {0} conf'
+                                   ' for distro:{1}'.format(client, distro)))
+                    expected_pools = '\n'.join([
+                        'pool {0} iburst'.format(pool) for pool in pools])
+                    self.assertIn(expected_pools, content,
+                                  ('failed to render {0} conf'
+                                   ' for distro:{1}'.format(client, distro)))
+                elif client == 'systemd-timesyncd':
+                    expected_content = (
+                        "# cloud-init generated file\n" +
+                        "# See timesyncd.conf(5) for details.\n\n" +
+                        "[Time]\nNTP=%s %s \n" % (" ".join(servers),
+                                                  " ".join(pools)))
+                    self.assertEqual(expected_content, content)
 
     def test_no_ntpcfg_does_nothing(self):
         """When no ntp section is defined handler logs a warning and noops."""
@@ -285,95 +294,96 @@ class TestNtp(FilesystemMockingTestCase):
             'not present or disabled by cfg\n',
             self.logs.getvalue())
 
-    def test_ntp_handler_schema_validation_allows_empty_ntp_config(self):
+    @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
+    def test_ntp_handler_schema_validation_allows_empty_ntp_config(self,
+                                                                   m_select):
         """Ntp schema validation allows for an empty ntp: configuration."""
         valid_empty_configs = [{'ntp': {}}, {'ntp': None}]
-        distro = 'ubuntu'
-        cc = self._get_cloud(distro)
-        ntp_conf = os.path.join(self.new_root, 'ntp.conf')
-        with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
-            stream.write(NTP_TEMPLATE)
         for valid_empty_config in valid_empty_configs:
-            with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-                cc_ntp.handle('cc_ntp', valid_empty_config, cc, None, [])
-            with open(ntp_conf) as stream:
-                content = stream.read()
-            default_pools = [
-                "{0}.{1}.pool.ntp.org".format(x, distro)
-                for x in range(0, cc_ntp.NR_POOL_SERVERS)]
-            self.assertEqual(
-                "servers []\npools {0}\n".format(default_pools),
-                content)
-        self.assertNotIn('Invalid config:', self.logs.getvalue())
+            for distro in cc_ntp.distros:
+                mycloud = self._get_cloud(distro)
+                ntpconfig = self._mock_ntp_client_config(distro=distro)
+                confpath = ntpconfig['confpath']
+                m_select.return_value = ntpconfig
+                cc_ntp.handle('cc_ntp', valid_empty_config, mycloud, None, [])
+                pools = cc_ntp.generate_server_names(mycloud.distro.name)
+                self.assertEqual(
+                    "servers []\npools {0}\n".format(pools),
+                    util.load_file(confpath))
+            self.assertNotIn('Invalid config:', self.logs.getvalue())
 
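Review note: generate_server_names itself is outside this hunk, but the assertion it replaces (deleted above) shows the expected expansion: one pool hostname per index, scoped by distro. A sketch inferred from those removed lines:

    # Inferred from the removed lines above; assumes NR_POOL_SERVERS is
    # unchanged by this refactor.
    from cloudinit.config import cc_ntp

    distro = 'ubuntu'
    pools = ['{0}.{1}.pool.ntp.org'.format(x, distro)
             for x in range(0, cc_ntp.NR_POOL_SERVERS)]
    # e.g. ['0.ubuntu.pool.ntp.org', '1.ubuntu.pool.ntp.org', ...]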
     @skipUnlessJsonSchema()
-    def test_ntp_handler_schema_validation_warns_non_string_item_type(self):
+    @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
+    def test_ntp_handler_schema_validation_warns_non_string_item_type(self,
+                                                                      m_sel):
         """Ntp schema validation warns of non-strings in pools or servers.
 
         Schema validation is not strict, so ntp config is still rendered.
         """
         invalid_config = {'ntp': {'pools': [123], 'servers': ['valid', None]}}
-        cc = self._get_cloud('ubuntu')
-        ntp_conf = os.path.join(self.new_root, 'ntp.conf')
-        with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
-            stream.write(NTP_TEMPLATE)
-        with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.handle('cc_ntp', invalid_config, cc, None, [])
-        self.assertIn(
-            "Invalid config:\nntp.pools.0: 123 is not of type 'string'\n"
-            "ntp.servers.1: None is not of type 'string'",
-            self.logs.getvalue())
-        with open(ntp_conf) as stream:
-            content = stream.read()
-        self.assertEqual("servers ['valid', None]\npools [123]\n", content)
+        for distro in cc_ntp.distros:
+            mycloud = self._get_cloud(distro)
+            ntpconfig = self._mock_ntp_client_config(distro=distro)
+            confpath = ntpconfig['confpath']
+            m_sel.return_value = ntpconfig
+            cc_ntp.handle('cc_ntp', invalid_config, mycloud, None, [])
+            self.assertIn(
+                "Invalid config:\nntp.pools.0: 123 is not of type 'string'\n"
+                "ntp.servers.1: None is not of type 'string'",
+                self.logs.getvalue())
+            self.assertEqual("servers ['valid', None]\npools [123]\n",
+                             util.load_file(confpath))
 
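Review note: the rendered output asserted above ("servers ['valid', None]\npools [123]\n") implies the mocked client config points at a trivial template that echoes both values. The template itself sits in a truncated part of the diff; presumably something along these lines:

    # Assumed test template (actual definition truncated from this diff);
    # cloud-init's templater strips the '## template:' header line.
    NTP_TEMPLATE = (
        '## template: jinja\n'
        'servers {{servers}}\n'
        'pools {{pools}}\n')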
     @skipUnlessJsonSchema()
-    def test_ntp_handler_schema_validation_warns_of_non_array_type(self):
+    @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
+    def test_ntp_handler_schema_validation_warns_of_non_array_type(self,
+                                                                   m_select):
         """Ntp schema validation warns of non-array pools or servers types.
 
         Schema validation is not strict, so ntp config is still rendered.
         """
         invalid_config = {'ntp': {'pools': 123, 'servers': 'non-array'}}
-        cc = self._get_cloud('ubuntu')
-        ntp_conf = os.path.join(self.new_root, 'ntp.conf')
-        with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
-            stream.write(NTP_TEMPLATE)
-        with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.handle('cc_ntp', invalid_config, cc, None, [])
-        self.assertIn(
-            "Invalid config:\nntp.pools: 123 is not of type 'array'\n"
-            "ntp.servers: 'non-array' is not of type 'array'",
-            self.logs.getvalue())
-        with open(ntp_conf) as stream:
-            content = stream.read()
-        self.assertEqual("servers non-array\npools 123\n", content)
+
+        for distro in cc_ntp.distros:
+            mycloud = self._get_cloud(distro)
+            ntpconfig = self._mock_ntp_client_config(distro=distro)
+            confpath = ntpconfig['confpath']
+            m_select.return_value = ntpconfig
+            cc_ntp.handle('cc_ntp', invalid_config, mycloud, None, [])
+            self.assertIn(
+                "Invalid config:\nntp.pools: 123 is not of type 'array'\n"
+                "ntp.servers: 'non-array' is not of type 'array'",
+                self.logs.getvalue())
+            self.assertEqual("servers non-array\npools 123\n",
+                             util.load_file(confpath))
 
     @skipUnlessJsonSchema()
-    def test_ntp_handler_schema_validation_warns_invalid_key_present(self):
+    @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
+    def test_ntp_handler_schema_validation_warns_invalid_key_present(self,
+                                                                     m_select):
         """Ntp schema validation warns of invalid keys present in ntp config.
 
         Schema validation is not strict, so ntp config is still rendered.
         """
         invalid_config = {
             'ntp': {'invalidkey': 1, 'pools': ['0.mycompany.pool.ntp.org']}}
-        cc = self._get_cloud('ubuntu')
-        ntp_conf = os.path.join(self.new_root, 'ntp.conf')
-        with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
-            stream.write(NTP_TEMPLATE)
-        with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.handle('cc_ntp', invalid_config, cc, None, [])
-        self.assertIn(
-            "Invalid config:\nntp: Additional properties are not allowed "
-            "('invalidkey' was unexpected)",
-            self.logs.getvalue())
-        with open(ntp_conf) as stream:
-            content = stream.read()
-        self.assertEqual(
-            "servers []\npools ['0.mycompany.pool.ntp.org']\n",
-            content)
+        for distro in cc_ntp.distros:
+            mycloud = self._get_cloud(distro)
+            ntpconfig = self._mock_ntp_client_config(distro=distro)
+            confpath = ntpconfig['confpath']
+            m_select.return_value = ntpconfig
+            cc_ntp.handle('cc_ntp', invalid_config, mycloud, None, [])
+            self.assertIn(
+                "Invalid config:\nntp: Additional properties are not allowed "
+                "('invalidkey' was unexpected)",
+                self.logs.getvalue())
+            self.assertEqual(
+                "servers []\npools ['0.mycompany.pool.ntp.org']\n",
+                util.load_file(confpath))
 
     @skipUnlessJsonSchema()
-    def test_ntp_handler_schema_validation_warns_of_duplicates(self):
+    @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
+    def test_ntp_handler_schema_validation_warns_of_duplicates(self, m_select):
         """Ntp schema validation warns of duplicates in servers or pools.
 
         Schema validation is not strict, so ntp config is still rendered.
@@ -381,74 +391,330 @@ class TestNtp(FilesystemMockingTestCase):
         invalid_config = {
             'ntp': {'pools': ['0.mypool.org', '0.mypool.org'],
                     'servers': ['10.0.0.1', '10.0.0.1']}}
-        cc = self._get_cloud('ubuntu')
-        ntp_conf = os.path.join(self.new_root, 'ntp.conf')
-        with open('{0}.tmpl'.format(ntp_conf), 'wb') as stream:
-            stream.write(NTP_TEMPLATE)
-        with mock.patch('cloudinit.config.cc_ntp.NTP_CONF', ntp_conf):
-            cc_ntp.handle('cc_ntp', invalid_config, cc, None, [])
-        self.assertIn(
-            "Invalid config:\nntp.pools: ['0.mypool.org', '0.mypool.org'] has "
-            "non-unique elements\nntp.servers: ['10.0.0.1', '10.0.0.1'] has "
-            "non-unique elements",
-            self.logs.getvalue())
-        with open(ntp_conf) as stream:
-            content = stream.read()
-        self.assertEqual(
-            "servers ['10.0.0.1', '10.0.0.1']\n"
-            "pools ['0.mypool.org', '0.mypool.org']\n",
-            content)
+        for distro in cc_ntp.distros:
+            mycloud = self._get_cloud(distro)
+            ntpconfig = self._mock_ntp_client_config(distro=distro)
+            confpath = ntpconfig['confpath']
+            m_select.return_value = ntpconfig
+            cc_ntp.handle('cc_ntp', invalid_config, mycloud, None, [])
+            self.assertIn(
+                "Invalid config:\nntp.pools: ['0.mypool.org', '0.mypool.org']"
+                " has non-unique elements\nntp.servers: "
+                "['10.0.0.1', '10.0.0.1'] has non-unique elements",
+                self.logs.getvalue())
+            self.assertEqual(
+                "servers ['10.0.0.1', '10.0.0.1']\n"
+                "pools ['0.mypool.org', '0.mypool.org']\n",
+                util.load_file(confpath))
 
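Review note: _mock_ntp_client_config is used throughout these tests, but its definition falls in the truncated portion of the diff. From the call sites, it must return a client-config dict with confpath redirected under the test's temp root. A hypothetical reconstruction, every detail of which is assumed rather than copied from the real helper:

    # Hypothetical sketch of the truncated TestNtp helper; key names
    # (confpath, service_name) come from the call sites above, the body
    # is assumed.
    import copy
    import os

    from cloudinit.config import cc_ntp

    def _mock_ntp_client_config(self, client='ntp', distro='ubuntu'):
        cfg = copy.deepcopy(
            cc_ntp.distro_ntp_client_configs(distro)[client])
        # write under the test's temp root instead of /etc (assumed)
        cfg['confpath'] = os.path.join(self.new_root,
                                       cfg['confpath'].lstrip('/'))
        return cfg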
-    @mock.patch("cloudinit.config.cc_ntp.ntp_installable")
-    def test_ntp_handler_timesyncd(self, m_ntp_install):
+    @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
+    def test_ntp_handler_timesyncd(self, m_select):
         """Test ntp handler configures timesyncd"""
-        m_ntp_install.return_value = False
-        distro = 'ubuntu'
-        cfg = {
-            'servers': ['192.168.2.1', '192.168.2.2'],
-            'pools': ['0.mypool.org'],
-        }
-        mycloud = self._get_cloud(distro)
-        tsyncd_conf = self.tmp_path("timesyncd.conf", self.new_root)
-        # Create timesyncd.conf.tmpl
-        template = '{0}.tmpl'.format(tsyncd_conf)
-        print(template)
-        with open(template, 'wb') as stream:
-            stream.write(TIMESYNCD_TEMPLATE)
-        with mock.patch('cloudinit.config.cc_ntp.TIMESYNCD_CONF', tsyncd_conf):
-            cc_ntp.write_ntp_config_template(cfg, mycloud, tsyncd_conf,
-                                             template='timesyncd.conf')
-
-        content = util.read_file_or_url('file://' + tsyncd_conf).contents
-        self.assertEqual(
-            "[Time]\nNTP=192.168.2.1 192.168.2.2 0.mypool.org \n",
-            content.decode())
+        servers = ['192.168.2.1', '192.168.2.2']
+        pools = ['0.mypool.org']
+        cfg = {'ntp': {'servers': servers, 'pools': pools}}
+        client = 'systemd-timesyncd'
+        for distro in cc_ntp.distros:
+            mycloud = self._get_cloud(distro)
+            ntpconfig = self._mock_ntp_client_config(distro=distro,
+                                                     client=client)
+            confpath = ntpconfig['confpath']
+            m_select.return_value = ntpconfig
+            cc_ntp.handle('cc_ntp', cfg, mycloud, None, [])
+            self.assertEqual(
+                "[Time]\nNTP=192.168.2.1 192.168.2.2 0.mypool.org \n",
+                util.load_file(confpath))
+
+    @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
+    def test_ntp_handler_enabled_false(self, m_select):
+        """Test ntp handler does not run if enabled: false """
+        cfg = {'ntp': {'enabled': False}}
+        for distro in cc_ntp.distros:
+            mycloud = self._get_cloud(distro)
+            cc_ntp.handle('notimportant', cfg, mycloud, None, None)
+            self.assertEqual(0, m_select.call_count)
+
+    @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
+    @mock.patch("cloudinit.distros.Distro.uses_systemd")
+    def test_ntp_the_whole_package(self, m_sysd, m_select):
+        """Test enabled config renders template, and restarts service """
+        cfg = {'ntp': {'enabled': True}}
+        for distro in cc_ntp.distros:
+            mycloud = self._get_cloud(distro)
+            ntpconfig = self._mock_ntp_client_config(distro=distro)
+            confpath = ntpconfig['confpath']
+            service_name = ntpconfig['service_name']
+            m_select.return_value = ntpconfig
+            pools = cc_ntp.generate_server_names(mycloud.distro.name)
+            # force use of the systemd code path
+            m_sysd.return_value = True
+            with mock.patch('cloudinit.config.cc_ntp.util') as m_util:
+                # allow use of util.mergemanydict
+                m_util.mergemanydict.side_effect = util.mergemanydict
+                # default client is present
+                m_util.which.return_value = True
+                # use the config 'enabled' value
+                m_util.is_false.return_value = util.is_false(
+                    cfg['ntp']['enabled'])
+                cc_ntp.handle('notimportant', cfg, mycloud, 
