cloud-init-dev team mailing list archive

[Merge] ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic


Chad Smith has proposed merging ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic.

Commit message:
New upstream snapshot of cloud-init version 18.3 for SRU into Bionic.

Requested reviews:
  cloud-init commiters (cloud-init-dev)
Related bugs:
  Bug #1768600 in cloud-init: "UTF-8 support in User Data (text/x-shellscript) is broken"
  https://bugs.launchpad.net/cloud-init/+bug/1768600
  Bug #1770462 in cloud-init: "Allow empty stages"
  https://bugs.launchpad.net/cloud-init/+bug/1770462
  Bug #1771468 in cloud-init: "Allow a way to explicitly disable sudo for a user"
  https://bugs.launchpad.net/cloud-init/+bug/1771468
  Bug #1776701 in cloud-init: "ec2: xenial unnecessary openstack datasource probes during discovery"
  https://bugs.launchpad.net/cloud-init/+bug/1776701
  Bug #1776958 in cloud-init: "error creating lxdbr0."
  https://bugs.launchpad.net/cloud-init/+bug/1776958
  Bug #1777743 in cloud-init: "Release 18.3"
  https://bugs.launchpad.net/cloud-init/+bug/1777743

For more details, see:
https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348311
-- 
Your team cloud-init commiters is requested to review the proposed merge of ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic.
diff --git a/ChangeLog b/ChangeLog
index daa7ccf..72c5287 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,117 @@
+18.3:
+ - docs: represent sudo:false in docs for user_groups config module
+ - Explicitly prevent `sudo` access for user module
+   [Jacob Bednarz] (LP: #1771468)
+ - lxd: Delete default network and detach device if lxd-init created them.
+   (LP: #1776958)
+ - openstack: avoid unneeded metadata probe on non-openstack platforms
+   (LP: #1776701)
+ - stages: fix tracebacks if a module stage is undefined or empty
+   [Robert Schweikert] (LP: #1770462)
+ - Be more safe on string/bytes when writing multipart user-data to disk.
+   (LP: #1768600)
+ - Fix get_proc_env for pids that have non-utf8 content in environment.
+   (LP: #1775371)
+ - tests: fix salt_minion integration test on bionic and later
+ - tests: provide human-readable integration test summary when --verbose
+ - tests: skip chrony integration tests on lxd running artful or older
+ - test: add optional --preserve-instance arg to integration tests
+ - netplan: fix mtu if provided by network config for all rendered types
+   (LP: #1774666)
+ - tests: remove pip install workarounds for pylxd, take upstream fix.
+ - subp: support combine_capture argument.
+ - tests: ordered tox dependencies for pylxd install
+ - util: add get_linux_distro function to replace platform.dist
+   [Robert Schweikert] (LP: #1745235)
+ - pyflakes: fix unused variable references identified by pyflakes 2.0.0.
+ - Do not use the systemd_prefix macro, not available in this environment
+   [Robert Schweikert]
+ - doc: Add config info to ec2, openstack and cloudstack datasource docs
+ - Enable SmartOS network metadata to work with netplan via per-subnet
+   routes [Dan McDonald] (LP: #1763512)
+ - openstack: Allow discovery in init-local using dhclient in a sandbox.
+   (LP: #1749717)
+ - tests: Avoid using https in httpretty, improve HttPretty test case.
+   (LP: #1771659)
+ - yaml_load/schema: Add invalid line and column nums to error message
+ - Azure: Ignore NTFS mount errors when checking ephemeral drive
+   [Paul Meyer]
+ - packages/brpm: Get proper dependencies for cmdline distro.
+ - packages: Make rpm spec files patch in package version like in debs.
+ - tools/run-container: replace tools/run-centos with more generic.
+ - Update version.version_string to contain packaged version. (LP: #1770712)
+ - cc_mounts: Do not add devices to fstab that are already present.
+   [Lars Kellogg-Stedman]
+ - ds-identify: ensure that we have certain tokens in PATH. (LP: #1771382)
+ - tests: enable Ubuntu Cosmic in integration tests [Joshua Powers]
+ - read_file_or_url: move to url_helper, fix bug in its FileResponse.
+ - cloud_tests: help pylint [Ryan Harper]
+ - flake8: fix flake8 errors in previous commit.
+ - typos: Fix spelling mistakes in cc_mounts.py log messages [Stephen Ford]
+ - tests: restructure SSH and initial connections [Joshua Powers]
+ - ds-identify: recognize container-other as a container, test SmartOS.
+ - cloud-config.service: run After snap.seeded.service. (LP: #1767131)
+ - tests: do not rely on host /proc/cmdline in test_net.py
+   [Lars Kellogg-Stedman] (LP: #1769952)
+ - ds-identify: Remove dupe call to is_ds_enabled, improve debug message.
+ - SmartOS: fix get_interfaces for nics that do not have addr_assign_type.
+ - tests: fix package and ca_cert cloud_tests on bionic
+   (LP: #1769985)
+ - ds-identify: make shellcheck 0.4.6 happy with ds-identify.
+ - pycodestyle: Fix deprecated string literals, move away from flake8.
+ - azure: Add reported ready marker file. [Joshua Chan] (LP: #1765214)
+ - tools: Support adding a release suffix through packages/bddeb.
+ - FreeBSD: Invoke growfs on ufs filesystems such that it does not prompt.
+   [Harm Weites] (LP: #1404745)
+ - tools: Re-use the orig tarball in packages/bddeb if it is around.
+ - netinfo: fix netdev_pformat when a nic does not have an address
+   assigned. (LP: #1766302)
+ - collect-logs: add -v flag, write to stderr, limit journal to single
+   boot. (LP: #1766335)
+ - IBMCloud: Disable config-drive and nocloud only if IBMCloud is enabled.
+   (LP: #1766401)
+ - Add reporting events and log_time around early source of blocking time
+   [Ryan Harper]
+ - IBMCloud: recognize provisioning environment during debug boots.
+   (LP: #1767166)
+ - net: detect unstable network names and trigger a settle if needed
+   [Ryan Harper] (LP: #1766287)
+ - IBMCloud: improve documentation in datasource.
+ - sysconfig: dhcp6 subnet type should not imply dhcpv4 [Vitaly Kuznetsov]
+ - packages/debian/control.in: add missing dependency on iproute2.
+   (LP: #1766711)
+ - DataSourceSmartOS: add locking of serial device.
+   [Mike Gerdts] (LP: #1746605)
+ - DataSourceSmartOS: sdc:hostname is ignored [Mike Gerdts] (LP: #1765085)
+ - DataSourceSmartOS: list() should always return a list
+   [Mike Gerdts] (LP: #1763480)
+ - schema: in validation, raise ImportError if strict but no jsonschema.
+ - set_passwords: Add newline to end of sshd config, only restart if
+   updated. (LP: #1677205)
+ - pylint: pay attention to unused variable warnings.
+ - doc: Add documentation for AliYun datasource. [Junjie Wang]
+ - Schema: do not warn on duplicate items in commands. (LP: #1764264)
+ - net: Depend on iproute2's ip instead of net-tools ifconfig or route
+ - DataSourceSmartOS: fix hang when metadata service is down
+   [Mike Gerdts] (LP: #1667735)
+ - DataSourceSmartOS: change default fs on ephemeral disk from ext3 to
+   ext4. [Mike Gerdts] (LP: #1763511)
+ - pycodestyle: Fix invalid escape sequences in string literals.
+ - Implement bash completion script for cloud-init command line
+   [Ryan Harper]
+ - tools: Fix make-tarball cli tool usage for development
+ - renderer: support unicode in render_from_file.
+ - Implement ntp client spec with auto support for distro selection
+   [Ryan Harper] (LP: #1749722)
+ - Apport: add Brightbox, IBM, LXD, and OpenTelekomCloud to list of clouds.
+ - tests: fix ec2 integration network metadata validation
+ - tests: fix integration tests to support lxd 3.0 release
+ - correct documentation to match correct attribute name usage.
+   [Dominic Schlegel] (LP: #1420018)
+ - cc_resizefs, util: handle no /dev/zfs [Ryan Harper]
+ - doc: Fix links in OpenStack datasource documentation.
+   [Dominic Schlegel] (LP: #1721660)
+
 18.2:
  - Hetzner: Exit early if dmi system-manufacturer is not Hetzner.
  - Add missing dependency on isc-dhcp-client to trunk ubuntu packaging.
diff --git a/cloudinit/cmd/devel/logs.py b/cloudinit/cmd/devel/logs.py
index 35ca478..df72520 100644
--- a/cloudinit/cmd/devel/logs.py
+++ b/cloudinit/cmd/devel/logs.py
@@ -11,6 +11,7 @@ from cloudinit.temp_utils import tempdir
 from datetime import datetime
 import os
 import shutil
+import sys
 
 
 CLOUDINIT_LOGS = ['/var/log/cloud-init.log', '/var/log/cloud-init-output.log']
@@ -31,6 +32,8 @@ def get_parser(parser=None):
         parser = argparse.ArgumentParser(
             prog='collect-logs',
             description='Collect and tar all cloud-init debug info')
+    parser.add_argument('--verbose', '-v', action='count', default=0,
+                        dest='verbosity', help="Be more verbose.")
     parser.add_argument(
         "--tarfile", '-t', default='cloud-init.tar.gz',
         help=('The tarfile to create containing all collected logs.'
@@ -43,17 +46,33 @@ def get_parser(parser=None):
     return parser
 
 
-def _write_command_output_to_file(cmd, filename):
+def _write_command_output_to_file(cmd, filename, msg, verbosity):
     """Helper which runs a command and writes output or error to filename."""
     try:
         out, _ = subp(cmd)
     except ProcessExecutionError as e:
         write_file(filename, str(e))
+        _debug("collecting %s failed.\n" % msg, 1, verbosity)
     else:
         write_file(filename, out)
+        _debug("collected %s\n" % msg, 1, verbosity)
+        return out
 
 
-def collect_logs(tarfile, include_userdata):
+def _debug(msg, level, verbosity):
+    if level <= verbosity:
+        sys.stderr.write(msg)
+
+
+def _collect_file(path, out_dir, verbosity):
+    if os.path.isfile(path):
+        copy(path, out_dir)
+        _debug("collected file: %s\n" % path, 1, verbosity)
+    else:
+        _debug("file %s did not exist\n" % path, 2, verbosity)
+
+
+def collect_logs(tarfile, include_userdata, verbosity=0):
     """Collect all cloud-init logs and tar them up into the provided tarfile.
 
     @param tarfile: The path of the tar-gzipped file to create.
@@ -64,28 +83,46 @@ def collect_logs(tarfile, include_userdata):
     log_dir = 'cloud-init-logs-{0}'.format(date)
     with tempdir(dir='/tmp') as tmp_dir:
         log_dir = os.path.join(tmp_dir, log_dir)
-        _write_command_output_to_file(
+        version = _write_command_output_to_file(
+            ['cloud-init', '--version'],
+            os.path.join(log_dir, 'version'),
+            "cloud-init --version", verbosity)
+        dpkg_ver = _write_command_output_to_file(
             ['dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'],
-            os.path.join(log_dir, 'version'))
+            os.path.join(log_dir, 'dpkg-version'),
+            "dpkg version", verbosity)
+        if not version:
+            version = dpkg_ver if dpkg_ver else "not-available"
+        _debug("collected cloud-init version: %s\n" % version, 1, verbosity)
         _write_command_output_to_file(
-            ['dmesg'], os.path.join(log_dir, 'dmesg.txt'))
+            ['dmesg'], os.path.join(log_dir, 'dmesg.txt'),
+            "dmesg output", verbosity)
         _write_command_output_to_file(
-            ['journalctl', '-o', 'short-precise'],
-            os.path.join(log_dir, 'journal.txt'))
+            ['journalctl', '--boot=0', '-o', 'short-precise'],
+            os.path.join(log_dir, 'journal.txt'),
+            "systemd journal of current boot", verbosity)
+
         for log in CLOUDINIT_LOGS:
-            copy(log, log_dir)
+            _collect_file(log, log_dir, verbosity)
         if include_userdata:
-            copy(USER_DATA_FILE, log_dir)
+            _collect_file(USER_DATA_FILE, log_dir, verbosity)
         run_dir = os.path.join(log_dir, 'run')
         ensure_dir(run_dir)
-        shutil.copytree(CLOUDINIT_RUN_DIR, os.path.join(run_dir, 'cloud-init'))
+        if os.path.exists(CLOUDINIT_RUN_DIR):
+            shutil.copytree(CLOUDINIT_RUN_DIR,
+                            os.path.join(run_dir, 'cloud-init'))
+            _debug("collected dir %s\n" % CLOUDINIT_RUN_DIR, 1, verbosity)
+        else:
+            _debug("directory '%s' did not exist\n" % CLOUDINIT_RUN_DIR, 1,
+                   verbosity)
         with chdir(tmp_dir):
             subp(['tar', 'czvf', tarfile, log_dir.replace(tmp_dir + '/', '')])
+    sys.stderr.write("Wrote %s\n" % tarfile)
 
 
 def handle_collect_logs_args(name, args):
     """Handle calls to 'cloud-init collect-logs' as a subcommand."""
-    collect_logs(args.tarfile, args.userdata)
+    collect_logs(args.tarfile, args.userdata, args.verbosity)
 
 
 def main():
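
A note on the collect-logs changes above: the --verbose handling uses argparse's action='count', so each repeated -v bumps an integer that _debug() compares against a per-message level before writing to stderr. A minimal standalone sketch of that pattern (illustrative names, not the module's API):

    import argparse
    import sys

    def _debug(msg, level, verbosity):
        # Emit msg to stderr only when the requested level is enabled.
        if level <= verbosity:
            sys.stderr.write(msg)

    parser = argparse.ArgumentParser()
    # Each repeated -v increments args.verbosity (default 0).
    parser.add_argument('--verbose', '-v', action='count', default=0,
                        dest='verbosity')
    args = parser.parse_args(['-vv'])
    _debug("shown at -v\n", 1, args.verbosity)   # printed
    _debug("shown at -vv\n", 2, args.verbosity)  # printed
    _debug("needs -vvv\n", 3, args.verbosity)    # suppressed
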
diff --git a/cloudinit/cmd/devel/tests/test_logs.py b/cloudinit/cmd/devel/tests/test_logs.py
index dc4947c..98b4756 100644
--- a/cloudinit/cmd/devel/tests/test_logs.py
+++ b/cloudinit/cmd/devel/tests/test_logs.py
@@ -4,6 +4,7 @@ from cloudinit.cmd.devel import logs
 from cloudinit.util import ensure_dir, load_file, subp, write_file
 from cloudinit.tests.helpers import FilesystemMockingTestCase, wrap_and_call
 from datetime import datetime
+import mock
 import os
 
 
@@ -27,11 +28,13 @@ class TestCollectLogs(FilesystemMockingTestCase):
         date = datetime.utcnow().date().strftime('%Y-%m-%d')
         date_logdir = 'cloud-init-logs-{0}'.format(date)
 
+        version_out = '/usr/bin/cloud-init 18.2fake\n'
         expected_subp = {
             ('dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'):
                 '0.7fake\n',
+            ('cloud-init', '--version'): version_out,
             ('dmesg',): 'dmesg-out\n',
-            ('journalctl', '-o', 'short-precise'): 'journal-out\n',
+            ('journalctl', '--boot=0', '-o', 'short-precise'): 'journal-out\n',
             ('tar', 'czvf', output_tarfile, date_logdir): ''
         }
 
@@ -44,9 +47,12 @@ class TestCollectLogs(FilesystemMockingTestCase):
                 subp(cmd)  # Pass through tar cmd so we can check output
             return expected_subp[cmd_tuple], ''
 
+        fake_stderr = mock.MagicMock()
+
         wrap_and_call(
             'cloudinit.cmd.devel.logs',
             {'subp': {'side_effect': fake_subp},
+             'sys.stderr': {'new': fake_stderr},
              'CLOUDINIT_LOGS': {'new': [log1, log2]},
              'CLOUDINIT_RUN_DIR': {'new': self.run_dir}},
             logs.collect_logs, output_tarfile, include_userdata=False)
@@ -55,7 +61,9 @@ class TestCollectLogs(FilesystemMockingTestCase):
         out_logdir = self.tmp_path(date_logdir, self.new_root)
         self.assertEqual(
             '0.7fake\n',
-            load_file(os.path.join(out_logdir, 'version')))
+            load_file(os.path.join(out_logdir, 'dpkg-version')))
+        self.assertEqual(version_out,
+                         load_file(os.path.join(out_logdir, 'version')))
         self.assertEqual(
             'cloud-init-log',
             load_file(os.path.join(out_logdir, 'cloud-init.log')))
@@ -72,6 +80,7 @@ class TestCollectLogs(FilesystemMockingTestCase):
             'results',
             load_file(
                 os.path.join(out_logdir, 'run', 'cloud-init', 'results.json')))
+        fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
 
     def test_collect_logs_includes_optional_userdata(self):
         """collect-logs include userdata when --include-userdata is set."""
@@ -88,11 +97,13 @@ class TestCollectLogs(FilesystemMockingTestCase):
         date = datetime.utcnow().date().strftime('%Y-%m-%d')
         date_logdir = 'cloud-init-logs-{0}'.format(date)
 
+        version_out = '/usr/bin/cloud-init 18.2fake\n'
         expected_subp = {
             ('dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'):
                 '0.7fake',
+            ('cloud-init', '--version'): version_out,
             ('dmesg',): 'dmesg-out\n',
-            ('journalctl', '-o', 'short-precise'): 'journal-out\n',
+            ('journalctl', '--boot=0', '-o', 'short-precise'): 'journal-out\n',
             ('tar', 'czvf', output_tarfile, date_logdir): ''
         }
 
@@ -105,9 +116,12 @@ class TestCollectLogs(FilesystemMockingTestCase):
                 subp(cmd)  # Pass through tar cmd so we can check output
             return expected_subp[cmd_tuple], ''
 
+        fake_stderr = mock.MagicMock()
+
         wrap_and_call(
             'cloudinit.cmd.devel.logs',
             {'subp': {'side_effect': fake_subp},
+             'sys.stderr': {'new': fake_stderr},
              'CLOUDINIT_LOGS': {'new': [log1, log2]},
              'CLOUDINIT_RUN_DIR': {'new': self.run_dir},
              'USER_DATA_FILE': {'new': userdata}},
@@ -118,3 +132,4 @@ class TestCollectLogs(FilesystemMockingTestCase):
         self.assertEqual(
             'user-data',
             load_file(os.path.join(out_logdir, 'user-data.txt')))
+        fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
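
The updated tests replace sys.stderr with a mock.MagicMock and assert on the recorded write() calls. A minimal sketch of that assertion style using only the standard library (the tarfile path here is made up):

    import sys
    from unittest import mock

    fake_stderr = mock.MagicMock()
    with mock.patch.object(sys, 'stderr', fake_stderr):
        sys.stderr.write('Wrote /tmp/cloud-init.tar.gz\n')

    # assert_any_call passes if any one of the recorded calls matches.
    fake_stderr.write.assert_any_call('Wrote /tmp/cloud-init.tar.gz\n')
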
diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
index 3f2dbb9..d6ba90f 100644
--- a/cloudinit/cmd/main.py
+++ b/cloudinit/cmd/main.py
@@ -187,7 +187,7 @@ def attempt_cmdline_url(path, network=True, cmdline=None):
     data = None
     header = b'#cloud-config'
     try:
-        resp = util.read_file_or_url(**kwargs)
+        resp = url_helper.read_file_or_url(**kwargs)
         if resp.ok():
             data = resp.contents
             if not resp.contents.startswith(header):
diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
index 09374d2..ac72ac4 100644
--- a/cloudinit/config/cc_lxd.py
+++ b/cloudinit/config/cc_lxd.py
@@ -47,11 +47,16 @@ lxd-bridge will be configured accordingly.
             domain: <domain>
 """
 
+from cloudinit import log as logging
 from cloudinit import util
 import os
 
 distros = ['ubuntu']
 
+LOG = logging.getLogger(__name__)
+
+_DEFAULT_NETWORK_NAME = "lxdbr0"
+
 
 def handle(name, cfg, cloud, log, args):
     # Get config
@@ -109,6 +114,7 @@ def handle(name, cfg, cloud, log, args):
     # Set up lxd-bridge if bridge config is given
     dconf_comm = "debconf-communicate"
     if bridge_cfg:
+        net_name = bridge_cfg.get("name", _DEFAULT_NETWORK_NAME)
         if os.path.exists("/etc/default/lxd-bridge") \
                 and util.which(dconf_comm):
             # Bridge configured through packaging
@@ -135,15 +141,18 @@ def handle(name, cfg, cloud, log, args):
         else:
             # Built-in LXD bridge support
             cmd_create, cmd_attach = bridge_to_cmd(bridge_cfg)
+            maybe_cleanup_default(
+                net_name=net_name, did_init=bool(init_cfg),
+                create=bool(cmd_create), attach=bool(cmd_attach))
             if cmd_create:
                 log.debug("Creating lxd bridge: %s" %
                           " ".join(cmd_create))
-                util.subp(cmd_create)
+                _lxc(cmd_create)
 
             if cmd_attach:
                 log.debug("Setting up default lxd bridge: %s" %
                           " ".join(cmd_create))
-                util.subp(cmd_attach)
+                _lxc(cmd_attach)
 
     elif bridge_cfg:
         raise RuntimeError(
@@ -204,10 +213,10 @@ def bridge_to_cmd(bridge_cfg):
     if bridge_cfg.get("mode") == "none":
         return None, None
 
-    bridge_name = bridge_cfg.get("name", "lxdbr0")
+    bridge_name = bridge_cfg.get("name", _DEFAULT_NETWORK_NAME)
     cmd_create = []
-    cmd_attach = ["lxc", "network", "attach-profile", bridge_name,
-                  "default", "eth0", "--force-local"]
+    cmd_attach = ["network", "attach-profile", bridge_name,
+                  "default", "eth0"]
 
     if bridge_cfg.get("mode") == "existing":
         return None, cmd_attach
@@ -215,7 +224,7 @@ def bridge_to_cmd(bridge_cfg):
     if bridge_cfg.get("mode") != "new":
         raise Exception("invalid bridge mode \"%s\"" % bridge_cfg.get("mode"))
 
-    cmd_create = ["lxc", "network", "create", bridge_name]
+    cmd_create = ["network", "create", bridge_name]
 
     if bridge_cfg.get("ipv4_address") and bridge_cfg.get("ipv4_netmask"):
         cmd_create.append("ipv4.address=%s/%s" %
@@ -247,8 +256,47 @@ def bridge_to_cmd(bridge_cfg):
     if bridge_cfg.get("domain"):
         cmd_create.append("dns.domain=%s" % bridge_cfg.get("domain"))
 
-    cmd_create.append("--force-local")
-
     return cmd_create, cmd_attach
 
+
+def _lxc(cmd):
+    env = {'LC_ALL': 'C'}
+    util.subp(['lxc'] + list(cmd) + ["--force-local"], update_env=env)
+
+
+def maybe_cleanup_default(net_name, did_init, create, attach,
+                          profile="default", nic_name="eth0"):
+    """Newer versions of lxc (3.0.1+) create a lxdbr0 network when
+    'lxd init --auto' is run.  Older versions did not.
+
+    By removing any that lxd-init created, we simply leave the add/attach
+    code intact.
+
+    https://github.com/lxc/lxd/issues/4649"""
+    if net_name != _DEFAULT_NETWORK_NAME or not did_init:
+        return
+
+    fail_assume_enoent = " failed. Assuming it did not exist."
+    succeeded = " succeeded."
+    if create:
+        msg = "Deletion of lxd network '%s'" % net_name
+        try:
+            _lxc(["network", "delete", net_name])
+            LOG.debug(msg + succeeded)
+        except util.ProcessExecutionError as e:
+            if e.exit_code != 1:
+                raise e
+            LOG.debug(msg + fail_assume_enoent)
+
+    if attach:
+        msg = "Removal of device '%s' from profile '%s'" % (nic_name, profile)
+        try:
+            _lxc(["profile", "device", "remove", profile, nic_name])
+            LOG.debug(msg + succeeded)
+        except util.ProcessExecutionError as e:
+            if e.exit_code != 1:
+                raise e
+            LOG.debug(msg + fail_assume_enoent)
+
+
 # vi: ts=4 expandtab
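
For context on the cc_lxd change: the new _lxc() helper centralizes the --force-local flag and pins LC_ALL=C, and maybe_cleanup_default() treats exit code 1 from 'lxc network delete' or 'lxc profile device remove' as "did not exist". A rough standalone equivalent using subprocess (hypothetical helper names; the in-tree code goes through util.subp):

    import os
    import subprocess

    def lxc(args):
        # Run an lxc subcommand locally with a C locale, mirroring _lxc().
        env = dict(os.environ, LC_ALL='C')
        return subprocess.run(['lxc'] + list(args) + ['--force-local'],
                              env=env, check=True)

    def delete_if_present(net_name='lxdbr0'):
        try:
            lxc(['network', 'delete', net_name])
        except subprocess.CalledProcessError as e:
            if e.returncode != 1:
                raise  # exit code 1 is treated as "network did not exist"
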
diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py
index f14a4fc..339baba 100644
--- a/cloudinit/config/cc_mounts.py
+++ b/cloudinit/config/cc_mounts.py
@@ -76,6 +76,7 @@ DEVICE_NAME_FILTER = r"^([x]{0,1}[shv]d[a-z][0-9]*|sr[0-9]+)$"
 DEVICE_NAME_RE = re.compile(DEVICE_NAME_FILTER)
 WS = re.compile("[%s]+" % (whitespace))
 FSTAB_PATH = "/etc/fstab"
+MNT_COMMENT = "comment=cloudconfig"
 
 LOG = logging.getLogger(__name__)
 
@@ -232,8 +233,8 @@ def setup_swapfile(fname, size=None, maxsize=None):
     if str(size).lower() == "auto":
         try:
             memsize = util.read_meminfo()['total']
-        except IOError as e:
-            LOG.debug("Not creating swap. failed to read meminfo")
+        except IOError:
+            LOG.debug("Not creating swap: failed to read meminfo")
             return
 
         util.ensure_dir(tdir)
@@ -280,17 +281,17 @@ def handle_swapcfg(swapcfg):
 
     if os.path.exists(fname):
         if not os.path.exists("/proc/swaps"):
-            LOG.debug("swap file %s existed. no /proc/swaps. Being safe.",
-                      fname)
+            LOG.debug("swap file %s exists, but no /proc/swaps exists, "
+                      "being safe", fname)
             return fname
         try:
             for line in util.load_file("/proc/swaps").splitlines():
                 if line.startswith(fname + " "):
-                    LOG.debug("swap file %s already in use.", fname)
+                    LOG.debug("swap file %s already in use", fname)
                     return fname
-            LOG.debug("swap file %s existed, but not in /proc/swaps", fname)
+            LOG.debug("swap file %s exists, but not in /proc/swaps", fname)
         except Exception:
-            LOG.warning("swap file %s existed. Error reading /proc/swaps",
+            LOG.warning("swap file %s exists. Error reading /proc/swaps",
                         fname)
             return fname
 
@@ -327,6 +328,22 @@ def handle(_name, cfg, cloud, log, _args):
 
     LOG.debug("mounts configuration is %s", cfgmnt)
 
+    fstab_lines = []
+    fstab_devs = {}
+    fstab_removed = []
+
+    for line in util.load_file(FSTAB_PATH).splitlines():
+        if MNT_COMMENT in line:
+            fstab_removed.append(line)
+            continue
+
+        try:
+            toks = WS.split(line)
+        except Exception:
+            pass
+        fstab_devs[toks[0]] = line
+        fstab_lines.append(line)
+
     for i in range(len(cfgmnt)):
         # skip something that wasn't a list
         if not isinstance(cfgmnt[i], list):
@@ -336,12 +353,17 @@ def handle(_name, cfg, cloud, log, _args):
 
         start = str(cfgmnt[i][0])
         sanitized = sanitize_devname(start, cloud.device_name_to_device, log)
+        if sanitized != start:
+            log.debug("changed %s => %s" % (start, sanitized))
+
         if sanitized is None:
-            log.debug("Ignorming nonexistant named mount %s", start)
+            log.debug("Ignoring nonexistent named mount %s", start)
+            continue
+        elif sanitized in fstab_devs:
+            log.info("Device %s already defined in fstab: %s",
+                     sanitized, fstab_devs[sanitized])
             continue
 
-        if sanitized != start:
-            log.debug("changed %s => %s" % (start, sanitized))
         cfgmnt[i][0] = sanitized
 
         # in case the user did not quote a field (likely fs-freq, fs_passno)
@@ -373,11 +395,17 @@ def handle(_name, cfg, cloud, log, _args):
     for defmnt in defmnts:
         start = defmnt[0]
         sanitized = sanitize_devname(start, cloud.device_name_to_device, log)
-        if sanitized is None:
-            log.debug("Ignoring nonexistant default named mount %s", start)
-            continue
         if sanitized != start:
             log.debug("changed default device %s => %s" % (start, sanitized))
+
+        if sanitized is None:
+            log.debug("Ignoring nonexistent default named mount %s", start)
+            continue
+        elif sanitized in fstab_devs:
+            log.debug("Device %s already defined in fstab: %s",
+                      sanitized, fstab_devs[sanitized])
+            continue
+
         defmnt[0] = sanitized
 
         cfgmnt_has = False
@@ -397,7 +425,7 @@ def handle(_name, cfg, cloud, log, _args):
     actlist = []
     for x in cfgmnt:
         if x[1] is None:
-            log.debug("Skipping non-existent device named %s", x[0])
+            log.debug("Skipping nonexistent device named %s", x[0])
         else:
             actlist.append(x)
 
@@ -406,34 +434,21 @@ def handle(_name, cfg, cloud, log, _args):
         actlist.append([swapret, "none", "swap", "sw", "0", "0"])
 
     if len(actlist) == 0:
-        log.debug("No modifications to fstab needed.")
+        log.debug("No modifications to fstab needed")
         return
 
-    comment = "comment=cloudconfig"
     cc_lines = []
     needswap = False
     dirs = []
     for line in actlist:
         # write 'comment' in the fs_mntops, entry,  claiming this
-        line[3] = "%s,%s" % (line[3], comment)
+        line[3] = "%s,%s" % (line[3], MNT_COMMENT)
         if line[2] == "swap":
             needswap = True
         if line[1].startswith("/"):
             dirs.append(line[1])
         cc_lines.append('\t'.join(line))
 
-    fstab_lines = []
-    removed = []
-    for line in util.load_file(FSTAB_PATH).splitlines():
-        try:
-            toks = WS.split(line)
-            if toks[3].find(comment) != -1:
-                removed.append(line)
-                continue
-        except Exception:
-            pass
-        fstab_lines.append(line)
-
     for d in dirs:
         try:
             util.ensure_dir(d)
@@ -441,7 +456,7 @@ def handle(_name, cfg, cloud, log, _args):
             util.logexc(log, "Failed to make '%s' config-mount", d)
 
     sadds = [WS.sub(" ", n) for n in cc_lines]
-    sdrops = [WS.sub(" ", n) for n in removed]
+    sdrops = [WS.sub(" ", n) for n in fstab_removed]
 
     sops = (["- " + drop for drop in sdrops if drop not in sadds] +
             ["+ " + add for add in sadds if add not in sdrops])
diff --git a/cloudinit/config/cc_phone_home.py b/cloudinit/config/cc_phone_home.py
index 878069b..3be0d1c 100644
--- a/cloudinit/config/cc_phone_home.py
+++ b/cloudinit/config/cc_phone_home.py
@@ -41,6 +41,7 @@ keys to post. Available keys are:
 """
 
 from cloudinit import templater
+from cloudinit import url_helper
 from cloudinit import util
 
 from cloudinit.settings import PER_INSTANCE
@@ -136,9 +137,9 @@ def handle(name, cfg, cloud, log, args):
     }
     url = templater.render_string(url, url_params)
     try:
-        util.read_file_or_url(url, data=real_submit_keys,
-                              retries=tries, sec_between=3,
-                              ssl_details=util.fetch_ssl_details(cloud.paths))
+        url_helper.read_file_or_url(
+            url, data=real_submit_keys, retries=tries, sec_between=3,
+            ssl_details=util.fetch_ssl_details(cloud.paths))
     except Exception:
         util.logexc(log, "Failed to post phone home data to %s in %s tries",
                     url, tries)
diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
index 82f29e1..2edddd0 100644
--- a/cloudinit/config/cc_resizefs.py
+++ b/cloudinit/config/cc_resizefs.py
@@ -81,7 +81,7 @@ def _resize_xfs(mount_point, devpth):
 
 
 def _resize_ufs(mount_point, devpth):
-    return ('growfs', devpth)
+    return ('growfs', '-y', devpth)
 
 
 def _resize_zfs(mount_point, devpth):
diff --git a/cloudinit/config/cc_users_groups.py b/cloudinit/config/cc_users_groups.py
index b215e95..c95bdaa 100644
--- a/cloudinit/config/cc_users_groups.py
+++ b/cloudinit/config/cc_users_groups.py
@@ -54,8 +54,9 @@ config keys for an entry in ``users`` are as follows:
     - ``ssh_authorized_keys``: Optional. List of ssh keys to add to user's
       authkeys file. Default: none
     - ``ssh_import_id``: Optional. SSH id to import for user. Default: none
-    - ``sudo``: Optional. Sudo rule to use, or list of sudo rules to use.
-      Default: none.
+    - ``sudo``: Optional. Sudo rule to use, list of sudo rules to use or False.
+      Default: none. An absence of sudo key, or a value of none or false
+      will result in no sudo rules being written for the user.
     - ``system``: Optional. Create user as system user with no home directory.
       Default: false
     - ``uid``: Optional. The user's ID. Default: The next available value.
@@ -82,6 +83,9 @@ config keys for an entry in ``users`` are as follows:
 
     users:
         - default
+        # User explicitly omitted from sudo permission; also default behavior.
+        - name: <some_restricted_user>
+          sudo: false
         - name: <username>
           expiredate: <date>
           gecos: <comment>
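
The sudo:false semantics documented above come down to one guard in create_user (see the distros changes later in this diff): rules are written only when the key is present and is not the literal False. A tiny sketch of that predicate:

    def sudo_rules_wanted(kwargs):
        # Mirrors the new guard in Distro.create_user (sketch): sudo rules
        # are written only when 'sudo' exists and is not the literal False.
        return 'sudo' in kwargs and kwargs['sudo'] is not False

    assert sudo_rules_wanted({'sudo': 'ALL=(ALL) NOPASSWD:ALL'})
    assert not sudo_rules_wanted({})               # key absent -> no rules
    assert not sudo_rules_wanted({'sudo': False})  # explicit opt-out
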
diff --git a/cloudinit/config/schema.py b/cloudinit/config/schema.py
index 76826e0..080a6d0 100644
--- a/cloudinit/config/schema.py
+++ b/cloudinit/config/schema.py
@@ -4,7 +4,7 @@
 from __future__ import print_function
 
 from cloudinit import importer
-from cloudinit.util import find_modules, read_file_or_url
+from cloudinit.util import find_modules, load_file
 
 import argparse
 from collections import defaultdict
@@ -93,20 +93,33 @@ def validate_cloudconfig_schema(config, schema, strict=False):
 def annotated_cloudconfig_file(cloudconfig, original_content, schema_errors):
     """Return contents of the cloud-config file annotated with schema errors.
 
-    @param cloudconfig: YAML-loaded object from the original_content.
+    @param cloudconfig: YAML-loaded dict from the original_content or empty
+        dict if unparseable.
     @param original_content: The contents of a cloud-config file
     @param schema_errors: List of tuples from a JSONSchemaValidationError. The
         tuples consist of (schemapath, error_message).
     """
     if not schema_errors:
         return original_content
-    schemapaths = _schemapath_for_cloudconfig(cloudconfig, original_content)
+    schemapaths = {}
+    if cloudconfig:
+        schemapaths = _schemapath_for_cloudconfig(
+            cloudconfig, original_content)
     errors_by_line = defaultdict(list)
     error_count = 1
     error_footer = []
     annotated_content = []
     for path, msg in schema_errors:
-        errors_by_line[schemapaths[path]].append(msg)
+        match = re.match(r'format-l(?P<line>\d+)\.c(?P<col>\d+).*', path)
+        if match:
+            line, col = match.groups()
+            errors_by_line[int(line)].append(msg)
+        else:
+            col = None
+            errors_by_line[schemapaths[path]].append(msg)
+        if col is not None:
+            msg = 'Line {line} column {col}: {msg}'.format(
+                line=line, col=col, msg=msg)
         error_footer.append('# E{0}: {1}'.format(error_count, msg))
         error_count += 1
     lines = original_content.decode().split('\n')
@@ -139,21 +152,34 @@ def validate_cloudconfig_file(config_path, schema, annotate=False):
     """
     if not os.path.exists(config_path):
         raise RuntimeError('Configfile {0} does not exist'.format(config_path))
-    content = read_file_or_url('file://{0}'.format(config_path)).contents
+    content = load_file(config_path, decode=False)
     if not content.startswith(CLOUD_CONFIG_HEADER):
         errors = (
-            ('header', 'File {0} needs to begin with "{1}"'.format(
+            ('format-l1.c1', 'File {0} needs to begin with "{1}"'.format(
                 config_path, CLOUD_CONFIG_HEADER.decode())),)
-        raise SchemaValidationError(errors)
-
+        error = SchemaValidationError(errors)
+        if annotate:
+            print(annotated_cloudconfig_file({}, content, error.schema_errors))
+        raise error
     try:
         cloudconfig = yaml.safe_load(content)
-    except yaml.parser.ParserError as e:
-        errors = (
-            ('format', 'File {0} is not valid yaml. {1}'.format(
-                config_path, str(e))),)
-        raise SchemaValidationError(errors)
-
+    except (yaml.YAMLError) as e:
+        line = column = 1
+        mark = None
+        if hasattr(e, 'context_mark') and getattr(e, 'context_mark'):
+            mark = getattr(e, 'context_mark')
+        elif hasattr(e, 'problem_mark') and getattr(e, 'problem_mark'):
+            mark = getattr(e, 'problem_mark')
+        if mark:
+            line = mark.line + 1
+            column = mark.column + 1
+        errors = (('format-l{line}.c{col}'.format(line=line, col=column),
+                   'File {0} is not valid yaml. {1}'.format(
+                       config_path, str(e))),)
+        error = SchemaValidationError(errors)
+        if annotate:
+            print(annotated_cloudconfig_file({}, content, error.schema_errors))
+        raise error
     try:
         validate_cloudconfig_schema(
             cloudconfig, schema, strict=True)
@@ -176,7 +202,7 @@ def _schemapath_for_cloudconfig(config, original_content):
     list_index = 0
     RE_YAML_INDENT = r'^(\s*)'
     scopes = []
-    for line_number, line in enumerate(content_lines):
+    for line_number, line in enumerate(content_lines, 1):
         indent_depth = len(re.match(RE_YAML_INDENT, line).groups()[0])
         line = line.strip()
         if not line or line.startswith('#'):
@@ -208,8 +234,8 @@ def _schemapath_for_cloudconfig(config, original_content):
                 scopes.append((indent_depth + 2, key + '.0'))
                 for inner_list_index in range(0, len(yaml.safe_load(value))):
                     list_key = key + '.' + str(inner_list_index)
-                    schema_line_numbers[list_key] = line_number + 1
-        schema_line_numbers[key] = line_number + 1
+                    schema_line_numbers[list_key] = line_number
+        schema_line_numbers[key] = line_number
     return schema_line_numbers
 
 
@@ -337,9 +363,11 @@ def handle_schema_args(name, args):
         try:
             validate_cloudconfig_file(
                 args.config_file, full_schema, args.annotate)
-        except (SchemaValidationError, RuntimeError) as e:
+        except SchemaValidationError as e:
             if not args.annotate:
                 error(str(e))
+        except RuntimeError as e:
+                error(str(e))
         else:
             print("Valid cloud-config file {0}".format(args.config_file))
     if args.doc:
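
schema.py now keys header and YAML parse errors as 'format-lN.cM' so the annotator can place them even without a parsed config. A small sketch of that path convention, using the same regex as annotated_cloudconfig_file():

    import re

    def line_col(path):
        """Extract (line, col) from a 'format-lN.cM' schema error path;
        returns None for ordinary schema paths like 'users.0.sudo'."""
        match = re.match(r'format-l(?P<line>\d+)\.c(?P<col>\d+).*', path)
        if match:
            return int(match.group('line')), int(match.group('col'))
        return None

    assert line_col('format-l2.c11') == (2, 11)
    assert line_col('users.0') is None
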
diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
index 6c22b07..ab0b077 100755
--- a/cloudinit/distros/__init__.py
+++ b/cloudinit/distros/__init__.py
@@ -531,7 +531,7 @@ class Distro(object):
             self.lock_passwd(name)
 
         # Configure sudo access
-        if 'sudo' in kwargs:
+        if 'sudo' in kwargs and kwargs['sudo'] is not False:
             self.write_sudo_rules(name, kwargs['sudo'])
 
         # Import SSH keys
diff --git a/cloudinit/distros/freebsd.py b/cloudinit/distros/freebsd.py
index 5b1718a..ff22d56 100644
--- a/cloudinit/distros/freebsd.py
+++ b/cloudinit/distros/freebsd.py
@@ -266,7 +266,7 @@ class Distro(distros.Distro):
             self.lock_passwd(name)
 
         # Configure sudo access
-        if 'sudo' in kwargs:
+        if 'sudo' in kwargs and kwargs['sudo'] is not False:
             self.write_sudo_rules(name, kwargs['sudo'])
 
         # Import SSH keys
diff --git a/cloudinit/ec2_utils.py b/cloudinit/ec2_utils.py
index dc3f0fc..3b7b17f 100644
--- a/cloudinit/ec2_utils.py
+++ b/cloudinit/ec2_utils.py
@@ -150,11 +150,9 @@ def get_instance_userdata(api_version='latest',
         # NOT_FOUND occurs) and just in that case returning an empty string.
         exception_cb = functools.partial(_skip_retry_on_codes,
                                          SKIP_USERDATA_CODES)
-        response = util.read_file_or_url(ud_url,
-                                         ssl_details=ssl_details,
-                                         timeout=timeout,
-                                         retries=retries,
-                                         exception_cb=exception_cb)
+        response = url_helper.read_file_or_url(
+            ud_url, ssl_details=ssl_details, timeout=timeout,
+            retries=retries, exception_cb=exception_cb)
         user_data = response.contents
     except url_helper.UrlError as e:
         if e.code not in SKIP_USERDATA_CODES:
@@ -169,9 +167,9 @@ def _get_instance_metadata(tree, api_version='latest',
                            ssl_details=None, timeout=5, retries=5,
                            leaf_decoder=None):
     md_url = url_helper.combine_url(metadata_address, api_version, tree)
-    caller = functools.partial(util.read_file_or_url,
-                               ssl_details=ssl_details, timeout=timeout,
-                               retries=retries)
+    caller = functools.partial(
+        url_helper.read_file_or_url, ssl_details=ssl_details,
+        timeout=timeout, retries=retries)
 
     def mcaller(url):
         return caller(url).contents
diff --git a/cloudinit/handlers/upstart_job.py b/cloudinit/handlers/upstart_job.py
index 1ca92d4..dc33876 100644
--- a/cloudinit/handlers/upstart_job.py
+++ b/cloudinit/handlers/upstart_job.py
@@ -97,7 +97,7 @@ def _has_suitable_upstart():
             else:
                 util.logexc(LOG, "dpkg --compare-versions failed [%s]",
                             e.exit_code)
-        except Exception as e:
+        except Exception:
             util.logexc(LOG, "dpkg --compare-versions failed")
         return False
     else:
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index 43226bd..3ffde52 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -359,8 +359,12 @@ def interface_has_own_mac(ifname, strict=False):
       1: randomly generated   3: set using dev_set_mac_address"""
 
     assign_type = read_sys_net_int(ifname, "addr_assign_type")
-    if strict and assign_type is None:
-        raise ValueError("%s had no addr_assign_type.")
+    if assign_type is None:
+        # None is returned if this nic had no 'addr_assign_type' entry.
+        # if strict, raise an error, if not return True.
+        if strict:
+            raise ValueError("%s had no addr_assign_type.")
+        return True
     return assign_type in (0, 1, 3)
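
For reference, the kernel addr_assign_type values behind interface_has_own_mac(): 0 permanent, 1 randomly generated, 2 stolen from another device, 3 set via dev_set_mac_address; a missing sysfs entry now means "assume own MAC" unless strict. A condensed sketch of just that decision:

    def has_own_mac(assign_type, strict=False):
        # assign_type: 0 permanent, 1 random, 2 stolen, 3 set by software;
        # None means sysfs had no addr_assign_type entry for the nic.
        if assign_type is None:
            if strict:
                raise ValueError("no addr_assign_type")
            return True
        return assign_type in (0, 1, 3)
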
 
 
diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py
index c6a71d1..bd20a36 100644
--- a/cloudinit/net/eni.py
+++ b/cloudinit/net/eni.py
@@ -10,9 +10,12 @@ from . import ParserError
 from . import renderer
 from .network_state import subnet_is_ipv6
 
+from cloudinit import log as logging
 from cloudinit import util
 
 
+LOG = logging.getLogger(__name__)
+
 NET_CONFIG_COMMANDS = [
     "pre-up", "up", "post-up", "down", "pre-down", "post-down",
 ]
@@ -61,7 +64,7 @@ def _iface_add_subnet(iface, subnet):
 
 
 # TODO: switch to valid_map for attrs
-def _iface_add_attrs(iface, index):
+def _iface_add_attrs(iface, index, ipv4_subnet_mtu):
     # If the index is non-zero, this is an alias interface. Alias interfaces
     # represent additional interface addresses, and should not have additional
     # attributes. (extra attributes here are almost always either incorrect,
@@ -100,6 +103,13 @@ def _iface_add_attrs(iface, index):
             value = 'on' if iface[key] else 'off'
         if not value or key in ignore_map:
             continue
+        if key == 'mtu' and ipv4_subnet_mtu:
+            if value != ipv4_subnet_mtu:
+                LOG.warning(
+                    "Network config: ignoring %s device-level mtu:%s because"
+                    " ipv4 subnet-level mtu:%s provided.",
+                    iface['name'], value, ipv4_subnet_mtu)
+            continue
         if key in multiline_keys:
             for v in value:
                 content.append("    {0} {1}".format(renames.get(key, key), v))
@@ -377,12 +387,15 @@ class Renderer(renderer.Renderer):
         subnets = iface.get('subnets', {})
         if subnets:
             for index, subnet in enumerate(subnets):
+                ipv4_subnet_mtu = None
                 iface['index'] = index
                 iface['mode'] = subnet['type']
                 iface['control'] = subnet.get('control', 'auto')
                 subnet_inet = 'inet'
                 if subnet_is_ipv6(subnet):
                     subnet_inet += '6'
+                else:
+                    ipv4_subnet_mtu = subnet.get('mtu')
                 iface['inet'] = subnet_inet
                 if subnet['type'].startswith('dhcp'):
                     iface['mode'] = 'dhcp'
@@ -397,7 +410,7 @@ class Renderer(renderer.Renderer):
                     _iface_start_entry(
                         iface, index, render_hwaddress=render_hwaddress) +
                     _iface_add_subnet(iface, subnet) +
-                    _iface_add_attrs(iface, index)
+                    _iface_add_attrs(iface, index, ipv4_subnet_mtu)
                 )
                 for route in subnet.get('routes', []):
                     lines.extend(self._render_route(route, indent="    "))
@@ -409,7 +422,8 @@ class Renderer(renderer.Renderer):
             if 'bond-master' in iface or 'bond-slaves' in iface:
                 lines.append("auto {name}".format(**iface))
             lines.append("iface {name} {inet} {mode}".format(**iface))
-            lines.extend(_iface_add_attrs(iface, index=0))
+            lines.extend(
+                _iface_add_attrs(iface, index=0, ipv4_subnet_mtu=None))
             sections.append(lines)
         return sections
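
The eni change above (and the matching netplan and sysconfig changes below) make an ipv4 subnet-level mtu win over a conflicting device-level mtu, logging a warning on mismatch. A standalone sketch of that precedence rule (hypothetical function name):

    import logging

    LOG = logging.getLogger(__name__)

    def effective_mtu(name, device_mtu, ipv4_subnet_mtu):
        """Pick the MTU a renderer should emit: subnet-level wins (sketch)."""
        if ipv4_subnet_mtu is None:
            return device_mtu
        if device_mtu is not None and device_mtu != ipv4_subnet_mtu:
            LOG.warning(
                "Network config: ignoring %s device-level mtu:%s because"
                " ipv4 subnet-level mtu:%s provided.",
                name, device_mtu, ipv4_subnet_mtu)
        return ipv4_subnet_mtu
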
 
diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py
index 6344348..4014363 100644
--- a/cloudinit/net/netplan.py
+++ b/cloudinit/net/netplan.py
@@ -34,7 +34,7 @@ def _get_params_dict_by_match(config, match):
                 if key.startswith(match))
 
 
-def _extract_addresses(config, entry):
+def _extract_addresses(config, entry, ifname):
     """This method parse a cloudinit.net.network_state dictionary (config) and
        maps netstate keys/values into a dictionary (entry) to represent
        netplan yaml.
@@ -124,6 +124,15 @@ def _extract_addresses(config, entry):
 
             addresses.append(addr)
 
+    if 'mtu' in config:
+        entry_mtu = entry.get('mtu')
+        if entry_mtu and config['mtu'] != entry_mtu:
+            LOG.warning(
+                "Network config: ignoring %s device-level mtu:%s because"
+                " ipv4 subnet-level mtu:%s provided.",
+                ifname, config['mtu'], entry_mtu)
+        else:
+            entry['mtu'] = config['mtu']
     if len(addresses) > 0:
         entry.update({'addresses': addresses})
     if len(routes) > 0:
@@ -262,10 +271,7 @@ class Renderer(renderer.Renderer):
                     else:
                         del eth['match']
                         del eth['set-name']
-                if 'mtu' in ifcfg:
-                    eth['mtu'] = ifcfg.get('mtu')
-
-                _extract_addresses(ifcfg, eth)
+                _extract_addresses(ifcfg, eth, ifname)
                 ethernets.update({ifname: eth})
 
             elif if_type == 'bond':
@@ -288,7 +294,7 @@ class Renderer(renderer.Renderer):
                 slave_interfaces = ifcfg.get('bond-slaves')
                 if slave_interfaces == 'none':
                     _extract_bond_slaves_by_name(interfaces, bond, ifname)
-                _extract_addresses(ifcfg, bond)
+                _extract_addresses(ifcfg, bond, ifname)
                 bonds.update({ifname: bond})
 
             elif if_type == 'bridge':
@@ -321,7 +327,7 @@ class Renderer(renderer.Renderer):
 
                 if len(br_config) > 0:
                     bridge.update({'parameters': br_config})
-                _extract_addresses(ifcfg, bridge)
+                _extract_addresses(ifcfg, bridge, ifname)
                 bridges.update({ifname: bridge})
 
             elif if_type == 'vlan':
@@ -333,7 +339,7 @@ class Renderer(renderer.Renderer):
                 macaddr = ifcfg.get('mac_address', None)
                 if macaddr is not None:
                     vlan['macaddress'] = macaddr.lower()
-                _extract_addresses(ifcfg, vlan)
+                _extract_addresses(ifcfg, vlan, ifname)
                 vlans.update({ifname: vlan})
 
         # inject global nameserver values under each all interface which
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index e53b9f1..3d71923 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -304,6 +304,13 @@ class Renderer(renderer.Renderer):
                     mtu_key = 'IPV6_MTU'
                     iface_cfg['IPV6INIT'] = True
                 if 'mtu' in subnet:
+                    mtu_mismatch = bool(mtu_key in iface_cfg and
+                                        subnet['mtu'] != iface_cfg[mtu_key])
+                    if mtu_mismatch:
+                        LOG.warning(
+                            'Network config: ignoring %s device-level mtu:%s'
+                            ' because ipv4 subnet-level mtu:%s provided.',
+                            iface_cfg.name, iface_cfg[mtu_key], subnet['mtu'])
                     iface_cfg[mtu_key] = subnet['mtu']
             elif subnet_type == 'manual':
                 # If the subnet has an MTU setting, then ONBOOT=True
diff --git a/cloudinit/netinfo.py b/cloudinit/netinfo.py
index f090616..9ff929c 100644
--- a/cloudinit/netinfo.py
+++ b/cloudinit/netinfo.py
@@ -138,7 +138,7 @@ def _netdev_info_ifconfig(ifconfig_data):
             elif toks[i].startswith("scope:"):
                 devs[curdev]['ipv6'][-1]['scope6'] = toks[i].lstrip("scope:")
             elif toks[i] == "scopeid":
-                res = re.match(".*<(\S+)>", toks[i + 1])
+                res = re.match(r'.*<(\S+)>', toks[i + 1])
                 if res:
                     devs[curdev]['ipv6'][-1]['scope6'] = res.group(1)
     return devs
@@ -158,12 +158,28 @@ def netdev_info(empty=""):
         LOG.warning(
             "Could not print networks: missing 'ip' and 'ifconfig' commands")
 
-    if empty != "":
-        for (_devname, dev) in devs.items():
-            for field in dev:
-                if dev[field] == "":
-                    dev[field] = empty
+    if empty == "":
+        return devs
 
+    recurse_types = (dict, tuple, list)
+
+    def fill(data, new_val="", empty_vals=("", b"")):
+        """Recursively replace 'empty_vals' in data (dict, tuple, list)
+           with new_val"""
+        if isinstance(data, dict):
+            myiter = data.items()
+        elif isinstance(data, (tuple, list)):
+            myiter = enumerate(data)
+        else:
+            raise TypeError("Unexpected input to fill")
+
+        for key, val in myiter:
+            if val in empty_vals:
+                data[key] = new_val
+            elif isinstance(val, recurse_types):
+                fill(val, new_val)
+
+    fill(devs, new_val=empty)
     return devs
 
 
@@ -353,8 +369,9 @@ def getgateway():
 
 def netdev_pformat():
     lines = []
+    empty = "."
     try:
-        netdev = netdev_info(empty=".")
+        netdev = netdev_info(empty=empty)
     except Exception as e:
         lines.append(
             util.center(
@@ -368,12 +385,15 @@ def netdev_pformat():
         for (dev, data) in sorted(netdev.items()):
             for addr in data.get('ipv4'):
                 tbl.add_row(
-                    [dev, data["up"], addr["ip"], addr["mask"],
-                     addr.get('scope', '.'), data["hwaddr"]])
+                    (dev, data["up"], addr["ip"], addr["mask"],
+                     addr.get('scope', empty), data["hwaddr"]))
             for addr in data.get('ipv6'):
                 tbl.add_row(
-                    [dev, data["up"], addr["ip"], ".", addr["scope6"],
-                     data["hwaddr"]])
+                    (dev, data["up"], addr["ip"], empty, addr["scope6"],
+                     data["hwaddr"]))
+            if len(data.get('ipv6')) + len(data.get('ipv4')) == 0:
+                tbl.add_row((dev, data["up"], empty, empty, empty,
+                             data["hwaddr"]))
         netdev_s = tbl.get_string()
         max_len = len(max(netdev_s.splitlines(), key=len))
         header = util.center("Net device info", "+", max_len)
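
The recursive fill() above generalizes the old flat loop: it walks nested dicts and lists, replacing empty values so the rendered table shows a placeholder for nics with no address. A standalone copy with a small usage example (the data shape is illustrative):

    def fill(data, new_val=".", empty_vals=("", b"")):
        # Recursively replace empty strings in nested dict/list structures.
        myiter = data.items() if isinstance(data, dict) else enumerate(data)
        for key, val in myiter:
            if val in empty_vals:
                data[key] = new_val
            elif isinstance(val, (dict, list)):
                fill(val, new_val)

    devs = {'eth0': {'up': True, 'hwaddr': '', 'ipv4': [{'ip': ''}]}}
    fill(devs)
    assert devs['eth0']['hwaddr'] == "."
    assert devs['eth0']['ipv4'][0]['ip'] == "."
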
diff --git a/cloudinit/sources/DataSourceAltCloud.py b/cloudinit/sources/DataSourceAltCloud.py
index f6e86f3..24fd65f 100644
--- a/cloudinit/sources/DataSourceAltCloud.py
+++ b/cloudinit/sources/DataSourceAltCloud.py
@@ -184,11 +184,11 @@ class DataSourceAltCloud(sources.DataSource):
             cmd = CMD_PROBE_FLOPPY
             (cmd_out, _err) = util.subp(cmd)
             LOG.debug('Command: %s\nOutput%s', ' '.join(cmd), cmd_out)
-        except ProcessExecutionError as _err:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), _err)
+        except ProcessExecutionError as e:
+            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
             return False
-        except OSError as _err:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), _err)
+        except OSError as e:
+            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
             return False
 
         floppy_dev = '/dev/fd0'
@@ -197,11 +197,11 @@ class DataSourceAltCloud(sources.DataSource):
         try:
             (cmd_out, _err) = util.udevadm_settle(exists=floppy_dev, timeout=5)
             LOG.debug('Command: %s\nOutput%s', ' '.join(cmd), cmd_out)
-        except ProcessExecutionError as _err:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), _err)
+        except ProcessExecutionError as e:
+            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
             return False
-        except OSError as _err:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), _err)
+        except OSError as e:
+            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
             return False
 
         try:
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index a71197a..7007d9e 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -48,6 +48,7 @@ DEFAULT_FS = 'ext4'
 # DMI chassis-asset-tag is set static for all azure instances
 AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77'
 REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds"
+REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready"
 IMDS_URL = "http://169.254.169.254/metadata/reprovisiondata"
 
 
@@ -207,6 +208,7 @@ BUILTIN_CLOUD_CONFIG = {
 }
 
 DS_CFG_PATH = ['datasource', DS_NAME]
+DS_CFG_KEY_PRESERVE_NTFS = 'never_destroy_ntfs'
 DEF_EPHEMERAL_LABEL = 'Temporary Storage'
 
 # The redacted password fails to meet password complexity requirements
@@ -393,14 +395,9 @@ class DataSourceAzure(sources.DataSource):
         if found == ddir:
             LOG.debug("using files cached in %s", ddir)
 
-        # azure / hyper-v provides random data here
-        # TODO. find the seed on FreeBSD platform
-        # now update ds_cfg to reflect contents pass in config
-        if not util.is_FreeBSD():
-            seed = util.load_file("/sys/firmware/acpi/tables/OEM0",
-                                  quiet=True, decode=False)
-            if seed:
-                self.metadata['random_seed'] = seed
+        seed = _get_random_seed()
+        if seed:
+            self.metadata['random_seed'] = seed
 
         user_ds_cfg = util.get_cfg_by_path(self.cfg, DS_CFG_PATH, {})
         self.ds_cfg = util.mergemanydict([user_ds_cfg, self.ds_cfg])
@@ -436,11 +433,12 @@ class DataSourceAzure(sources.DataSource):
             LOG.debug("negotiating already done for %s",
                       self.get_instance_id())
 
-    def _poll_imds(self, report_ready=True):
+    def _poll_imds(self):
         """Poll IMDS for the new provisioning data until we get a valid
         response. Then return the returned JSON object."""
         url = IMDS_URL + "?api-version=2017-04-02"
         headers = {"Metadata": "true"}
+        report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
         LOG.debug("Start polling IMDS")
 
         def exc_cb(msg, exception):
@@ -450,13 +448,17 @@ class DataSourceAzure(sources.DataSource):
             # call DHCP and setup the ephemeral network to acquire the new IP.
             return False
 
-        need_report = report_ready
         while True:
             try:
                 with EphemeralDHCPv4() as lease:
-                    if need_report:
+                    if report_ready:
+                        path = REPORTED_READY_MARKER_FILE
+                        LOG.info(
+                            "Creating a marker file to report ready: %s", path)
+                        util.write_file(path, "{pid}: {time}\n".format(
+                            pid=os.getpid(), time=time()))
                         self._report_ready(lease=lease)
-                        need_report = False
+                        report_ready = False
                     return readurl(url, timeout=1, headers=headers,
                                    exception_cb=exc_cb, infinite=True).contents
             except UrlError:
@@ -490,8 +492,10 @@ class DataSourceAzure(sources.DataSource):
         if (cfg.get('PreprovisionedVm') is True or
                 os.path.isfile(path)):
             if not os.path.isfile(path):
-                LOG.info("Creating a marker file to poll imds")
-                util.write_file(path, "%s: %s\n" % (os.getpid(), time()))
+                LOG.info("Creating a marker file to poll imds: %s",
+                         path)
+                util.write_file(path, "{pid}: {time}\n".format(
+                    pid=os.getpid(), time=time()))
             return True
         return False
 
@@ -526,11 +530,14 @@ class DataSourceAzure(sources.DataSource):
                 "Error communicating with Azure fabric; You may experience."
                 "connectivity issues.", exc_info=True)
             return False
+        util.del_file(REPORTED_READY_MARKER_FILE)
         util.del_file(REPROVISION_MARKER_FILE)
         return fabric_data
 
     def activate(self, cfg, is_new_instance):
-        address_ephemeral_resize(is_new_instance=is_new_instance)
+        address_ephemeral_resize(is_new_instance=is_new_instance,
+                                 preserve_ntfs=self.ds_cfg.get(
+                                     DS_CFG_KEY_PRESERVE_NTFS, False))
         return
 
     @property
@@ -574,17 +581,29 @@ def _has_ntfs_filesystem(devpath):
     return os.path.realpath(devpath) in ntfs_devices
 
 
-def can_dev_be_reformatted(devpath):
-    """Determine if block device devpath is newly formatted ephemeral.
+def can_dev_be_reformatted(devpath, preserve_ntfs):
+    """Determine if the ephemeral drive at devpath should be reformatted.
 
-    A newly formatted disk will:
+    A fresh ephemeral disk is formatted by Azure and will:
       a.) have a partition table (dos or gpt)
       b.) have 1 partition that is ntfs formatted, or
           have 2 partitions with the second partition ntfs formatted.
           (larger instances with >2TB ephemeral disk have gpt, and will
            have a microsoft reserved partition as part 1.  LP: #1686514)
       c.) the ntfs partition will have no files other than possibly
-          'dataloss_warning_readme.txt'"""
+          'dataloss_warning_readme.txt'
+
+    User can indicate that NTFS should never be destroyed by setting
+    DS_CFG_KEY_PRESERVE_NTFS in dscfg.
+    If data is found on NTFS, user is warned to set DS_CFG_KEY_PRESERVE_NTFS
+    to make sure cloud-init does not accidentally wipe their data.
+    If cloud-init cannot mount the disk to check for data, destruction
+    will be allowed, unless the dscfg key is set."""
+    if preserve_ntfs:
+        msg = ('config says to never destroy NTFS (%s.%s), skipping checks' %
+               (".".join(DS_CFG_PATH), DS_CFG_KEY_PRESERVE_NTFS))
+        return False, msg
+
     if not os.path.exists(devpath):
         return False, 'device %s does not exist' % devpath
 
@@ -617,18 +636,27 @@ def can_dev_be_reformatted(devpath):
     bmsg = ('partition %s (%s) on device %s was ntfs formatted' %
             (cand_part, cand_path, devpath))
     try:
-        file_count = util.mount_cb(cand_path, count_files)
+        file_count = util.mount_cb(cand_path, count_files, mtype="ntfs",
+                                   update_env_for_mount={'LANG': 'C'})
     except util.MountFailedError as e:
+        if "mount: unknown filesystem type 'ntfs'" in str(e):
+            return True, (bmsg + ' but this system cannot mount NTFS,'
+                          ' assuming there are no important files.'
+                          ' Formatting allowed.')
         return False, bmsg + ' but mount of %s failed: %s' % (cand_part, e)
 
     if file_count != 0:
+        LOG.warning("it looks like you're using NTFS on the ephemeral disk, "
+                    'to ensure that filesystem does not get wiped, set '
+                    '%s.%s in config', '.'.join(DS_CFG_PATH),
+                    DS_CFG_KEY_PRESERVE_NTFS)
         return False, bmsg + ' but had %d files on it.' % file_count
 
     return True, bmsg + ' and had no important files. Safe for reformatting.'
 
 
 def address_ephemeral_resize(devpath=RESOURCE_DISK_PATH, maxwait=120,
-                             is_new_instance=False):
+                             is_new_instance=False, preserve_ntfs=False):
     # wait for ephemeral disk to come up
     naplen = .2
     missing = util.wait_for_files([devpath], maxwait=maxwait, naplen=naplen,
@@ -644,7 +672,7 @@ def address_ephemeral_resize(devpath=RESOURCE_DISK_PATH, maxwait=120,
     if is_new_instance:
         result, msg = (True, "First instance boot.")
     else:
-        result, msg = can_dev_be_reformatted(devpath)
+        result, msg = can_dev_be_reformatted(devpath, preserve_ntfs)
 
     LOG.debug("reformattable=%s: %s", result, msg)
     if not result:
@@ -958,6 +986,18 @@ def _check_freebsd_cdrom(cdrom_dev):
     return False
 
 
+def _get_random_seed():
+    """Return content of the random seed file if available, else None."""
+    # azure / hyper-v provides random data here
+    # TODO. find the seed on FreeBSD platform
+    # now update ds_cfg to reflect contents passed in config
+    if util.is_FreeBSD():
+        return None
+    return util.load_file("/sys/firmware/acpi/tables/OEM0",
+                          quiet=True, decode=False)
+
+
 def list_possible_azure_ds_devs():
     devlist = []
     if util.is_FreeBSD():
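
Reviewer note: a sketch of the new never_destroy_ntfs behavior. With the flag set, can_dev_be_reformatted() short-circuits before any partition or mount checks; the device path below is illustrative, and the equivalent user configuration would live under datasource/Azure in cloud config:

    from cloudinit.sources.DataSourceAzure import can_dev_be_reformatted

    # Illustrative cloud config equivalent:
    #   datasource:
    #     Azure:
    #       never_destroy_ntfs: true
    ok, msg = can_dev_be_reformatted('/dev/sdb1', preserve_ntfs=True)
    assert ok is False  # msg: "config says to never destroy NTFS ..."
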
diff --git a/cloudinit/sources/DataSourceCloudStack.py b/cloudinit/sources/DataSourceCloudStack.py
index 0df545f..d4b758f 100644
--- a/cloudinit/sources/DataSourceCloudStack.py
+++ b/cloudinit/sources/DataSourceCloudStack.py
@@ -68,6 +68,10 @@ class DataSourceCloudStack(sources.DataSource):
 
     dsname = 'CloudStack'
 
+    # Setup read_url parameters per get_url_params.
+    url_max_wait = 120
+    url_timeout = 50
+
     def __init__(self, sys_cfg, distro, paths):
         sources.DataSource.__init__(self, sys_cfg, distro, paths)
         self.seed_dir = os.path.join(paths.seed_dir, 'cs')
@@ -80,33 +84,18 @@ class DataSourceCloudStack(sources.DataSource):
         self.metadata_address = "http://%s/" % (self.vr_addr,)
         self.cfg = {}
 
-    def _get_url_settings(self):
-        mcfg = self.ds_cfg
-        max_wait = 120
-        try:
-            max_wait = int(mcfg.get("max_wait", max_wait))
-        except Exception:
-            util.logexc(LOG, "Failed to get max wait. using %s", max_wait)
+    def wait_for_metadata_service(self):
+        url_params = self.get_url_params()
 
-        if max_wait == 0:
+        if url_params.max_wait_seconds <= 0:
             return False
 
-        timeout = 50
-        try:
-            timeout = int(mcfg.get("timeout", timeout))
-        except Exception:
-            util.logexc(LOG, "Failed to get timeout, using %s", timeout)
-
-        return (max_wait, timeout)
-
-    def wait_for_metadata_service(self):
-        (max_wait, timeout) = self._get_url_settings()
-
         urls = [uhelp.combine_url(self.metadata_address,
                                   'latest/meta-data/instance-id')]
         start_time = time.time()
-        url = uhelp.wait_for_url(urls=urls, max_wait=max_wait,
-                                 timeout=timeout, status_cb=LOG.warn)
+        url = uhelp.wait_for_url(
+            urls=urls, max_wait=url_params.max_wait_seconds,
+            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
 
         if url:
             LOG.debug("Using metadata source: '%s'", url)
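
Reviewer note: the duplicated _get_url_settings() helper is dropped in favor of class attributes consumed by the shared DataSource.get_url_params() (added in the cloudinit/sources/__init__.py hunk below); 'max_wait' and 'timeout' keys under the datasource config still override them. A quick check of the new defaults:

    from cloudinit.sources.DataSourceCloudStack import DataSourceCloudStack

    assert (DataSourceCloudStack.url_max_wait,
            DataSourceCloudStack.url_timeout) == (120, 50)
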
diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py
index c7b5fe5..4cb2897 100644
--- a/cloudinit/sources/DataSourceConfigDrive.py
+++ b/cloudinit/sources/DataSourceConfigDrive.py
@@ -43,7 +43,7 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
         self.version = None
         self.ec2_metadata = None
         self._network_config = None
-        self.network_json = None
+        self.network_json = sources.UNSET
         self.network_eni = None
         self.known_macs = None
         self.files = {}
@@ -69,7 +69,8 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
                 util.logexc(LOG, "Failed reading config drive from %s", sdir)
 
         if not found:
-            for dev in find_candidate_devs():
+            dslist = self.sys_cfg.get('datasource_list')
+            for dev in find_candidate_devs(dslist=dslist):
                 try:
                     # Set mtype if freebsd and turn off sync
                     if dev.startswith("/dev/cd"):
@@ -148,7 +149,7 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
     @property
     def network_config(self):
         if self._network_config is None:
-            if self.network_json is not None:
+            if self.network_json not in (None, sources.UNSET):
                 LOG.debug("network config provided via network_json")
                 self._network_config = openstack.convert_net_json(
                     self.network_json, known_macs=self.known_macs)
@@ -211,7 +212,7 @@ def write_injected_files(files):
                 util.logexc(LOG, "Failed writing file: %s", filename)
 
 
-def find_candidate_devs(probe_optical=True):
+def find_candidate_devs(probe_optical=True, dslist=None):
     """Return a list of devices that may contain the config drive.
 
     The returned list is sorted by search order where the first item has
@@ -227,6 +228,9 @@ def find_candidate_devs(probe_optical=True):
         * either vfat or iso9660 formated
         * labeled with 'config-2' or 'CONFIG-2'
     """
+    if dslist is None:
+        dslist = []
+
     # query optical drive to get it in blkid cache for 2.6 kernels
     if probe_optical:
         for device in OPTICAL_DEVICES:
@@ -257,7 +261,8 @@ def find_candidate_devs(probe_optical=True):
     devices = [d for d in candidates
                if d in by_label or not util.is_partition(d)]
 
-    if devices:
+    LOG.debug("devices=%s dslist=%s", devices, dslist)
+    if devices and "IBMCloud" in dslist:
         # IBMCloud uses config-2 label, but limited to a single UUID.
         ibm_platform, ibm_path = get_ibm_platform()
         if ibm_path in devices:
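
Reviewer note: find_candidate_devs() now only runs the IBMCloud config-2 filtering when 'IBMCloud' appears in the configured datasource_list. A hedged usage sketch (device discovery shells out to blkid, so results are host-dependent):

    from cloudinit.sources.DataSourceConfigDrive import find_candidate_devs

    # With the default dslist=None the IBMCloud branch is skipped entirely.
    devs = find_candidate_devs(probe_optical=False, dslist=['IBMCloud'])
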
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index 21e9ef8..968ab3f 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -27,8 +27,6 @@ SKIP_METADATA_URL_CODES = frozenset([uhelp.NOT_FOUND])
 STRICT_ID_PATH = ("datasource", "Ec2", "strict_id")
 STRICT_ID_DEFAULT = "warn"
 
-_unset = "_unset"
-
 
 class Platforms(object):
     # TODO Rename and move to cloudinit.cloud.CloudNames
@@ -59,15 +57,16 @@ class DataSourceEc2(sources.DataSource):
     # for extended metadata content. IPv6 support comes in 2016-09-02
     extended_metadata_versions = ['2016-09-02']
 
+    # Setup read_url parameters per get_url_params.
+    url_max_wait = 120
+    url_timeout = 50
+
     _cloud_platform = None
 
-    _network_config = _unset  # Used for caching calculated network config v1
+    _network_config = sources.UNSET  # Used to cache calculated network cfg v1
 
     # Whether we want to get network configuration from the metadata service.
-    get_network_metadata = False
-
-    # Track the discovered fallback nic for use in configuration generation.
-    _fallback_interface = None
+    perform_dhcp_setup = False
 
     def __init__(self, sys_cfg, distro, paths):
         super(DataSourceEc2, self).__init__(sys_cfg, distro, paths)
@@ -98,7 +97,7 @@ class DataSourceEc2(sources.DataSource):
         elif self.cloud_platform == Platforms.NO_EC2_METADATA:
             return False
 
-        if self.get_network_metadata:  # Setup networking in init-local stage.
+        if self.perform_dhcp_setup:  # Setup networking in init-local stage.
             if util.is_FreeBSD():
                 LOG.debug("FreeBSD doesn't support running dhclient with -sf")
                 return False
@@ -158,27 +157,11 @@ class DataSourceEc2(sources.DataSource):
         else:
             return self.metadata['instance-id']
 
-    def _get_url_settings(self):
-        mcfg = self.ds_cfg
-        max_wait = 120
-        try:
-            max_wait = int(mcfg.get("max_wait", max_wait))
-        except Exception:
-            util.logexc(LOG, "Failed to get max wait. using %s", max_wait)
-
-        timeout = 50
-        try:
-            timeout = max(0, int(mcfg.get("timeout", timeout)))
-        except Exception:
-            util.logexc(LOG, "Failed to get timeout, using %s", timeout)
-
-        return (max_wait, timeout)
-
     def wait_for_metadata_service(self):
         mcfg = self.ds_cfg
 
-        (max_wait, timeout) = self._get_url_settings()
-        if max_wait <= 0:
+        url_params = self.get_url_params()
+        if url_params.max_wait_seconds <= 0:
             return False
 
         # Remove addresses from the list that wont resolve.
@@ -205,7 +188,8 @@ class DataSourceEc2(sources.DataSource):
 
         start_time = time.time()
         url = uhelp.wait_for_url(
-            urls=urls, max_wait=max_wait, timeout=timeout, status_cb=LOG.warn)
+            urls=urls, max_wait=url_params.max_wait_seconds,
+            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
 
         if url:
             self.metadata_address = url2base[url]
@@ -310,11 +294,11 @@ class DataSourceEc2(sources.DataSource):
     @property
     def network_config(self):
         """Return a network config dict for rendering ENI or netplan files."""
-        if self._network_config != _unset:
+        if self._network_config != sources.UNSET:
             return self._network_config
 
         if self.metadata is None:
-            # this would happen if get_data hadn't been called. leave as _unset
+            # this would happen if get_data hadn't been called. leave as UNSET
             LOG.warning(
                 "Unexpected call to network_config when metadata is None.")
             return None
@@ -353,9 +337,7 @@ class DataSourceEc2(sources.DataSource):
                 self._fallback_interface = _legacy_fbnic
                 self.fallback_nic = None
             else:
-                self._fallback_interface = net.find_fallback_nic()
-                if self._fallback_interface is None:
-                    LOG.warning("Did not find a fallback interface on EC2.")
+                return super(DataSourceEc2, self).fallback_interface
         return self._fallback_interface
 
     def _crawl_metadata(self):
@@ -390,7 +372,7 @@ class DataSourceEc2Local(DataSourceEc2):
     metadata service. If the metadata service provides network configuration
     then render the network configuration for that instance based on metadata.
     """
-    get_network_metadata = True  # Get metadata network config if present
+    perform_dhcp_setup = True  # Use dhcp before querying metadata
 
     def get_data(self):
         supported_platforms = (Platforms.AWS,)
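
Reviewer note: get_network_metadata is renamed to perform_dhcp_setup to say what the flag actually controls; only the init-local subclass brings up ephemeral DHCP before crawling metadata:

    from cloudinit.sources.DataSourceEc2 import (
        DataSourceEc2, DataSourceEc2Local)

    assert DataSourceEc2.perform_dhcp_setup is False
    assert DataSourceEc2Local.perform_dhcp_setup is True
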
diff --git a/cloudinit/sources/DataSourceMAAS.py b/cloudinit/sources/DataSourceMAAS.py
index aa56add..bcb3854 100644
--- a/cloudinit/sources/DataSourceMAAS.py
+++ b/cloudinit/sources/DataSourceMAAS.py
@@ -198,7 +198,7 @@ def read_maas_seed_url(seed_url, read_file_or_url=None, timeout=None,
     If version is None, then <version>/ will not be used.
     """
     if read_file_or_url is None:
-        read_file_or_url = util.read_file_or_url
+        read_file_or_url = url_helper.read_file_or_url
 
     if seed_url.endswith("/"):
         seed_url = seed_url[:-1]
diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py
index 5d3a8dd..2daea59 100644
--- a/cloudinit/sources/DataSourceNoCloud.py
+++ b/cloudinit/sources/DataSourceNoCloud.py
@@ -78,7 +78,7 @@ class DataSourceNoCloud(sources.DataSource):
                 LOG.debug("Using seeded data from %s", path)
                 mydata = _merge_new_seed(mydata, seeded)
                 break
-            except ValueError as e:
+            except ValueError:
                 pass
 
         # If the datasource config had a 'seedfrom' entry, then that takes
@@ -117,7 +117,7 @@ class DataSourceNoCloud(sources.DataSource):
                     try:
                         seeded = util.mount_cb(dev, _pp2d_callback,
                                                pp2d_kwargs)
-                    except ValueError as e:
+                    except ValueError:
                         if dev in label_list:
                             LOG.warning("device %s with label=%s not a"
                                         "valid seed.", dev, label)
diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
index d4a4111..16c1078 100644
--- a/cloudinit/sources/DataSourceOpenNebula.py
+++ b/cloudinit/sources/DataSourceOpenNebula.py
@@ -378,7 +378,7 @@ def read_context_disk_dir(source_dir, asuser=None):
         if asuser is not None:
             try:
                 pwd.getpwnam(asuser)
-            except KeyError as e:
+            except KeyError:
                 raise BrokenContextDiskDir(
                     "configured user '{user}' does not exist".format(
                         user=asuser))
diff --git a/cloudinit/sources/DataSourceOpenStack.py b/cloudinit/sources/DataSourceOpenStack.py
index fb166ae..365af96 100644
--- a/cloudinit/sources/DataSourceOpenStack.py
+++ b/cloudinit/sources/DataSourceOpenStack.py
@@ -7,6 +7,7 @@
 import time
 
 from cloudinit import log as logging
+from cloudinit.net.dhcp import EphemeralDHCPv4, NoDHCPLeaseError
 from cloudinit import sources
 from cloudinit import url_helper
 from cloudinit import util
@@ -22,51 +23,37 @@ DEFAULT_METADATA = {
     "instance-id": DEFAULT_IID,
 }
 
+# OpenStack DMI constants
+DMI_PRODUCT_NOVA = 'OpenStack Nova'
+DMI_PRODUCT_COMPUTE = 'OpenStack Compute'
+VALID_DMI_PRODUCT_NAMES = [DMI_PRODUCT_NOVA, DMI_PRODUCT_COMPUTE]
+DMI_ASSET_TAG_OPENTELEKOM = 'OpenTelekomCloud'
+VALID_DMI_ASSET_TAGS = [DMI_ASSET_TAG_OPENTELEKOM]
+
 
 class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
 
     dsname = "OpenStack"
 
+    _network_config = sources.UNSET  # Used to cache calculated network cfg v1
+
+    # Whether we want to get network configuration from the metadata service.
+    perform_dhcp_setup = False
+
     def __init__(self, sys_cfg, distro, paths):
         super(DataSourceOpenStack, self).__init__(sys_cfg, distro, paths)
         self.metadata_address = None
         self.ssl_details = util.fetch_ssl_details(self.paths)
         self.version = None
         self.files = {}
-        self.ec2_metadata = None
+        self.ec2_metadata = sources.UNSET
+        self.network_json = sources.UNSET
 
     def __str__(self):
         root = sources.DataSource.__str__(self)
         mstr = "%s [%s,ver=%s]" % (root, self.dsmode, self.version)
         return mstr
 
-    def _get_url_settings(self):
-        # TODO(harlowja): this is shared with ec2 datasource, we should just
-        # move it to a shared location instead...
-        # Note: the defaults here are different though.
-
-        # max_wait < 0 indicates do not wait
-        max_wait = -1
-        timeout = 10
-        retries = 5
-
-        try:
-            max_wait = int(self.ds_cfg.get("max_wait", max_wait))
-        except Exception:
-            util.logexc(LOG, "Failed to get max wait. using %s", max_wait)
-
-        try:
-            timeout = max(0, int(self.ds_cfg.get("timeout", timeout)))
-        except Exception:
-            util.logexc(LOG, "Failed to get timeout, using %s", timeout)
-
-        try:
-            retries = int(self.ds_cfg.get("retries", retries))
-        except Exception:
-            util.logexc(LOG, "Failed to get retries. using %s", retries)
-
-        return (max_wait, timeout, retries)
-
     def wait_for_metadata_service(self):
         urls = self.ds_cfg.get("metadata_urls", [DEF_MD_URL])
         filtered = [x for x in urls if util.is_resolvable_url(x)]
@@ -86,10 +73,11 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
             md_urls.append(md_url)
             url2base[md_url] = url
 
-        (max_wait, timeout, _retries) = self._get_url_settings()
+        url_params = self.get_url_params()
         start_time = time.time()
-        avail_url = url_helper.wait_for_url(urls=md_urls, max_wait=max_wait,
-                                            timeout=timeout)
+        avail_url = url_helper.wait_for_url(
+            urls=md_urls, max_wait=url_params.max_wait_seconds,
+            timeout=url_params.timeout_seconds)
         if avail_url:
             LOG.debug("Using metadata source: '%s'", url2base[avail_url])
         else:
@@ -99,38 +87,66 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
         self.metadata_address = url2base.get(avail_url)
         return bool(avail_url)
 
-    def _get_data(self):
-        try:
-            if not self.wait_for_metadata_service():
-                return False
-        except IOError:
-            return False
+    def check_instance_id(self, sys_cfg):
+        # quickly (local check only) if self.instance_id is still valid
+        return sources.instance_id_matches_system_uuid(self.get_instance_id())
 
-        (_max_wait, timeout, retries) = self._get_url_settings()
+    @property
+    def network_config(self):
+        """Return a network config dict for rendering ENI or netplan files."""
+        if self._network_config != sources.UNSET:
+            return self._network_config
+
+        # RELEASE_BLOCKER: SRU to Xenial and Artful SRU should not provide
+        # network_config by default unless configured in /etc/cloud/cloud.cfg*.
+        # Patch Xenial and Artful before release to default to False.
+        if util.is_false(self.ds_cfg.get('apply_network_config', True)):
+            self._network_config = None
+            return self._network_config
+        if self.network_json == sources.UNSET:
+            # this would happen if get_data hadn't been called. leave as UNSET
+            LOG.warning(
+                'Unexpected call to network_config when network_json is None.')
+            return None
+
+        LOG.debug('network config provided via network_json')
+        self._network_config = openstack.convert_net_json(
+            self.network_json, known_macs=None)
+        return self._network_config
 
-        try:
-            results = util.log_time(LOG.debug,
-                                    'Crawl of openstack metadata service',
-                                    read_metadata_service,
-                                    args=[self.metadata_address],
-                                    kwargs={'ssl_details': self.ssl_details,
-                                            'retries': retries,
-                                            'timeout': timeout})
-        except openstack.NonReadable:
-            return False
-        except (openstack.BrokenMetadata, IOError):
-            util.logexc(LOG, "Broken metadata address %s",
-                        self.metadata_address)
+    def _get_data(self):
+        """Crawl metadata, parse and persist that data for this instance.
+
+        @return: True when metadata discovered indicates OpenStack datasource.
+            False when unable to contact metadata service or when metadata
+            format is invalid or disabled.
+        """
+        if not detect_openstack():
             return False
+        if self.perform_dhcp_setup:  # Setup networking in init-local stage.
+            try:
+                with EphemeralDHCPv4(self.fallback_interface):
+                    results = util.log_time(
+                        logfunc=LOG.debug, msg='Crawl of metadata service',
+                        func=self._crawl_metadata)
+            except (NoDHCPLeaseError, sources.InvalidMetaDataException) as e:
+                util.logexc(LOG, str(e))
+                return False
+        else:
+            try:
+                results = self._crawl_metadata()
+            except sources.InvalidMetaDataException as e:
+                util.logexc(LOG, str(e))
+                return False
 
         self.dsmode = self._determine_dsmode([results.get('dsmode')])
         if self.dsmode == sources.DSMODE_DISABLED:
             return False
-
         md = results.get('metadata', {})
         md = util.mergemanydict([md, DEFAULT_METADATA])
         self.metadata = md
         self.ec2_metadata = results.get('ec2-metadata')
+        self.network_json = results.get('networkdata')
         self.userdata_raw = results.get('userdata')
         self.version = results['version']
         self.files.update(results.get('files', {}))
@@ -145,9 +161,50 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
 
         return True
 
-    def check_instance_id(self, sys_cfg):
-        # quickly (local check only) if self.instance_id is still valid
-        return sources.instance_id_matches_system_uuid(self.get_instance_id())
+    def _crawl_metadata(self):
+        """Crawl metadata service when available.
+
+        @returns: Dictionary with all metadata discovered for this datasource.
+        @raise: InvalidMetaDataException on unreadable or broken
+            metadata.
+        """
+        try:
+            if not self.wait_for_metadata_service():
+                raise sources.InvalidMetaDataException(
+                    'No active metadata service found')
+        except IOError as e:
+            raise sources.InvalidMetaDataException(
+                'IOError contacting metadata service: {error}'.format(
+                    error=str(e)))
+
+        url_params = self.get_url_params()
+
+        try:
+            result = util.log_time(
+                LOG.debug, 'Crawl of openstack metadata service',
+                read_metadata_service, args=[self.metadata_address],
+                kwargs={'ssl_details': self.ssl_details,
+                        'retries': url_params.num_retries,
+                        'timeout': url_params.timeout_seconds})
+        except openstack.NonReadable as e:
+            raise sources.InvalidMetaDataException(str(e))
+        except (openstack.BrokenMetadata, IOError):
+            msg = 'Broken metadata address {addr}'.format(
+                addr=self.metadata_address)
+            raise sources.InvalidMetaDataException(msg)
+        return result
+
+
+class DataSourceOpenStackLocal(DataSourceOpenStack):
+    """Run in init-local using a dhcp discovery prior to metadata crawl.
+
+    In init-local, no network is available. This subclass sets up minimal
+    networking with dhclient on a viable nic so that it can talk to the
+    metadata service. If the metadata service provides network configuration
+    then render the network configuration for that instance based on metadata.
+    """
+
+    perform_dhcp_setup = True  # Get metadata network config if present
 
 
 def read_metadata_service(base_url, ssl_details=None,
@@ -157,8 +214,23 @@ def read_metadata_service(base_url, ssl_details=None,
     return reader.read_v2()
 
 
+def detect_openstack():
+    """Return True when a potential OpenStack platform is detected."""
+    if not util.is_x86():
+        return True  # Non-Intel cpus don't properly report dmi product names
+    product_name = util.read_dmi_data('system-product-name')
+    if product_name in VALID_DMI_PRODUCT_NAMES:
+        return True
+    elif util.read_dmi_data('chassis-asset-tag') in VALID_DMI_ASSET_TAGS:
+        return True
+    elif util.get_proc_env(1).get('product_name') == DMI_PRODUCT_NOVA:
+        return True
+    return False
+
+
 # Used to match classes to dependencies
 datasources = [
+    (DataSourceOpenStackLocal, (sources.DEP_FILESYSTEM,)),
     (DataSourceOpenStack, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
 ]
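
Reviewer note: detect_openstack() is the cheap gate that avoids the unneeded metadata probe on non-OpenStack platforms (LP: #1776701); non-x86 systems pass by default because their DMI data is unreliable. A sketch, with the apply_network_config opt-out shown as an illustrative config comment:

    from cloudinit.sources.DataSourceOpenStack import detect_openstack

    # Illustrative opt-out of metadata-provided network config:
    #   datasource:
    #     OpenStack:
    #       apply_network_config: false
    if detect_openstack():
        print('DMI/proc evidence of an OpenStack platform found')
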
 
diff --git a/cloudinit/sources/DataSourceSmartOS.py b/cloudinit/sources/DataSourceSmartOS.py
index 4ea00eb..f92e8b5 100644
--- a/cloudinit/sources/DataSourceSmartOS.py
+++ b/cloudinit/sources/DataSourceSmartOS.py
@@ -17,7 +17,7 @@
 #        of a serial console.
 #
 #   Certain behavior is defined by the DataDictionary
-#       http://us-east.manta.joyent.com/jmc/public/mdata/datadict.html
+#       https://eng.joyent.com/mdata/datadict.html
 #       Comments with "@datadictionary" are snippets of the definition
 
 import base64
@@ -165,9 +165,8 @@ class DataSourceSmartOS(sources.DataSource):
 
     dsname = "Joyent"
 
-    _unset = "_unset"
-    smartos_type = _unset
-    md_client = _unset
+    smartos_type = sources.UNSET
+    md_client = sources.UNSET
 
     def __init__(self, sys_cfg, distro, paths):
         sources.DataSource.__init__(self, sys_cfg, distro, paths)
@@ -189,12 +188,12 @@ class DataSourceSmartOS(sources.DataSource):
         return "%s [client=%s]" % (root, self.md_client)
 
     def _init(self):
-        if self.smartos_type == self._unset:
+        if self.smartos_type == sources.UNSET:
             self.smartos_type = get_smartos_environ()
             if self.smartos_type is None:
                 self.md_client = None
 
-        if self.md_client == self._unset:
+        if self.md_client == sources.UNSET:
             self.md_client = jmc_client_factory(
                 smartos_type=self.smartos_type,
                 metadata_sockfile=self.ds_cfg['metadata_sockfile'],
@@ -299,6 +298,7 @@ class DataSourceSmartOS(sources.DataSource):
         self.userdata_raw = ud
         self.vendordata_raw = md['vendor-data']
         self.network_data = md['network-data']
+        self.routes_data = md['routes']
 
         self._set_provisioned()
         return True
@@ -322,7 +322,8 @@ class DataSourceSmartOS(sources.DataSource):
                     convert_smartos_network_data(
                         network_data=self.network_data,
                         dns_servers=self.metadata['dns_servers'],
-                        dns_domain=self.metadata['dns_domain']))
+                        dns_domain=self.metadata['dns_domain'],
+                        routes=self.routes_data))
         return self._network_config
 
 
@@ -745,7 +746,7 @@ def get_smartos_environ(uname_version=None, product_name=None):
     # report 'BrandZ virtual linux' as the kernel version
     if uname_version is None:
         uname_version = uname[3]
-    if uname_version.lower() == 'brandz virtual linux':
+    if uname_version == 'BrandZ virtual linux':
         return SMARTOS_ENV_LX_BRAND
 
     if product_name is None:
@@ -753,7 +754,7 @@ def get_smartos_environ(uname_version=None, product_name=None):
     else:
         system_type = product_name
 
-    if system_type and 'smartdc' in system_type.lower():
+    if system_type and system_type.startswith('SmartDC'):
         return SMARTOS_ENV_KVM
 
     return None
@@ -761,7 +762,8 @@ def get_smartos_environ(uname_version=None, product_name=None):
 
 # Convert SMARTOS 'sdc:nics' data to network_config yaml
 def convert_smartos_network_data(network_data=None,
-                                 dns_servers=None, dns_domain=None):
+                                 dns_servers=None, dns_domain=None,
+                                 routes=None):
     """Return a dictionary of network_config by parsing provided
        SMARTOS sdc:nics configuration data
 
@@ -779,6 +781,10 @@ def convert_smartos_network_data(network_data=None,
     keys are related to ip configuration.  For each ip in the 'ips' list
     we create a subnet entry under 'subnets' pairing the ip to a one in
     the 'gateways' list.
+
+    Each route in sdc:routes is mapped to a route on each interface.
+    The sdc:routes properties 'dst' and 'gateway' map to 'network' and
+    'gateway'.  The 'linklocal' sdc:routes property is ignored.
     """
 
     valid_keys = {
@@ -801,6 +807,10 @@ def convert_smartos_network_data(network_data=None,
             'scope',
             'type',
         ],
+        'route': [
+            'network',
+            'gateway',
+        ],
     }
 
     if dns_servers:
@@ -815,6 +825,9 @@ def convert_smartos_network_data(network_data=None,
     else:
         dns_domain = []
 
+    if not routes:
+        routes = []
+
     def is_valid_ipv4(addr):
         return '.' in addr
 
@@ -841,6 +854,7 @@ def convert_smartos_network_data(network_data=None,
             if ip == "dhcp":
                 subnet = {'type': 'dhcp4'}
             else:
+                routeents = []
                 subnet = dict((k, v) for k, v in nic.items()
                               if k in valid_keys['subnet'])
                 subnet.update({
@@ -862,6 +876,25 @@ def convert_smartos_network_data(network_data=None,
                             pgws[proto]['gw'] = gateways[0]
                             subnet.update({'gateway': pgws[proto]['gw']})
 
+                for route in routes:
+                    rcfg = dict((k, v) for k, v in route.items()
+                                if k in valid_keys['route'])
+                    # Linux uses the value of 'gateway' to determine
+                    # automatically if the route is a forward/next-hop
+                    # (non-local IP for gateway) or an interface/resolver
+                    # (local IP for gateway).  So we can ignore the
+                    # 'interface' attribute of sdc:routes, because SDC
+                    # guarantees that the gateway is a local IP for
+                    # "interface=true".
+                    #
+                    # Eventually we should be smart and compare "gateway"
+                    # to see if it's in the prefix.  We can then smartly
+                    # add or not-add this route.  But for now,
+                    # when in doubt, use brute force! Routes for everyone!
+                    rcfg.update({'network': route['dst']})
+                    routeents.append(rcfg)
+                    subnet.update({'routes': routeents})
+
             subnets.append(subnet)
         cfg.update({'subnets': subnets})
         config.append(cfg)
@@ -905,12 +938,14 @@ if __name__ == "__main__":
             keyname = SMARTOS_ATTRIB_JSON[key]
             data[key] = client.get_json(keyname)
         elif key == "network_config":
-            for depkey in ('network-data', 'dns_servers', 'dns_domain'):
+            for depkey in ('network-data', 'dns_servers', 'dns_domain',
+                           'routes'):
                 load_key(client, depkey, data)
             data[key] = convert_smartos_network_data(
                 network_data=data['network-data'],
                 dns_servers=data['dns_servers'],
-                dns_domain=data['dns_domain'])
+                dns_domain=data['dns_domain'],
+                routes=data['routes'])
         else:
             if key in SMARTOS_ATTRIB_MAP:
                 keyname, strip = SMARTOS_ATTRIB_MAP[key]
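
Reviewer note: a sketch of the sdc:routes mapping described in the docstring above. The input shapes are illustrative (values made up), following the 'dst'/'gateway' keys this diff consumes:

    from cloudinit.sources.DataSourceSmartOS import (
        convert_smartos_network_data)

    nics = [{'interface': 'net0', 'mac': '00:0c:29:aa:bb:cc',
             'ips': ['10.0.0.5/24'], 'gateways': ['10.0.0.1']}]
    routes = [{'dst': '192.168.0.0/16', 'gateway': '10.0.0.254'}]
    cfg = convert_smartos_network_data(network_data=nics, routes=routes)
    # Each static subnet should now carry:
    #   'routes': [{'network': '192.168.0.0/16', 'gateway': '10.0.0.254'}]
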
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index df0b374..90d7457 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -9,6 +9,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import abc
+from collections import namedtuple
 import copy
 import json
 import os
@@ -17,6 +18,7 @@ import six
 from cloudinit.atomic_helper import write_json
 from cloudinit import importer
 from cloudinit import log as logging
+from cloudinit import net
 from cloudinit import type_utils
 from cloudinit import user_data as ud
 from cloudinit import util
@@ -41,6 +43,8 @@ INSTANCE_JSON_FILE = 'instance-data.json'
 # Key which can be provide a cloud's official product name to cloud-init
 METADATA_CLOUD_NAME_KEY = 'cloud-name'
 
+UNSET = "_unset"
+
 LOG = logging.getLogger(__name__)
 
 
@@ -48,6 +52,11 @@ class DataSourceNotFoundException(Exception):
     pass
 
 
+class InvalidMetaDataException(Exception):
+    """Raised when metadata is broken, unavailable or disabled."""
+    pass
+
+
 def process_base64_metadata(metadata, key_path=''):
     """Strip ci-b64 prefix and return metadata with base64-encoded-keys set."""
     md_copy = copy.deepcopy(metadata)
@@ -68,6 +77,10 @@ def process_base64_metadata(metadata, key_path=''):
     return md_copy
 
 
+URLParams = namedtuple(
+    'URLParams', ['max_wait_seconds', 'timeout_seconds', 'num_retries'])
+
+
 @six.add_metaclass(abc.ABCMeta)
 class DataSource(object):
 
@@ -81,6 +94,14 @@ class DataSource(object):
     # Cached cloud_name as determined by _get_cloud_name
     _cloud_name = None
 
+    # Track the discovered fallback nic for use in configuration generation.
+    _fallback_interface = None
+
+    # read_url_params
+    url_max_wait = -1   # max_wait < 0 means do not wait
+    url_timeout = 10    # timeout for each metadata url read attempt
+    url_retries = 5     # number of times to retry url upon 404
+
     def __init__(self, sys_cfg, distro, paths, ud_proc=None):
         self.sys_cfg = sys_cfg
         self.distro = distro
@@ -128,6 +149,14 @@ class DataSource(object):
                 'meta-data': self.metadata,
                 'user-data': self.get_userdata_raw(),
                 'vendor-data': self.get_vendordata_raw()}}
+        if hasattr(self, 'network_json'):
+            network_json = getattr(self, 'network_json')
+            if network_json != UNSET:
+                instance_data['ds']['network_json'] = network_json
+        if hasattr(self, 'ec2_metadata'):
+            ec2_metadata = getattr(self, 'ec2_metadata')
+            if ec2_metadata != UNSET:
+                instance_data['ds']['ec2_metadata'] = ec2_metadata
         instance_data.update(
             self._get_standardized_metadata())
         try:
@@ -149,6 +178,42 @@ class DataSource(object):
             'Subclasses of DataSource must implement _get_data which'
             ' sets self.metadata, vendordata_raw and userdata_raw.')
 
+    def get_url_params(self):
+        """Return the Datasource's preferred url_read parameters.
+
+        Subclasses may override url_max_wait, url_timeout, url_retries.
+
+        @return: A URLParams object with max_wait_seconds, timeout_seconds,
+            num_retries.
+        """
+        max_wait = self.url_max_wait
+        try:
+            max_wait = int(self.ds_cfg.get("max_wait", self.url_max_wait))
+        except ValueError:
+            util.logexc(
+                LOG, "Config max_wait '%s' is not an int, using default '%s'",
+                self.ds_cfg.get("max_wait"), max_wait)
+
+        timeout = self.url_timeout
+        try:
+            timeout = max(
+                0, int(self.ds_cfg.get("timeout", self.url_timeout)))
+        except ValueError:
+            timeout = self.url_timeout
+            util.logexc(
+                LOG, "Config timeout '%s' is not an int, using default '%s'",
+                self.ds_cfg.get('timeout'), timeout)
+
+        retries = self.url_retries
+        try:
+            retries = int(self.ds_cfg.get("retries", self.url_retries))
+        except Exception:
+            util.logexc(
+                LOG, "Config retries '%s' is not an int, using default '%s'",
+                self.ds_cfg.get('retries'), retries)
+
+        return URLParams(max_wait, timeout, retries)
+
     def get_userdata(self, apply_filter=False):
         if self.userdata is None:
             self.userdata = self.ud_proc.process(self.get_userdata_raw())
@@ -162,6 +227,17 @@ class DataSource(object):
         return self.vendordata
 
     @property
+    def fallback_interface(self):
+        """Determine the network interface used during local network config."""
+        if self._fallback_interface is None:
+            self._fallback_interface = net.find_fallback_nic()
+            if self._fallback_interface is None:
+                LOG.warning(
+                    "Did not find a fallback interface on %s.",
+                    self.cloud_name)
+        return self._fallback_interface
+
+    @property
     def cloud_name(self):
         """Return lowercase cloud name as determined by the datasource.
 
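Reviewer note: a sketch of how a datasource opts into the consolidated URL settings; the subclass below is hypothetical, and string config values are int()-coerced with per-key fallback to the class defaults:

    from cloudinit.sources import DataSource

    class DataSourceExample(DataSource):  # hypothetical
        dsname = 'Example'
        url_max_wait = 120
        url_timeout = 50
        # url_retries keeps the base default of 5

    # get_url_params() on an instance would return
    # URLParams(max_wait_seconds=120, timeout_seconds=50, num_retries=5)
    # unless ds_cfg provides 'max_wait'/'timeout'/'retries' overrides.
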
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index 90c12df..e5696b1 100644
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -14,6 +14,7 @@ from cloudinit import temp_utils
 from contextlib import contextmanager
 from xml.etree import ElementTree
 
+from cloudinit import url_helper
 from cloudinit import util
 
 LOG = logging.getLogger(__name__)
@@ -55,14 +56,14 @@ class AzureEndpointHttpClient(object):
         if secure:
             headers = self.headers.copy()
             headers.update(self.extra_secure_headers)
-        return util.read_file_or_url(url, headers=headers)
+        return url_helper.read_file_or_url(url, headers=headers)
 
     def post(self, url, data=None, extra_headers=None):
         headers = self.headers
         if extra_headers is not None:
             headers = self.headers.copy()
             headers.update(extra_headers)
-        return util.read_file_or_url(url, data=data, headers=headers)
+        return url_helper.read_file_or_url(url, data=data, headers=headers)
 
 
 class GoalState(object):
diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py
index 452e921..d5bc98a 100644
--- a/cloudinit/sources/tests/test_init.py
+++ b/cloudinit/sources/tests/test_init.py
@@ -17,6 +17,7 @@ from cloudinit import util
 class DataSourceTestSubclassNet(DataSource):
 
     dsname = 'MyTestSubclass'
+    url_max_wait = 55
 
     def __init__(self, sys_cfg, distro, paths, custom_userdata=None):
         super(DataSourceTestSubclassNet, self).__init__(
@@ -70,8 +71,7 @@ class TestDataSource(CiTestCase):
         """Init uses DataSource.dsname for sourcing ds_cfg."""
         sys_cfg = {'datasource': {'MyTestSubclass': {'key2': False}}}
         distro = 'distrotest'  # generally should be a Distro object
-        paths = Paths({})
-        datasource = DataSourceTestSubclassNet(sys_cfg, distro, paths)
+        datasource = DataSourceTestSubclassNet(sys_cfg, distro, self.paths)
         self.assertEqual({'key2': False}, datasource.ds_cfg)
 
     def test_str_is_classname(self):
@@ -81,6 +81,91 @@ class TestDataSource(CiTestCase):
             'DataSourceTestSubclassNet',
             str(DataSourceTestSubclassNet('', '', self.paths)))
 
+    def test_datasource_get_url_params_defaults(self):
+        """get_url_params returns default url settings for the datasource."""
+        params = self.datasource.get_url_params()
+        self.assertEqual(params.max_wait_seconds, self.datasource.url_max_wait)
+        self.assertEqual(params.timeout_seconds, self.datasource.url_timeout)
+        self.assertEqual(params.num_retries, self.datasource.url_retries)
+
+    def test_datasource_get_url_params_subclassed(self):
+        """Subclasses can override get_url_params defaults."""
+        sys_cfg = {'datasource': {'MyTestSubclass': {'key2': False}}}
+        distro = 'distrotest'  # generally should be a Distro object
+        datasource = DataSourceTestSubclassNet(sys_cfg, distro, self.paths)
+        expected = (datasource.url_max_wait, datasource.url_timeout,
+                    datasource.url_retries)
+        url_params = datasource.get_url_params()
+        self.assertNotEqual(self.datasource.get_url_params(), url_params)
+        self.assertEqual(expected, url_params)
+
+    def test_datasource_get_url_params_ds_config_override(self):
+        """Datasource configuration options can override url param defaults."""
+        sys_cfg = {
+            'datasource': {
+                'MyTestSubclass': {
+                    'max_wait': '1', 'timeout': '2', 'retries': '3'}}}
+        datasource = DataSourceTestSubclassNet(
+            sys_cfg, self.distro, self.paths)
+        expected = (1, 2, 3)
+        url_params = datasource.get_url_params()
+        self.assertNotEqual(
+            (datasource.url_max_wait, datasource.url_timeout,
+             datasource.url_retries),
+            url_params)
+        self.assertEqual(expected, url_params)
+
+    def test_datasource_get_url_params_is_zero_or_greater(self):
+        """get_url_params ignores timeouts with a value below 0."""
+        # Set an override that is below 0 which gets ignored.
+        sys_cfg = {'datasource': {'_undef': {'timeout': '-1'}}}
+        datasource = DataSource(sys_cfg, self.distro, self.paths)
+        (_max_wait, timeout, _retries) = datasource.get_url_params()
+        self.assertEqual(0, timeout)
+
+    def test_datasource_get_url_uses_defaults_on_errors(self):
+        """On invalid config values for url_params, defaults are used."""
+        # All invalid values should be logged
+        sys_cfg = {'datasource': {
+            '_undef': {
+                'max_wait': 'nope', 'timeout': 'bug', 'retries': 'nonint'}}}
+        datasource = DataSource(sys_cfg, self.distro, self.paths)
+        url_params = datasource.get_url_params()
+        expected = (datasource.url_max_wait, datasource.url_timeout,
+                    datasource.url_retries)
+        self.assertEqual(expected, url_params)
+        logs = self.logs.getvalue()
+        expected_logs = [
+            "Config max_wait 'nope' is not an int, using default '-1'",
+            "Config timeout 'bug' is not an int, using default '10'",
+            "Config retries 'nonint' is not an int, using default '5'",
+        ]
+        for log in expected_logs:
+            self.assertIn(log, logs)
+
+    @mock.patch('cloudinit.sources.net.find_fallback_nic')
+    def test_fallback_interface_is_discovered(self, m_get_fallback_nic):
+        """The fallback_interface is discovered via find_fallback_nic."""
+        m_get_fallback_nic.return_value = 'nic9'
+        self.assertEqual('nic9', self.datasource.fallback_interface)
+
+    @mock.patch('cloudinit.sources.net.find_fallback_nic')
+    def test_fallback_interface_logs_undiscovered(self, m_get_fallback_nic):
+        """Log a warning when fallback_interface cannot discover the nic."""
+        self.datasource._cloud_name = 'MySupahCloud'
+        m_get_fallback_nic.return_value = None  # Couldn't discover nic
+        self.assertIsNone(self.datasource.fallback_interface)
+        self.assertEqual(
+            'WARNING: Did not find a fallback interface on MySupahCloud.\n',
+            self.logs.getvalue())
+
+    @mock.patch('cloudinit.sources.net.find_fallback_nic')
+    def test_wb_fallback_interface_is_cached(self, m_get_fallback_nic):
+        """The fallback_interface is cached and won't be rediscovered."""
+        self.datasource._fallback_interface = 'nic10'
+        self.assertEqual('nic10', self.datasource.fallback_interface)
+        m_get_fallback_nic.assert_not_called()
+
     def test__get_data_unimplemented(self):
         """Raise an error when _get_data is not implemented."""
         with self.assertRaises(NotImplementedError) as context_manager:
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index bc4ebc8..286607b 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -362,16 +362,22 @@ class Init(object):
         self._store_vendordata()
 
     def setup_datasource(self):
-        if self.datasource is None:
-            raise RuntimeError("Datasource is None, cannot setup.")
-        self.datasource.setup(is_new_instance=self.is_new_instance())
+        with events.ReportEventStack("setup-datasource",
+                                     "setting up datasource",
+                                     parent=self.reporter):
+            if self.datasource is None:
+                raise RuntimeError("Datasource is None, cannot setup.")
+            self.datasource.setup(is_new_instance=self.is_new_instance())
 
     def activate_datasource(self):
-        if self.datasource is None:
-            raise RuntimeError("Datasource is None, cannot activate.")
-        self.datasource.activate(cfg=self.cfg,
-                                 is_new_instance=self.is_new_instance())
-        self._write_to_cache()
+        with events.ReportEventStack("activate-datasource",
+                                     "activating datasource",
+                                     parent=self.reporter):
+            if self.datasource is None:
+                raise RuntimeError("Datasource is None, cannot activate.")
+            self.datasource.activate(cfg=self.cfg,
+                                     is_new_instance=self.is_new_instance())
+            self._write_to_cache()
 
     def _store_userdata(self):
         raw_ud = self.datasource.get_userdata_raw()
@@ -691,7 +697,9 @@ class Modules(object):
         module_list = []
         if name not in self.cfg:
             return module_list
-        cfg_mods = self.cfg[name]
+        cfg_mods = self.cfg.get(name)
+        if not cfg_mods:
+            return module_list
         # Create 'module_list', an array of hashes
         # Where hash['mod'] = module name
         #       hash['freq'] = frequency
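
Reviewer note: the stages change guards against explicitly empty module stages (LP: #1770462). A minimal sketch of the config shape that used to traceback, mirroring the new cfg.get() guard:

    # e.g. /etc/cloud/cloud.cfg containing a bare "cloud_init_modules:"
    cfg = {'cloud_init_modules': None}
    cfg_mods = cfg.get('cloud_init_modules')
    if not cfg_mods:
        module_list = []  # empty stage: run nothing instead of raising
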
diff --git a/cloudinit/tests/helpers.py b/cloudinit/tests/helpers.py
index 117a9cf..5bfe7fa 100644
--- a/cloudinit/tests/helpers.py
+++ b/cloudinit/tests/helpers.py
@@ -3,6 +3,7 @@
 from __future__ import print_function
 
 import functools
+import httpretty
 import logging
 import os
 import shutil
@@ -111,12 +112,12 @@ class TestCase(unittest2.TestCase):
         super(TestCase, self).setUp()
         self.reset_global_state()
 
-    def add_patch(self, target, attr, **kwargs):
+    def add_patch(self, target, attr, *args, **kwargs):
         """Patches specified target object and sets it as attr on test
         instance also schedules cleanup"""
         if 'autospec' not in kwargs:
             kwargs['autospec'] = True
-        m = mock.patch(target, **kwargs)
+        m = mock.patch(target, *args, **kwargs)
         p = m.start()
         self.addCleanup(m.stop)
         setattr(self, attr, p)
@@ -303,14 +304,21 @@ class FilesystemMockingTestCase(ResourceUsingTestCase):
 class HttprettyTestCase(CiTestCase):
     # necessary as http_proxy gets in the way of httpretty
     # https://github.com/gabrielfalcao/HTTPretty/issues/122
+    # Also make sure that allow_net_connect is set to False, and that
+    # reset and enable/disable are handled in setUp/tearDown.
 
     def setUp(self):
         self.restore_proxy = os.environ.get('http_proxy')
         if self.restore_proxy is not None:
             del os.environ['http_proxy']
         super(HttprettyTestCase, self).setUp()
+        httpretty.HTTPretty.allow_net_connect = False
+        httpretty.reset()
+        httpretty.enable()
 
     def tearDown(self):
+        httpretty.disable()
+        httpretty.reset()
         if self.restore_proxy:
             os.environ['http_proxy'] = self.restore_proxy
         super(HttprettyTestCase, self).tearDown()
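
Reviewer note: HttprettyTestCase now fences off real network access for every test; a hypothetical subclass to show the effect:

    import httpretty
    from cloudinit.tests.helpers import HttprettyTestCase

    class TestExample(HttprettyTestCase):  # hypothetical
        def test_only_registered_urls_answer(self):
            httpretty.register_uri(
                httpretty.GET, 'http://169.254.169.254/x', body='ok')
            # Anything unregistered now fails fast: setUp() sets
            # HTTPretty.allow_net_connect = False and enables httpretty.
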
diff --git a/cloudinit/tests/test_netinfo.py b/cloudinit/tests/test_netinfo.py
index 2537c1c..d76e768 100644
--- a/cloudinit/tests/test_netinfo.py
+++ b/cloudinit/tests/test_netinfo.py
@@ -4,7 +4,7 @@
 
 from copy import copy
 
-from cloudinit.netinfo import netdev_pformat, route_pformat
+from cloudinit.netinfo import netdev_info, netdev_pformat, route_pformat
 from cloudinit.tests.helpers import CiTestCase, mock, readResource
 
 
@@ -73,6 +73,51 @@ class TestNetInfo(CiTestCase):
 
     @mock.patch('cloudinit.netinfo.util.which')
     @mock.patch('cloudinit.netinfo.util.subp')
+    def test_netdev_info_nettools_down(self, m_subp, m_which):
+        """test netdev_info using nettools and down interfaces."""
+        m_subp.return_value = (
+            readResource("netinfo/new-ifconfig-output-down"), "")
+        m_which.side_effect = lambda x: x if x == 'ifconfig' else None
+        self.assertEqual(
+            {'eth0': {'ipv4': [], 'ipv6': [],
+                      'hwaddr': '00:16:3e:de:51:a6', 'up': False},
+             'lo': {'ipv4': [{'ip': '127.0.0.1', 'mask': '255.0.0.0'}],
+                    'ipv6': [{'ip': '::1/128', 'scope6': 'host'}],
+                    'hwaddr': '.', 'up': True}},
+            netdev_info("."))
+
+    @mock.patch('cloudinit.netinfo.util.which')
+    @mock.patch('cloudinit.netinfo.util.subp')
+    def test_netdev_info_iproute_down(self, m_subp, m_which):
+        """Test netdev_info with ip and down interfaces."""
+        m_subp.return_value = (
+            readResource("netinfo/sample-ipaddrshow-output-down"), "")
+        m_which.side_effect = lambda x: x if x == 'ip' else None
+        self.assertEqual(
+            {'lo': {'ipv4': [{'ip': '127.0.0.1', 'bcast': '.',
+                              'mask': '255.0.0.0', 'scope': 'host'}],
+                    'ipv6': [{'ip': '::1/128', 'scope6': 'host'}],
+                    'hwaddr': '.', 'up': True},
+             'eth0': {'ipv4': [], 'ipv6': [],
+                      'hwaddr': '00:16:3e:de:51:a6', 'up': False}},
+            netdev_info("."))
+
+    @mock.patch('cloudinit.netinfo.netdev_info')
+    def test_netdev_pformat_with_down(self, m_netdev_info):
+        """test netdev_pformat when netdev_info returns 'down' interfaces."""
+        m_netdev_info.return_value = (
+            {'lo': {'ipv4': [{'ip': '127.0.0.1', 'mask': '255.0.0.0',
+                              'scope': 'host'}],
+                    'ipv6': [{'ip': '::1/128', 'scope6': 'host'}],
+                    'hwaddr': '.', 'up': True},
+             'eth0': {'ipv4': [], 'ipv6': [],
+                      'hwaddr': '00:16:3e:de:51:a6', 'up': False}})
+        self.assertEqual(
+            readResource("netinfo/netdev-formatted-output-down"),
+            netdev_pformat())
+
+    @mock.patch('cloudinit.netinfo.util.which')
+    @mock.patch('cloudinit.netinfo.util.subp')
     def test_route_nettools_pformat(self, m_subp, m_which):
         """route_pformat properly rendering nettools route info."""
 
diff --git a/cloudinit/tests/test_url_helper.py b/cloudinit/tests/test_url_helper.py
index b778a3a..113249d 100644
--- a/cloudinit/tests/test_url_helper.py
+++ b/cloudinit/tests/test_url_helper.py
@@ -1,7 +1,10 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.url_helper import oauth_headers
+from cloudinit.url_helper import oauth_headers, read_file_or_url
 from cloudinit.tests.helpers import CiTestCase, mock, skipIf
+from cloudinit import util
+
+import httpretty
 
 
 try:
@@ -38,3 +41,26 @@ class TestOAuthHeaders(CiTestCase):
             'url', 'consumer_key', 'token_key', 'token_secret',
             'consumer_secret')
         self.assertEqual('url', return_value)
+
+
+class TestReadFileOrUrl(CiTestCase):
+    def test_read_file_or_url_str_from_file(self):
+        """Test that str(result.contents) on file is text version of contents.
+        It should not be "b'data'", but just "'data'" """
+        tmpf = self.tmp_path("myfile1")
+        data = b'This is my file content\n'
+        util.write_file(tmpf, data, omode="wb")
+        result = read_file_or_url("file://%s" % tmpf)
+        self.assertEqual(result.contents, data)
+        self.assertEqual(str(result), data.decode('utf-8'))
+
+    @httpretty.activate
+    def test_read_file_or_url_str_from_url(self):
+        """Test that str(result.contents) on url is text version of contents.
+        It should not be "b'data'", but just "'data'" """
+        url = 'http://hostname/path'
+        data = b'This is my url content\n'
+        httpretty.register_uri(httpretty.GET, url, data)
+        result = read_file_or_url(url)
+        self.assertEqual(result.contents, data)
+        self.assertEqual(str(result), data.decode('utf-8'))
diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
index 3c05a43..17853fc 100644
--- a/cloudinit/tests/test_util.py
+++ b/cloudinit/tests/test_util.py
@@ -3,11 +3,12 @@
 """Tests for cloudinit.util"""
 
 import logging
-from textwrap import dedent
+import platform
 
 import cloudinit.util as util
 
 from cloudinit.tests.helpers import CiTestCase, mock
+from textwrap import dedent
 
 LOG = logging.getLogger(__name__)
 
@@ -16,6 +17,29 @@ MOUNT_INFO = [
     '153 68 254:0 / /home rw,relatime shared:101 - xfs /dev/sda2 rw,attr2'
 ]
 
+OS_RELEASE_SLES = dedent("""\
+    NAME="SLES"\n
+    VERSION="12-SP3"\n
+    VERSION_ID="12.3"\n
+    PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"\n
+    ID="sles"\nANSI_COLOR="0;32"\n
+    CPE_NAME="cpe:/o:suse:sles:12:sp3"\n
+""")
+
+OS_RELEASE_UBUNTU = dedent("""\
+    NAME="Ubuntu"\n
+    VERSION="16.04.3 LTS (Xenial Xerus)"\n
+    ID=ubuntu\n
+    ID_LIKE=debian\n
+    PRETTY_NAME="Ubuntu 16.04.3 LTS"\n
+    VERSION_ID="16.04"\n
+    HOME_URL="http://www.ubuntu.com/"\n
+    SUPPORT_URL="http://help.ubuntu.com/"\n
+    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"\n
+    VERSION_CODENAME=xenial\n
+    UBUNTU_CODENAME=xenial\n
+""")
+
 
 class FakeCloud(object):
 
@@ -261,4 +285,56 @@ class TestUdevadmSettle(CiTestCase):
         self.assertRaises(util.ProcessExecutionError, util.udevadm_settle)
 
 
+@mock.patch('os.path.exists')
+class TestGetLinuxDistro(CiTestCase):
+
+    @classmethod
+    def os_release_exists(cls, path):
+        """Side effect function for mocked os.path.exists."""
+        if path == '/etc/os-release':
+            return 1
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_distro_quoted_name(self, m_os_release, m_path_exists):
+        """Verify we get the correct name if the os-release file has
+        the distro name in quotes"""
+        m_os_release.return_value = OS_RELEASE_SLES
+        m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('sles', '12.3', platform.machine()), dist)
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_distro_bare_name(self, m_os_release, m_path_exists):
+        """Verify we get the correct name if the os-release file does not
+        have the distro name in quotes"""
+        m_os_release.return_value = OS_RELEASE_UBUNTU
+        m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('ubuntu', '16.04', platform.machine()), dist)
+
+    @mock.patch('platform.dist')
+    def test_get_linux_distro_no_data(self, m_platform_dist, m_path_exists):
+        """Verify we get no information if os-release does not exist"""
+        m_platform_dist.return_value = ('', '', '')
+        m_path_exists.return_value = 0
+        dist = util.get_linux_distro()
+        self.assertEqual(('', '', ''), dist)
+
+    @mock.patch('platform.dist')
+    def test_get_linux_distro_no_impl(self, m_platform_dist, m_path_exists):
+        """Verify we get an empty tuple when no information exists and
+        Exceptions are not propagated"""
+        m_platform_dist.side_effect = Exception()
+        m_path_exists.return_value = 0
+        dist = util.get_linux_distro()
+        self.assertEqual(('', '', ''), dist)
+
+    @mock.patch('platform.dist')
+    def test_get_linux_distro_plat_data(self, m_platform_dist, m_path_exists):
+        """Verify we get the correct platform information"""
+        m_platform_dist.return_value = ('foo', '1.1', 'aarch64')
+        m_path_exists.return_value = 0
+        dist = util.get_linux_distro()
+        self.assertEqual(('foo', '1.1', 'aarch64'), dist)
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_version.py b/cloudinit/tests/test_version.py
index d012f69..a96c2a4 100644
--- a/tests/unittests/test_version.py
+++ b/cloudinit/tests/test_version.py
@@ -3,6 +3,8 @@
 from cloudinit.tests.helpers import CiTestCase
 from cloudinit import version
 
+import mock
+
 
 class TestExportsFeatures(CiTestCase):
     def test_has_network_config_v1(self):
@@ -11,4 +13,19 @@ class TestExportsFeatures(CiTestCase):
     def test_has_network_config_v2(self):
         self.assertIn('NETWORK_CONFIG_V2', version.FEATURES)
 
+
+class TestVersionString(CiTestCase):
+    @mock.patch("cloudinit.version._PACKAGED_VERSION",
+                "17.2-3-gb05b9972-0ubuntu1")
+    def test_package_version_respected(self):
+        """If _PACKAGED_VERSION is filled in, then it should be returned."""
+        self.assertEqual("17.2-3-gb05b9972-0ubuntu1", version.version_string())
+
+    @mock.patch("cloudinit.version._PACKAGED_VERSION", "@@PACKAGED_VERSION@@")
+    @mock.patch("cloudinit.version.__VERSION__", "17.2")
+    def test_package_version_skipped(self):
+        """If _PACKAGED_VERSION is not modified, then return __VERSION__."""
+        self.assertEqual("17.2", version.version_string())
+
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 1de07b1..8067979 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -15,6 +15,7 @@ import six
 import time
 
 from email.utils import parsedate
+from errno import ENOENT
 from functools import partial
 from itertools import count
 from requests import exceptions
@@ -80,6 +81,32 @@ def combine_url(base, *add_ons):
     return url
 
 
+def read_file_or_url(url, timeout=5, retries=10,
+                     headers=None, data=None, sec_between=1, ssl_details=None,
+                     headers_cb=None, exception_cb=None):
+    url = url.lstrip()
+    if url.startswith("/"):
+        url = "file://%s" % url
+    if url.lower().startswith("file://"):
+        if data:
+            LOG.warning("Unable to post data to file resource %s", url)
+        file_path = url[len("file://"):]
+        try:
+            with open(file_path, "rb") as fp:
+                contents = fp.read()
+        except IOError as e:
+            code = e.errno
+            if e.errno == ENOENT:
+                code = NOT_FOUND
+            raise UrlError(cause=e, code=code, headers=None, url=url)
+        return FileResponse(file_path, contents=contents)
+    else:
+        return readurl(url, timeout=timeout, retries=retries, headers=headers,
+                       headers_cb=headers_cb, data=data,
+                       sec_between=sec_between, ssl_details=ssl_details,
+                       exception_cb=exception_cb)
+
+
 # Made to have same accessors as UrlResponse so that the
 # read_file_or_url can return this or that object and the
 # 'user' of those objects will not need to know the difference.
@@ -96,7 +123,7 @@ class StringResponse(object):
         return True
 
     def __str__(self):
-        return self.contents
+        return self.contents.decode('utf-8')
 
 
 class FileResponse(StringResponse):
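As a usage sketch for the relocated read_file_or_url (paths and URL below
are illustrative), both branches return a response object with bytes in
.contents and a text __str__, which the StringResponse fix above makes
consistent:

    from cloudinit.url_helper import read_file_or_url

    # Local file: served through FileResponse.
    resp = read_file_or_url('file:///etc/hostname')
    print(resp.contents)   # raw bytes
    print(str(resp))       # decoded utf-8 text

    # Remote url: delegated to readurl() with retry/timeout options.
    resp = read_file_or_url('http://169.254.169.254/latest/meta-data/',
                            timeout=5, retries=1)
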
diff --git a/cloudinit/user_data.py b/cloudinit/user_data.py
index cc55daf..ed83d2d 100644
--- a/cloudinit/user_data.py
+++ b/cloudinit/user_data.py
@@ -19,7 +19,7 @@ import six
 
 from cloudinit import handlers
 from cloudinit import log as logging
-from cloudinit.url_helper import UrlError
+from cloudinit.url_helper import read_file_or_url, UrlError
 from cloudinit import util
 
 LOG = logging.getLogger(__name__)
@@ -224,8 +224,8 @@ class UserDataProcessor(object):
                 content = util.load_file(include_once_fn)
             else:
                 try:
-                    resp = util.read_file_or_url(include_url,
-                                                 ssl_details=self.ssl_details)
+                    resp = read_file_or_url(include_url,
+                                            ssl_details=self.ssl_details)
                     if include_once_on and resp.ok():
                         util.write_file(include_once_fn, resp.contents,
                                         mode=0o600)
@@ -337,8 +337,10 @@ def is_skippable(part):
 
 # Coverts a raw string into a mime message
 def convert_string(raw_data, content_type=NOT_MULTIPART_TYPE):
+    """convert a string (more likely bytes) or a message into
+    a mime message."""
     if not raw_data:
-        raw_data = ''
+        raw_data = b''
 
     def create_binmsg(data, content_type):
         maintype, subtype = content_type.split("/", 1)
@@ -346,15 +348,17 @@ def convert_string(raw_data, content_type=NOT_MULTIPART_TYPE):
         msg.set_payload(data)
         return msg
 
-    try:
-        data = util.decode_binary(util.decomp_gzip(raw_data))
-        if "mime-version:" in data[0:4096].lower():
-            msg = util.message_from_string(data)
-        else:
-            msg = create_binmsg(data, content_type)
-    except UnicodeDecodeError:
-        msg = create_binmsg(raw_data, content_type)
+    if isinstance(raw_data, six.text_type):
+        bdata = raw_data.encode('utf-8')
+    else:
+        bdata = raw_data
+    bdata = util.decomp_gzip(bdata, decode=False)
+    if b"mime-version:" in bdata[0:4096].lower():
+        msg = util.message_from_string(bdata.decode('utf-8'))
+    else:
+        msg = create_binmsg(bdata, content_type)
 
     return msg
 
+
 # vi: ts=4 expandtab
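A sketch of what the reworked convert_string now accepts (the payload is
illustrative): text or bytes, optionally gzip-compressed, always yielding a
MIME message, which is the behavior that fixes UTF-8 shell scripts
(LP: #1768600):

    from cloudinit.user_data import convert_string

    payload = u'#!/bin/sh\necho "\u00fcber"\n'.encode('utf-8')
    msg = convert_string(payload, content_type='text/x-shellscript')
    print(msg.get_content_type())   # text/x-shellscript
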
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 2828ca3..6da9511 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -576,6 +576,39 @@ def get_cfg_option_int(yobj, key, default=0):
     return int(get_cfg_option_str(yobj, key, default=default))
 
 
+def get_linux_distro():
+    distro_name = ''
+    distro_version = ''
+    if os.path.exists('/etc/os-release'):
+        os_release = load_file('/etc/os-release')
+        for line in os_release.splitlines():
+            if line.strip().startswith('ID='):
+                distro_name = line.split('=')[-1]
+                distro_name = distro_name.replace('"', '')
+            if line.strip().startswith('VERSION_ID='):
+                # Let's hope for the best that distros stay consistent ;)
+                distro_version = line.split('=')[-1]
+                distro_version = distro_version.replace('"', '')
+    else:
+        dist = ('', '', '')
+        try:
+            # Will be removed in 3.7
+            dist = platform.dist()  # pylint: disable=W1505
+        except Exception:
+            pass
+        finally:
+            if not any(dist):
+                LOG.warning('Unable to determine distribution, template '
+                            'expansion may have unexpected results')
+        return dist
+
+    return (distro_name, distro_version, platform.machine())
+
+
 def system_info():
     info = {
         'platform': platform.platform(),
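A behavior sketch for get_linux_distro (the tuples below are examples
matching the unit tests above, not output from a real run):

    from cloudinit import util

    name, version, machine = util.get_linux_distro()
    # e.g. ('ubuntu', '16.04', 'x86_64') on Xenial,
    #      ('sles', '12.3', 'x86_64') on SLES 12 SP3,
    # or ('', '', '') when nothing can be determined.
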
@@ -583,19 +616,19 @@ def system_info():
         'release': platform.release(),
         'python': platform.python_version(),
         'uname': platform.uname(),
-        'dist': platform.dist(),  # pylint: disable=W1505
+        'dist': get_linux_distro()
     }
     system = info['system'].lower()
     var = 'unknown'
     if system == "linux":
         linux_dist = info['dist'][0].lower()
-        if linux_dist in ('centos', 'fedora', 'debian'):
+        if linux_dist in ('centos', 'debian', 'fedora', 'rhel', 'suse'):
             var = linux_dist
         elif linux_dist in ('ubuntu', 'linuxmint', 'mint'):
             var = 'ubuntu'
         elif linux_dist == 'redhat':
             var = 'rhel'
-        elif linux_dist == 'suse':
+        elif linux_dist in ('opensuse', 'sles'):
             var = 'suse'
         else:
             var = 'linux'
@@ -857,37 +890,6 @@ def fetch_ssl_details(paths=None):
     return ssl_details
 
 
-def read_file_or_url(url, timeout=5, retries=10,
-                     headers=None, data=None, sec_between=1, ssl_details=None,
-                     headers_cb=None, exception_cb=None):
-    url = url.lstrip()
-    if url.startswith("/"):
-        url = "file://%s" % url
-    if url.lower().startswith("file://"):
-        if data:
-            LOG.warning("Unable to post data to file resource %s", url)
-        file_path = url[len("file://"):]
-        try:
-            contents = load_file(file_path, decode=False)
-        except IOError as e:
-            code = e.errno
-            if e.errno == ENOENT:
-                code = url_helper.NOT_FOUND
-            raise url_helper.UrlError(cause=e, code=code, headers=None,
-                                      url=url)
-        return url_helper.FileResponse(file_path, contents=contents)
-    else:
-        return url_helper.readurl(url,
-                                  timeout=timeout,
-                                  retries=retries,
-                                  headers=headers,
-                                  headers_cb=headers_cb,
-                                  data=data,
-                                  sec_between=sec_between,
-                                  ssl_details=ssl_details,
-                                  exception_cb=exception_cb)
-
-
 def load_yaml(blob, default=None, allowed=(dict,)):
     loaded = default
     blob = decode_binary(blob)
@@ -905,8 +907,20 @@ def load_yaml(blob, default=None, allowed=(dict,)):
                              " but got %s instead") %
                             (allowed, type_utils.obj_name(converted)))
         loaded = converted
-    except (yaml.YAMLError, TypeError, ValueError):
-        logexc(LOG, "Failed loading yaml blob")
+    except (yaml.YAMLError, TypeError, ValueError) as e:
+        msg = 'Failed loading yaml blob'
+        mark = None
+        if hasattr(e, 'context_mark') and getattr(e, 'context_mark'):
+            mark = getattr(e, 'context_mark')
+        elif hasattr(e, 'problem_mark') and getattr(e, 'problem_mark'):
+            mark = getattr(e, 'problem_mark')
+        if mark:
+            msg += (
+                '. Invalid format at line {line} column {col}: "{err}"'.format(
+                    line=mark.line + 1, col=mark.column + 1, err=e))
+        else:
+            msg += '. {err}'.format(err=e)
+        LOG.warning(msg)
     return loaded
 
 
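As a sketch of the new diagnostics (the exact message text is paraphrased),
a blob with a YAML syntax error now logs the 1-indexed line and column taken
from PyYAML's problem/context marks:

    from cloudinit import util

    util.load_yaml('key: [unclosed')
    # logs roughly:
    #   Failed loading yaml blob. Invalid format at line 1 column 15: "..."
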
@@ -925,12 +939,14 @@ def read_seeded(base="", ext="", timeout=5, retries=10, file_retries=0):
         ud_url = "%s%s%s" % (base, "user-data", ext)
         md_url = "%s%s%s" % (base, "meta-data", ext)
 
-    md_resp = read_file_or_url(md_url, timeout, retries, file_retries)
+    md_resp = url_helper.read_file_or_url(md_url, timeout, retries,
+                                          file_retries)
     md = None
     if md_resp.ok():
         md = load_yaml(decode_binary(md_resp.contents), default={})
 
-    ud_resp = read_file_or_url(ud_url, timeout, retries, file_retries)
+    ud_resp = url_helper.read_file_or_url(ud_url, timeout, retries,
+                                          file_retries)
     ud = None
     if ud_resp.ok():
         ud = ud_resp.contents
@@ -1154,7 +1170,9 @@ def gethostbyaddr(ip):
 
 def is_resolvable_url(url):
     """determine if this url is resolvable (existing or ip)."""
-    return is_resolvable(urlparse.urlparse(url).hostname)
+    return log_time(logfunc=LOG.debug, msg="Resolving URL: " + url,
+                    func=is_resolvable,
+                    args=(urlparse.urlparse(url).hostname,))
 
 
 def search_for_mirror(candidates):
@@ -1608,7 +1626,8 @@ def mounts():
     return mounted
 
 
-def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True):
+def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True,
+             update_env_for_mount=None):
     """
     Mount the device, call method 'callback' passing the directory
     in which it was mounted, then unmount.  Return whatever 'callback'
@@ -1670,7 +1689,7 @@ def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True):
                         mountcmd.extend(['-t', mtype])
                     mountcmd.append(device)
                     mountcmd.append(tmpd)
-                    subp(mountcmd)
+                    subp(mountcmd, update_env=update_env_for_mount)
                     umount = tmpd  # This forces it to be unmounted (when set)
                     mountpoint = tmpd
                     break
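A hypothetical caller sketch for the new update_env_for_mount argument
(device, filesystem type, and environment are assumptions; the Azure NTFS
fix in this snapshot appears to be the motivating user):

    from cloudinit import util

    def read_seed(mountpoint):
        return util.load_file(mountpoint + '/meta-data')

    contents = util.mount_cb('/dev/sdb1', read_seed, mtype='ntfs',
                             update_env_for_mount={'LANG': 'C'})
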
@@ -1857,9 +1876,55 @@ def subp_blob_in_tempfile(blob, *args, **kwargs):
         return subp(*args, **kwargs)
 
 
-def subp(args, data=None, rcs=None, env=None, capture=True, shell=False,
+def subp(args, data=None, rcs=None, env=None, capture=True,
+         combine_capture=False, shell=False,
          logstring=False, decode="replace", target=None, update_env=None,
          status_cb=None):
+    """Run a subprocess.
+
+    :param args: command to run in a list. [cmd, arg1, arg2...]
+    :param data: input to the command, made available on its stdin.
+    :param rcs:
+        a list of allowed return codes.  If the subprocess exits with a value
+        not in this list, a ProcessExecutionError will be raised.  By default,
+        output is returned as a string; see the 'decode' parameter.
+    :param env: a dictionary for the command's environment.
+    :param capture:
+        boolean indicating if output should be captured.  If True, then stderr
+        and stdout will be returned.  If False, they will not be redirected.
+    :param combine_capture:
+        boolean indicating if stderr should be redirected to stdout. When True,
+        interleaved stderr and stdout will be returned as the first element of
+        a tuple, and the second will be an empty string or bytes (per decode).
+        If combine_capture is True, then output is captured independently of
+        the value of capture.
+    :param shell: boolean indicating if this should be run with a shell.
+    :param logstring:
+        the command will be logged to DEBUG.  If it contains info that should
+        not be logged, then logstring will be logged instead.
+    :param decode:
+        if False, no decoding will be done and returned stdout and stderr will
+        be bytes.  Other allowed values are 'strict', 'ignore', and 'replace'.
+        These values are passed through to bytes().decode() as the 'errors'
+        parameter.  There is no support for decoding to other than utf-8.
+    :param target:
+        not supported, kwarg present only to make function signature similar
+        to curtin's subp.
+    :param update_env:
+        update the environment for this command with this dictionary.
+        This will not affect the current process's os.environ.
+    :param status_cb:
+        call this function with a single string argument before starting
+        and after finishing.
+
+    :return:
+        if not capturing, return is (None, None)
+        if capturing, stdout and stderr are returned.
+            if decode:
+                entries in tuple will be python2 unicode or python3 string
+            if not decode:
+                entries in tuple will be python2 string or python3 bytes
+    """
 
     # not supported in cloud-init (yet), for now kept in the call signature
     # to ease maintaining code shared between cloud-init and curtin
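A quick usage sketch for the documented capture flags (commands are
illustrative):

    from cloudinit.util import subp

    out, err = subp(['ls', '/'])                # captured and decoded
    out, err = subp(['ls', '/'], decode=False)  # bytes instead of str
    out, err = subp(['sh', '-c', 'echo out; echo err 1>&2'],
                    combine_capture=True)       # out interleaves both;
                                                # err is ''
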
@@ -1885,7 +1950,8 @@ def subp(args, data=None, rcs=None, env=None, capture=True, shell=False,
         status_cb('Begin run command: {command}\n'.format(command=command))
     if not logstring:
         LOG.debug(("Running command %s with allowed return codes %s"
-                   " (shell=%s, capture=%s)"), args, rcs, shell, capture)
+                   " (shell=%s, capture=%s)"),
+                  args, rcs, shell, 'combine' if combine_capture else capture)
     else:
         LOG.debug(("Running hidden command to protect sensitive "
                    "input/output logstring: %s"), logstring)
@@ -1896,6 +1962,9 @@ def subp(args, data=None, rcs=None, env=None, capture=True, shell=False,
     if capture:
         stdout = subprocess.PIPE
         stderr = subprocess.PIPE
+    if combine_capture:
+        stdout = subprocess.PIPE
+        stderr = subprocess.STDOUT
     if data is None:
         # using devnull assures any reads get null, rather
         # than possibly waiting on input.
@@ -1934,10 +2003,11 @@ def subp(args, data=None, rcs=None, env=None, capture=True, shell=False,
             devnull_fp.close()
 
     # Just ensure blank instead of none.
-    if not out and capture:
-        out = b''
-    if not err and capture:
-        err = b''
+    if capture or combine_capture:
+        if not out:
+            out = b''
+        if not err:
+            err = b''
     if decode:
         def ldecode(data, m='utf-8'):
             if not isinstance(data, bytes):
@@ -2061,24 +2131,33 @@ def is_container():
     return False
 
 
-def get_proc_env(pid):
+def get_proc_env(pid, encoding='utf-8', errors='replace'):
     """
     Return the environment in a dict that a given process id was started with.
-    """
 
-    env = {}
-    fn = os.path.join("/proc/", str(pid), "environ")
+    @param encoding: if true, then decoding will be done with
+                     .decode(encoding, errors) and text will be returned;
+                     if false, then binary will be returned.
+    @param errors:   only used if encoding is true."""
+    fn = os.path.join("/proc", str(pid), "environ")
+
     try:
-        contents = load_file(fn)
-        toks = contents.split("\x00")
-        for tok in toks:
-            if tok == "":
-                continue
-            (name, val) = tok.split("=", 1)
-            if name:
-                env[name] = val
+        contents = load_file(fn, decode=False)
     except (IOError, OSError):
-        pass
+        return {}
+
+    env = {}
+    null, equal = (b"\x00", b"=")
+    if encoding:
+        null, equal = ("\x00", "=")
+        contents = contents.decode(encoding, errors)
+
+    for tok in contents.split(null):
+        if not tok:
+            continue
+        (name, val) = tok.split(equal, 1)
+        if name:
+            env[name] = val
     return env
 
 
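A usage sketch for the reworked get_proc_env (pid 1 is illustrative): by
default values are decoded with errors='replace', and passing a falsy
encoding returns raw bytes:

    from cloudinit import util

    env_text = util.get_proc_env(1)                  # {'PATH': '/usr/bin', ...}
    env_bytes = util.get_proc_env(1, encoding=None)  # {b'PATH': b'/usr/bin', ...}
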
@@ -2545,11 +2624,21 @@ def _call_dmidecode(key, dmidecode_path):
         if result.replace(".", "") == "":
             return ""
         return result
-    except (IOError, OSError) as _err:
-        LOG.debug('failed dmidecode cmd: %s\n%s', cmd, _err)
+    except (IOError, OSError) as e:
+        LOG.debug('failed dmidecode cmd: %s\n%s', cmd, e)
         return None
 
 
+def is_x86(uname_arch=None):
+    """Return True if platform is x86-based"""
+    if uname_arch is None:
+        uname_arch = os.uname()[4]
+    x86_arch_match = (
+        uname_arch == 'x86_64' or
+        (uname_arch[0] == 'i' and uname_arch[2:] == '86'))
+    return x86_arch_match
+
+
 def read_dmi_data(key):
     """
     Wrapper for reading DMI data.
@@ -2577,8 +2666,7 @@ def read_dmi_data(key):
 
     # running dmidecode can be problematic on some arches (LP: #1243287)
     uname_arch = os.uname()[4]
-    if not (uname_arch == "x86_64" or
-            (uname_arch.startswith("i") and uname_arch[2:] == "86") or
+    if not (is_x86(uname_arch) or
             uname_arch == 'aarch64' or
             uname_arch == 'amd64'):
         LOG.debug("dmidata is not supported on %s", uname_arch)
diff --git a/cloudinit/version.py b/cloudinit/version.py
index ccd0f84..3b60fc4 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,8 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "18.2"
+__VERSION__ = "18.3"
+_PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
 
 FEATURES = [
     # supports network config version 1
@@ -15,6 +16,9 @@ FEATURES = [
 
 
 def version_string():
+    """Extract a version string from cloud-init."""
+    if not _PACKAGED_VERSION.startswith('@@'):
+        return _PACKAGED_VERSION
     return __VERSION__
 
 # vi: ts=4 expandtab
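An illustration of the fallback (version strings are examples): while the
@@PACKAGED_VERSION@@ placeholder is untouched, version_string() returns
__VERSION__; a packaged build that substitutes in a real version (see the
debian/rules and spec file changes below) returns that instead:

    from cloudinit import version

    version.version_string()
    # '18.3' from a source tree, or e.g. '18.3-0ubuntu1~18.04.1'
    # once packaging has replaced the placeholder.
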
diff --git a/debian/changelog b/debian/changelog
index 49d7a49..9b7ae67 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,9 +1,71 @@
-cloud-init (18.2-27-g6ef92c98-0ubuntu1~18.04.2) UNRELEASED; urgency=medium
+cloud-init (18.3-0ubuntu1~18.04.1) bionic-proposed; urgency=medium
 
   * debian/rules: update version.version_string to contain packaged version.
     (LP: #1770712)
-
- -- Scott Moser <smoser@xxxxxxxxxx>  Mon, 04 Jun 2018 10:10:59 -0400
+  * New upstream release. (LP: #1777912)
+    - release 18.3
+    - docs: represent sudo:false in docs for user_groups config module
+    - Explicitly prevent `sudo` access for user module [Jacob Bednarz]
+    - lxd: Delete default network and detach device if lxd-init created them.
+    - openstack: avoid unneeded metadata probe on non-openstack platforms
+    - stages: fix tracebacks if a module stage is undefined or empty
+      [Robert Schweikert]
+    - Be more safe on string/bytes when writing multipart user-data to disk.
+    - Fix get_proc_env for pids that have non-utf8 content in environment.
+    - tests: fix salt_minion integration test on bionic and later
+    - tests: provide human-readable integration test summary when --verbose
+    - tests: skip chrony integration tests on lxd running artful or older
+    - test: add optional --preserve-instance arg to integration tests
+    - netplan: fix mtu if provided by network config for all rendered types
+    - tests: remove pip install workarounds for pylxd, take upstream fix.
+    - subp: support combine_capture argument.
+    - tests: ordered tox dependencies for pylxd install
+    - util: add get_linux_distro function to replace platform.dist
+      [Robert Schweikert]
+    - pyflakes: fix unused variable references identified by pyflakes 2.0.0.
+    - Do not use the systemd_prefix macro, not available in this environment
+      [Robert Schweikert]
+    - doc: Add config info to ec2, openstack and cloudstack datasource docs
+    - Enable SmartOS network metadata to work with netplan via per-subnet
+      routes [Dan McDonald]
+    - openstack: Allow discovery in init-local using dhclient in a sandbox.
+    - tests: Avoid using https in httpretty, improve HttPretty test case.
+    - yaml_load/schema: Add invalid line and column nums to error message
+    - Azure: Ignore NTFS mount errors when checking ephemeral drive
+      [Paul Meyer]
+    - packages/brpm: Get proper dependencies for cmdline distro.
+    - packages: Make rpm spec files patch in package version like in debs.
+    - tools/run-container: replace tools/run-centos with more generic.
+    - Update version.version_string to contain packaged version.
+    - cc_mounts: Do not add devices to fstab that are already present.
+      [Lars Kellogg-Stedman]
+    - ds-identify: ensure that we have certain tokens in PATH.
+    - tests: enable Ubuntu Cosmic in integration tests [Joshua Powers]
+    - read_file_or_url: move to url_helper, fix bug in its FileResponse.
+    - cloud_tests: help pylint
+    - flake8: fix flake8 errors in previous commit.
+    - typos: Fix spelling mistakes in cc_mounts.py log messages [Stephen Ford]
+    - tests: restructure SSH and initial connections [Joshua Powers]
+    - ds-identify: recognize container-other as a container, test SmartOS.
+    - cloud-config.service: run After snap.seeded.service.
+    - tests: do not rely on host /proc/cmdline in test_net.py
+      [Lars Kellogg-Stedman]
+    - ds-identify: Remove dupe call to is_ds_enabled, improve debug message.
+    - SmartOS: fix get_interfaces for nics that do not have addr_assign_type.
+    - tests: fix package and ca_cert cloud_tests on bionic
+    - ds-identify: make shellcheck 0.4.6 happy with ds-identify.
+    - pycodestyle: Fix deprecated string literals, move away from flake8.
+    - azure: Add reported ready marker file. [Joshua Chan]
+    - tools: Support adding a release suffix through packages/bddeb.
+    - FreeBSD: Invoke growfs on ufs filesystems such that it does not prompt.
+      [Harm Weites]
+    - tools: Re-use the orig tarball in packages/bddeb if it is around.
+    - netinfo: fix netdev_pformat when a nic does not have an address assigned.
+    - collect-logs: add -v flag, write to stderr, limit journal to single boot.
+    - IBMCloud: Disable config-drive and nocloud only if IBMCloud is enabled.
+    - Add reporting events and log_time around early source of blocking time
+
+ -- Chad Smith <chad.smith@xxxxxxxxxxxxx>  Wed, 20 Jun 2018 13:21:02 -0600
 
 cloud-init (18.2-27-g6ef92c98-0ubuntu1~18.04.1) bionic; urgency=medium
 
diff --git a/doc/examples/cloud-config-user-groups.txt b/doc/examples/cloud-config-user-groups.txt
index 7bca24a..01ecad7 100644
--- a/doc/examples/cloud-config-user-groups.txt
+++ b/doc/examples/cloud-config-user-groups.txt
@@ -30,6 +30,11 @@ users:
     gecos: Magic Cloud App Daemon User
     inactive: true
     system: true
+  - name: fizzbuzz
+    sudo: False
+    ssh_authorized_keys:
+      - <ssh pub key 1>
+      - <ssh pub key 2>
   - snapuser: joe@xxxxxxxxxx
 
 # Valid Values:
@@ -71,13 +76,21 @@ users:
 #   no_log_init: When set to true, do not initialize lastlog and faillog database.
 #   ssh_import_id: Optional. Import SSH ids
 #   ssh_authorized_keys: Optional. [list] Add keys to user's authorized keys file
-#   sudo: Defaults to none. Set to the sudo string you want to use, i.e.
-#           ALL=(ALL) NOPASSWD:ALL. To add multiple rules, use the following
-#           format.
-#               sudo:
-#                   - ALL=(ALL) NOPASSWD:/bin/mysql
-#                   - ALL=(ALL) ALL
-#           Note: Please double check your syntax and make sure it is valid.
+#   sudo: Defaults to none. Accepts a sudo rule string, a list of sudo rule
+#         strings, or False to explicitly deny sudo usage. Examples:
+#
+#         Allow a user unrestricted sudo access.
+#             sudo:  ALL=(ALL) NOPASSWD:ALL
+#
+#         Adding multiple sudo rule strings.
+#             sudo:
+#               - ALL=(ALL) NOPASSWD:/bin/mysql
+#               - ALL=(ALL) ALL
+#
+#         Prevent sudo access for a user.
+#             sudo: False
+#
+#         Note: Please double check your syntax and make sure it is valid.
 #               cloud-init does not parse/check the syntax of the sudo
 #               directive.
 #   system: Create the user as a system user. This means no home directory.
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index 38ba75d..30e57d8 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -17,6 +17,103 @@ own way) internally a datasource abstract class was created to allow for a
 single way to access the different cloud systems methods to provide this data
 through the typical usage of subclasses.
 
+
+instance-data
+-------------
+For reference, cloud-init stores all the metadata, vendordata and userdata
+provided by a cloud in a json blob at ``/run/cloud-init/instance-data.json``.
+While the json contains datasource-specific keys and names, cloud-init will
+maintain a minimal set of standardized keys that will remain stable on any
+cloud. Standardized instance-data keys will be present under a "v1" key.
+Any datasource metadata cloud-init consumes is present under the
+"ds" key.
+
+Below is an instance-data.json example from an OpenStack instance:
+
+.. sourcecode:: json
+
+  {
+   "base64-encoded-keys": [
+    "ds/meta-data/random_seed",
+    "ds/user-data"
+   ],
+   "ds": {
+    "ec2_metadata": {
+     "ami-id": "ami-0000032f",
+     "ami-launch-index": "0",
+     "ami-manifest-path": "FIXME",
+     "block-device-mapping": {
+      "ami": "vda",
+      "ephemeral0": "/dev/vdb",
+      "root": "/dev/vda"
+     },
+     "hostname": "xenial-test.novalocal",
+     "instance-action": "none",
+     "instance-id": "i-0006e030",
+     "instance-type": "m1.small",
+     "local-hostname": "xenial-test.novalocal",
+     "local-ipv4": "10.5.0.6",
+     "placement": {
+      "availability-zone": "None"
+     },
+     "public-hostname": "xenial-test.novalocal",
+     "public-ipv4": "10.245.162.145",
+     "reservation-id": "r-fxm623oa",
+     "security-groups": "default"
+    },
+    "meta-data": {
+     "availability_zone": null,
+     "devices": [],
+     "hostname": "xenial-test.novalocal",
+     "instance-id": "3e39d278-0644-4728-9479-678f9212d8f0",
+     "launch_index": 0,
+     "local-hostname": "xenial-test.novalocal",
+     "name": "xenial-test",
+     "project_id": "e0eb2d2538814...",
+     "random_seed": "A6yPN...",
+     "uuid": "3e39d278-0644-4728-9479-678f92..."
+    },
+    "network_json": {
+     "links": [
+      {
+       "ethernet_mac_address": "fa:16:3e:7d:74:9b",
+       "id": "tap9ca524d5-6e",
+       "mtu": 8958,
+       "type": "ovs",
+       "vif_id": "9ca524d5-6e5a-4809-936a-6901..."
+      }
+     ],
+     "networks": [
+      {
+       "id": "network0",
+       "link": "tap9ca524d5-6e",
+       "network_id": "c6adfc18-9753-42eb-b3ea-18b57e6b837f",
+       "type": "ipv4_dhcp"
+      }
+     ],
+     "services": [
+      {
+       "address": "10.10.160.2",
+       "type": "dns"
+      }
+     ]
+    },
+    "user-data": "I2Nsb3VkLWNvbmZpZ...",
+    "vendor-data": null
+   },
+   "v1": {
+    "availability-zone": null,
+    "cloud-name": "openstack",
+    "instance-id": "3e39d278-0644-4728-9479-678f9212d8f0",
+    "local-hostname": "xenial-test",
+    "region": null
+   }
+  }
+
+
+
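+As a quick sketch (the path is where cloud-init writes the file; key names
+follow the example above), the standardized keys can be read with:
+
+.. sourcecode:: python
+
+  import json
+
+  with open('/run/cloud-init/instance-data.json') as stream:
+      instance_data = json.load(stream)
+
+  print(instance_data['v1']['cloud-name'])    # e.g. "openstack"
+  print(instance_data['v1']['instance-id'])
+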
+Datasource API
+--------------
 The current interface that a datasource object must provide is the following:
 
 .. sourcecode:: python
diff --git a/doc/rtd/topics/datasources/cloudstack.rst b/doc/rtd/topics/datasources/cloudstack.rst
index 225093a..a3101ed 100644
--- a/doc/rtd/topics/datasources/cloudstack.rst
+++ b/doc/rtd/topics/datasources/cloudstack.rst
@@ -4,7 +4,9 @@ CloudStack
 ==========
 
 `Apache CloudStack`_ expose user-data, meta-data, user password and account
-sshkey thru the Virtual-Router. For more details on meta-data and user-data,
+sshkey through the Virtual-Router. The datasource obtains the VR address via
+the dhcp lease information given to the instance.
+For more details on meta-data and user-data,
 refer the `CloudStack Administrator Guide`_. 
 
 URLs to access user-data and meta-data from the Virtual Machine. Here 10.1.1.1
@@ -18,14 +20,26 @@ is the Virtual Router IP:
 
 Configuration
 -------------
+The following configuration can be set for the datasource in system
+configuration (in `/etc/cloud/cloud.cfg` or `/etc/cloud/cloud.cfg.d/`).
 
-Apache CloudStack datasource can be configured as follows:
+The settings that may be configured are:
 
-.. code:: yaml
+ * **max_wait**:  the maximum amount of clock time in seconds that should be
+   spent searching metadata_urls.  A value less than zero will result in only
+   one request being made, to the first in the list. (default: 120)
+ * **timeout**: the timeout value provided to urlopen for each individual http
+   request.  This is used both when selecting a metadata_url and when crawling
+   the metadata service. (default: 50)
 
-    datasource:
-      CloudStack: {}
-      None: {}
+An example configuration with the default values is provided below:
+
+.. sourcecode:: yaml
+
+  datasource:
+   CloudStack:
+    max_wait: 120
+    timeout: 50
     datasource_list:
       - CloudStack
 
diff --git a/doc/rtd/topics/datasources/ec2.rst b/doc/rtd/topics/datasources/ec2.rst
index 3bc66e1..64c325d 100644
--- a/doc/rtd/topics/datasources/ec2.rst
+++ b/doc/rtd/topics/datasources/ec2.rst
@@ -60,4 +60,34 @@ To see which versions are supported from your cloud provider use the following U
     ...
     latest
 
+
+
+Configuration
+-------------
+The following configuration can be set for the datasource in system
+configuration (in `/etc/cloud/cloud.cfg` or `/etc/cloud/cloud.cfg.d/`).
+
+The settings that may be configured are:
+
+ * **metadata_urls**: This list of urls will be searched for an Ec2
+   metadata service. The first entry that successfully returns a 200 response
+   for <url>/<version>/meta-data/instance-id will be selected.
+   (default: ['http://169.254.169.254', 'http://instance-data:8773']).
+ * **max_wait**:  the maximum amount of clock time in seconds that should be
+   spent searching metadata_urls.  A value less than zero will result in only
+   one request being made, to the first in the list. (default: 120)
+ * **timeout**: the timeout value provided to urlopen for each individual http
+   request.  This is used both when selecting a metadata_url and when crawling
+   the metadata service. (default: 50)
+
+An example configuration with the default values is provided below:
+
+.. sourcecode:: yaml
+
+  datasource:
+   Ec2:
+    metadata_urls: ["http://169.254.169.254:80";, "http://instance-data:8773";]
+    max_wait: 120
+    timeout: 50
+
 .. vi: textwidth=78
diff --git a/doc/rtd/topics/datasources/openstack.rst b/doc/rtd/topics/datasources/openstack.rst
index 43592de..421da08 100644
--- a/doc/rtd/topics/datasources/openstack.rst
+++ b/doc/rtd/topics/datasources/openstack.rst
@@ -7,6 +7,21 @@ This datasource supports reading data from the
 `OpenStack Metadata Service
 <https://docs.openstack.org/nova/latest/admin/networking-nova.html#metadata-service>`_.
 
+Discovery
+---------
+To determine whether a platform looks like it may be OpenStack, cloud-init
+checks the following environment attributes:
+
+ * Maybe OpenStack if
+
+   * **non-x86 cpu architecture**: because DMI data is buggy on some arches
+ * Is OpenStack **if x86 architecture and ANY** of the following
+
+   * **/proc/1/environ**: Nova-lxd contains *product_name=OpenStack Nova*
+   * **DMI product_name**: Either *OpenStack Nova* or *OpenStack Compute*
+   * **DMI chassis_asset_tag** is *OpenTelekomCloud*
+
+
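+A minimal sketch of the same checks (the sysfs path is the standard DMI
+location; the chassis_asset_tag check is omitted for brevity):
+
+.. sourcecode:: python
+
+  def looks_like_openstack():
+      try:
+          with open('/sys/class/dmi/id/product_name') as stream:
+              if stream.read().strip() in ('OpenStack Nova',
+                                           'OpenStack Compute'):
+                  return True
+      except IOError:
+          pass
+      with open('/proc/1/environ', 'rb') as stream:
+          return b'product_name=OpenStack Nova' in stream.read()
+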
 Configuration
 -------------
 The following configuration can be set for the datasource in system
@@ -25,18 +40,22 @@ The settings that may be configured are:
    the metadata service. (default: 10)
  * **retries**: The number of retries that should be done for an http request.
    This value is used only after metadata_url is selected. (default: 5)
+ * **apply_network_config**: A boolean specifying whether to configure the
+   network for the instance based on network_data.json provided by the
+   metadata service. When False, only configure dhcp on the primary nic for
+   this instance. (default: True)
 
-An example configuration with the default values is provided as example below:
+An example configuration with the default values is provided below:
 
 .. sourcecode:: yaml
 
-  #cloud-config
   datasource:
    OpenStack:
     metadata_urls: ["http://169.254.169.254";]
     max_wait: -1
     timeout: 10
     retries: 5
+    apply_network_config: True
 
 
 Vendor Data
diff --git a/doc/rtd/topics/network-config-format-v1.rst b/doc/rtd/topics/network-config-format-v1.rst
index 2f8ab54..3b0148c 100644
--- a/doc/rtd/topics/network-config-format-v1.rst
+++ b/doc/rtd/topics/network-config-format-v1.rst
@@ -130,6 +130,18 @@ the bond interfaces.
 The ``bond_interfaces`` key accepts a list of network device ``name`` values
 from the configuration.  This list may be empty.
 
+**mtu**: *<MTU SizeBytes>*
+
+The MTU key represents a device's Maximum Transmission Unit, the largest size
+packet or frame, specified in octets (eight-bit bytes), that can be sent in a
+packet- or frame-based network.  Specifying ``mtu`` is optional.
+
+.. note::
+
+  The possible supported values of a device's MTU are not available at
+  configuration time.  It's possible to specify a value too large or too
+  small for a device, and it may be ignored by the device.
+
 **params**:  *<Dictionary of key: value bonding parameter pairs>*
 
 The ``params`` key in a bond holds a dictionary of bonding parameters.
@@ -268,6 +280,21 @@ Type ``vlan`` requires the following keys:
 - ``vlan_link``: Specify the underlying link via its ``name``.
 - ``vlan_id``: Specify the VLAN numeric id.
 
+The following optional keys are supported:
+
+**mtu**: *<MTU SizeBytes>*
+
+The MTU key represents a device's Maximum Transmission Unit, the largest size
+packet or frame, specified in octets (eight-bit bytes), that can be sent in a
+packet- or frame-based network.  Specifying ``mtu`` is optional.
+
+.. note::
+
+  The possible supported values of a device's MTU are not available at
+  configuration time.  It's possible to specify a value too large or too
+  small for a device, and it may be ignored by the device.
+
+
 **VLAN Example**::
 
    network:
diff --git a/doc/rtd/topics/network-config-format-v2.rst b/doc/rtd/topics/network-config-format-v2.rst
index 335d236..ea370ef 100644
--- a/doc/rtd/topics/network-config-format-v2.rst
+++ b/doc/rtd/topics/network-config-format-v2.rst
@@ -174,6 +174,12 @@ recognized by ``inet_pton(3)``
 Example for IPv4: ``gateway4: 172.16.0.1``
 Example for IPv6: ``gateway6: 2001:4::1``
 
+**mtu**: *<MTU SizeBytes>*
+
+The MTU key represents a device's Maximum Transmission Unit, the largest size
+packet or frame, specified in octets (eight-bit bytes), that can be sent in a
+packet- or frame-based network.  Specifying ``mtu`` is optional.
+
 **nameservers**: *<(mapping)>*
 
 Set DNS servers and search domains, for manual address configuration. There
diff --git a/doc/rtd/topics/tests.rst b/doc/rtd/topics/tests.rst
index cac4a6e..b83bd89 100644
--- a/doc/rtd/topics/tests.rst
+++ b/doc/rtd/topics/tests.rst
@@ -58,7 +58,8 @@ explaining how to run one or the other independently.
     $ tox -e citest -- run --verbose \
         --os-name stretch --os-name xenial \
         --deb cloud-init_0.7.8~my_patch_all.deb \
-        --preserve-data --data-dir ~/collection
+        --preserve-data --data-dir ~/collection \
+        --preserve-instance
 
 The above command will do the following:
 
@@ -76,6 +77,10 @@ The above command will do the following:
 * ``--preserve-data`` always preserve collected data, do not remove data
   after successful test run
 
+* ``--preserve-instance`` do not destroy the instance after test to allow
+  for debugging the stopped instance during integration test development. By
+  default, test instances are destroyed after the test completes.
+
 * ``--data-dir ~/collection`` write collected data into `~/collection`,
   rather than using a temporary directory
 
diff --git a/integration-requirements.txt b/integration-requirements.txt
index df3a73e..e5bb5b2 100644
--- a/integration-requirements.txt
+++ b/integration-requirements.txt
@@ -13,7 +13,7 @@ paramiko==2.4.0
 
 # lxd backend
 # 04/03/2018: enables use of lxd 3.0
-git+https://github.com/lxc/pylxd.git@1a85a12a23401de6e96b1aeaf59ecbff2e88f49d
+git+https://github.com/lxc/pylxd.git@4b8ab1802f9aee4eb29cf7b119dae0aa47150779
 
 
 # finds latest image information
diff --git a/packages/bddeb b/packages/bddeb
index 4f2e2dd..95602a0 100755
--- a/packages/bddeb
+++ b/packages/bddeb
@@ -1,11 +1,14 @@
 #!/usr/bin/env python3
 
 import argparse
+import csv
 import json
 import os
 import shutil
 import sys
 
+UNRELEASED = "UNRELEASED"
+
 
 def find_root():
     # expected path is in <top_dir>/packages/
@@ -28,6 +31,24 @@ if "avoid-pep8-E402-import-not-top-of-file":
 DEBUILD_ARGS = ["-S", "-d"]
 
 
+def get_release_suffix(release):
+    """Given ubuntu release (xenial), return a suffix for package (~16.04.1)"""
+    csv_path = "/usr/share/distro-info/ubuntu.csv"
+    rels = {}
+    # fields are version, codename, series, created, release, eol, eol-server
+    if os.path.exists(csv_path):
+        with open(csv_path, "r") as fp:
+            # version has "16.04 LTS" or "16.10", so drop "LTS" portion.
+            rels = {row['series']: row['version'].replace(' LTS', '')
+                    for row in csv.DictReader(fp)}
+    if release in rels:
+        return "~%s.1" % rels[release]
+    elif release != UNRELEASED:
+        print("missing distro-info-data package, unable to give "
+              "per-release suffix.\n")
+    return ""
+
+
 def run_helper(helper, args=None, strip=True):
     if args is None:
         args = []
@@ -117,7 +138,7 @@ def get_parser():
 
     parser.add_argument("--release", dest="release",
                         help=("build with changelog referencing RELEASE"),
-                        default="UNRELEASED")
+                        default=UNRELEASED)
 
     for ent in DEBUILD_ARGS:
         parser.add_argument(ent, dest="debuild_args", action='append_const',
@@ -148,7 +169,10 @@ def main():
     if args.verbose:
         capture = False
 
-    templ_data = {'debian_release': args.release}
+    templ_data = {
+        'debian_release': args.release,
+        'release_suffix': get_release_suffix(args.release)}
+
     with temp_utils.tempdir() as tdir:
 
         # output like 0.7.6-1022-g36e92d3
@@ -157,10 +181,18 @@ def main():
         # This is really only a temporary archive
         # since we will extract it then add in the debian
         # folder, then re-archive it for debian happiness
-        print("Creating a temporary tarball using the 'make-tarball' helper")
         tarball = "cloud-init_%s.orig.tar.gz" % ver_data['version_long']
         tarball_fp = util.abs_join(tdir, tarball)
-        run_helper('make-tarball', ['--long', '--output=' + tarball_fp])
+        path = None
+        for pd in ("./", "../", "../dl/"):
+            if os.path.exists(pd + tarball):
+                path = pd + tarball
+                print("Using existing tarball %s" % path)
+                shutil.copy(path, tarball_fp)
+                break
+        if path is None:
+            print("Creating a temp tarball using the 'make-tarball' helper")
+            run_helper('make-tarball', ['--long', '--output=' + tarball_fp])
 
         print("Extracting temporary tarball %r" % (tarball))
         cmd = ['tar', '-xvzf', tarball_fp, '-C', tdir]
diff --git a/packages/brpm b/packages/brpm
index 3439cf3..a154ef2 100755
--- a/packages/brpm
+++ b/packages/brpm
@@ -42,13 +42,13 @@ def run_helper(helper, args=None, strip=True):
     return stdout
 
 
-def read_dependencies(requirements_file='requirements.txt'):
+def read_dependencies(distro, requirements_file='requirements.txt'):
     """Returns the Python package depedencies from requirements.txt files.
 
     @returns a tuple of (requirements, test_requirements)
     """
     pkg_deps = run_helper(
-        'read-dependencies', args=['--distro', 'redhat']).splitlines()
+        'read-dependencies', args=['--distro', distro]).splitlines()
     test_deps = run_helper(
         'read-dependencies', args=[
             '--requirements-file', 'test-requirements.txt',
@@ -83,7 +83,7 @@ def generate_spec_contents(args, version_data, tmpl_fn, top_dir, arc_fn):
         rpm_upstream_version = version_data['version']
     subs['rpm_upstream_version'] = rpm_upstream_version
 
-    deps, test_deps = read_dependencies()
+    deps, test_deps = read_dependencies(distro=args.distro)
     subs['buildrequires'] = deps + test_deps
     subs['requires'] = deps
 
diff --git a/packages/debian/changelog.in b/packages/debian/changelog.in
index bdf8d56..930322f 100644
--- a/packages/debian/changelog.in
+++ b/packages/debian/changelog.in
@@ -1,5 +1,5 @@
 ## template:basic
-cloud-init (${version_long}-1~bddeb) ${debian_release}; urgency=low
+cloud-init (${version_long}-1~bddeb${release_suffix}) ${debian_release}; urgency=low
 
   * build
 
diff --git a/packages/debian/rules.in b/packages/debian/rules.in
index 4aa907e..e542c7f 100755
--- a/packages/debian/rules.in
+++ b/packages/debian/rules.in
@@ -3,6 +3,7 @@
 INIT_SYSTEM ?= systemd
 export PYBUILD_INSTALL_ARGS=--init-system=$(INIT_SYSTEM)
 PYVER ?= python${pyver}
+DEB_VERSION := $(shell dpkg-parsechangelog --show-field=Version)
 
 %:
 	dh $@ --with $(PYVER),systemd --buildsystem pybuild
@@ -14,6 +15,7 @@ override_dh_install:
 	cp tools/21-cloudinit.conf debian/cloud-init/etc/rsyslog.d/21-cloudinit.conf
 	install -D ./tools/Z99-cloud-locale-test.sh debian/cloud-init/etc/profile.d/Z99-cloud-locale-test.sh
 	install -D ./tools/Z99-cloudinit-warnings.sh debian/cloud-init/etc/profile.d/Z99-cloudinit-warnings.sh
+	flist=$$(find $(CURDIR)/debian/ -type f -name version.py) && sed -i 's,@@PACKAGED_VERSION@@,$(DEB_VERSION),' $${flist:-did-not-find-version-py-for-replacement}
 
 override_dh_auto_test:
 ifeq (,$(findstring nocheck,$(DEB_BUILD_OPTIONS)))
diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
index 91faf3c..a3a6d1e 100644
--- a/packages/redhat/cloud-init.spec.in
+++ b/packages/redhat/cloud-init.spec.in
@@ -115,6 +115,13 @@ rm -rf $RPM_BUILD_ROOT%{python_sitelib}/tests
 mkdir -p $RPM_BUILD_ROOT/%{_sharedstatedir}/cloud
 mkdir -p $RPM_BUILD_ROOT/%{_libexecdir}/%{name}
 
+# patch in the full version to version.py
+version_pys=$(cd "$RPM_BUILD_ROOT" && find . -name version.py -type f)
+[ -n "$version_pys" ] ||
+   { echo "failed to find 'version.py' to patch with version." 1>&2; exit 1; }
+( cd "$RPM_BUILD_ROOT" &&
+  sed -i "s,@@PACKAGED_VERSION@@,%{version}-%{release}," $version_pys )
+
 %clean
 rm -rf $RPM_BUILD_ROOT
 
diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
index bbb965a..e781d74 100644
--- a/packages/suse/cloud-init.spec.in
+++ b/packages/suse/cloud-init.spec.in
@@ -5,7 +5,7 @@
 # Or: http://www.rpm.org/max-rpm/ch-rpm-inside.html
 
 Name:           cloud-init
-Version:        {{version}}
+Version:        {{rpm_upstream_version}}
 Release:        1{{subrelease}}%{?dist}
 Summary:        Cloud instance init scripts
 
@@ -16,22 +16,13 @@ URL:            http://launchpad.net/cloud-init
 Source0:        {{archive_name}}
 BuildRoot:      %{_tmppath}/%{name}-%{version}-build
 
-%if 0%{?suse_version} && 0%{?suse_version} <= 1110
-%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")}
-%else
 BuildArch:      noarch
-%endif
+
 
 {% for r in buildrequires %}
 BuildRequires:        {{r}}
 {% endfor %}
 
-%if 0%{?suse_version} && 0%{?suse_version} <= 1210
-  %define initsys sysvinit
-%else
-  %define initsys systemd
-%endif
-
 # Install pypi 'dynamic' requirements
 {% for r in requires %}
 Requires:       {{r}}
@@ -39,7 +30,7 @@ Requires:       {{r}}
 
 # Custom patches
 {% for p in patches %}
-Patch{{loop.index0}: {{p}}
+Patch{{loop.index0}}: {{p}}
 {% endfor %}
 
 %description
@@ -63,35 +54,21 @@ end for
 %{__python} setup.py install \
             --skip-build --root=%{buildroot} --prefix=%{_prefix} \
             --record-rpm=INSTALLED_FILES --install-lib=%{python_sitelib} \
-            --init-system=%{initsys}
+            --init-system=systemd
+
+# Move udev rules
+mkdir -p %{buildroot}/usr/lib/udev/rules.d/
+mv %{buildroot}/lib/udev/rules.d/* %{buildroot}/usr/lib/udev/rules.d/
 
 # Remove non-SUSE templates
 rm %{buildroot}/%{_sysconfdir}/cloud/templates/*.debian.*
 rm %{buildroot}/%{_sysconfdir}/cloud/templates/*.redhat.*
 rm %{buildroot}/%{_sysconfdir}/cloud/templates/*.ubuntu.*
 
-# Remove cloud-init tests
-rm -r %{buildroot}/%{python_sitelib}/tests
-
-# Move sysvinit scripts to the correct place and create symbolic links
-%if %{initsys} == sysvinit
-   mkdir -p %{buildroot}/%{_initddir}
-   mv %{buildroot}%{_sysconfdir}/rc.d/init.d/* %{buildroot}%{_initddir}/
-   rmdir %{buildroot}%{_sysconfdir}/rc.d/init.d
-   rmdir %{buildroot}%{_sysconfdir}/rc.d
-
-   mkdir -p %{buildroot}/%{_sbindir}
-   pushd %{buildroot}/%{_initddir}
-   for file in * ; do
-      ln -s %{_initddir}/${file} %{buildroot}/%{_sbindir}/rc${file}
-   done
-   popd
-%endif
-
 # Move documentation
 mkdir -p %{buildroot}/%{_defaultdocdir}
 mv %{buildroot}/usr/share/doc/cloud-init %{buildroot}/%{_defaultdocdir}
-for doc in TODO LICENSE ChangeLog requirements.txt; do
+for doc in LICENSE ChangeLog requirements.txt; do
    cp ${doc} %{buildroot}/%{_defaultdocdir}/cloud-init
 done
 
@@ -102,29 +79,35 @@ done
 
 mkdir -p %{buildroot}/var/lib/cloud
 
+# patch in the full version to version.py
+version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
+[ -n "$version_pys" ] ||
+   { echo "failed to find 'version.py' to patch with version." 1>&2; exit 1; }
+( cd "%{buildroot}" &&
+  sed -i "s,@@PACKAGED_VERSION@@,%{version}-%{release}," $version_pys )
+
 %postun
 %insserv_cleanup
 
 %files
 
-# Sysvinit scripts
-%if %{initsys} == sysvinit
-   %attr(0755, root, root) %{_initddir}/cloud-config
-   %attr(0755, root, root) %{_initddir}/cloud-final
-   %attr(0755, root, root) %{_initddir}/cloud-init-local
-   %attr(0755, root, root) %{_initddir}/cloud-init
-
-   %{_sbindir}/rccloud-*
-%endif
-
 # Program binaries
 %{_bindir}/cloud-init*
 
+# systemd files
+/usr/lib/systemd/system-generators/*
+/usr/lib/systemd/system/*
+
 # There doesn't seem to be an agreed upon place for these
 # although it appears the standard says /usr/lib but rpmbuild
 # will try /usr/lib64 ??
 /usr/lib/%{name}/uncloud-init
 /usr/lib/%{name}/write-ssh-key-fingerprints
+/usr/lib/%{name}/ds-identify
+
+# udev rules
+/usr/lib/udev/rules.d/66-azure-ephemeral.rules
+
 
 # Docs
 %doc %{_defaultdocdir}/cloud-init/*
@@ -138,6 +121,9 @@ mkdir -p %{buildroot}/var/lib/cloud
 %config(noreplace) %{_sysconfdir}/cloud/templates/*
 %{_sysconfdir}/bash_completion.d/cloud-init
 
+%{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient
+%{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager
+
 # Python code is here...
 %{python_sitelib}/*
 
diff --git a/setup.py b/setup.py
index 85b2337..5ed8eae 100755
--- a/setup.py
+++ b/setup.py
@@ -25,7 +25,7 @@ from distutils.errors import DistutilsArgError
 import subprocess
 
 RENDERED_TMPD_PREFIX = "RENDERED_TEMPD"
-
+VARIANT = None
 
 def is_f(p):
     return os.path.isfile(p)
@@ -114,10 +114,20 @@ def render_tmpl(template):
     atexit.register(shutil.rmtree, tmpd)
     bname = os.path.basename(template).rstrip(tmpl_ext)
     fpath = os.path.join(tmpd, bname)
-    tiny_p([sys.executable, './tools/render-cloudcfg', template, fpath])
+    if VARIANT:
+        tiny_p([sys.executable, './tools/render-cloudcfg', '--variant',
+                VARIANT, template, fpath])
+    else:
+        tiny_p([sys.executable, './tools/render-cloudcfg', template, fpath])
     # return path relative to setup.py
     return os.path.join(os.path.basename(tmpd), bname)
 
+# User can set the variant for template rendering
+if '--distro' in sys.argv:
+    idx = sys.argv.index('--distro')
+    VARIANT = sys.argv[idx+1]
+    del sys.argv[idx+1]
+    sys.argv.remove('--distro')
 
 INITSYS_FILES = {
     'sysvinit': [f for f in glob('sysvinit/redhat/*') if is_f(f)],
@@ -260,7 +270,7 @@ requirements = read_requires()
 setuptools.setup(
     name='cloud-init',
     version=get_version(),
-    description='EC2 initialisation magic',
+    description='Cloud instance initialisation magic',
     author='Scott Moser',
     author_email='scott.moser@xxxxxxxxxxxxx',
     url='http://launchpad.net/cloud-init/',
@@ -277,4 +287,5 @@ setuptools.setup(
     }
 )
 
+
 # vi: ts=4 expandtab
diff --git a/systemd/cloud-config.service.tmpl b/systemd/cloud-config.service.tmpl
index bdee3ce..9d928ca 100644
--- a/systemd/cloud-config.service.tmpl
+++ b/systemd/cloud-config.service.tmpl
@@ -2,6 +2,7 @@
 [Unit]
 Description=Apply the settings specified in cloud-config
 After=network-online.target cloud-config.target
+After=snapd.seeded.service
 Wants=network-online.target cloud-config.target
 
 [Service]
diff --git a/tests/cloud_tests/args.py b/tests/cloud_tests/args.py
index c6c1877..ab34549 100644
--- a/tests/cloud_tests/args.py
+++ b/tests/cloud_tests/args.py
@@ -62,6 +62,9 @@ ARG_SETS = {
         (('-d', '--data-dir'),
          {'help': 'directory to store test data in',
           'action': 'store', 'metavar': 'DIR', 'required': False}),
+        (('--preserve-instance',),
+         {'help': 'do not destroy the instance under test',
+          'action': 'store_true', 'default': False, 'required': False}),
         (('--preserve-data',),
          {'help': 'do not remove collected data after successful run',
           'action': 'store_true', 'default': False, 'required': False}),),
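Each ARG_SETS entry above is an (args, kwargs) pair handed straight to argparse's add_argument. A minimal sketch of that wiring using the new --preserve-instance flag; the standalone parser is illustrative, not the real cloud_tests CLI:

    import argparse

    arg_set = (('--preserve-instance',),
               {'help': 'do not destroy the instance under test',
                'action': 'store_true', 'default': False, 'required': False})

    parser = argparse.ArgumentParser()
    parser.add_argument(*arg_set[0], **arg_set[1])
    args = parser.parse_args(['--preserve-instance'])
    print(args.preserve_instance)  # True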
diff --git a/tests/cloud_tests/collect.py b/tests/cloud_tests/collect.py
index 1ba7285..75b5061 100644
--- a/tests/cloud_tests/collect.py
+++ b/tests/cloud_tests/collect.py
@@ -42,7 +42,7 @@ def collect_console(instance, base_dir):
     @param base_dir: directory to write console log to
     """
     logfile = os.path.join(base_dir, 'console.log')
-    LOG.debug('getting console log for %s to %s', instance, logfile)
+    LOG.debug('getting console log for %s to %s', instance.name, logfile)
     try:
         data = instance.console_log()
     except NotImplementedError as e:
@@ -93,7 +93,8 @@ def collect_test_data(args, snapshot, os_name, test_name):
     # create test instance
     component = PlatformComponent(
         partial(platforms.get_instance, snapshot, user_data,
-                block=True, start=False, use_desc=test_name))
+                block=True, start=False, use_desc=test_name),
+        preserve_instance=args.preserve_instance)
 
     LOG.info('collecting test data for test: %s', test_name)
     with component as instance:
diff --git a/tests/cloud_tests/platforms/instances.py b/tests/cloud_tests/platforms/instances.py
index cc439d2..95bc3b1 100644
--- a/tests/cloud_tests/platforms/instances.py
+++ b/tests/cloud_tests/platforms/instances.py
@@ -87,7 +87,12 @@ class Instance(TargetBase):
             self._ssh_client = None
 
     def _ssh_connect(self):
-        """Connect via SSH."""
+        """Connect via SSH.
+
+        Attempt to SSH to the client on the given IP and port. If the
+        connection fails, retry up to two more times (three attempts in
+        total), sleeping a few seconds between attempts.
+        """
         if self._ssh_client:
             return self._ssh_client
 
@@ -98,21 +103,22 @@ class Instance(TargetBase):
         client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
         private_key = paramiko.RSAKey.from_private_key_file(self.ssh_key_file)
 
-        retries = 30
+        retries = 3
         while retries:
             try:
                 client.connect(username=self.ssh_username,
                                hostname=self.ssh_ip, port=self.ssh_port,
-                               pkey=private_key, banner_timeout=30)
+                               pkey=private_key)
                 self._ssh_client = client
                 return client
             except (ConnectionRefusedError, AuthenticationException,
                     BadHostKeyException, ConnectionResetError, SSHException,
                     OSError):
                 retries -= 1
-                time.sleep(10)
+                LOG.debug('Retrying ssh connection on connect failure')
+                time.sleep(3)
 
-        ssh_cmd = 'Failed ssh connection to %s@%s:%s after 300 seconds' % (
+        ssh_cmd = 'Failed ssh connection to %s@%s:%s after 3 retries' % (
             self.ssh_username, self.ssh_ip, self.ssh_port
         )
         raise util.InTargetExecuteError(b'', b'', 1, ssh_cmd, 'ssh')
@@ -128,18 +134,31 @@ class Instance(TargetBase):
             return ' '.join(l for l in test.strip().splitlines()
                             if not l.lstrip().startswith('#'))
 
-        time = self.config['boot_timeout']
+        boot_timeout = self.config['boot_timeout']
         tests = [self.config['system_ready_script']]
         if wait_for_cloud_init:
             tests.append(self.config['cloud_init_ready_script'])
 
         formatted_tests = ' && '.join(clean_test(t) for t in tests)
         cmd = ('i=0; while [ $i -lt {time} ] && i=$(($i+1)); do {test} && '
-               'exit 0; sleep 1; done; exit 1').format(time=time,
+               'exit 0; sleep 1; done; exit 1').format(time=boot_timeout,
                                                        test=formatted_tests)
 
-        if self.execute(cmd, rcs=(0, 1))[-1] != 0:
-            raise OSError('timeout: after {}s system not started'.format(time))
-
+        end_time = time.time() + boot_timeout
+        while True:
+            try:
+                return_code = self.execute(
+                    cmd, rcs=(0, 1), description='wait for instance start'
+                )[-1]
+                if return_code == 0:
+                    break
+            except util.InTargetExecuteError:
+                LOG.warning("failed to connect via SSH")
+
+            if time.time() < end_time:
+                time.sleep(3)
+            else:
+                raise util.PlatformError('ssh', 'after %ss instance is not '
+                                         'reachable' % boot_timeout)
 
 # vi: ts=4 expandtab
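The rewritten _wait_for_system above replaces a single long-running in-guest shell loop with a wall-clock deadline retried from the harness side. A generic sketch of that retry-until-deadline shape, with check_fn and the timings as assumed placeholders:

    import time

    def wait_until(check_fn, timeout, interval=3):
        """Call check_fn() until it returns truthy or timeout seconds pass."""
        end_time = time.time() + timeout
        while True:
            try:
                if check_fn():
                    return
            except OSError:
                # transient failures count the same as a failed check
                pass
            if time.time() < end_time:
                time.sleep(interval)
            else:
                raise RuntimeError('after %ss condition not met' % timeout)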
diff --git a/tests/cloud_tests/platforms/lxd/instance.py b/tests/cloud_tests/platforms/lxd/instance.py
index 1c17c78..d396519 100644
--- a/tests/cloud_tests/platforms/lxd/instance.py
+++ b/tests/cloud_tests/platforms/lxd/instance.py
@@ -208,7 +208,7 @@ def _has_proper_console_support():
     if 'console' not in info.get('api_extensions', []):
         reason = "LXD server does not support console api extension"
     else:
-        dver = info.get('environment', {}).get('driver_version', "")
+        dver = str(info.get('environment', {}).get('driver_version', ""))
         if dver.startswith("2.") or dver.startswith("1."):
             reason = "LXD Driver version not 3.x+ (%s)" % dver
         else:
diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml
index c7dcbe8..defae02 100644
--- a/tests/cloud_tests/releases.yaml
+++ b/tests/cloud_tests/releases.yaml
@@ -129,6 +129,22 @@ features:
 
 releases:
     # UBUNTU =================================================================
+    cosmic:
+        # EOL: Jul 2019
+        default:
+            enabled: true
+            release: cosmic
+            version: 18.10
+            os: ubuntu
+            feature_groups:
+                - base
+                - debian_base
+                - ubuntu_specific
+        lxd:
+            sstreams_server: https://cloud-images.ubuntu.com/daily
+            alias: cosmic
+            setup_overrides: null
+            override_templates: false
     bionic:
         # EOL: Apr 2023
         default:
diff --git a/tests/cloud_tests/stage.py b/tests/cloud_tests/stage.py
index 74a7d46..d64a1dc 100644
--- a/tests/cloud_tests/stage.py
+++ b/tests/cloud_tests/stage.py
@@ -12,9 +12,15 @@ from tests.cloud_tests import LOG
 class PlatformComponent(object):
     """Context manager to safely handle platform components."""
 
-    def __init__(self, get_func):
-        """Store get_<platform component> function as partial with no args."""
+    def __init__(self, get_func, preserve_instance=False):
+        """Store get_<platform component> function as partial with no args.
+
+        @param get_func: Callable returning an instance from the platform.
+        @param preserve_instance: Boolean, when True, do not destroy instance
+            after test. Used for test development.
+        """
         self.get_func = get_func
+        self.preserve_instance = preserve_instance
 
     def __enter__(self):
         """Create instance of platform component."""
@@ -24,7 +30,10 @@ class PlatformComponent(object):
     def __exit__(self, etype, value, trace):
         """Destroy instance."""
         if self.instance is not None:
-            self.instance.destroy()
+            if self.preserve_instance:
+                LOG.info('Preserving test instance %s', self.instance.name)
+            else:
+                self.instance.destroy()
 
 
 def run_single(name, call):
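PlatformComponent now destroys the instance on exit only when preserve_instance is unset. A minimal sketch of the same guard as a standalone context manager; the factory/resource names are hypothetical:

    class Guarded(object):
        """Context manager that optionally preserves its resource."""

        def __init__(self, factory, preserve=False):
            self.factory = factory    # callable returning the resource
            self.preserve = preserve  # skip destroy() on exit when True
            self.resource = None

        def __enter__(self):
            self.resource = self.factory()
            return self.resource

        def __exit__(self, etype, value, trace):
            if self.resource is not None and not self.preserve:
                self.resource.destroy()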
diff --git a/tests/cloud_tests/testcases.yaml b/tests/cloud_tests/testcases.yaml
index a3e2990..a16d1dd 100644
--- a/tests/cloud_tests/testcases.yaml
+++ b/tests/cloud_tests/testcases.yaml
@@ -24,9 +24,9 @@ base_test_data:
         status.json: |
             #!/bin/sh
             cat /run/cloud-init/status.json
-        cloud-init-version: |
+        package-versions: |
             #!/bin/sh
-            dpkg-query -W -f='${Version}' cloud-init
+            dpkg-query --show
         system.journal.gz: |
             #!/bin/sh
             [ -d /run/systemd ] || { echo "not systemd."; exit 0; }
diff --git a/tests/cloud_tests/testcases/base.py b/tests/cloud_tests/testcases/base.py
index 0d1916b..696db8d 100644
--- a/tests/cloud_tests/testcases/base.py
+++ b/tests/cloud_tests/testcases/base.py
@@ -31,6 +31,27 @@ class CloudTestCase(unittest.TestCase):
     def is_distro(self, distro_name):
         return self.os_cfg['os'] == distro_name
 
+    def assertPackageInstalled(self, name, version=None):
+        """Check dpkg-query --show output for matching package name.
+
+        @param name: package base name
+        @param version: string representing a package version or part of a
+            version.
+        """
+        pkg_out = self.get_data_file('package-versions')
+        pkg_match = re.search(
+            '^%s\t(?P<version>.*)$' % name, pkg_out, re.MULTILINE)
+        if pkg_match:
+            installed_version = pkg_match.group('version')
+            if not version:
+                return  # Success
+            if installed_version.startswith(version):
+                return  # Success
+            raise AssertionError(
+                'Expected package version %s-%s not found. Found %s' %
+                (name, version, installed_version))
+        raise AssertionError('Package not installed: %s' % name)
+
     def os_version_cmp(self, cmp_version):
         """Compare the version of the test to comparison_version.
 
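assertPackageInstalled above scans the collected package-versions file, which holds the tab-separated name<TAB>version lines that dpkg-query --show emits. A sketch of the same lookup against fabricated sample output:

    import re

    # fabricated dpkg-query --show output
    pkg_out = 'byobu\t5.125-0ubuntu1\nsl\t3.03-17build1\ntree\t1.7.0-5\n'

    match = re.search(r'^%s\t(?P<version>.*)$' % 'sl', pkg_out, re.MULTILINE)
    assert match is not None
    assert match.group('version').startswith('3.03')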
diff --git a/tests/cloud_tests/testcases/modules/byobu.py b/tests/cloud_tests/testcases/modules/byobu.py
index 005ca01..74d0529 100644
--- a/tests/cloud_tests/testcases/modules/byobu.py
+++ b/tests/cloud_tests/testcases/modules/byobu.py
@@ -9,8 +9,7 @@ class TestByobu(base.CloudTestCase):
 
     def test_byobu_installed(self):
         """Test byobu installed."""
-        out = self.get_data_file('byobu_installed')
-        self.assertIn('/usr/bin/byobu', out)
+        self.assertPackageInstalled('byobu')
 
     def test_byobu_profile_enabled(self):
         """Test byobu profile.d file exists."""
diff --git a/tests/cloud_tests/testcases/modules/byobu.yaml b/tests/cloud_tests/testcases/modules/byobu.yaml
index a9aa1f3..d002a61 100644
--- a/tests/cloud_tests/testcases/modules/byobu.yaml
+++ b/tests/cloud_tests/testcases/modules/byobu.yaml
@@ -7,9 +7,6 @@ cloud_config: |
   #cloud-config
   byobu_by_default: enable
 collect_scripts:
-  byobu_installed: |
-    #!/bin/bash
-    which byobu
   byobu_profile_enabled: |
     #!/bin/bash
     ls /etc/profile.d/Z97-byobu.sh
diff --git a/tests/cloud_tests/testcases/modules/ca_certs.py b/tests/cloud_tests/testcases/modules/ca_certs.py
index e75f041..6b56f63 100644
--- a/tests/cloud_tests/testcases/modules/ca_certs.py
+++ b/tests/cloud_tests/testcases/modules/ca_certs.py
@@ -7,10 +7,23 @@ from tests.cloud_tests.testcases import base
 class TestCaCerts(base.CloudTestCase):
     """Test ca certs module."""
 
-    def test_cert_count(self):
-        """Test the count is proper."""
-        out = self.get_data_file('cert_count')
-        self.assertEqual(5, int(out))
+    def test_certs_updated(self):
+        """Test certs have been updated in /etc/ssl/certs."""
+        out = self.get_data_file('cert_links')
+        # Bionic update-ca-certificates creates fewer links; Debian bug #895075
+        unlinked_files = []
+        links = {}
+        for cert_line in out.splitlines():
+            if '->' in cert_line:
+                fname, _sep, link = cert_line.split()
+                links[fname] = link
+            else:
+                unlinked_files.append(cert_line)
+        self.assertEqual(['ca-certificates.crt'], unlinked_files)
+        self.assertEqual('cloud-init-ca-certs.pem', links['a535c1f3.0'])
+        self.assertEqual(
+            '/usr/share/ca-certificates/cloud-init-ca-certs.crt',
+            links['cloud-init-ca-certs.pem'])
 
     def test_cert_installed(self):
         """Test line from our cert exists."""
diff --git a/tests/cloud_tests/testcases/modules/ca_certs.yaml b/tests/cloud_tests/testcases/modules/ca_certs.yaml
index d939f43..2cd9155 100644
--- a/tests/cloud_tests/testcases/modules/ca_certs.yaml
+++ b/tests/cloud_tests/testcases/modules/ca_certs.yaml
@@ -43,9 +43,13 @@ cloud_config: |
         DiH5uEqBXExjrj0FslxcVKdVj5glVcSmkLwZKbEU1OKwleT/iXFhvooWhQ==
         -----END CERTIFICATE-----
 collect_scripts:
-  cert_count: |
+  cert_links: |
     #!/bin/bash
-    ls -l /etc/ssl/certs | wc -l
+    # links printed <filename> -> <link target>
+    # non-links printed <filename>
+    for file in `ls /etc/ssl/certs`; do
+        [ -h /etc/ssl/certs/$file ] && echo -n $file ' -> ' && readlink /etc/ssl/certs/$file || echo $file;
+    done
   cert: |
     #!/bin/bash
     md5sum /etc/ssl/certs/ca-certificates.crt
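The cert_links collect script above prints `name -> target` for symlinks and the bare name otherwise, and test_certs_updated splits on that marker. A sketch of the parsing against fabricated output:

    out = ('ca-certificates.crt\n'
           'cloud-init-ca-certs.pem -> '
           '/usr/share/ca-certificates/cloud-init-ca-certs.crt\n')

    links, unlinked = {}, []
    for line in out.splitlines():
        if '->' in line:
            fname, _sep, target = line.split()
            links[fname] = target
        else:
            unlinked.append(line)

    assert unlinked == ['ca-certificates.crt']
    assert links['cloud-init-ca-certs.pem'].endswith('cloud-init-ca-certs.crt')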
diff --git a/tests/cloud_tests/testcases/modules/ntp.py b/tests/cloud_tests/testcases/modules/ntp.py
index b50e52f..c63cc15 100644
--- a/tests/cloud_tests/testcases/modules/ntp.py
+++ b/tests/cloud_tests/testcases/modules/ntp.py
@@ -9,15 +9,14 @@ class TestNtp(base.CloudTestCase):
 
     def test_ntp_installed(self):
         """Test ntp installed"""
-        out = self.get_data_file('ntp_installed')
-        self.assertEqual(0, int(out))
+        self.assertPackageInstalled('ntp')
 
     def test_ntp_dist_entries(self):
         """Test dist config file is empty"""
         out = self.get_data_file('ntp_conf_dist_empty')
         self.assertEqual(0, int(out))
 
-    def test_ntp_entires(self):
+    def test_ntp_entries(self):
         """Test config entries"""
         out = self.get_data_file('ntp_conf_pool_list')
         self.assertIn('pool.ntp.org iburst', out)
diff --git a/tests/cloud_tests/testcases/modules/ntp_chrony.py b/tests/cloud_tests/testcases/modules/ntp_chrony.py
index 461630a..7d34177 100644
--- a/tests/cloud_tests/testcases/modules/ntp_chrony.py
+++ b/tests/cloud_tests/testcases/modules/ntp_chrony.py
@@ -1,13 +1,24 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 """cloud-init Integration Test Verify Script."""
+import unittest
+
 from tests.cloud_tests.testcases import base
 
 
 class TestNtpChrony(base.CloudTestCase):
     """Test ntp module with chrony client"""
 
-    def test_chrony_entires(self):
+    def setUp(self):
+        """Skip this suite of tests on lxd and artful or older."""
+        if self.platform == 'lxd':
+            if self.is_distro('ubuntu') and self.os_version_cmp('artful') <= 0:
+                raise unittest.SkipTest(
+                    'No support for chrony on containers <= artful.'
+                    ' LP: #1589780')
+        return super(TestNtpChrony, self).setUp()
+
+    def test_chrony_entries(self):
         """Test chrony config entries"""
         out = self.get_data_file('chrony_conf')
         self.assertIn('.pool.ntp.org', out)
diff --git a/tests/cloud_tests/testcases/modules/package_update_upgrade_install.py b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.py
index a92dec2..fecad76 100644
--- a/tests/cloud_tests/testcases/modules/package_update_upgrade_install.py
+++ b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.py
@@ -7,15 +7,13 @@ from tests.cloud_tests.testcases import base
 class TestPackageInstallUpdateUpgrade(base.CloudTestCase):
     """Test package install update upgrade module."""
 
-    def test_installed_htop(self):
-        """Test htop got installed."""
-        out = self.get_data_file('dpkg_htop')
-        self.assertEqual(1, int(out))
+    def test_installed_sl(self):
+        """Test sl got installed."""
+        self.assertPackageInstalled('sl')
 
     def test_installed_tree(self):
         """Test tree got installed."""
-        out = self.get_data_file('dpkg_tree')
-        self.assertEqual(1, int(out))
+        self.assertPackageInstalled('tree')
 
     def test_apt_history(self):
         """Test apt history for update command."""
@@ -23,13 +21,13 @@ class TestPackageInstallUpdateUpgrade(base.CloudTestCase):
         self.assertIn(
             'Commandline: /usr/bin/apt-get --option=Dpkg::Options'
             '::=--force-confold --option=Dpkg::options::=--force-unsafe-io '
-            '--assume-yes --quiet install htop tree', out)
+            '--assume-yes --quiet install sl tree', out)
 
     def test_cloud_init_output(self):
         """Test cloud-init-output for install & upgrade stuff."""
         out = self.get_data_file('cloud-init-output.log')
         self.assertIn('Setting up tree (', out)
-        self.assertIn('Setting up htop (', out)
+        self.assertIn('Setting up sl (', out)
         self.assertIn('Reading package lists...', out)
         self.assertIn('Building dependency tree...', out)
         self.assertIn('Reading state information...', out)
diff --git a/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml
index 71d24b8..dd79e43 100644
--- a/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml
+++ b/tests/cloud_tests/testcases/modules/package_update_upgrade_install.yaml
@@ -15,7 +15,7 @@ required_features:
 cloud_config: |
   #cloud-config
   packages:
-    - htop
+    - sl
     - tree
   package_update: true
   package_upgrade: true
@@ -23,11 +23,8 @@ collect_scripts:
   apt_history_cmdline: |
     #!/bin/bash
     grep ^Commandline: /var/log/apt/history.log
-  dpkg_htop: |
+  dpkg_show: |
     #!/bin/bash
-    dpkg -l | grep htop | wc -l
-  dpkg_tree: |
-    #!/bin/bash
-    dpkg -l | grep tree | wc -l
+    dpkg-query --show
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/modules/salt_minion.py b/tests/cloud_tests/testcases/modules/salt_minion.py
index 70917a4..fc9688e 100644
--- a/tests/cloud_tests/testcases/modules/salt_minion.py
+++ b/tests/cloud_tests/testcases/modules/salt_minion.py
@@ -33,7 +33,6 @@ class Test(base.CloudTestCase):
 
     def test_minion_installed(self):
         """Test if the salt-minion package is installed"""
-        out = self.get_data_file('minion_installed')
-        self.assertEqual(1, int(out))
+        self.assertPackageInstalled('salt-minion')
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/modules/salt_minion.yaml b/tests/cloud_tests/testcases/modules/salt_minion.yaml
index f20b976..9227147 100644
--- a/tests/cloud_tests/testcases/modules/salt_minion.yaml
+++ b/tests/cloud_tests/testcases/modules/salt_minion.yaml
@@ -28,15 +28,22 @@ collect_scripts:
     cat /etc/salt/minion_id
   minion.pem: |
     #!/bin/bash
-    cat /etc/salt/pki/minion/minion.pem
+    PRIV_KEYFILE=/etc/salt/pki/minion/minion.pem
+    if [ ! -f $PRIV_KEYFILE ]; then
+        # Bionic and later automatically moves /etc/salt/pki/minion/*
+        PRIV_KEYFILE=/var/lib/salt/pki/minion/minion.pem
+    fi
+    cat $PRIV_KEYFILE
   minion.pub: |
     #!/bin/bash
-    cat /etc/salt/pki/minion/minion.pub
+    PUB_KEYFILE=/etc/salt/pki/minion/minion.pub
+    if [ ! -f $PUB_KEYFILE ]; then
+        # Bionic and later automatically moves /etc/salt/pki/minion/*
+        PUB_KEYFILE=/var/lib/salt/pki/minion/minion.pub
+    fi
+    cat $PUB_KEYFILE
   grains: |
     #!/bin/bash
     cat /etc/salt/grains
-  minion_installed: |
-    #!/bin/bash
-    dpkg -l | grep salt-minion | grep ii | wc -l
 
 # vi: ts=4 expandtab
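The minion.pem/minion.pub scripts above fall back to /var/lib/salt when the key is absent under /etc/salt, since Bionic's salt packaging relocates it. The same fallback in Python, as an illustration with the paths mirrored from the scripts:

    import os.path

    priv = '/etc/salt/pki/minion/minion.pem'
    if not os.path.isfile(priv):
        # Bionic and later automatically move /etc/salt/pki/minion/*
        priv = '/var/lib/salt/pki/minion/minion.pem'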
diff --git a/tests/cloud_tests/verify.py b/tests/cloud_tests/verify.py
index 5a68a48..bfb2744 100644
--- a/tests/cloud_tests/verify.py
+++ b/tests/cloud_tests/verify.py
@@ -56,6 +56,51 @@ def verify_data(data_dir, platform, os_name, tests):
     return res
 
 
+def format_test_failures(test_result):
+    """Return a human-readable printable format of test failures."""
+    if not test_result['failures']:
+        return ''
+    failure_hdr = '    test failures:'
+    failure_fmt = '    * {module}.{class}.{function}\n          {error}'
+    output = []
+    for failure in test_result['failures']:
+        if not output:
+            output = [failure_hdr]
+        output.append(failure_fmt.format(**failure))
+    return '\n'.join(output)
+
+
+def format_results(res):
+    """Return human-readable results as a string"""
+    platform_hdr = 'Platform: {platform}'
+    distro_hdr = '  Distro: {distro}'
+    distro_summary_fmt = (
+        '    test modules passed:{passed} tests failed:{failed}')
+    output = ['']
+    counts = {}
+    for platform, platform_data in res.items():
+        output.append(platform_hdr.format(platform=platform))
+        counts[platform] = {}
+        for distro, distro_data in platform_data.items():
+            distro_failure_output = []
+            output.append(distro_hdr.format(distro=distro))
+            counts[platform][distro] = {'passed': 0, 'failed': 0}
+            for _, test_result in distro_data.items():
+                if test_result['passed']:
+                    counts[platform][distro]['passed'] += 1
+                else:
+                    counts[platform][distro]['failed'] += len(
+                        test_result['failures'])
+                    failure_output = format_test_failures(test_result)
+                    if failure_output:
+                        distro_failure_output.append(failure_output)
+            output.append(
+                distro_summary_fmt.format(**counts[platform][distro]))
+            if distro_failure_output:
+                output.extend(distro_failure_output)
+    return '\n'.join(output)
+
+
 def verify(args):
     """Verify test data.
 
@@ -90,7 +135,7 @@ def verify(args):
             failed += len(fail_list)
 
     # dump results
-    LOG.debug('verify results: %s', res)
+    LOG.debug('\n---- Verify summarized results:\n%s', format_results(res))
     if args.result:
         util.merge_results({'verify': res}, args.result)
 
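format_results above walks a nested platform -> distro -> test-name mapping, where each result carries a passed flag and a failures list of module/class/function/error dicts. A fabricated example of the input and, roughly, the summary it renders:

    res = {'lxd': {'bionic': {
        'modules/ntp': {'passed': True, 'failures': []},
        'modules/byobu': {'passed': False, 'failures': [{
            'module': 'byobu', 'class': 'TestByobu',
            'function': 'test_byobu_installed',
            'error': 'AssertionError: package not installed'}]},
    }}}

    # format_results(res) would print roughly:
    #
    # Platform: lxd
    #   Distro: bionic
    #     test modules passed:1 tests failed:1
    #     test failures:
    #     * byobu.TestByobu.test_byobu_installed
    #           AssertionError: package not installed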
diff --git a/tests/data/netinfo/netdev-formatted-output-down b/tests/data/netinfo/netdev-formatted-output-down
new file mode 100644
index 0000000..038dfb4
--- /dev/null
+++ b/tests/data/netinfo/netdev-formatted-output-down
@@ -0,0 +1,8 @@
++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++
++--------+-------+-----------+-----------+-------+-------------------+
+| Device |   Up  |  Address  |    Mask   | Scope |     Hw-Address    |
++--------+-------+-----------+-----------+-------+-------------------+
+|  eth0  | False |     .     |     .     |   .   | 00:16:3e:de:51:a6 |
+|   lo   |  True | 127.0.0.1 | 255.0.0.0 |  host |         .         |
+|   lo   |  True |  ::1/128  |     .     |  host |         .         |
++--------+-------+-----------+-----------+-------+-------------------+
diff --git a/tests/data/netinfo/new-ifconfig-output-down b/tests/data/netinfo/new-ifconfig-output-down
new file mode 100644
index 0000000..5d12e35
--- /dev/null
+++ b/tests/data/netinfo/new-ifconfig-output-down
@@ -0,0 +1,15 @@
+eth0: flags=4098<BROADCAST,MULTICAST>  mtu 1500
+        ether 00:16:3e:de:51:a6  txqueuelen 1000  (Ethernet)
+        RX packets 126229  bytes 158139342 (158.1 MB)
+        RX errors 0  dropped 0  overruns 0  frame 0
+        TX packets 59317  bytes 4839008 (4.8 MB)
+        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
+
+lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
+        inet 127.0.0.1  netmask 255.0.0.0
+        inet6 ::1  prefixlen 128  scopeid 0x10<host>
+        loop  txqueuelen 1000  (Local Loopback)
+        RX packets 260  bytes 20092 (20.0 KB)
+        RX errors 0  dropped 0  overruns 0  frame 0
+        TX packets 260  bytes 20092 (20.0 KB)
+        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
diff --git a/tests/data/netinfo/sample-ipaddrshow-output-down b/tests/data/netinfo/sample-ipaddrshow-output-down
new file mode 100644
index 0000000..cb516d6
--- /dev/null
+++ b/tests/data/netinfo/sample-ipaddrshow-output-down
@@ -0,0 +1,8 @@
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+    inet6 ::1/128 scope host
+       valid_lft forever preferred_lft forever
+44: eth0@if45: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
+    link/ether 00:16:3e:de:51:a6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
diff --git a/tests/unittests/test__init__.py b/tests/unittests/test__init__.py
index f1ab02e..739bbeb 100644
--- a/tests/unittests/test__init__.py
+++ b/tests/unittests/test__init__.py
@@ -182,7 +182,7 @@ class TestCmdlineUrl(CiTestCase):
         self.assertEqual(
             ('url', 'http://example.com'), main.parse_cmdline_url(cmdline))
 
-    @mock.patch('cloudinit.cmd.main.util.read_file_or_url')
+    @mock.patch('cloudinit.cmd.main.url_helper.read_file_or_url')
     def test_invalid_content(self, m_read):
         key = "cloud-config-url"
         url = 'http://example.com/foo'
@@ -196,7 +196,7 @@ class TestCmdlineUrl(CiTestCase):
         self.assertIn(url, msg)
         self.assertFalse(os.path.exists(fpath))
 
-    @mock.patch('cloudinit.cmd.main.util.read_file_or_url')
+    @mock.patch('cloudinit.cmd.main.url_helper.read_file_or_url')
     def test_valid_content(self, m_read):
         url = "http://example.com/foo";
         payload = b"#cloud-config\nmydata: foo\nbar: wark\n"
@@ -210,7 +210,7 @@ class TestCmdlineUrl(CiTestCase):
         self.assertEqual(logging.INFO, lvl)
         self.assertIn(url, msg)
 
-    @mock.patch('cloudinit.cmd.main.util.read_file_or_url')
+    @mock.patch('cloudinit.cmd.main.url_helper.read_file_or_url')
     def test_no_key_found(self, m_read):
         cmdline = "ro mykey=http://example.com/foo root=foo"
         fpath = self.tmp_path("ccpath")
@@ -221,7 +221,7 @@ class TestCmdlineUrl(CiTestCase):
         self.assertFalse(os.path.exists(fpath))
         self.assertEqual(logging.DEBUG, lvl)
 
-    @mock.patch('cloudinit.cmd.main.util.read_file_or_url')
+    @mock.patch('cloudinit.cmd.main.url_helper.read_file_or_url')
     def test_exception_warns(self, m_read):
         url = "http://example.com/foo";
         cmdline = "ro cloud-config-url=%s root=LABEL=bar" % url
diff --git a/tests/unittests/test_data.py b/tests/unittests/test_data.py
index 275b16d..3efe7ad 100644
--- a/tests/unittests/test_data.py
+++ b/tests/unittests/test_data.py
@@ -524,7 +524,17 @@ c: 4
         self.assertEqual(cfg.get('password'), 'gocubs')
         self.assertEqual(cfg.get('locale'), 'chicago')
 
-    @httpretty.activate
+
+class TestConsumeUserDataHttp(TestConsumeUserData, helpers.HttprettyTestCase):
+
+    def setUp(self):
+        TestConsumeUserData.setUp(self)
+        helpers.HttprettyTestCase.setUp(self)
+
+    def tearDown(self):
+        TestConsumeUserData.tearDown(self)
+        helpers.HttprettyTestCase.tearDown(self)
+
     @mock.patch('cloudinit.url_helper.time.sleep')
     def test_include(self, mock_sleep):
         """Test #include."""
@@ -543,7 +553,6 @@ c: 4
         cc = util.load_yaml(cc_contents)
         self.assertTrue(cc.get('included'))
 
-    @httpretty.activate
     @mock.patch('cloudinit.url_helper.time.sleep')
     def test_include_bad_url(self, mock_sleep):
         """Test #include with a bad URL."""
@@ -597,8 +606,10 @@ class TestUDProcess(helpers.ResourceUsingTestCase):
 
 
 class TestConvertString(helpers.TestCase):
+
     def test_handles_binary_non_utf8_decodable(self):
-        blob = b'\x32\x99'
+        """Printable unicode (not utf8-decodable) is safely converted."""
+        blob = b'#!/bin/bash\necho \xc3\x84\n'
         msg = ud.convert_string(blob)
         self.assertEqual(blob, msg.get_payload(decode=True))
 
@@ -612,6 +623,13 @@ class TestConvertString(helpers.TestCase):
         msg = ud.convert_string(text)
         self.assertEqual(text, msg.get_payload(decode=False))
 
+    def test_handle_mime_parts(self):
+        """Mime parts are properly returned as a mime message."""
+        message = MIMEBase("text", "plain")
+        message.set_payload("Just text")
+        msg = ud.convert_string(str(message))
+        self.assertEqual("Just text", msg.get_payload(decode=False))
+
 
 class TestFetchBaseConfig(helpers.TestCase):
     def test_only_builtin_gets_builtin(self):
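The test_data.py hunks above (and the datasource test changes below) drop per-test @httpretty.activate decorators in favour of a base class that manages httpretty in setUp/tearDown. A minimal sketch of that pattern; this stand-in class is an assumption, not cloud-init's actual helper:

    import unittest

    import httpretty

    class HttprettyBase(unittest.TestCase):
        def setUp(self):
            super(HttprettyBase, self).setUp()
            httpretty.reset()
            httpretty.enable()
            # fail fast instead of hitting the real network
            httpretty.HTTPretty.allow_net_connect = False

        def tearDown(self):
            httpretty.disable()
            httpretty.reset()
            super(HttprettyBase, self).tearDown()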
diff --git a/tests/unittests/test_datasource/test_aliyun.py b/tests/unittests/test_datasource/test_aliyun.py
index 4fa9616..1e77842 100644
--- a/tests/unittests/test_datasource/test_aliyun.py
+++ b/tests/unittests/test_datasource/test_aliyun.py
@@ -130,7 +130,6 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase):
                          self.ds.get_hostname())
 
     @mock.patch("cloudinit.sources.DataSourceAliYun._is_aliyun")
-    @httpretty.activate
     def test_with_mock_server(self, m_is_aliyun):
         m_is_aliyun.return_value = True
         self.regist_default_server()
@@ -143,7 +142,6 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase):
         self._test_host_name()
 
     @mock.patch("cloudinit.sources.DataSourceAliYun._is_aliyun")
-    @httpretty.activate
     def test_returns_false_when_not_on_aliyun(self, m_is_aliyun):
         """If is_aliyun returns false, then get_data should return False."""
         m_is_aliyun.return_value = False
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 88fe76c..e82716e 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -1,10 +1,10 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit import helpers
-from cloudinit.util import b64e, decode_binary, load_file, write_file
 from cloudinit.sources import DataSourceAzure as dsaz
-from cloudinit.util import find_freebsd_part
-from cloudinit.util import get_path_dev_freebsd
+from cloudinit.util import (b64e, decode_binary, load_file, write_file,
+                            find_freebsd_part, get_path_dev_freebsd,
+                            MountFailedError)
 from cloudinit.version import version_string as vs
 from cloudinit.tests.helpers import (CiTestCase, TestCase, populate_dir, mock,
                                      ExitStack, PY26, SkipTest)
@@ -95,6 +95,8 @@ class TestAzureDataSource(CiTestCase):
         self.patches = ExitStack()
         self.addCleanup(self.patches.close)
 
+        self.patches.enter_context(mock.patch.object(dsaz, '_get_random_seed'))
+
         super(TestAzureDataSource, self).setUp()
 
     def apply_patches(self, patches):
@@ -335,6 +337,18 @@ fdescfs            /dev/fd          fdescfs rw              0 0
         self.assertTrue(ret)
         self.assertEqual(data['agent_invoked'], '_COMMAND')
 
+    def test_sys_cfg_set_never_destroy_ntfs(self):
+        sys_cfg = {'datasource': {'Azure': {
+            'never_destroy_ntfs': 'user-supplied-value'}}}
+        data = {'ovfcontent': construct_valid_ovf_env(data={}),
+                'sys_cfg': sys_cfg}
+
+        dsrc = self._get_ds(data)
+        ret = self._get_and_setup(dsrc)
+        self.assertTrue(ret)
+        self.assertEqual(dsrc.ds_cfg.get(dsaz.DS_CFG_KEY_PRESERVE_NTFS),
+                         'user-supplied-value')
+
     def test_username_used(self):
         odata = {'HostName': "myhost", 'UserName': "myuser"}
         data = {'ovfcontent': construct_valid_ovf_env(data=odata)}
@@ -676,6 +690,8 @@ class TestAzureBounce(CiTestCase):
                               mock.MagicMock(return_value={})))
         self.patches.enter_context(
             mock.patch.object(dsaz.util, 'which', lambda x: True))
+        self.patches.enter_context(
+            mock.patch.object(dsaz, '_get_random_seed'))
 
         def _dmi_mocks(key):
             if key == 'system-uuid':
@@ -957,7 +973,9 @@ class TestCanDevBeReformatted(CiTestCase):
             # return sorted by partition number
             return sorted(ret, key=lambda d: d[0])
 
-        def mount_cb(device, callback):
+        def mount_cb(device, callback, mtype, update_env_for_mount):
+            self.assertEqual('ntfs', mtype)
+            self.assertEqual('C', update_env_for_mount.get('LANG'))
             p = self.tmp_dir()
             for f in bypath.get(device).get('files', []):
                 write_file(os.path.join(p, f), content=f)
@@ -988,14 +1006,16 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda2': {'num': 2},
                     '/dev/sda3': {'num': 3},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("3 or more", msg.lower())
 
     def test_no_partitions_is_false(self):
         """A disk with no partitions can not be formatted."""
         self.patchup({'/dev/sda': {}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("not partitioned", msg.lower())
 
@@ -1007,7 +1027,8 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda1': {'num': 1},
                     '/dev/sda2': {'num': 2, 'fs': 'ext4', 'files': []},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("not ntfs", msg.lower())
 
@@ -1020,7 +1041,8 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda2': {'num': 2, 'fs': 'ntfs',
                                   'files': ['secret.txt']},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("files on it", msg.lower())
 
@@ -1032,7 +1054,8 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda1': {'num': 1},
                     '/dev/sda2': {'num': 2, 'fs': 'ntfs', 'files': []},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertTrue(value)
         self.assertIn("safe for", msg.lower())
 
@@ -1043,7 +1066,8 @@ class TestCanDevBeReformatted(CiTestCase):
                 'partitions': {
                     '/dev/sda1': {'num': 1, 'fs': 'zfs'},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("not ntfs", msg.lower())
 
@@ -1055,9 +1079,14 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda1': {'num': 1, 'fs': 'ntfs',
                                   'files': ['file1.txt', 'file2.exe']},
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
-        self.assertFalse(value)
-        self.assertIn("files on it", msg.lower())
+        with mock.patch.object(dsaz.LOG, 'warning') as warning:
+            value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                     preserve_ntfs=False)
+            wmsg = warning.call_args[0][0]
+            self.assertIn("looks like you're using NTFS on the ephemeral disk",
+                          wmsg)
+            self.assertFalse(value)
+            self.assertIn("files on it", msg.lower())
 
     def test_one_partition_ntfs_empty_is_true(self):
         """1 mountable ntfs partition and no files can be formatted."""
@@ -1066,7 +1095,8 @@ class TestCanDevBeReformatted(CiTestCase):
                 'partitions': {
                     '/dev/sda1': {'num': 1, 'fs': 'ntfs', 'files': []}
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertTrue(value)
         self.assertIn("safe for", msg.lower())
 
@@ -1078,7 +1108,8 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda1': {'num': 1, 'fs': 'ntfs',
                                   'files': ['dataloss_warning_readme.txt']}
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted("/dev/sda")
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=False)
         self.assertTrue(value)
         self.assertIn("safe for", msg.lower())
 
@@ -1093,7 +1124,8 @@ class TestCanDevBeReformatted(CiTestCase):
                         'num': 1, 'fs': 'ntfs', 'files': [self.warning_file],
                         'realpath': '/dev/sdb1'}
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted(epath)
+        value, msg = dsaz.can_dev_be_reformatted(epath,
+                                                 preserve_ntfs=False)
         self.assertTrue(value)
         self.assertIn("safe for", msg.lower())
 
@@ -1112,10 +1144,49 @@ class TestCanDevBeReformatted(CiTestCase):
                     epath + '-part3': {'num': 3, 'fs': 'ext',
                                        'realpath': '/dev/sdb3'}
                 }}})
-        value, msg = dsaz.can_dev_be_reformatted(epath)
+        value, msg = dsaz.can_dev_be_reformatted(epath,
+                                                 preserve_ntfs=False)
         self.assertFalse(value)
         self.assertIn("3 or more", msg.lower())
 
+    def test_ntfs_mount_errors_true(self):
+        """can_dev_be_reformatted does not fail if NTFS is unknown fstype."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1, 'fs': 'ntfs', 'files': []}
+                }}})
+
+        err = ("Unexpected error while running command.\n",
+               "Command: ['mount', '-o', 'ro,sync', '-t', 'auto', ",
+               "'/dev/sda1', '/fake-tmp/dir']\n"
+               "Exit code: 32\n"
+               "Reason: -\n"
+               "Stdout: -\n"
+               "Stderr: mount: unknown filesystem type 'ntfs'")
+        self.m_mount_cb.side_effect = MountFailedError(
+            'Failed mounting %s to %s due to: %s' %
+            ('/dev/sda', '/fake-tmp/dir', err))
+
+        value, msg = dsaz.can_dev_be_reformatted('/dev/sda',
+                                                 preserve_ntfs=False)
+        self.assertTrue(value)
+        self.assertIn('cannot mount NTFS, assuming', msg)
+
+    def test_never_destroy_ntfs_config_false(self):
+        """Normally formattable situation with never_destroy_ntfs set."""
+        self.patchup({
+            '/dev/sda': {
+                'partitions': {
+                    '/dev/sda1': {'num': 1, 'fs': 'ntfs',
+                                  'files': ['dataloss_warning_readme.txt']}
+                }}})
+        value, msg = dsaz.can_dev_be_reformatted("/dev/sda",
+                                                 preserve_ntfs=True)
+        self.assertFalse(value)
+        self.assertIn("config says to never destroy NTFS "
+                      "(datasource.Azure.never_destroy_ntfs)", msg)
+
 
 class TestAzureNetExists(CiTestCase):
 
@@ -1125,19 +1196,9 @@ class TestAzureNetExists(CiTestCase):
         self.assertTrue(hasattr(dsaz, "DataSourceAzureNet"))
 
 
-@mock.patch('cloudinit.sources.DataSourceAzure.util.subp')
-@mock.patch.object(dsaz, 'get_hostname')
-@mock.patch.object(dsaz, 'set_hostname')
-class TestAzureDataSourcePreprovisioning(CiTestCase):
-
-    def setUp(self):
-        super(TestAzureDataSourcePreprovisioning, self).setUp()
-        tmp = self.tmp_dir()
-        self.waagent_d = self.tmp_path('/var/lib/waagent', tmp)
-        self.paths = helpers.Paths({'cloud_dir': tmp})
-        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+class TestPreprovisioningReadAzureOvfFlag(CiTestCase):
 
-    def test_read_azure_ovf_with_true_flag(self, *args):
+    def test_read_azure_ovf_with_true_flag(self):
         """The read_azure_ovf method should set the PreprovisionedVM
            cfg flag if the proper setting is present."""
         content = construct_valid_ovf_env(
@@ -1146,7 +1207,7 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         cfg = ret[2]
         self.assertTrue(cfg['PreprovisionedVm'])
 
-    def test_read_azure_ovf_with_false_flag(self, *args):
+    def test_read_azure_ovf_with_false_flag(self):
         """The read_azure_ovf method should set the PreprovisionedVM
            cfg flag to false if the proper setting is false."""
         content = construct_valid_ovf_env(
@@ -1155,7 +1216,7 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         cfg = ret[2]
         self.assertFalse(cfg['PreprovisionedVm'])
 
-    def test_read_azure_ovf_without_flag(self, *args):
+    def test_read_azure_ovf_without_flag(self):
         """The read_azure_ovf method should not set the
            PreprovisionedVM cfg flag."""
         content = construct_valid_ovf_env()
@@ -1163,12 +1224,121 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         cfg = ret[2]
         self.assertFalse(cfg['PreprovisionedVm'])
 
-    @mock.patch('cloudinit.sources.DataSourceAzure.util.is_FreeBSD')
-    @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
-    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
-    @mock.patch('requests.Session.request')
+
+@mock.patch('os.path.isfile')
+class TestPreprovisioningShouldReprovision(CiTestCase):
+
+    def setUp(self):
+        super(TestPreprovisioningShouldReprovision, self).setUp()
+        tmp = self.tmp_dir()
+        self.waagent_d = self.tmp_path('/var/lib/waagent', tmp)
+        self.paths = helpers.Paths({'cloud_dir': tmp})
+        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
+    def test__should_reprovision_with_true_cfg(self, isfile, write_f):
+        """The _should_reprovision method should return true with config
+           flag present."""
+        isfile.return_value = False
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        self.assertTrue(dsa._should_reprovision(
+            (None, None, {'PreprovisionedVm': True}, None)))
+
+    def test__should_reprovision_with_file_existing(self, isfile):
+        """The _should_reprovision method should return True if the sentinal
+           exists."""
+        isfile.return_value = True
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        self.assertTrue(dsa._should_reprovision(
+            (None, None, {'preprovisionedvm': False}, None)))
+
+    def test__should_reprovision_returns_false(self, isfile):
+        """The _should_reprovision method should return False
+           if config and sentinel are not present."""
+        isfile.return_value = False
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        self.assertFalse(dsa._should_reprovision((None, None, {}, None)))
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.DataSourceAzure._poll_imds')
+    def test_reprovision_calls__poll_imds(self, _poll_imds, isfile):
+        """_reprovision will poll IMDS."""
+        isfile.return_value = False
+        hostname = "myhost"
+        username = "myuser"
+        odata = {'HostName': hostname, 'UserName': username}
+        _poll_imds.return_value = construct_valid_ovf_env(data=odata)
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        dsa._reprovision()
+        _poll_imds.assert_called_with()
+
+
+@mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
+@mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+@mock.patch('requests.Session.request')
+@mock.patch(
+    'cloudinit.sources.DataSourceAzure.DataSourceAzure._report_ready')
+class TestPreprovisioningPollIMDS(CiTestCase):
+
+    def setUp(self):
+        super(TestPreprovisioningPollIMDS, self).setUp()
+        self.tmp = self.tmp_dir()
+        self.waagent_d = self.tmp_path('/var/lib/waagent', self.tmp)
+        self.paths = helpers.Paths({'cloud_dir': self.tmp})
+        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
+    def test_poll_imds_calls_report_ready(self, write_f, report_ready_func,
+                                          fake_resp, m_dhcp, m_net):
+        """The poll_imds will call report_ready after creating marker file."""
+        report_marker = self.tmp_path('report_marker', self.tmp)
+        lease = {
+            'interface': 'eth9', 'fixed-address': '192.168.2.9',
+            'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
+            'unknown-245': '624c3620'}
+        m_dhcp.return_value = [lease]
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        mock_path = (
+            'cloudinit.sources.DataSourceAzure.REPORTED_READY_MARKER_FILE')
+        with mock.patch(mock_path, report_marker):
+            dsa._poll_imds()
+        self.assertEqual(report_ready_func.call_count, 1)
+        report_ready_func.assert_called_with(lease=lease)
+
+    def test_poll_imds_report_ready_false(self, report_ready_func,
+                                          fake_resp, m_dhcp, m_net):
+        """The poll_imds should not call reporting ready
+           when flag is false"""
+        report_marker = self.tmp_path('report_marker', self.tmp)
+        write_file(report_marker, content='dont run report_ready :)')
+        m_dhcp.return_value = [{
+            'interface': 'eth9', 'fixed-address': '192.168.2.9',
+            'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
+            'unknown-245': '624c3620'}]
+        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
+        mock_path = (
+            'cloudinit.sources.DataSourceAzure.REPORTED_READY_MARKER_FILE')
+        with mock.patch(mock_path, report_marker):
+            dsa._poll_imds()
+        self.assertEqual(report_ready_func.call_count, 0)
+
+
+@mock.patch('cloudinit.sources.DataSourceAzure.util.subp')
+@mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
+@mock.patch('cloudinit.sources.DataSourceAzure.util.is_FreeBSD')
+@mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
+@mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+@mock.patch('requests.Session.request')
+class TestAzureDataSourcePreprovisioning(CiTestCase):
+
+    def setUp(self):
+        super(TestAzureDataSourcePreprovisioning, self).setUp()
+        tmp = self.tmp_dir()
+        self.waagent_d = self.tmp_path('/var/lib/waagent', tmp)
+        self.paths = helpers.Paths({'cloud_dir': tmp})
+        dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
+
     def test_poll_imds_returns_ovf_env(self, fake_resp, m_dhcp, m_net,
-                                       m_is_bsd, *args):
+                                       m_is_bsd, write_f, subp):
         """The _poll_imds method should return the ovf_env.xml."""
         m_is_bsd.return_value = False
         m_dhcp.return_value = [{
@@ -1194,12 +1364,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
             prefix_or_mask='255.255.255.0', router='192.168.2.1')
         self.assertEqual(m_net.call_count, 1)
 
-    @mock.patch('cloudinit.sources.DataSourceAzure.util.is_FreeBSD')
-    @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
-    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
-    @mock.patch('requests.Session.request')
     def test__reprovision_calls__poll_imds(self, fake_resp, m_dhcp, m_net,
-                                           m_is_bsd, *args):
+                                           m_is_bsd, write_f, subp):
         """The _reprovision method should call poll IMDS."""
         m_is_bsd.return_value = False
         m_dhcp.return_value = [{
@@ -1231,32 +1397,5 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
             prefix_or_mask='255.255.255.0', router='192.168.2.1')
         self.assertEqual(m_net.call_count, 1)
 
-    @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
-    @mock.patch('os.path.isfile')
-    def test__should_reprovision_with_true_cfg(self, isfile, write_f, *args):
-        """The _should_reprovision method should return true with config
-           flag present."""
-        isfile.return_value = False
-        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
-        self.assertTrue(dsa._should_reprovision(
-            (None, None, {'PreprovisionedVm': True}, None)))
-
-    @mock.patch('os.path.isfile')
-    def test__should_reprovision_with_file_existing(self, isfile, *args):
-        """The _should_reprovision method should return True if the sentinal
-           exists."""
-        isfile.return_value = True
-        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
-        self.assertTrue(dsa._should_reprovision(
-            (None, None, {'preprovisionedvm': False}, None)))
-
-    @mock.patch('os.path.isfile')
-    def test__should_reprovision_returns_false(self, isfile, *args):
-        """The _should_reprovision method should return False
-           if config and sentinal are not present."""
-        isfile.return_value = False
-        dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
-        self.assertFalse(dsa._should_reprovision((None, None, {}, None)))
-
 
 # vi: ts=4 expandtab
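The new Azure tests exercise a never_destroy_ntfs datasource option that is threaded into can_dev_be_reformatted() as preserve_ntfs. The system-config shape, mirrored from test_sys_cfg_set_never_destroy_ntfs above:

    sys_cfg = {
        'datasource': {
            'Azure': {
                # when set, refuse to reformat an NTFS ephemeral disk
                # regardless of its contents
                'never_destroy_ntfs': True,
            }
        }
    }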
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index b42b073..af9d3e1 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -195,7 +195,7 @@ class TestAzureEndpointHttpClient(CiTestCase):
         self.addCleanup(patches.close)
 
         self.read_file_or_url = patches.enter_context(
-            mock.patch.object(azure_helper.util, 'read_file_or_url'))
+            mock.patch.object(azure_helper.url_helper, 'read_file_or_url'))
 
     def test_non_secure_get(self):
         client = azure_helper.AzureEndpointHttpClient(mock.MagicMock())
diff --git a/tests/unittests/test_datasource/test_common.py b/tests/unittests/test_datasource/test_common.py
index ec33388..0d35dc2 100644
--- a/tests/unittests/test_datasource/test_common.py
+++ b/tests/unittests/test_datasource/test_common.py
@@ -40,6 +40,7 @@ DEFAULT_LOCAL = [
     OVF.DataSourceOVF,
     SmartOS.DataSourceSmartOS,
     Ec2.DataSourceEc2Local,
+    OpenStack.DataSourceOpenStackLocal,
 ]
 
 DEFAULT_NETWORK = [
diff --git a/tests/unittests/test_datasource/test_ec2.py b/tests/unittests/test_datasource/test_ec2.py
index dff8b1e..497e761 100644
--- a/tests/unittests/test_datasource/test_ec2.py
+++ b/tests/unittests/test_datasource/test_ec2.py
@@ -191,7 +191,6 @@ def register_mock_metaserver(base_url, data):
             register(base_url, 'not found', status=404)
 
     def myreg(*argc, **kwargs):
-        # print("register_url(%s, %s)" % (argc, kwargs))
         return httpretty.register_uri(httpretty.GET, *argc, **kwargs)
 
     register_helper(myreg, base_url, data)
@@ -236,7 +235,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
                 return_value=platform_data)
 
         if md:
-            httpretty.HTTPretty.allow_net_connect = False
             all_versions = (
                 [ds.min_metadata_version] + ds.extended_metadata_versions)
             for version in all_versions:
@@ -255,7 +253,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
                         register_mock_metaserver(instance_id_url, None)
         return ds
 
-    @httpretty.activate
     def test_network_config_property_returns_version_1_network_data(self):
         """network_config property returns network version 1 for metadata.
 
@@ -288,7 +285,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
                     m_get_mac.return_value = mac1
                     self.assertEqual(expected, ds.network_config)
 
-    @httpretty.activate
     def test_network_config_property_set_dhcp4_on_private_ipv4(self):
         """network_config property configures dhcp4 on private ipv4 nics.
 
@@ -330,7 +326,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds._network_config = {'cached': 'data'}
         self.assertEqual({'cached': 'data'}, ds.network_config)
 
-    @httpretty.activate
     @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
     def test_network_config_cached_property_refreshed_on_upgrade(self, m_dhcp):
         """Refresh the network_config Ec2 cache if network key is absent.
@@ -364,7 +359,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
              'type': 'physical'}]}
         self.assertEqual(expected, ds.network_config)
 
-    @httpretty.activate
     def test_ec2_get_instance_id_refreshes_identity_on_upgrade(self):
         """get_instance-id gets DataSourceEc2Local.identity if not present.
 
@@ -397,7 +391,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds.metadata = DEFAULT_METADATA
         self.assertEqual('my-identity-id', ds.get_instance_id())
 
-    @httpretty.activate
     @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
     def test_valid_platform_with_strict_true(self, m_dhcp):
         """Valid platform data should return true with strict_id true."""
@@ -409,7 +402,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         self.assertTrue(ret)
         self.assertEqual(0, m_dhcp.call_count)
 
-    @httpretty.activate
     def test_valid_platform_with_strict_false(self):
         """Valid platform data should return true with strict_id false."""
         ds = self._setup_ds(
@@ -419,7 +411,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ret = ds.get_data()
         self.assertTrue(ret)
 
-    @httpretty.activate
     def test_unknown_platform_with_strict_true(self):
         """Unknown platform data with strict_id true should return False."""
         uuid = 'ab439480-72bf-11d3-91fc-b8aded755F9a'
@@ -430,7 +421,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ret = ds.get_data()
         self.assertFalse(ret)
 
-    @httpretty.activate
     def test_unknown_platform_with_strict_false(self):
         """Unknown platform data with strict_id false should return True."""
         uuid = 'ab439480-72bf-11d3-91fc-b8aded755F9a'
@@ -462,7 +452,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
                     ' not {0}'.format(platform_name))
                 self.assertIn(message, self.logs.getvalue())
 
-    @httpretty.activate
     @mock.patch('cloudinit.sources.DataSourceEc2.util.is_FreeBSD')
     def test_ec2_local_returns_false_on_bsd(self, m_is_freebsd):
         """DataSourceEc2Local returns False on BSD.
@@ -481,7 +470,6 @@ class TestEc2(test_helpers.HttprettyTestCase):
             "FreeBSD doesn't support running dhclient with -sf",
             self.logs.getvalue())
 
-    @httpretty.activate
     @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
     @mock.patch('cloudinit.net.find_fallback_nic')
     @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
diff --git a/tests/unittests/test_datasource/test_gce.py b/tests/unittests/test_datasource/test_gce.py
index eb3cec4..41176c6 100644
--- a/tests/unittests/test_datasource/test_gce.py
+++ b/tests/unittests/test_datasource/test_gce.py
@@ -78,7 +78,6 @@ def _set_mock_metadata(gce_meta=None):
             return (404, headers, '')
 
     # reset is needed. https://github.com/gabrielfalcao/HTTPretty/issues/316
-    httpretty.reset()
     httpretty.register_uri(httpretty.GET, MD_URL_RE, body=_request_callback)
 
 
diff --git a/tests/unittests/test_datasource/test_openstack.py b/tests/unittests/test_datasource/test_openstack.py
index 42c3155..585acc3 100644
--- a/tests/unittests/test_datasource/test_openstack.py
+++ b/tests/unittests/test_datasource/test_openstack.py
@@ -16,7 +16,7 @@ from six import StringIO
 
 from cloudinit import helpers
 from cloudinit import settings
-from cloudinit.sources import convert_vendordata
+from cloudinit.sources import convert_vendordata, UNSET
 from cloudinit.sources import DataSourceOpenStack as ds
 from cloudinit.sources.helpers import openstack
 from cloudinit import util
@@ -69,6 +69,8 @@ EC2_VERSIONS = [
     'latest',
 ]
 
+MOCK_PATH = 'cloudinit.sources.DataSourceOpenStack.'
+
 
 # TODO _register_uris should leverage test_ec2.register_mock_metaserver.
 def _register_uris(version, ec2_files, ec2_meta, os_files):
@@ -129,13 +131,14 @@ def _read_metadata_service():
 
 
 class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
+
+    with_logs = True
     VERSION = 'latest'
 
     def setUp(self):
         super(TestOpenStackDataSource, self).setUp()
         self.tmp = self.tmp_dir()
 
-    @hp.activate
     def test_successful(self):
         _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
         f = _read_metadata_service()
@@ -157,7 +160,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual('b0fa911b-69d4-4476-bbe2-1c92bff6535c',
                          metadata.get('instance-id'))
 
-    @hp.activate
     def test_no_ec2(self):
         _register_uris(self.VERSION, {}, {}, OS_FILES)
         f = _read_metadata_service()
@@ -168,7 +170,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual({}, f.get('ec2-metadata'))
         self.assertEqual(2, f.get('version'))
 
-    @hp.activate
     def test_bad_metadata(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -177,7 +178,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         _register_uris(self.VERSION, {}, {}, os_files)
         self.assertRaises(openstack.NonReadable, _read_metadata_service)
 
-    @hp.activate
     def test_bad_uuid(self):
         os_files = copy.deepcopy(OS_FILES)
         os_meta = copy.deepcopy(OSTACK_META)
@@ -188,7 +188,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         _register_uris(self.VERSION, {}, {}, os_files)
         self.assertRaises(openstack.BrokenMetadata, _read_metadata_service)
 
-    @hp.activate
     def test_userdata_empty(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -201,7 +200,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual(CONTENT_1, f['files']['/etc/bar/bar.cfg'])
         self.assertFalse(f.get('userdata'))
 
-    @hp.activate
     def test_vendordata_empty(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -213,7 +211,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual(CONTENT_1, f['files']['/etc/bar/bar.cfg'])
         self.assertFalse(f.get('vendordata'))
 
-    @hp.activate
     def test_vendordata_invalid(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -222,7 +219,6 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         _register_uris(self.VERSION, {}, {}, os_files)
         self.assertRaises(openstack.BrokenMetadata, _read_metadata_service)
 
-    @hp.activate
     def test_metadata_invalid(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -231,14 +227,16 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         _register_uris(self.VERSION, {}, {}, os_files)
         self.assertRaises(openstack.BrokenMetadata, _read_metadata_service)
 
-    @hp.activate
-    def test_datasource(self):
+    @test_helpers.mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_datasource(self, m_dhcp):
         _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
-        ds_os = ds.DataSourceOpenStack(settings.CFG_BUILTIN,
-                                       None,
-                                       helpers.Paths({'run_dir': self.tmp}))
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
         self.assertIsNone(ds_os.version)
-        found = ds_os.get_data()
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os.get_data()
         self.assertTrue(found)
         self.assertEqual(2, ds_os.version)
         md = dict(ds_os.metadata)
@@ -250,8 +248,40 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
         self.assertEqual(2, len(ds_os.files))
         self.assertEqual(VENDOR_DATA, ds_os.vendordata_pure)
         self.assertIsNone(ds_os.vendordata_raw)
+        m_dhcp.assert_not_called()
 
     @hp.activate
+    @test_helpers.mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
+    @test_helpers.mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_local_datasource(self, m_dhcp, m_net):
+        """OpenStackLocal calls EphemeralDHCPNetwork and gets instance data."""
+        _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
+        ds_os_local = ds.DataSourceOpenStackLocal(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        ds_os_local._fallback_interface = 'eth9'  # Monkey patch for dhcp
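+        # m_dhcp mocks maybe_perform_dhcp_discovery to return a fake lease.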
+        m_dhcp.return_value = [{
+            'interface': 'eth9', 'fixed-address': '192.168.2.9',
+            'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
+            'broadcast-address': '192.168.2.255'}]
+
+        self.assertIsNone(ds_os_local.version)
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os_local.get_data()
+        self.assertTrue(found)
+        self.assertEqual(2, ds_os_local.version)
+        md = dict(ds_os_local.metadata)
+        md.pop('instance-id', None)
+        md.pop('local-hostname', None)
+        self.assertEqual(OSTACK_META, md)
+        self.assertEqual(EC2_META, ds_os_local.ec2_metadata)
+        self.assertEqual(USER_DATA, ds_os_local.userdata_raw)
+        self.assertEqual(2, len(ds_os_local.files))
+        self.assertEqual(VENDOR_DATA, ds_os_local.vendordata_pure)
+        self.assertIsNone(ds_os_local.vendordata_raw)
+        m_dhcp.assert_called_with('eth9')
+
     def test_bad_datasource_meta(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -262,11 +292,17 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
                                        None,
                                        helpers.Paths({'run_dir': self.tmp}))
         self.assertIsNone(ds_os.version)
-        found = ds_os.get_data()
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os.get_data()
         self.assertFalse(found)
         self.assertIsNone(ds_os.version)
+        self.assertIn(
+            'InvalidMetaDataException: Broken metadata address'
+            ' http://169.254.169.25',
+            self.logs.getvalue())
 
-    @hp.activate
     def test_no_datasource(self):
         os_files = copy.deepcopy(OS_FILES)
         for k in list(os_files.keys()):
@@ -281,11 +317,53 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
             'timeout': 0,
         }
         self.assertIsNone(ds_os.version)
-        found = ds_os.get_data()
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os.get_data()
         self.assertFalse(found)
         self.assertIsNone(ds_os.version)
 
-    @hp.activate
+    def test_network_config_disabled_by_datasource_config(self):
+        """The network_config can be disabled from datasource config."""
+        mock_path = MOCK_PATH + 'openstack.convert_net_json'
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        ds_os.ds_cfg = {'apply_network_config': False}
+        sample_json = {'links': [{'ethernet_mac_address': 'mymac'}],
+                       'networks': [], 'services': []}
+        ds_os.network_json = sample_json  # Ignore this content from metadata
+        with test_helpers.mock.patch(mock_path) as m_convert_json:
+            self.assertIsNone(ds_os.network_config)
+        m_convert_json.assert_not_called()
+
+    def test_network_config_from_network_json(self):
+        """The datasource gets network_config from network_data.json."""
+        mock_path = MOCK_PATH + 'openstack.convert_net_json'
+        example_cfg = {'version': 1, 'config': []}
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        sample_json = {'links': [{'ethernet_mac_address': 'mymac'}],
+                       'networks': [], 'services': []}
+        ds_os.network_json = sample_json
+        with test_helpers.mock.patch(mock_path) as m_convert_json:
+            m_convert_json.return_value = example_cfg
+            self.assertEqual(example_cfg, ds_os.network_config)
+        self.assertIn(
+            'network config provided via network_json', self.logs.getvalue())
+        m_convert_json.assert_called_with(sample_json, known_macs=None)
+
+    def test_network_config_cached(self):
+        """The datasource caches the network_config property."""
+        mock_path = MOCK_PATH + 'openstack.convert_net_json'
+        example_cfg = {'version': 1, 'config': []}
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        ds_os._network_config = example_cfg
+        with test_helpers.mock.patch(mock_path) as m_convert_json:
+            self.assertEqual(example_cfg, ds_os.network_config)
+        m_convert_json.assert_not_called()
+
     def test_disabled_datasource(self):
         os_files = copy.deepcopy(OS_FILES)
         os_meta = copy.deepcopy(OSTACK_META)
@@ -304,10 +382,42 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
             'timeout': 0,
         }
         self.assertIsNone(ds_os.version)
-        found = ds_os.get_data()
+        mock_path = MOCK_PATH + 'detect_openstack'
+        with test_helpers.mock.patch(mock_path) as m_detect_os:
+            m_detect_os.return_value = True
+            found = ds_os.get_data()
         self.assertFalse(found)
         self.assertIsNone(ds_os.version)
 
+    @hp.activate
+    def test_wb__crawl_metadata_does_not_persist(self):
+        """_crawl_metadata returns current metadata and does not cache."""
+        _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
+        ds_os = ds.DataSourceOpenStack(
+            settings.CFG_BUILTIN, None, helpers.Paths({'run_dir': self.tmp}))
+        crawled_data = ds_os._crawl_metadata()
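+        # Datasource attrs stay unset; only the returned dict is populated.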
+        self.assertEqual(UNSET, ds_os.ec2_metadata)
+        self.assertIsNone(ds_os.userdata_raw)
+        self.assertEqual(0, len(ds_os.files))
+        self.assertIsNone(ds_os.vendordata_raw)
+        self.assertEqual(
+            ['dsmode', 'ec2-metadata', 'files', 'metadata', 'networkdata',
+             'userdata', 'vendordata', 'version'],
+            sorted(crawled_data.keys()))
+        self.assertEqual('local', crawled_data['dsmode'])
+        self.assertEqual(EC2_META, crawled_data['ec2-metadata'])
+        self.assertEqual(2, len(crawled_data['files']))
+        md = copy.deepcopy(crawled_data['metadata'])
+        md.pop('instance-id')
+        md.pop('local-hostname')
+        self.assertEqual(OSTACK_META, md)
+        self.assertEqual(
+            json.loads(OS_FILES['openstack/latest/network_data.json']),
+            crawled_data['networkdata'])
+        self.assertEqual(USER_DATA, crawled_data['userdata'])
+        self.assertEqual(VENDOR_DATA, crawled_data['vendordata'])
+        self.assertEqual(2, crawled_data['version'])
+
 
 class TestVendorDataLoading(test_helpers.TestCase):
     def cvj(self, data):
@@ -339,4 +449,89 @@ class TestVendorDataLoading(test_helpers.TestCase):
         data = {'foo': 'bar', 'cloud-init': ['VD_1', 'VD_2']}
         self.assertEqual(self.cvj(data), data['cloud-init'])
 
+
+@test_helpers.mock.patch(MOCK_PATH + 'util.is_x86')
+class TestDetectOpenStack(test_helpers.CiTestCase):
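+    # These tests exercise ds.detect_openstack which, per the cases below,
+    # checks dmi system-product-name, chassis-asset-tag and /proc/1/environ.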
+
+    def test_detect_openstack_non_intel_x86(self, m_is_x86):
+        """Return True on non-intel platforms because dmi isn't conclusive."""
+        m_is_x86.return_value = False
+        self.assertTrue(
+            ds.detect_openstack(), 'Expected detect_openstack == True')
+
+    @test_helpers.mock.patch(MOCK_PATH + 'util.get_proc_env')
+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
+    def test_not_detect_openstack_intel_x86_ec2(self, m_dmi, m_proc_env,
+                                                m_is_x86):
+        """Return False on EC2 platforms."""
+        m_is_x86.return_value = True
+        # No product_name in proc/1/environ
+        m_proc_env.return_value = {'HOME': '/'}
+
+        def fake_dmi_read(dmi_key):
+            if dmi_key == 'system-product-name':
+                return 'HVM domU'  # Nothing 'openstackish' on EC2
+            if dmi_key == 'chassis-asset-tag':
+                return ''  # Empty string on EC2
+            assert False, 'Unexpected dmi read of %s' % dmi_key
+
+        m_dmi.side_effect = fake_dmi_read
+        self.assertFalse(
+            ds.detect_openstack(), 'Expected detect_openstack == False on EC2')
+        m_proc_env.assert_called_with(1)
+
+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
+    def test_detect_openstack_intel_product_name_compute(self, m_dmi,
+                                                         m_is_x86):
+        """Return True on OpenStack compute and nova instances."""
+        m_is_x86.return_value = True
+        openstack_product_names = ['OpenStack Nova', 'OpenStack Compute']
+
+        for product_name in openstack_product_names:
+            m_dmi.return_value = product_name
+            self.assertTrue(
+                ds.detect_openstack(), 'Failed to detect_openstack')
+
+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
+    def test_detect_openstack_opentelekomcloud_chassis_asset_tag(self, m_dmi,
+                                                                 m_is_x86):
+        """Return True on OpenStack reporting OpenTelekomCloud asset-tag."""
+        m_is_x86.return_value = True
+
+        def fake_dmi_read(dmi_key):
+            if dmi_key == 'system-product-name':
+                return 'HVM domU'  # Nothing 'openstackish' on OpenTelekomCloud
+            if dmi_key == 'chassis-asset-tag':
+                return 'OpenTelekomCloud'
+            assert False, 'Unexpected dmi read of %s' % dmi_key
+
+        m_dmi.side_effect = fake_dmi_read
+        self.assertTrue(
+            ds.detect_openstack(),
+            'Expected detect_openstack == True on OpenTelekomCloud')
+
+    @test_helpers.mock.patch(MOCK_PATH + 'util.get_proc_env')
+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
+    def test_detect_openstack_by_proc_1_environ(self, m_dmi, m_proc_env,
+                                                m_is_x86):
+        """Return True when nova product_name specified in /proc/1/environ."""
+        m_is_x86.return_value = True
+        # Nova product_name in proc/1/environ
+        m_proc_env.return_value = {
+            'HOME': '/', 'product_name': 'OpenStack Nova'}
+
+        def fake_dmi_read(dmi_key):
+            if dmi_key == 'system-product-name':
+                return 'HVM domU'  # Nothing 'openstackish'
+            if dmi_key == 'chassis-asset-tag':
+                return ''  # Nothing 'openstackish'
+            assert False, 'Unexpected dmi read of %s' % dmi_key
+
+        m_dmi.side_effect = fake_dmi_read
+        self.assertTrue(
+            ds.detect_openstack(),
+            'Expected detect_openstack == True with product_name in environ')
+        m_proc_env.assert_called_with(1)
+
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_datasource/test_scaleway.py b/tests/unittests/test_datasource/test_scaleway.py
index 8dec06b..e4e9bb2 100644
--- a/tests/unittests/test_datasource/test_scaleway.py
+++ b/tests/unittests/test_datasource/test_scaleway.py
@@ -176,7 +176,6 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.vendordata_url = \
             DataSourceScaleway.BUILTIN_DS_CONFIG['vendordata_url']
 
-    @httpretty.activate
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
     @mock.patch('cloudinit.util.get_cmdline')
@@ -212,7 +211,6 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.assertIsNone(self.datasource.region)
         self.assertEqual(sleep.call_count, 0)
 
-    @httpretty.activate
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
     @mock.patch('cloudinit.util.get_cmdline')
@@ -236,7 +234,6 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.assertIsNone(self.datasource.get_vendordata_raw())
         self.assertEqual(sleep.call_count, 0)
 
-    @httpretty.activate
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
     @mock.patch('cloudinit.util.get_cmdline')
diff --git a/tests/unittests/test_datasource/test_smartos.py b/tests/unittests/test_datasource/test_smartos.py
index 706e8eb..dca0b3d 100644
--- a/tests/unittests/test_datasource/test_smartos.py
+++ b/tests/unittests/test_datasource/test_smartos.py
@@ -1027,6 +1027,32 @@ class TestNetworkConversion(TestCase):
         found = convert_net(SDC_NICS_SINGLE_GATEWAY)
         self.assertEqual(expected, found)
 
+    def test_routes_on_all_nics(self):
+        routes = [
+            {'linklocal': False, 'dst': '3.0.0.0/8', 'gateway': '8.12.42.3'},
+            {'linklocal': False, 'dst': '4.0.0.0/8', 'gateway': '10.210.1.4'}]
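+        # Both routes should appear on each nic's static subnet below.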
+        expected = {
+            'version': 1,
+            'config': [
+                {'mac_address': '90:b8:d0:d8:82:b4', 'mtu': 1500,
+                 'name': 'net0', 'type': 'physical',
+                 'subnets': [{'address': '8.12.42.26/24',
+                              'gateway': '8.12.42.1', 'type': 'static',
+                              'routes': [{'network': '3.0.0.0/8',
+                                          'gateway': '8.12.42.3'},
+                                         {'network': '4.0.0.0/8',
+                                          'gateway': '10.210.1.4'}]}]},
+                {'mac_address': '90:b8:d0:0a:51:31', 'mtu': 1500,
+                 'name': 'net1', 'type': 'physical',
+                 'subnets': [{'address': '10.210.1.27/24', 'type': 'static',
+                              'routes': [{'network': '3.0.0.0/8',
+                                          'gateway': '8.12.42.3'},
+                                         {'network': '4.0.0.0/8',
+                                          'gateway': '10.210.1.4'}]}]}]}
+        found = convert_net(SDC_NICS_SINGLE_GATEWAY, routes=routes)
+        self.maxDiff = None
+        self.assertEqual(expected, found)
+
 
 @unittest2.skipUnless(get_smartos_environ() == SMARTOS_ENV_KVM,
                       "Only supported on KVM and bhyve guests under SmartOS")
diff --git a/tests/unittests/test_distros/test_create_users.py b/tests/unittests/test_distros/test_create_users.py
index 5670904..07176ca 100644
--- a/tests/unittests/test_distros/test_create_users.py
+++ b/tests/unittests/test_distros/test_create_users.py
@@ -145,4 +145,12 @@ class TestCreateUser(TestCase):
             mock.call(['passwd', '-l', user])]
         self.assertEqual(m_subp.call_args_list, expected)
 
+    def test_explicit_sudo_false(self, m_subp, m_is_snappy):
+        user = 'foouser'
+        self.dist.create_user(user, sudo=False)
+        self.assertEqual(
+            m_subp.call_args_list,
+            [self._useradd2call([user, '-m']),
+             mock.call(['passwd', '-l', user])])
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py
index ad7fe41..64d9f9f 100644
--- a/tests/unittests/test_ds_identify.py
+++ b/tests/unittests/test_ds_identify.py
@@ -10,7 +10,8 @@ from cloudinit import util
 from cloudinit.tests.helpers import (
     CiTestCase, dir2dict, populate_dir, populate_dir_with_ts)
 
-from cloudinit.sources import DataSourceIBMCloud as dsibm
+from cloudinit.sources import DataSourceIBMCloud as ds_ibm
+from cloudinit.sources import DataSourceSmartOS as ds_smartos
 
 UNAME_MYSYS = ("Linux bart 4.4.0-62-generic #83-Ubuntu "
                "SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 GNU/Linux")
@@ -69,8 +70,12 @@ P_DSID_CFG = "etc/cloud/ds-identify.cfg"
 
 IBM_CONFIG_UUID = "9796-932E"
 
+MOCK_VIRT_IS_CONTAINER_OTHER = {'name': 'detect_virt',
+                                'RET': 'container-other', 'ret': 0}
 MOCK_VIRT_IS_KVM = {'name': 'detect_virt', 'RET': 'kvm', 'ret': 0}
 MOCK_VIRT_IS_VMWARE = {'name': 'detect_virt', 'RET': 'vmware', 'ret': 0}
+# Currently, SmartOS hypervisor "bhyve" is unknown to systemd-detect-virt.
+MOCK_VIRT_IS_VM_OTHER = {'name': 'detect_virt', 'RET': 'vm-other', 'ret': 0}
 MOCK_VIRT_IS_XEN = {'name': 'detect_virt', 'RET': 'xen', 'ret': 0}
 MOCK_UNAME_IS_PPC64 = {'name': 'uname', 'out': UNAME_PPC64EL, 'ret': 0}
 
@@ -170,7 +175,9 @@ class DsIdentifyBase(CiTestCase):
     def _call_via_dict(self, data, rootd=None, **kwargs):
         # return output of self.call with a dict input like VALID_CFG[item]
         xwargs = {'rootd': rootd}
-        for k in ('mocks', 'args', 'policy_dmi', 'policy_no_dmi', 'files'):
+        passthrough = ('mocks', 'func', 'args', 'policy_dmi',
+                       'policy_no_dmi', 'files')
+        for k in passthrough:
             if k in data:
                 xwargs[k] = data[k]
             if k in kwargs:
@@ -184,17 +191,18 @@ class DsIdentifyBase(CiTestCase):
             data, RC_FOUND, dslist=[data.get('ds'), DS_NONE])
 
     def _check_via_dict(self, data, rc, dslist=None, **kwargs):
-        found_rc, out, err, cfg, files = self._call_via_dict(data, **kwargs)
+        ret = self._call_via_dict(data, **kwargs)
         good = False
         try:
-            self.assertEqual(rc, found_rc)
+            self.assertEqual(rc, ret.rc)
             if dslist is not None:
-                self.assertEqual(dslist, cfg['datasource_list'])
+                self.assertEqual(dslist, ret.cfg['datasource_list'])
             good = True
         finally:
             if not good:
-                _print_run_output(rc, out, err, cfg, files)
-        return rc, out, err, cfg, files
+                _print_run_output(ret.rc, ret.stdout, ret.stderr, ret.cfg,
+                                  ret.files)
+        return ret
 
 
 class TestDsIdentify(DsIdentifyBase):
@@ -245,13 +253,40 @@ class TestDsIdentify(DsIdentifyBase):
     def test_config_drive(self):
         """ConfigDrive datasource has a disk with LABEL=config-2."""
         self._test_ds_found('ConfigDrive')
-        return
 
     def test_config_drive_upper(self):
         """ConfigDrive datasource has a disk with LABEL=CONFIG-2."""
         self._test_ds_found('ConfigDriveUpper')
         return
 
+    def test_config_drive_seed(self):
+        """Config Drive seed directory."""
+        self._test_ds_found('ConfigDrive-seed')
+
+    def test_config_drive_interacts_with_ibmcloud_config_disk(self):
+        """Verify ConfigDrive interaction with IBMCloud.
+
+        If ConfigDrive is enabled and not IBMCloud, then ConfigDrive
+        should claim the ibmcloud 'config-2' disk.
+        If IBMCloud is enabled, then ConfigDrive should skip."""
+        data = copy.deepcopy(VALID_CFG['IBMCloud-config-2'])
+        files = data.get('files', {})
+        if not files:
+            data['files'] = files
+        cfgpath = 'etc/cloud/cloud.cfg.d/99_networklayer_common.cfg'
+
+        # With a list including IBMCloud, config drive should not be found.
+        files[cfgpath] = 'datasource_list: [ ConfigDrive, IBMCloud ]\n'
+        ret = self._check_via_dict(data, shell_true)
+        self.assertEqual(
+            ret.cfg.get('datasource_list'), ['IBMCloud', 'None'])
+
+        # But if IBMCloud is not enabled, config drive should claim this.
+        files[cfgpath] = 'datasource_list: [ ConfigDrive, NoCloud ]\n'
+        ret = self._check_via_dict(data, shell_true)
+        self.assertEqual(
+            ret.cfg.get('datasource_list'), ['ConfigDrive', 'None'])
+
     def test_ibmcloud_template_userdata_in_provisioning(self):
         """Template provisioned with user-data during provisioning stage.
 
@@ -302,11 +337,42 @@ class TestDsIdentify(DsIdentifyBase):
                 break
         if not offset:
             raise ValueError("Expected to find 'blkid' mock, but did not.")
-        data['mocks'][offset]['out'] = d['out'].replace(dsibm.IBM_CONFIG_UUID,
+        data['mocks'][offset]['out'] = d['out'].replace(ds_ibm.IBM_CONFIG_UUID,
                                                         "DEAD-BEEF")
         self._check_via_dict(
             data, rc=RC_FOUND, dslist=['ConfigDrive', DS_NONE])
 
+    def test_ibmcloud_with_nocloud_seed(self):
+        """NoCloud seed should be preferred over IBMCloud.
+
+        A nocloud seed should be preferred over IBMCloud even if enabled.
+        Ubuntu 16.04 images have <vlc>/seed/nocloud-net. LP: #1766401."""
+        data = copy.deepcopy(VALID_CFG['IBMCloud-config-2'])
+        files = data.get('files', {})
+        if not files:
+            data['files'] = files
+        files.update(VALID_CFG['NoCloud-seed']['files'])
+        ret = self._check_via_dict(data, shell_true)
+        self.assertEqual(
+            ['NoCloud', 'IBMCloud', 'None'],
+            ret.cfg.get('datasource_list'))
+
+    def test_ibmcloud_with_configdrive_seed(self):
+        """ConfigDrive seed should be preferred over IBMCloud.
+
+        A ConfigDrive seed should be preferred over IBMCloud even if enabled.
+        Ubuntu 16.04 images have an fstab entry that mounts the
+        METADATA disk into <vlc>/seed/config_drive. LP: #1766401."""
+        data = copy.deepcopy(VALID_CFG['IBMCloud-config-2'])
+        files = data.get('files', {})
+        if not files:
+            data['files'] = files
+        files.update(VALID_CFG['ConfigDrive-seed']['files'])
+        ret = self._check_via_dict(data, shell_true)
+        self.assertEqual(
+            ['ConfigDrive', 'IBMCloud', 'None'],
+            ret.cfg.get('datasource_list'))
+
     def test_policy_disabled(self):
         """A Builtin policy of 'disabled' should return not found.
 
@@ -457,6 +523,39 @@ class TestDsIdentify(DsIdentifyBase):
         """Hetzner cloud is identified in sys_vendor."""
         self._test_ds_found('Hetzner')
 
+    def test_smartos_bhyve(self):
+        """SmartOS cloud identified by SmartDC in dmi."""
+        self._test_ds_found('SmartOS-bhyve')
+
+    def test_smartos_lxbrand(self):
+        """SmartOS cloud identified on lxbrand container."""
+        self._test_ds_found('SmartOS-lxbrand')
+
+    def test_smartos_lxbrand_requires_socket(self):
+        """SmartOS cloud should not be identified if no socket file."""
+        mycfg = copy.deepcopy(VALID_CFG['SmartOS-lxbrand'])
+        del mycfg['files'][ds_smartos.METADATA_SOCKFILE]
+        self._check_via_dict(mycfg, rc=RC_NOT_FOUND, policy_dmi="disabled")
+
+    def test_path_env_gets_set_from_main(self):
+        """PATH environment should always have some tokens when main is run.
+
+        We explicitly call main as we want to ensure it updates PATH."""
+        cust = copy.deepcopy(VALID_CFG['NoCloud'])
+        rootd = self.tmp_dir()
+        mpp = 'main-printpath'
+        pre = "MYPATH="
+        cust['files'][mpp] = (
+            'PATH="/mycust/path"; main; r=$?; echo ' + pre + '$PATH; exit $r;')
+        ret = self._check_via_dict(
+            cust, RC_FOUND,
+            func=".", args=[os.path.join(rootd, mpp)], rootd=rootd)
+        line = [l for l in ret.stdout.splitlines() if l.startswith(pre)][0]
+        toks = line.replace(pre, "").split(":")
+        expected = ["/sbin", "/bin", "/usr/sbin", "/usr/bin", "/mycust/path"]
+        self.assertEqual(expected, [p for p in expected if p in toks],
+                         "path did not have expected tokens")
+
 
 class TestIsIBMProvisioning(DsIdentifyBase):
     """Test the is_ibm_provisioning method in ds-identify."""
@@ -684,6 +783,12 @@ VALID_CFG = {
              },
         ],
     },
+    'ConfigDrive-seed': {
+        'ds': 'ConfigDrive',
+        'files': {
+            os.path.join(P_SEED_DIR, 'config_drive', 'openstack',
+                         'latest', 'meta_data.json'): 'md\n'},
+    },
     'Hetzner': {
         'ds': 'Hetzner',
         'files': {P_SYS_VENDOR: 'Hetzner\n'},
@@ -712,7 +817,7 @@ VALID_CFG = {
                  [{'DEVNAME': 'xvda1', 'TYPE': 'ext3', 'PARTUUID': uuid4(),
                    'UUID': uuid4(), 'LABEL': 'cloudimg-bootfs'},
                   {'DEVNAME': 'xvdb', 'TYPE': 'vfat', 'LABEL': 'config-2',
-                   'UUID': dsibm.IBM_CONFIG_UUID},
+                   'UUID': ds_ibm.IBM_CONFIG_UUID},
                   {'DEVNAME': 'xvda2', 'TYPE': 'ext4',
                    'LABEL': 'cloudimg-rootfs', 'PARTUUID': uuid4(),
                    'UUID': uuid4()},
@@ -733,6 +838,32 @@ VALID_CFG = {
              },
         ],
     },
+    'SmartOS-bhyve': {
+        'ds': 'SmartOS',
+        'mocks': [
+            MOCK_VIRT_IS_VM_OTHER,
+            {'name': 'blkid', 'ret': 0,
+             'out': blkid_out(
+                 [{'DEVNAME': 'vda1', 'TYPE': 'ext4',
+                   'PARTUUID': '49ec635a-01'},
+                  {'DEVNAME': 'vda2', 'TYPE': 'swap',
+                   'LABEL': 'cloudimg-swap', 'PARTUUID': '49ec635a-02'}]),
+             },
+        ],
+        'files': {P_PRODUCT_NAME: 'SmartDC HVM\n'},
+    },
+    'SmartOS-lxbrand': {
+        'ds': 'SmartOS',
+        'mocks': [
+            MOCK_VIRT_IS_CONTAINER_OTHER,
+            {'name': 'uname', 'ret': 0,
+             'out': ("Linux d43da87a-daca-60e8-e6d4-d2ed372662a3 4.3.0 "
+                     "BrandZ virtual linux x86_64 GNU/Linux")},
+            {'name': 'blkid', 'ret': 2, 'out': ''},
+        ],
+        'files': {ds_smartos.METADATA_SOCKFILE: 'would be a socket\n'},
+    },
+
 }
 
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_ec2_util.py b/tests/unittests/test_ec2_util.py
index af78997..3f50f57 100644
--- a/tests/unittests/test_ec2_util.py
+++ b/tests/unittests/test_ec2_util.py
@@ -11,7 +11,6 @@ from cloudinit import url_helper as uh
 class TestEc2Util(helpers.HttprettyTestCase):
     VERSION = 'latest'
 
-    @hp.activate
     def test_userdata_fetch(self):
         hp.register_uri(hp.GET,
                         'http://169.254.169.254/%s/user-data' % (self.VERSION),
@@ -20,7 +19,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         userdata = eu.get_instance_userdata(self.VERSION)
         self.assertEqual('stuff', userdata.decode('utf-8'))
 
-    @hp.activate
     def test_userdata_fetch_fail_not_found(self):
         hp.register_uri(hp.GET,
                         'http://169.254.169.254/%s/user-data' % (self.VERSION),
@@ -28,7 +26,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         userdata = eu.get_instance_userdata(self.VERSION, retries=0)
         self.assertEqual('', userdata)
 
-    @hp.activate
     def test_userdata_fetch_fail_server_dead(self):
         hp.register_uri(hp.GET,
                         'http://169.254.169.254/%s/user-data' % (self.VERSION),
@@ -36,7 +33,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         userdata = eu.get_instance_userdata(self.VERSION, retries=0)
         self.assertEqual('', userdata)
 
-    @hp.activate
     def test_userdata_fetch_fail_server_not_found(self):
         hp.register_uri(hp.GET,
                         'http://169.254.169.254/%s/user-data' % (self.VERSION),
@@ -44,7 +40,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         userdata = eu.get_instance_userdata(self.VERSION)
         self.assertEqual('', userdata)
 
-    @hp.activate
     def test_metadata_fetch_no_keys(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
@@ -62,7 +57,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         self.assertEqual(md['instance-id'], '123')
         self.assertEqual(md['ami-launch-index'], '1')
 
-    @hp.activate
     def test_metadata_fetch_key(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
@@ -83,7 +77,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         self.assertEqual(md['instance-id'], '123')
         self.assertEqual(1, len(md['public-keys']))
 
-    @hp.activate
     def test_metadata_fetch_with_2_keys(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
@@ -108,7 +101,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         self.assertEqual(md['instance-id'], '123')
         self.assertEqual(2, len(md['public-keys']))
 
-    @hp.activate
     def test_metadata_fetch_bdm(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
@@ -140,7 +132,6 @@ class TestEc2Util(helpers.HttprettyTestCase):
         self.assertEqual(bdm['ami'], 'sdb')
         self.assertEqual(bdm['ephemeral0'], 'sdc')
 
-    @hp.activate
     def test_metadata_no_security_credentials(self):
         base_url = 'http://169.254.169.254/%s/meta-data/' % (self.VERSION)
         hp.register_uri(hp.GET, base_url, status=200,
diff --git a/tests/unittests/test_handler/test_handler_apt_conf_v1.py b/tests/unittests/test_handler/test_handler_apt_conf_v1.py
index 83f962a..6a4b03e 100644
--- a/tests/unittests/test_handler/test_handler_apt_conf_v1.py
+++ b/tests/unittests/test_handler/test_handler_apt_conf_v1.py
@@ -12,10 +12,6 @@ import shutil
 import tempfile
 
 
-def load_tfile_or_url(*args, **kwargs):
-    return(util.decode_binary(util.read_file_or_url(*args, **kwargs).contents))
-
-
 class TestAptProxyConfig(TestCase):
     def setUp(self):
         super(TestAptProxyConfig, self).setUp()
@@ -36,7 +32,7 @@ class TestAptProxyConfig(TestCase):
         self.assertTrue(os.path.isfile(self.pfile))
         self.assertFalse(os.path.isfile(self.cfile))
 
-        contents = load_tfile_or_url(self.pfile)
+        contents = util.load_file(self.pfile)
         self.assertTrue(self._search_apt_config(contents, "http", "myproxy"))
 
     def test_apt_http_proxy_written(self):
@@ -46,7 +42,7 @@ class TestAptProxyConfig(TestCase):
         self.assertTrue(os.path.isfile(self.pfile))
         self.assertFalse(os.path.isfile(self.cfile))
 
-        contents = load_tfile_or_url(self.pfile)
+        contents = util.load_file(self.pfile)
         self.assertTrue(self._search_apt_config(contents, "http", "myproxy"))
 
     def test_apt_all_proxy_written(self):
@@ -64,7 +60,7 @@ class TestAptProxyConfig(TestCase):
         self.assertTrue(os.path.isfile(self.pfile))
         self.assertFalse(os.path.isfile(self.cfile))
 
-        contents = load_tfile_or_url(self.pfile)
+        contents = util.load_file(self.pfile)
 
         for ptype, pval in values.items():
             self.assertTrue(self._search_apt_config(contents, ptype, pval))
@@ -80,7 +76,7 @@ class TestAptProxyConfig(TestCase):
         cc_apt_configure.apply_apt_config({'proxy': "foo"},
                                           self.pfile, self.cfile)
         self.assertTrue(os.path.isfile(self.pfile))
-        contents = load_tfile_or_url(self.pfile)
+        contents = util.load_file(self.pfile)
         self.assertTrue(self._search_apt_config(contents, "http", "foo"))
 
     def test_config_written(self):
@@ -92,14 +88,14 @@ class TestAptProxyConfig(TestCase):
         self.assertTrue(os.path.isfile(self.cfile))
         self.assertFalse(os.path.isfile(self.pfile))
 
-        self.assertEqual(load_tfile_or_url(self.cfile), payload)
+        self.assertEqual(util.load_file(self.cfile), payload)
 
     def test_config_replaced(self):
         util.write_file(self.pfile, "content doesnt matter")
         cc_apt_configure.apply_apt_config({'conf': "foo"},
                                           self.pfile, self.cfile)
         self.assertTrue(os.path.isfile(self.cfile))
-        self.assertEqual(load_tfile_or_url(self.cfile), "foo")
+        self.assertEqual(util.load_file(self.cfile), "foo")
 
     def test_config_deleted(self):
         # if no 'conf' is provided, delete any previously written file
diff --git a/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py b/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py
index d2b96f0..23bd6e1 100644
--- a/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py
+++ b/tests/unittests/test_handler/test_handler_apt_configure_sources_list_v1.py
@@ -64,13 +64,6 @@ deb-src http://archive.ubuntu.com/ubuntu/ fakerelease main restricted
 """)
 
 
-def load_tfile_or_url(*args, **kwargs):
-    """load_tfile_or_url
-    load file and return content after decoding
-    """
-    return util.decode_binary(util.read_file_or_url(*args, **kwargs).contents)
-
-
 class TestAptSourceConfigSourceList(t_help.FilesystemMockingTestCase):
     """TestAptSourceConfigSourceList
     Main Class to test sources list rendering
diff --git a/tests/unittests/test_handler/test_handler_apt_source_v1.py b/tests/unittests/test_handler/test_handler_apt_source_v1.py
index 46ca4ce..a3132fb 100644
--- a/tests/unittests/test_handler/test_handler_apt_source_v1.py
+++ b/tests/unittests/test_handler/test_handler_apt_source_v1.py
@@ -39,13 +39,6 @@ S0ORP6HXET3+jC8BMG4tBWCTK/XEZw==
 ADD_APT_REPO_MATCH = r"^[\w-]+:\w"
 
 
-def load_tfile_or_url(*args, **kwargs):
-    """load_tfile_or_url
-    load file and return content after decoding
-    """
-    return util.decode_binary(util.read_file_or_url(*args, **kwargs).contents)
-
-
 class FakeDistro(object):
     """Fake Distro helper object"""
     def update_package_sources(self):
@@ -125,7 +118,7 @@ class TestAptSourceConfig(TestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile_or_url(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://archive.ubuntu.com/ubuntu";,
                                    "karmic-backports",
@@ -157,13 +150,13 @@ class TestAptSourceConfig(TestCase):
         self.apt_src_basic(self.aptlistfile, cfg)
 
         # extra verify on two extra files of this test
-        contents = load_tfile_or_url(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://archive.ubuntu.com/ubuntu";,
                                    "precise-backports",
                                    "main universe multiverse restricted"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile_or_url(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://archive.ubuntu.com/ubuntu";,
                                    "lucid-backports",
@@ -220,7 +213,7 @@ class TestAptSourceConfig(TestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile_or_url(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "multiverse"),
@@ -241,12 +234,12 @@ class TestAptSourceConfig(TestCase):
 
         # extra verify on two extra files of this test
         params = self._get_default_params()
-        contents = load_tfile_or_url(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "main"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile_or_url(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "universe"),
@@ -296,7 +289,7 @@ class TestAptSourceConfig(TestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile_or_url(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
@@ -336,14 +329,14 @@ class TestAptSourceConfig(TestCase):
                 'filename': self.aptlistfile3}
 
         self.apt_src_keyid(self.aptlistfile, [cfg1, cfg2, cfg3], 3)
-        contents = load_tfile_or_url(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
                                     'cloud-init-test/ubuntu'),
                                    "xenial", "universe"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile_or_url(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
@@ -375,7 +368,7 @@ class TestAptSourceConfig(TestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile_or_url(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
diff --git a/tests/unittests/test_handler/test_handler_apt_source_v3.py b/tests/unittests/test_handler/test_handler_apt_source_v3.py
index e486862..7a64c23 100644
--- a/tests/unittests/test_handler/test_handler_apt_source_v3.py
+++ b/tests/unittests/test_handler/test_handler_apt_source_v3.py
@@ -49,13 +49,6 @@ ADD_APT_REPO_MATCH = r"^[\w-]+:\w"
 TARGET = None
 
 
-def load_tfile(*args, **kwargs):
-    """load_tfile_or_url
-    load file and return content after decoding
-    """
-    return util.decode_binary(util.read_file_or_url(*args, **kwargs).contents)
-
-
 class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
     """TestAptSourceConfig
     Main Class to test apt configs
@@ -119,7 +112,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://test.ubuntu.com/ubuntu";,
                                    "karmic-backports",
@@ -151,13 +144,13 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
         self._apt_src_basic(self.aptlistfile, cfg)
 
         # extra verify on two extra files of this test
-        contents = load_tfile(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://test.ubuntu.com/ubuntu";,
                                    "precise-backports",
                                    "main universe multiverse restricted"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", "http://test.ubuntu.com/ubuntu";,
                                    "lucid-backports",
@@ -174,7 +167,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "multiverse"),
@@ -201,12 +194,12 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         # extra verify on two extra files of this test
         params = self._get_default_params()
-        contents = load_tfile(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "main"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb", params['MIRROR'], params['RELEASE'],
                                    "universe"),
@@ -240,7 +233,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         self.assertTrue(os.path.isfile(filename))
 
-        contents = load_tfile(filename)
+        contents = util.load_file(filename)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
@@ -277,14 +270,14 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
                                    'keyid': "03683F77"}}
 
         self._apt_src_keyid(self.aptlistfile, cfg, 3)
-        contents = load_tfile(self.aptlistfile2)
+        contents = util.load_file(self.aptlistfile2)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
                                     'cloud-init-test/ubuntu'),
                                    "xenial", "universe"),
                                   contents, flags=re.IGNORECASE))
-        contents = load_tfile(self.aptlistfile3)
+        contents = util.load_file(self.aptlistfile3)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
@@ -310,7 +303,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
 
         self.assertTrue(os.path.isfile(self.aptlistfile))
 
-        contents = load_tfile(self.aptlistfile)
+        contents = util.load_file(self.aptlistfile)
         self.assertTrue(re.search(r"%s %s %s %s\n" %
                                   ("deb",
                                    ('http://ppa.launchpad.net/smoser/'
diff --git a/tests/unittests/test_handler/test_handler_chef.py b/tests/unittests/test_handler/test_handler_chef.py
index 0136a93..f4bbd66 100644
--- a/tests/unittests/test_handler/test_handler_chef.py
+++ b/tests/unittests/test_handler/test_handler_chef.py
@@ -14,19 +14,27 @@ from cloudinit.sources import DataSourceNone
 from cloudinit import util
 
 from cloudinit.tests.helpers import (
-    CiTestCase, FilesystemMockingTestCase, mock, skipIf)
+    HttprettyTestCase, FilesystemMockingTestCase, mock, skipIf)
 
 LOG = logging.getLogger(__name__)
 
 CLIENT_TEMPL = os.path.sep.join(["templates", "chef_client.rb.tmpl"])
 
+# This is adjusted to use http because using https causes issues
+# in some openssl/httpretty combinations.
+#   https://github.com/gabrielfalcao/HTTPretty/issues/242
+# We saw issues on openSUSE 42.3 with
+#    httpretty=0.8.8-7.1 ndg-httpsclient=0.4.0-3.2 pyOpenSSL=16.0.0-4.1
+OMNIBUS_URL_HTTP = cc_chef.OMNIBUS_URL.replace("https:", "http:")
 
-class TestInstallChefOmnibus(CiTestCase):
+
+class TestInstallChefOmnibus(HttprettyTestCase):
 
     def setUp(self):
+        super(TestInstallChefOmnibus, self).setUp()
         self.new_root = self.tmp_dir()
 
-    @httpretty.activate
+    @mock.patch("cloudinit.config.cc_chef.OMNIBUS_URL", OMNIBUS_URL_HTTP)
     def test_install_chef_from_omnibus_runs_chef_url_content(self):
         """install_chef_from_omnibus runs downloaded OMNIBUS_URL as script."""
         chef_outfile = self.tmp_path('chef.out', self.new_root)
@@ -65,7 +73,7 @@ class TestInstallChefOmnibus(CiTestCase):
             expected_subp_kwargs,
             m_subp_blob.call_args_list[0][1])
 
-    @httpretty.activate
+    @mock.patch("cloudinit.config.cc_chef.OMNIBUS_URL", OMNIBUS_URL_HTTP)
     @mock.patch('cloudinit.config.cc_chef.util.subp_blob_in_tempfile')
     def test_install_chef_from_omnibus_has_omnibus_version(self, m_subp_blob):
         """install_chef_from_omnibus provides version arg to OMNIBUS_URL."""
diff --git a/tests/unittests/test_handler/test_handler_lxd.py b/tests/unittests/test_handler/test_handler_lxd.py
index a205498..4dd7e09 100644
--- a/tests/unittests/test_handler/test_handler_lxd.py
+++ b/tests/unittests/test_handler/test_handler_lxd.py
@@ -33,12 +33,16 @@ class TestLxd(t_help.CiTestCase):
         cc = cloud.Cloud(ds, paths, {}, d, None)
         return cc
 
+    @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
-    def test_lxd_init(self, mock_util):
+    def test_lxd_init(self, mock_util, m_maybe_clean):
         cc = self._get_cloud('ubuntu')
         mock_util.which.return_value = True
+        m_maybe_clean.return_value = None
         cc_lxd.handle('cc_lxd', self.lxd_cfg, cc, self.logger, [])
         self.assertTrue(mock_util.which.called)
+        # no bridge config, so maybe_cleanup should not be called.
+        self.assertFalse(m_maybe_clean.called)
         init_call = mock_util.subp.call_args_list[0][0][0]
         self.assertEqual(init_call,
                          ['lxd', 'init', '--auto',
@@ -46,32 +50,39 @@ class TestLxd(t_help.CiTestCase):
                           '--storage-backend=zfs',
                           '--storage-pool=poolname'])
 
+    @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
-    def test_lxd_install(self, mock_util):
+    def test_lxd_install(self, mock_util, m_maybe_clean):
         cc = self._get_cloud('ubuntu')
         cc.distro = mock.MagicMock()
         mock_util.which.return_value = None
         cc_lxd.handle('cc_lxd', self.lxd_cfg, cc, self.logger, [])
         self.assertNotIn('WARN', self.logs.getvalue())
         self.assertTrue(cc.distro.install_packages.called)
+        cc_lxd.handle('cc_lxd', self.lxd_cfg, cc, self.logger, [])
+        self.assertFalse(m_maybe_clean.called)
         install_pkg = cc.distro.install_packages.call_args_list[0][0][0]
         self.assertEqual(sorted(install_pkg), ['lxd', 'zfs'])
 
+    @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
-    def test_no_init_does_nothing(self, mock_util):
+    def test_no_init_does_nothing(self, mock_util, m_maybe_clean):
         cc = self._get_cloud('ubuntu')
         cc.distro = mock.MagicMock()
         cc_lxd.handle('cc_lxd', {'lxd': {}}, cc, self.logger, [])
         self.assertFalse(cc.distro.install_packages.called)
         self.assertFalse(mock_util.subp.called)
+        self.assertFalse(m_maybe_clean.called)
 
+    @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
-    def test_no_lxd_does_nothing(self, mock_util):
+    def test_no_lxd_does_nothing(self, mock_util, m_maybe_clean):
         cc = self._get_cloud('ubuntu')
         cc.distro = mock.MagicMock()
         cc_lxd.handle('cc_lxd', {'package_update': True}, cc, self.logger, [])
         self.assertFalse(cc.distro.install_packages.called)
         self.assertFalse(mock_util.subp.called)
+        self.assertFalse(m_maybe_clean.called)
 
     def test_lxd_debconf_new_full(self):
         data = {"mode": "new",
@@ -147,14 +158,13 @@ class TestLxd(t_help.CiTestCase):
                 "domain": "lxd"}
         self.assertEqual(
             cc_lxd.bridge_to_cmd(data),
-            (["lxc", "network", "create", "testbr0",
+            (["network", "create", "testbr0",
               "ipv4.address=10.0.8.1/24", "ipv4.nat=true",
               "ipv4.dhcp.ranges=10.0.8.2-10.0.8.254",
               "ipv6.address=fd98:9e0:3744::1/64",
-              "ipv6.nat=true", "dns.domain=lxd",
-              "--force-local"],
-             ["lxc", "network", "attach-profile",
-              "testbr0", "default", "eth0", "--force-local"]))
+              "ipv6.nat=true", "dns.domain=lxd"],
+             ["network", "attach-profile",
+              "testbr0", "default", "eth0"]))
 
     def test_lxd_cmd_new_partial(self):
         data = {"mode": "new",
@@ -163,19 +173,18 @@ class TestLxd(t_help.CiTestCase):
                 "ipv6_nat": "true"}
         self.assertEqual(
             cc_lxd.bridge_to_cmd(data),
-            (["lxc", "network", "create", "lxdbr0", "ipv4.address=none",
-              "ipv6.address=fd98:9e0:3744::1/64", "ipv6.nat=true",
-              "--force-local"],
-             ["lxc", "network", "attach-profile",
-              "lxdbr0", "default", "eth0", "--force-local"]))
+            (["network", "create", "lxdbr0", "ipv4.address=none",
+              "ipv6.address=fd98:9e0:3744::1/64", "ipv6.nat=true"],
+             ["network", "attach-profile",
+              "lxdbr0", "default", "eth0"]))
 
     def test_lxd_cmd_existing(self):
         data = {"mode": "existing",
                 "name": "testbr0"}
         self.assertEqual(
             cc_lxd.bridge_to_cmd(data),
-            (None, ["lxc", "network", "attach-profile",
-                    "testbr0", "default", "eth0", "--force-local"]))
+            (None, ["network", "attach-profile",
+                    "testbr0", "default", "eth0"]))
 
     def test_lxd_cmd_none(self):
         data = {"mode": "none"}
@@ -183,4 +192,43 @@ class TestLxd(t_help.CiTestCase):
             cc_lxd.bridge_to_cmd(data),
             (None, None))
 
+
+class TestLxdMaybeCleanupDefault(t_help.CiTestCase):
+    """Test the implementation of maybe_cleanup_default."""
+
+    defnet = cc_lxd._DEFAULT_NETWORK_NAME
+
+    @mock.patch("cloudinit.config.cc_lxd._lxc")
+    def test_network_other_than_default_not_deleted(self, m_lxc):
+        """deletion or removal should only occur if bridge is default."""
+        cc_lxd.maybe_cleanup_default(
+            net_name="lxdbr1", did_init=True, create=True, attach=True)
+        m_lxc.assert_not_called()
+
+    @mock.patch("cloudinit.config.cc_lxd._lxc")
+    def test_did_init_false_does_not_delete(self, m_lxc):
+        """deletion or removal should only occur if did_init is True."""
+        cc_lxd.maybe_cleanup_default(
+            net_name=self.defnet, did_init=False, create=True, attach=True)
+        m_lxc.assert_not_called()
+
+    @mock.patch("cloudinit.config.cc_lxd._lxc")
+    def test_network_deleted_if_create_true(self, m_lxc):
+        """deletion of network should occur if create is True."""
+        cc_lxd.maybe_cleanup_default(
+            net_name=self.defnet, did_init=True, create=True, attach=False)
+        m_lxc.assert_called_once_with(["network", "delete", self.defnet])
+
+    @mock.patch("cloudinit.config.cc_lxd._lxc")
+    def test_device_removed_if_attach_true(self, m_lxc):
+        """deletion of network should occur if create is True."""
+        nic_name = "my_nic"
+        profile = "my_profile"
+        cc_lxd.maybe_cleanup_default(
+            net_name=self.defnet, did_init=True, create=False, attach=True,
+            profile=profile, nic_name=nic_name)
+        m_lxc.assert_called_once_with(
+            ["profile", "device", "remove", profile, nic_name])
+
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py
index fe492d4..8fea6c2 100644
--- a/tests/unittests/test_handler/test_handler_mounts.py
+++ b/tests/unittests/test_handler/test_handler_mounts.py
@@ -1,8 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import os.path
-import shutil
-import tempfile
 
 from cloudinit.config import cc_mounts
 
@@ -18,8 +16,7 @@ class TestSanitizeDevname(test_helpers.FilesystemMockingTestCase):
 
     def setUp(self):
         super(TestSanitizeDevname, self).setUp()
-        self.new_root = tempfile.mkdtemp()
-        self.addCleanup(shutil.rmtree, self.new_root)
+        self.new_root = self.tmp_dir()
         self.patchOS(self.new_root)
 
     def _touch(self, path):
@@ -134,4 +131,103 @@ class TestSanitizeDevname(test_helpers.FilesystemMockingTestCase):
             cc_mounts.sanitize_devname(
                 'ephemeral0.1', lambda x: disk_path, mock.Mock()))
 
+
+class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
+
+    swap_path = '/dev/sdb1'
+
+    def setUp(self):
+        super(TestFstabHandling, self).setUp()
+        self.new_root = self.tmp_dir()
+        self.patchOS(self.new_root)
+
+        self.fstab_path = os.path.join(self.new_root, 'etc/fstab')
+        self._makedirs('/etc')
+
+        self.add_patch('cloudinit.config.cc_mounts.FSTAB_PATH',
+                       'mock_fstab_path',
+                       self.fstab_path,
+                       autospec=False)
+
+        self.add_patch('cloudinit.config.cc_mounts._is_block_device',
+                       'mock_is_block_device',
+                       return_value=True)
+
+        self.add_patch('cloudinit.config.cc_mounts.util.subp',
+                       'mock_util_subp')
+
+        self.mock_cloud = mock.Mock()
+        self.mock_log = mock.Mock()
+        self.mock_cloud.device_name_to_device = self.device_name_to_device
+
+    def _makedirs(self, directory):
+        directory = os.path.join(self.new_root, directory.lstrip('/'))
+        if not os.path.exists(directory):
+            os.makedirs(directory)
+
+    def device_name_to_device(self, path):
+        if path == 'swap':
+            return self.swap_path
+        else:
+            dev = None
+
+        return dev
+
+    def test_fstab_no_swap_device(self):
+        '''Ensure that cloud-init adds a discovered swap partition
+        to /etc/fstab.'''
+
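+        # cloud-init marks fstab entries it writes with
+        # 'comment=cloudconfig' so it can recognize its own lines later.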
+        fstab_original_content = ''
+        fstab_expected_content = (
+            '%s\tnone\tswap\tsw,comment=cloudconfig\t'
+            '0\t0\n' % (self.swap_path,)
+        )
+
+        with open(cc_mounts.FSTAB_PATH, 'w') as fd:
+            fd.write(fstab_original_content)
+
+        cc_mounts.handle(None, {}, self.mock_cloud, self.mock_log, [])
+
+        with open(cc_mounts.FSTAB_PATH, 'r') as fd:
+            fstab_new_content = fd.read()
+            self.assertEqual(fstab_expected_content, fstab_new_content)
+
+    def test_fstab_same_swap_device_already_configured(self):
+        '''Ensure that cloud-init will not add a swap device if the same
+        device already exists in /etc/fstab.'''
+
+        fstab_original_content = '%s swap swap defaults 0 0\n' % (
+            self.swap_path,)
+        fstab_expected_content = fstab_original_content
+
+        with open(cc_mounts.FSTAB_PATH, 'w') as fd:
+            fd.write(fstab_original_content)
+
+        cc_mounts.handle(None, {}, self.mock_cloud, self.mock_log, [])
+
+        with open(cc_mounts.FSTAB_PATH, 'r') as fd:
+            fstab_new_content = fd.read()
+            self.assertEqual(fstab_expected_content, fstab_new_content)
+
+    def test_fstab_alternate_swap_device_already_configured(self):
+        '''Ensure that cloud-init will add a discovered swap device to
+        /etc/fstab even when there exists a swap definition on another
+        device.'''
+
+        fstab_original_content = '/dev/sdc1 swap swap defaults 0 0\n'
+        fstab_expected_content = (
+            fstab_original_content +
+            '%s\tnone\tswap\tsw,comment=cloudconfig\t'
+            '0\t0\n' % (self.swap_path,)
+        )
+
+        with open(cc_mounts.FSTAB_PATH, 'w') as fd:
+            fd.write(fstab_original_content)
+
+        cc_mounts.handle(None, {}, self.mock_cloud, self.mock_log, [])
+
+        with open(cc_mounts.FSTAB_PATH, 'r') as fd:
+            fstab_new_content = fd.read()
+            self.assertEqual(fstab_expected_content, fstab_new_content)
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_handler/test_handler_ntp.py b/tests/unittests/test_handler/test_handler_ntp.py
index 17c5355..6fe3659 100644
--- a/tests/unittests/test_handler/test_handler_ntp.py
+++ b/tests/unittests/test_handler/test_handler_ntp.py
@@ -155,9 +155,9 @@ class TestNtp(FilesystemMockingTestCase):
                                              path=confpath,
                                              template_fn=template_fn,
                                              template=None)
-        content = util.read_file_or_url('file://' + confpath).contents
         self.assertEqual(
-            "servers []\npools ['10.0.0.1', '10.0.0.2']\n", content.decode())
+            "servers []\npools ['10.0.0.1', '10.0.0.2']\n",
+            util.load_file(confpath))
 
     def test_write_ntp_config_template_defaults_pools_w_empty_lists(self):
         """write_ntp_config_template defaults pools servers upon empty config.
@@ -176,10 +176,9 @@ class TestNtp(FilesystemMockingTestCase):
                                              path=confpath,
                                              template_fn=template_fn,
                                              template=None)
-        content = util.read_file_or_url('file://' + confpath).contents
         self.assertEqual(
             "servers []\npools {0}\n".format(pools),
-            content.decode())
+            util.load_file(confpath))
 
     def test_defaults_pools_empty_lists_sles(self):
         """write_ntp_config_template defaults opensuse pools upon empty config.
@@ -196,11 +195,11 @@ class TestNtp(FilesystemMockingTestCase):
                                          path=confpath,
                                          template_fn=template_fn,
                                          template=None)
-        content = util.read_file_or_url('file://' + confpath).contents
         for pool in default_pools:
             self.assertIn('opensuse', pool)
         self.assertEqual(
-            "servers []\npools {0}\n".format(default_pools), content.decode())
+            "servers []\npools {0}\n".format(default_pools),
+            util.load_file(confpath))
         self.assertIn(
             "Adding distro default ntp pool servers: {0}".format(
                 ",".join(default_pools)),
@@ -217,10 +216,9 @@ class TestNtp(FilesystemMockingTestCase):
                                          path=confpath,
                                          template_fn=template_fn,
                                          template=None)
-        content = util.read_file_or_url('file://' + confpath).contents
         self.assertEqual(
             "[Time]\nNTP=%s %s \n" % (" ".join(servers), " ".join(pools)),
-            content.decode())
+            util.load_file(confpath))
 
     def test_distro_ntp_client_configs(self):
         """Test we have updated ntp client configs on different distros"""
@@ -267,17 +265,17 @@ class TestNtp(FilesystemMockingTestCase):
                 cc_ntp.write_ntp_config_template(distro, servers=servers,
                                                  pools=pools, path=confpath,
                                                  template_fn=template_fn)
-                content = util.read_file_or_url('file://' + confpath).contents
+                content = util.load_file(confpath)
                 if client in ['ntp', 'chrony']:
                     expected_servers = '\n'.join([
                         'server {0} iburst'.format(srv) for srv in servers])
                     print('distro=%s client=%s' % (distro, client))
-                    self.assertIn(expected_servers, content.decode('utf-8'),
+                    self.assertIn(expected_servers, content,
                                   ('failed to render {0} conf'
                                    ' for distro:{1}'.format(client, distro)))
                     expected_pools = '\n'.join([
                         'pool {0} iburst'.format(pool) for pool in pools])
-                    self.assertIn(expected_pools, content.decode('utf-8'),
+                    self.assertIn(expected_pools, content,
                                   ('failed to render {0} conf'
                                    ' for distro:{1}'.format(client, distro)))
                 elif client == 'systemd-timesyncd':
@@ -286,7 +284,7 @@ class TestNtp(FilesystemMockingTestCase):
                         "# See timesyncd.conf(5) for details.\n\n" +
                         "[Time]\nNTP=%s %s \n" % (" ".join(servers),
                                                   " ".join(pools)))
-                    self.assertEqual(expected_content, content.decode())
+                    self.assertEqual(expected_content, content)
 
     def test_no_ntpcfg_does_nothing(self):
         """When no ntp section is defined handler logs a warning and noops."""
@@ -308,10 +306,10 @@ class TestNtp(FilesystemMockingTestCase):
                 confpath = ntpconfig['confpath']
                 m_select.return_value = ntpconfig
                 cc_ntp.handle('cc_ntp', valid_empty_config, mycloud, None, [])
-                content = util.read_file_or_url('file://' + confpath).contents
                 pools = cc_ntp.generate_server_names(mycloud.distro.name)
                 self.assertEqual(
-                    "servers []\npools {0}\n".format(pools), content.decode())
+                    "servers []\npools {0}\n".format(pools),
+                    util.load_file(confpath))
             self.assertNotIn('Invalid config:', self.logs.getvalue())
 
     @skipUnlessJsonSchema()
@@ -333,9 +331,8 @@ class TestNtp(FilesystemMockingTestCase):
                 "Invalid config:\nntp.pools.0: 123 is not of type 'string'\n"
                 "ntp.servers.1: None is not of type 'string'",
                 self.logs.getvalue())
-            content = util.read_file_or_url('file://' + confpath).contents
             self.assertEqual("servers ['valid', None]\npools [123]\n",
-                             content.decode())
+                             util.load_file(confpath))
 
     @skipUnlessJsonSchema()
     @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
@@ -357,9 +354,8 @@ class TestNtp(FilesystemMockingTestCase):
                 "Invalid config:\nntp.pools: 123 is not of type 'array'\n"
                 "ntp.servers: 'non-array' is not of type 'array'",
                 self.logs.getvalue())
-            content = util.read_file_or_url('file://' + confpath).contents
             self.assertEqual("servers non-array\npools 123\n",
-                             content.decode())
+                             util.load_file(confpath))
 
     @skipUnlessJsonSchema()
     @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
@@ -381,10 +377,9 @@ class TestNtp(FilesystemMockingTestCase):
                 "Invalid config:\nntp: Additional properties are not allowed "
                 "('invalidkey' was unexpected)",
                 self.logs.getvalue())
-            content = util.read_file_or_url('file://' + confpath).contents
             self.assertEqual(
                 "servers []\npools ['0.mycompany.pool.ntp.org']\n",
-                content.decode())
+                util.load_file(confpath))
 
     @skipUnlessJsonSchema()
     @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
@@ -407,10 +402,10 @@ class TestNtp(FilesystemMockingTestCase):
                 " has non-unique elements\nntp.servers: "
                 "['10.0.0.1', '10.0.0.1'] has non-unique elements",
                 self.logs.getvalue())
-            content = util.read_file_or_url('file://' + confpath).contents
             self.assertEqual(
                 "servers ['10.0.0.1', '10.0.0.1']\n"
-                "pools ['0.mypool.org', '0.mypool.org']\n", content.decode())
+                "pools ['0.mypool.org', '0.mypool.org']\n",
+                util.load_file(confpath))
 
     @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
     def test_ntp_handler_timesyncd(self, m_select):
@@ -426,10 +421,9 @@ class TestNtp(FilesystemMockingTestCase):
             confpath = ntpconfig['confpath']
             m_select.return_value = ntpconfig
             cc_ntp.handle('cc_ntp', cfg, mycloud, None, [])
-            content = util.read_file_or_url('file://' + confpath).contents
             self.assertEqual(
                 "[Time]\nNTP=192.168.2.1 192.168.2.2 0.mypool.org \n",
-                content.decode())
+                util.load_file(confpath))
 
     @mock.patch('cloudinit.config.cc_ntp.select_ntp_client')
     def test_ntp_handler_enabled_false(self, m_select):
@@ -466,10 +460,9 @@ class TestNtp(FilesystemMockingTestCase):
                 m_util.subp.assert_called_with(
                     ['systemctl', 'reload-or-restart',
                      service_name], capture=True)
-            content = util.read_file_or_url('file://' + confpath).contents
             self.assertEqual(
                 "servers []\npools {0}\n".format(pools),
-                content.decode())
+                util.load_file(confpath))
 
     def test_opensuse_picks_chrony(self):
         """Test opensuse picks chrony or ntp on certain distro versions"""
@@ -638,10 +631,9 @@ class TestNtp(FilesystemMockingTestCase):
             mock_path = 'cloudinit.config.cc_ntp.temp_utils._TMPDIR'
             with mock.patch(mock_path, self.new_root):
                 cc_ntp.handle('notimportant', cfg, mycloud, None, None)
-            content = util.read_file_or_url('file://' + confpath).contents
             self.assertEqual(
                 "servers []\npools ['mypool.org']\n%s" % custom,
-                content.decode())
+                util.load_file(confpath))
 
     @mock.patch('cloudinit.config.cc_ntp.supplemental_schema_validation')
     @mock.patch('cloudinit.config.cc_ntp.reload_ntp')
@@ -675,10 +667,9 @@ class TestNtp(FilesystemMockingTestCase):
             with mock.patch(mock_path, self.new_root):
                 cc_ntp.handle('notimportant',
                               {'ntp': cfg}, mycloud, None, None)
-            content = util.read_file_or_url('file://' + confpath).contents
             self.assertEqual(
                 "servers []\npools ['mypool.org']\n%s" % custom,
-                content.decode())
+                util.load_file(confpath))
         m_schema.assert_called_with(expected_merged_cfg)
 
 
@@ -706,7 +697,7 @@ class TestSupplementalSchemaValidation(CiTestCase):
         cfg = {'confpath': 'someconf', 'check_exe': '', 'service_name': '',
                'template': 'asdf', 'template_name': None, 'packages': 'NOPE'}
         match = (r'Invalid ntp configuration:\\nExpected a list of required'
-                 ' package names for ntp:config:packages. Found \(NOPE\)')
+                 ' package names for ntp:config:packages. Found \\(NOPE\\)')
         with self.assertRaisesRegex(ValueError, match):
             cc_ntp.supplemental_schema_validation(cfg)
 
diff --git a/tests/unittests/test_handler/test_handler_resizefs.py b/tests/unittests/test_handler/test_handler_resizefs.py
index 7a7ba1f..f92175f 100644
--- a/tests/unittests/test_handler/test_handler_resizefs.py
+++ b/tests/unittests/test_handler/test_handler_resizefs.py
@@ -147,7 +147,7 @@ class TestResizefs(CiTestCase):
     def test_resize_ufs_cmd_return(self):
         mount_point = '/'
         devpth = '/dev/sda2'
-        self.assertEqual(('growfs', devpth),
+        self.assertEqual(('growfs', '-y', devpth),
                          _resize_ufs(mount_point, devpth))
 
     @mock.patch('cloudinit.util.get_mount_info')
diff --git a/tests/unittests/test_handler/test_schema.py b/tests/unittests/test_handler/test_schema.py
index ac41f12..fb266fa 100644
--- a/tests/unittests/test_handler/test_schema.py
+++ b/tests/unittests/test_handler/test_schema.py
@@ -134,22 +134,35 @@ class ValidateCloudConfigFileTest(CiTestCase):
         with self.assertRaises(SchemaValidationError) as context_mgr:
             validate_cloudconfig_file(self.config_file, {})
         self.assertEqual(
-            'Cloud config schema errors: header: File {0} needs to begin with '
-            '"{1}"'.format(self.config_file, CLOUD_CONFIG_HEADER.decode()),
+            'Cloud config schema errors: format-l1.c1: File {0} needs to begin'
+            ' with "{1}"'.format(
+                self.config_file, CLOUD_CONFIG_HEADER.decode()),
             str(context_mgr.exception))
 
-    def test_validateconfig_file_error_on_non_yaml_format(self):
-        """On non-yaml format, validate_cloudconfig_file errors."""
+    def test_validateconfig_file_error_on_non_yaml_scanner_error(self):
+        """On non-yaml scan issues, validate_cloudconfig_file errors."""
+        # Generate a scanner error by providing text on a single line with
+        # improper indent.
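+        # 'format-l3.c1' in the expected error encodes line 3, column 1.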
+        write_file(self.config_file, '#cloud-config\nasdf:\nasdf')
+        with self.assertRaises(SchemaValidationError) as context_mgr:
+            validate_cloudconfig_file(self.config_file, {})
+        self.assertIn(
+            'schema errors: format-l3.c1: File {0} is not valid yaml.'.format(
+                self.config_file),
+            str(context_mgr.exception))
+
+    def test_validateconfig_file_error_on_non_yaml_parser_error(self):
+        """On non-yaml parser issues, validate_cloudconfig_file errors."""
         write_file(self.config_file, '#cloud-config\n{}}')
         with self.assertRaises(SchemaValidationError) as context_mgr:
             validate_cloudconfig_file(self.config_file, {})
         self.assertIn(
-            'schema errors: format: File {0} is not valid yaml.'.format(
+            'schema errors: format-l2.c3: File {0} is not valid yaml.'.format(
                 self.config_file),
             str(context_mgr.exception))
 
     @skipUnlessJsonSchema()
-    def test_validateconfig_file_sctricty_validates_schema(self):
+    def test_validateconfig_file_strictly_validates_schema(self):
         """validate_cloudconfig_file raises errors on invalid schema."""
         schema = {
             'properties': {'p1': {'type': 'string', 'format': 'hostname'}}}
@@ -342,6 +355,20 @@ class MainTest(CiTestCase):
             'Expected either --config-file argument or --doc\n',
             m_stderr.getvalue())
 
+    def test_main_absent_config_file(self):
+        """Main exits non-zero when config file is absent."""
+        myargs = ['mycmd', '--annotate', '--config-file', 'NOT_A_FILE']
+        with mock.patch('sys.exit', side_effect=self.sys_exit):
+            with mock.patch('sys.argv', myargs):
+                with mock.patch('sys.stderr', new_callable=StringIO) as \
+                        m_stderr:
+                    with self.assertRaises(SystemExit) as context_manager:
+                        main()
+        self.assertEqual(1, context_manager.exception.code)
+        self.assertEqual(
+            'Configfile NOT_A_FILE does not exist\n',
+            m_stderr.getvalue())
+
     def test_main_prints_docs(self):
         """When --doc parameter is provided, main generates documentation."""
         myargs = ['mycmd', '--doc']
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index fac8267..5ab61cf 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -2,12 +2,9 @@
 
 from cloudinit import net
 from cloudinit.net import cmdline
-from cloudinit.net import eni
-from cloudinit.net import natural_sort_key
-from cloudinit.net import netplan
-from cloudinit.net import network_state
-from cloudinit.net import renderers
-from cloudinit.net import sysconfig
+from cloudinit.net import (
+    eni, interface_has_own_mac, natural_sort_key, netplan, network_state,
+    renderers, sysconfig)
 from cloudinit.sources.helpers import openstack
 from cloudinit import temp_utils
 from cloudinit import util
@@ -528,6 +525,7 @@ NETWORK_CONFIGS = {
             config:
               - type: 'physical'
                 name: 'iface0'
+                mtu: 8999
                 subnets:
                   - type: static
                     address: 192.168.14.2/24
@@ -663,8 +661,8 @@ iface eth0.101 inet static
     dns-nameservers 192.168.0.10 10.23.23.134
     dns-search barley.maas sacchromyces.maas brettanomyces.maas
     gateway 192.168.0.1
-    hwaddress aa:bb:cc:dd:ee:11
     mtu 1500
+    hwaddress aa:bb:cc:dd:ee:11
     vlan-raw-device eth0
     vlan_id 101
 
@@ -760,6 +758,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                         id: 101
                         link: eth0
                         macaddress: aa:bb:cc:dd:ee:11
+                        mtu: 1500
                         nameservers:
                             addresses:
                             - 192.168.0.10
@@ -923,6 +922,8 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                   mtu: 1500
                   subnets:
                     - type: static
+                      # When 'mtu' matches device-level mtu, no warnings
+                      mtu: 1500
                       address: 192.168.0.2/24
                       gateway: 192.168.0.1
                       dns_nameservers:
@@ -1031,6 +1032,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
               - type: bond
                 name: bond0
                 mac_address: "aa:bb:cc:dd:e8:ff"
+                mtu: 9000
                 bond_interfaces:
                   - bond0s0
                   - bond0s1
@@ -1073,6 +1075,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                      interfaces:
                      - bond0s0
                      - bond0s1
+                     mtu: 9000
                      parameters:
                          mii-monitor-interval: 100
                          mode: active-backup
@@ -1160,6 +1163,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
         IPADDR1=192.168.1.2
         IPV6ADDR=2001:1::1/92
         IPV6INIT=yes
+        MTU=9000
         NETMASK=255.255.255.0
         NETMASK1=255.255.255.0
         NM_CONTROLLED=no
@@ -1206,6 +1210,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                 name: en0
                 mac_address: "aa:bb:cc:dd:e8:00"
               - type: vlan
+                mtu: 2222
                 name: en0.99
                 vlan_link: en0
                 vlan_id: 99
@@ -1241,6 +1246,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                 IPV6ADDR=2001:1::bbbb/96
                 IPV6INIT=yes
                 IPV6_DEFAULTGW=2001:1::1
+                MTU=2222
                 NETMASK=255.255.255.0
                 NETMASK1=255.255.255.0
                 NM_CONTROLLED=no
@@ -1608,12 +1614,13 @@ iface eth1 inet dhcp
         ]
         self.assertEqual(", ".join(expected_rule) + '\n', contents.lstrip())
 
+    @mock.patch("cloudinit.util.get_cmdline")
     @mock.patch("cloudinit.util.udevadm_settle")
     @mock.patch("cloudinit.net.sys_dev_path")
     @mock.patch("cloudinit.net.read_sys_net")
     @mock.patch("cloudinit.net.get_devicelist")
     def test_unstable_names(self, mock_get_devicelist, mock_read_sys_net,
-                            mock_sys_dev_path, mock_settle):
+                            mock_sys_dev_path, mock_settle, m_get_cmdline):
         """verify that udevadm settle is called when we find unstable names"""
         devices = {
             'eth0': {
@@ -1629,6 +1636,7 @@ iface eth1 inet dhcp
 
         }
 
+        m_get_cmdline.return_value = ''
         tmp_dir = self.tmp_dir()
         _setup_test(tmp_dir, mock_get_devicelist,
                     mock_read_sys_net, mock_sys_dev_path,
@@ -1670,6 +1678,8 @@ iface eth1 inet dhcp
 
 class TestSysConfigRendering(CiTestCase):
 
+    with_logs = True
+
     scripts_dir = '/etc/sysconfig/network-scripts'
     header = ('# Created by cloud-init on instance boot automatically, '
               'do not edit.\n#\n')
@@ -1918,6 +1928,9 @@ USERCTL=no
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
         self._compare_files_to_expected(entry['expected_sysconfig'], found)
         self._assert_headers(found)
+        self.assertNotIn(
+            'WARNING: Network config: ignoring eth0.101 device-level mtu',
+            self.logs.getvalue())
 
     def test_small_config(self):
         entry = NETWORK_CONFIGS['small']
@@ -1930,6 +1943,10 @@ USERCTL=no
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
         self._compare_files_to_expected(entry['expected_sysconfig'], found)
         self._assert_headers(found)
+        expected_msg = (
+            'WARNING: Network config: ignoring iface0 device-level mtu:8999'
+            ' because ipv4 subnet-level mtu:9000 provided.')
+        self.assertIn(expected_msg, self.logs.getvalue())
 
     def test_dhcpv6_only_config(self):
         entry = NETWORK_CONFIGS['dhcpv6_only']
@@ -2411,6 +2428,7 @@ class TestNetplanRoundTrip(CiTestCase):
 
 
 class TestEniRoundTrip(CiTestCase):
+
     def _render_and_read(self, network_config=None, state=None, eni_path=None,
                          netrules_path=None, dir=None):
         if dir is None:
@@ -2691,6 +2709,43 @@ class TestGetInterfaces(CiTestCase):
             any_order=True)
 
 
+class TestInterfaceHasOwnMac(CiTestCase):
+    """Test interface_has_own_mac.  This is admittedly a bit whitebox."""
+
+    @mock.patch('cloudinit.net.read_sys_net_int', return_value=None)
+    def test_non_strict_with_no_addr_assign_type(self, m_read_sys_net_int):
+        """If nic does not have addr_assign_type, it is not "stolen".
+
+        SmartOS containers do not provide the addr_assign_type in /sys.
+
+            $ ( cd /sys/class/net/eth0/ && grep -r . *)
+            address:90:b8:d0:20:e1:b0
+            addr_len:6
+            flags:0x1043
+            ifindex:2
+            mtu:1500
+            tx_queue_len:1
+            type:1
+        """
+        self.assertTrue(interface_has_own_mac("eth0"))
+
+    @mock.patch('cloudinit.net.read_sys_net_int', return_value=None)
+    def test_strict_with_no_addr_assign_type_raises(self, m_read_sys_net_int):
+        with self.assertRaises(ValueError):
+            interface_has_own_mac("eth0", True)
+
+    @mock.patch('cloudinit.net.read_sys_net_int')
+    def test_expected_values(self, m_read_sys_net_int):
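+        # sysfs addr_assign_type values: 0=permanent, 1=randomly
+        # generated, 2=stolen from another device, 3=set by userspace.
+        # Only 2 means the nic does not have its own mac.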
+        msg = "address_assign_type=%d said to not have own mac"
+        for address_assign_type in (0, 1, 3):
+            m_read_sys_net_int.return_value = address_assign_type
+            self.assertTrue(
+                interface_has_own_mac("eth0"), msg % address_assign_type)
+
+        m_read_sys_net_int.return_value = 2
+        self.assertFalse(interface_has_own_mac("eth0"))
+
+
 class TestGetInterfacesByMac(CiTestCase):
     _data = {'bonds': ['bond1'],
              'bridges': ['bridge1'],
diff --git a/tests/unittests/test_runs/test_simple_run.py b/tests/unittests/test_runs/test_simple_run.py
index 762974e..d67c422 100644
--- a/tests/unittests/test_runs/test_simple_run.py
+++ b/tests/unittests/test_runs/test_simple_run.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import copy
 import os
 
 
@@ -127,8 +128,9 @@ class TestSimpleRun(helpers.FilesystemMockingTestCase):
         """run_section forced skipped modules by using unverified_modules."""
 
         # re-write cloud.cfg with unverified_modules override
-        self.cfg['unverified_modules'] = ['spacewalk']  # Would have skipped
-        cloud_cfg = util.yaml_dumps(self.cfg)
+        cfg = copy.deepcopy(self.cfg)
+        cfg['unverified_modules'] = ['spacewalk']  # Would have skipped
+        cloud_cfg = util.yaml_dumps(cfg)
         util.ensure_dir(os.path.join(self.new_root, 'etc', 'cloud'))
         util.write_file(os.path.join(self.new_root, 'etc',
                                      'cloud', 'cloud.cfg'), cloud_cfg)
@@ -150,4 +152,30 @@ class TestSimpleRun(helpers.FilesystemMockingTestCase):
             "running unverified_modules: 'spacewalk'",
             self.logs.getvalue())
 
+    def test_none_ds_run_with_no_config_modules(self):
+        """run_section will report no modules run when none are configured."""
+
+        # re-write cloud.cfg with cloud_init_modules set to None
+        cfg = copy.deepcopy(self.cfg)
+        # Represent empty configuration in /etc/cloud/cloud.cfg
+        cfg['cloud_init_modules'] = None
+        cloud_cfg = util.yaml_dumps(cfg)
+        util.ensure_dir(os.path.join(self.new_root, 'etc', 'cloud'))
+        util.write_file(os.path.join(self.new_root, 'etc',
+                                     'cloud', 'cloud.cfg'), cloud_cfg)
+
+        initer = stages.Init()
+        initer.read_cfg()
+        initer.initialize()
+        initer.fetch()
+        initer.instancify()
+        initer.update()
+        initer.cloudify().run('consume_data', initer.consume_data,
+                              args=[PER_INSTANCE], freq=PER_INSTANCE)
+
+        mods = stages.Modules(initer)
+        (which_ran, failures) = mods.run_section('cloud_init_modules')
+        self.assertTrue(len(failures) == 0)
+        self.assertEqual([], which_ran)
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
index 84941c7..7a203ce 100644
--- a/tests/unittests/test_util.py
+++ b/tests/unittests/test_util.py
@@ -4,6 +4,7 @@ from __future__ import print_function
 
 import logging
 import os
+import re
 import shutil
 import stat
 import tempfile
@@ -265,26 +266,49 @@ class TestGetCmdline(helpers.TestCase):
         self.assertEqual("abcd 123", ret)
 
 
-class TestLoadYaml(helpers.TestCase):
+class TestLoadYaml(helpers.CiTestCase):
     mydefault = "7b03a8ebace993d806255121073fed52"
+    with_logs = True
 
     def test_simple(self):
         mydata = {'1': "one", '2': "two"}
         self.assertEqual(util.load_yaml(yaml.dump(mydata)), mydata)
 
     def test_nonallowed_returns_default(self):
+        '''Any unallowed types result in returning default; log the issue.'''
         # for now, anything not in the allowed list just returns the default.
         myyaml = yaml.dump({'1': "one"})
         self.assertEqual(util.load_yaml(blob=myyaml,
                                         default=self.mydefault,
                                         allowed=(str,)),
                          self.mydefault)
-
-    def test_bogus_returns_default(self):
+        regex = re.compile(
+            r'Yaml load allows \(<(class|type) \'str\'>,\) root types, but'
+            r' got dict')
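+        # (class|type) covers the repr difference between python3
+        # (<class 'str'>) and python2 (<type 'str'>).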
+        self.assertTrue(regex.search(self.logs.getvalue()),
+                        msg='Missing expected yaml load error')
+
+    def test_bogus_scan_error_returns_default(self):
+        '''On Yaml scan error, load_yaml returns the default and logs issue.'''
         badyaml = "1\n 2:"
         self.assertEqual(util.load_yaml(blob=badyaml,
                                         default=self.mydefault),
                          self.mydefault)
+        self.assertIn(
+            'Failed loading yaml blob. Invalid format at line 2 column 3:'
+            ' "mapping values are not allowed here',
+            self.logs.getvalue())
+
+    def test_bogus_parse_error_returns_default(self):
+        '''On Yaml parse error, load_yaml returns default and logs issue.'''
+        badyaml = "{}}"
+        self.assertEqual(util.load_yaml(blob=badyaml,
+                                        default=self.mydefault),
+                         self.mydefault)
+        self.assertIn(
+            'Failed loading yaml blob. Invalid format at line 1 column 3:'
+            " \"expected \'<document start>\', but found \'}\'",
+            self.logs.getvalue())
 
     def test_unsafe_types(self):
         # should not load complex types
@@ -444,6 +468,29 @@ class TestMountinfoParsing(helpers.ResourceUsingTestCase):
         self.assertIsNone(ret)
 
 
+class TestIsX86(helpers.CiTestCase):
+
+    def test_is_x86_matches_x86_types(self):
+        """is_x86 returns True if CPU architecture matches."""
+        matched_arches = ['x86_64', 'i386', 'i586', 'i686']
+        for arch in matched_arches:
+            self.assertTrue(
+                util.is_x86(arch), 'Expected is_x86 for arch "%s"' % arch)
+
+    def test_is_x86_unmatched_types(self):
+        """is_x86 returns Fale on non-intel x86 architectures."""
+        unmatched_arches = ['ia64', '9000/800', 'arm64v71']
+        for arch in unmatched_arches:
+            self.assertFalse(
+                util.is_x86(arch), 'Expected not is_x86 for arch "%s"' % arch)
+
+    @mock.patch('cloudinit.util.os.uname')
+    def test_is_x86_calls_uname_for_architecture(self, m_uname):
+        """is_x86 returns True if platform from uname matches."""
+        m_uname.return_value = [0, 1, 2, 3, 'x86_64']
+        self.assertTrue(util.is_x86())
+
+
 class TestReadDMIData(helpers.FilesystemMockingTestCase):
 
     def setUp(self):
@@ -805,6 +852,14 @@ class TestSubp(helpers.CiTestCase):
                                r'Missing #! in script\?',
                                util.subp, (noshebang,))
 
+    def test_subp_combined_stderr_stdout(self):
+        """Providing combine_capture as True redirects stderr to stdout."""
+        data = b'hello world'
+        (out, err) = util.subp(self.stdin2err, capture=True,
+                               combine_capture=True, decode=False, data=data)
+        self.assertEqual(b'', err)
+        self.assertEqual(data, out)
+
     def test_returns_none_if_no_capture(self):
         (out, err) = util.subp(self.stdin2out, data=b'', capture=False)
         self.assertIsNone(err)
@@ -1057,4 +1112,60 @@ class TestLoadShellContent(helpers.TestCase):
                 ''])))
 
 
+class TestGetProcEnv(helpers.TestCase):
+    """test get_proc_env."""
+    null = b'\x00'
+    simple1 = b'HOME=/'
+    simple2 = b'PATH=/bin:/sbin'
+    bootflag = b'BOOTABLE_FLAG=\x80'  # from LP: #1775371
+    mixed = b'MIXED=' + b'ab\xccde'
+
+    def _val_decoded(self, blob, encoding='utf-8', errors='replace'):
+        # return the value portion of key=val decoded.
+        return blob.split(b'=', 1)[1].decode(encoding, errors)
+
+    @mock.patch("cloudinit.util.load_file")
+    def test_non_utf8_in_environment(self, m_load_file):
+        """env may have non utf-8 decodable content."""
+        content = self.null.join(
+            (self.bootflag, self.simple1, self.simple2, self.mixed))
+        m_load_file.return_value = content
+
+        self.assertEqual(
+            {'BOOTABLE_FLAG': self._val_decoded(self.bootflag),
+             'HOME': '/', 'PATH': '/bin:/sbin',
+             'MIXED': self._val_decoded(self.mixed)},
+            util.get_proc_env(1))
+        self.assertEqual(1, m_load_file.call_count)
+
+    @mock.patch("cloudinit.util.load_file")
+    def test_encoding_none_returns_bytes(self, m_load_file):
+        """encoding none returns bytes."""
+        lines = (self.bootflag, self.simple1, self.simple2, self.mixed)
+        content = self.null.join(lines)
+        m_load_file.return_value = content
+
+        self.assertEqual(
+            dict([t.split(b'=') for t in lines]),
+            util.get_proc_env(1, encoding=None))
+        self.assertEqual(1, m_load_file.call_count)
+
+    @mock.patch("cloudinit.util.load_file")
+    def test_all_utf8_encoded(self, m_load_file):
+        """common path where only utf-8 decodable content."""
+        content = self.null.join((self.simple1, self.simple2))
+        m_load_file.return_value = content
+        self.assertEqual(
+            {'HOME': '/', 'PATH': '/bin:/sbin'},
+            util.get_proc_env(1))
+        self.assertEqual(1, m_load_file.call_count)
+
+    @mock.patch("cloudinit.util.load_file")
+    def test_non_existing_file_returns_empty_dict(self, m_load_file):
+        """as implemented, a non-existing pid returns empty dict.
+        This is how it was originally implemented."""
+        m_load_file.side_effect = OSError("File does not exist.")
+        self.assertEqual({}, util.get_proc_env(1))
+        self.assertEqual(1, m_load_file.call_count)
+
 # vi: ts=4 expandtab
diff --git a/tools/ds-identify b/tools/ds-identify
index 7fff5d1..ce0477a 100755
--- a/tools/ds-identify
+++ b/tools/ds-identify
@@ -1,4 +1,5 @@
 #!/bin/sh
+# shellcheck disable=2015,2039,2162,2166
 #
 # ds-identify is configured via /etc/cloud/ds-identify.cfg
 # or on the kernel command line. It takes primarily 2 inputs:
@@ -125,7 +126,6 @@ DI_ON_NOTFOUND=""
 DI_EC2_STRICT_ID_DEFAULT="true"
 
 _IS_IBM_CLOUD=""
-_IS_IBM_PROVISIONING=""
 
 error() {
     set -- "ERROR:" "$@";
@@ -187,6 +187,16 @@ block_dev_with_label() {
     return 0
 }
 
+ensure_sane_path() {
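+    # append any standard sbin/bin directory missing from PATH; the
+    # case pattern matches $t with or without a trailing slash.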
+    local t
+    for t in /sbin /usr/sbin /bin /usr/bin; do
+        case ":$PATH:" in
+            *:$t:*|*:$t/:*) continue;;
+        esac
+        PATH="${PATH:+${PATH}:}$t"
+    done
+}
+
 read_fs_info() {
     cached "${DI_BLKID_OUTPUT}" && return 0
     # do not rely on links in /dev/disk which might not be present yet.
@@ -211,7 +221,9 @@ read_fs_info() {
     # 'set --' will collapse multiple consecutive entries in IFS for
     # whitespace characters (\n, tab, " ") so we cannot rely on getting
     # empty lines in "$@" below.
-    IFS="$CR"; set -- $out; IFS="$oifs"
+
+    # shellcheck disable=2086
+    { IFS="$CR"; set -- $out; IFS="$oifs"; }
 
     for line in "$@"; do
         case "${line}" in
@@ -259,7 +271,7 @@ read_virt() {
 
 is_container() {
     case "${DI_VIRT}" in
-        lxc|lxc-libvirt|systemd-nspawn|docker|rkt) return 0;;
+        container-other|lxc|lxc-libvirt|systemd-nspawn|docker|rkt) return 0;;
         *) return 1;;
     esac
 }
@@ -311,6 +323,7 @@ read_dmi_product_serial() {
     DI_DMI_PRODUCT_SERIAL="$_RET"
 }
 
+# shellcheck disable=2034
 read_uname_info() {
     # run uname, and parse output.
     # uname is tricky to parse as it outputs always in a given order
@@ -330,6 +343,7 @@ read_uname_info() {
             return $ret
         }
     fi
+    # shellcheck disable=2086
     set -- $out
     DI_UNAME_KERNEL_NAME="$1"
     DI_UNAME_NODENAME="$2"
@@ -357,7 +371,8 @@ parse_yaml_array() {
     # the fix was to quote the open bracket (val=${val#"["}) (LP: #1689648)
     val=${val#"["}
     val=${val%"]"}
-    IFS=","; set -- $val; IFS="$oifs"
+    # shellcheck disable=2086
+    { IFS=","; set -- $val; IFS="$oifs"; }
     for tok in "$@"; do
         trim "$tok"
         unquote "$_RET"
@@ -393,7 +408,7 @@ read_datasource_list() {
     fi
     if [ -z "$dslist" ]; then
         dslist=${DI_DSLIST_DEFAULT}
-        debug 1 "no datasource_list found, using default:" $dslist
+        debug 1 "no datasource_list found, using default: $dslist"
     fi
     DI_DSLIST=$dslist
     return 0
@@ -404,7 +419,8 @@ read_pid1_product_name() {
     cached "${DI_PID_1_PRODUCT_NAME}" && return
     [ -r "${PATH_PROC_1_ENVIRON}" ] || return
     out=$(tr '\0' '\n' <"${PATH_PROC_1_ENVIRON}")
-    IFS="$CR"; set -- $out; IFS="$oifs"
+    # shellcheck disable=2086
+    { IFS="$CR"; set -- $out; IFS="$oifs"; }
     for tok in "$@"; do
         key=${tok%%=*}
         [ "$key" != "$tok" ] || continue
@@ -471,6 +487,7 @@ nocase_equal() {
     [ "$1" = "$2" ] && return 0
 
     local delim="-delim-"
+    # shellcheck disable=2018,2019
     out=$(echo "$1${delim}$2" | tr A-Z a-z)
     [ "${out#*${delim}}" = "${out%${delim}*}" ]
 }
@@ -547,11 +564,13 @@ check_config() {
     else
         files="$*"
     fi
-    set +f; set -- $files; set -f;
+    # shellcheck disable=2086
+    { set +f; set -- $files; set -f; }
     if [ "$1" = "$files" -a ! -f "$1" ]; then
         return 1
     fi
     local fname="" line="" ret="" found=0 found_fn=""
+    # shellcheck disable=2094
     for fname in "$@"; do
         [ -f "$fname" ] || continue
         while read line; do
@@ -601,7 +620,6 @@ dscheck_NoCloud() {
         *\ ds=nocloud*) return ${DS_FOUND};;
     esac
 
-    is_ibm_cloud && return ${DS_NOT_FOUND}
     for d in nocloud nocloud-net; do
         check_seed_dir "$d" meta-data user-data && return ${DS_FOUND}
         check_writable_seed_dir "$d" meta-data user-data && return ${DS_FOUND}
@@ -612,11 +630,12 @@ dscheck_NoCloud() {
     return ${DS_NOT_FOUND}
 }
 
+is_ds_enabled() {
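+    # space-pad the datasource list so substring removal acts as a
+    # whole-word membership test for $name.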
+    local name="$1" pad=" ${DI_DSLIST} "
+    [ "${pad#* $name }" != "${pad}" ]
+}
+
 check_configdrive_v2() {
-    is_ibm_cloud && return ${DS_NOT_FOUND}
-    if has_fs_with_label CONFIG-2 config-2; then
-        return ${DS_FOUND}
-    fi
     # look in /config-drive <vlc>/seed/config_drive for a directory
     # openstack/YYYY-MM-DD format with a file meta_data.json
     local d=""
@@ -631,6 +650,15 @@ check_configdrive_v2() {
         debug 1 "config drive seeded directory had only 'latest'"
         return ${DS_FOUND}
     fi
+
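+    # only let an IBMCloud system short-circuit CONFIG-2 detection
+    # when the IBMCloud datasource is enabled in datasource_list.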
+    local ibm_enabled=false
+    is_ds_enabled "IBMCloud" && ibm_enabled=true
+    debug 1 "is_ds_enabled(IBMCloud) = $ibm_enabled."
+    [ "$ibm_enabled" = "true" ] && is_ibm_cloud && return ${DS_NOT_FOUND}
+
+    if has_fs_with_label CONFIG-2 config-2; then
+        return ${DS_FOUND}
+    fi
     return ${DS_NOT_FOUND}
 }
 
@@ -787,7 +815,7 @@ ec2_read_strict_setting() {
     # 3. look for the key 'strict_id' (datasource/Ec2/strict_id)
     # only in cloud.cfg or cloud.cfg.d/EC2.cfg (case insensitive)
     local cfg="${PATH_ETC_CI_CFG}" cfg_d="${PATH_ETC_CI_CFG_D}"
-    if check_config strict_id $cfg "$cfg_d/*[Ee][Cc]2*.cfg"; then
+    if check_config strict_id "$cfg" "$cfg_d/*[Ee][Cc]2*.cfg"; then
         debug 2 "${_RET_fname} set strict_id to $_RET"
         return 0
     fi
@@ -972,12 +1000,14 @@ dscheck_SmartOS() {
     # joyent cloud has two virt types: kvm and container
     # on kvm, product name on joyent public cloud shows 'SmartDC HVM'
     # on the container platform, uname's version has: BrandZ virtual linux
+    # for container, we also verify that the socketfile exists to protect
+    # against embedded containers (lxd running on brandz)
     local smartdc_kver="BrandZ virtual linux"
+    local metadata_sockfile="${PATH_ROOT}/native/.zonecontrol/metadata.sock"
     dmi_product_name_matches "SmartDC*" && return $DS_FOUND
-    if [ "${DI_UNAME_KERNEL_VERSION}" = "${smartdc_kver}" ] &&
-       [ "${DI_VIRT}" = "container-other" ]; then
-       return ${DS_FOUND}
-    fi
+    [ "${DI_UNAME_KERNEL_VERSION}" = "${smartdc_kver}" ] &&
+        [ -e "${metadata_sockfile}" ] &&
+        return ${DS_FOUND}
     return ${DS_NOT_FOUND}
 }
 
@@ -994,7 +1024,7 @@ dscheck_Scaleway() {
         *\ scaleway\ *) return ${DS_FOUND};;
     esac
 
-    if [ -f ${PATH_ROOT}/var/run/scaleway ]; then
+    if [ -f "${PATH_ROOT}/var/run/scaleway" ]; then
         return ${DS_FOUND}
     fi
 
@@ -1149,6 +1179,7 @@ found() {
 }
 
 trim() {
+    # shellcheck disable=2048,2086
     set -- $*
     _RET="$*"
 }
@@ -1169,7 +1200,7 @@ _read_config() {
     # if no parameters are set, modifies _rc scoped environment vars.
     # if keyname is provided, then returns found value of that key.
     local keyname="${1:-_unset}"
-    local line="" hash="#" ckey="" key="" val=""
+    local line="" hash="#" key="" val=""
     while read line; do
         line=${line%%${hash}*}
         key="${line%%:*}"
@@ -1247,7 +1278,8 @@ parse_policy() {
 
     local mode="" report="" found="" maybe="" notfound=""
     local oifs="$IFS" tok="" val=""
-    IFS=","; set -- $policy; IFS="$oifs"
+    # shellcheck disable=2086
+    { IFS=","; set -- $policy; IFS="$oifs"; }
     for tok in "$@"; do
         val=${tok#*=}
         case "$tok" in
@@ -1314,15 +1346,15 @@ manual_clean_and_existing() {
 }
 
 read_uptime() {
-    local up idle
+    local up _
     _RET="${UNAVAILABLE}"
-    [ -f "$PATH_PROC_UPTIME" ] &&
-        read up idle < "$PATH_PROC_UPTIME" && _RET="$up"
+    [ -f "$PATH_PROC_UPTIME" ] && read up _ < "$PATH_PROC_UPTIME" &&
+        _RET="$up"
     return
 }
 
 _main() {
-    local dscheck="" ret_dis=1 ret_en=0
+    local dscheck_fn="" ret_dis=1 ret_en=0
 
     read_uptime
     debug 1 "[up ${_RET}s]" "ds-identify $*"
@@ -1357,8 +1389,9 @@ _main() {
         return
     fi
 
-    # if there is only a single entry in $DI_DSLIST
+    # shellcheck disable=2086
     set -- $DI_DSLIST
+    # if there is only a single entry in $DI_DSLIST
     if [ $# -eq 1 ] || [ $# -eq 2 -a "$2" = "None" ] ; then
         debug 1 "single entry in datasource_list ($DI_DSLIST) use that."
         found "$@"
@@ -1391,6 +1424,7 @@ _main() {
     done
 
     debug 2 "found=${found# } maybe=${maybe# }"
+    # shellcheck disable=2086
     set -- $found
     if [ $# -ne 0 ]; then
         if [ $# -eq 1 ]; then
@@ -1406,6 +1440,7 @@ _main() {
         return
     fi
 
+    # shellcheck disable=2086
     set -- $maybe
     if [ $# -ne 0 -a "${DI_ON_MAYBE}" != "none" ]; then
         debug 1 "$# datasources returned maybe: $*"
@@ -1434,18 +1469,19 @@ _main() {
         *) error "Unexpected result";;
     esac
     debug 1 "$msg"
-    return $ret
+    return "$ret"
 }
 
 main() {
     local ret=""
+    ensure_sane_path
     [ -d "$PATH_RUN_CI" ] || mkdir -p "$PATH_RUN_CI"
     if [ "${1:+$1}" != "--force" ] && [ -f "$PATH_RUN_CI_CFG" ] &&
         [ -f "$PATH_RUN_DI_RESULT" ]; then
         if read ret < "$PATH_RUN_DI_RESULT"; then
             if [ "$ret" = "0" ] || [ "$ret" = "1" ]; then
                 debug 2 "used cached result $ret. pass --force to re-run."
-                return $ret;
+                return "$ret";
             fi
             debug 1 "previous run returned unexpected '$ret'. Re-running."
         else
@@ -1457,7 +1493,7 @@ main() {
     echo "$ret" > "$PATH_RUN_DI_RESULT"
     read_uptime
     debug 1 "[up ${_RET}s]" "returning $ret"
-    return $ret
+    return "$ret"
 }
 
 noop() {
diff --git a/tools/read-dependencies b/tools/read-dependencies
index 421f470..b4656e6 100755
--- a/tools/read-dependencies
+++ b/tools/read-dependencies
@@ -51,6 +51,10 @@ MAYBE_RELIABLE_YUM_INSTALL = [
     """,
     'reliable-yum-install']
 
+ZYPPER_INSTALL = [
+    'zypper', '--non-interactive', '--gpg-auto-import-keys', 'install',
+    '--auto-agree-with-licenses']
+
 DRY_DISTRO_INSTALL_PKG_CMD = {
     'centos': ['yum', 'install', '--assumeyes'],
     'redhat': ['yum', 'install', '--assumeyes'],
@@ -61,8 +65,8 @@ DISTRO_INSTALL_PKG_CMD = {
     'redhat': MAYBE_RELIABLE_YUM_INSTALL,
     'debian': ['apt', 'install', '-y'],
     'ubuntu': ['apt', 'install', '-y'],
-    'opensuse': ['zypper', 'install'],
-    'suse': ['zypper', 'install']
+    'opensuse': ZYPPER_INSTALL,
+    'suse': ZYPPER_INSTALL,
 }
 
 
diff --git a/tools/run-centos b/tools/run-centos
index cb241ee..4506b20 100755
--- a/tools/run-centos
+++ b/tools/run-centos
@@ -1,18 +1,17 @@
 #!/bin/bash
 # This file is part of cloud-init. See LICENSE file for license information.
 
-set -u
-
-VERBOSITY=0
-TEMP_D=""
-KEEP=false
-CONTAINER=""
-
-error() { echo "$@" 1>&2; }
-fail() { [ $# -eq 0 ] || error "$@"; exit 1; }
-errorrc() { local r=$?; error "$@" "ret=$r"; return $r; }
+deprecated() {
+cat <<EOF
+             ================ DEPRECATED ================
+             | run-centos is deprecated. Please replace |
+             | your usage with tools/run-container .    |
+             ================ DEPRECATED ================
+EOF
+}
 
 Usage() {
+    deprecated
     cat <<EOF
 Usage: ${0##*/} [ options ] version
 
@@ -34,319 +33,40 @@ Usage: ${0##*/} [ options ] version
     Example:
       * ${0##*/} --rpm --srpm --unittest 6
 EOF
+    deprecated
 }
 
 bad_Usage() { Usage 1>&2; [ $# -eq 0 ] || error "$@"; return 1; }
-cleanup() {
-    if [ -n "$CONTAINER" -a "$KEEP" = "false" ]; then
-        delete_container "$CONTAINER"
-    fi
-    [ -z "${TEMP_D}" -o ! -d "${TEMP_D}" ] || rm -Rf "${TEMP_D}"
-}
-
-debug() {
-    local level=${1}; shift;
-    [ "${level}" -gt "${VERBOSITY}" ] && return
-    error "${@}"
-}
-
-
-inside_as() {
-    # inside_as(container_name, user, cmd[, args])
-    # executes cmd with args inside container as user in users home dir.
-    local name="$1" user="$2"
-    shift 2
-    if [ "$user" = "root" ]; then
-        inside "$name" "$@"
-        return
-    fi
-    local stuffed="" b64=""
-    stuffed=$(getopt --shell sh --options "" -- -- "$@")
-    stuffed=${stuffed# -- }
-    b64=$(printf "%s\n" "$stuffed" | base64 --wrap=0)
-    inside "$name" su "$user" -c \
-        'cd; eval set -- "$(echo '$b64' | base64 --decode)" && exec "$@"'
-}
-
-inside_as_cd() {
-    local name="$1" user="$2" dir="$3"
-    shift 3
-    inside_as "$name" "$user" sh -c 'cd "$0" && exec "$@"' "$dir" "$@"
-}
-
-inside() {
-    local name="$1"
-    shift
-    lxc exec "$name" -- "$@"
-}
-
-inject_cloud_init(){
-    # take current cloud-init git dir and put it inside $name at
-    # ~$user/cloud-init.
-    local name="$1" user="$2" dirty="$3"
-    local changes="" top_d="" dname="cloud-init" pstat=""
-    local gitdir="" commitish=""
-    gitdir=$(git rev-parse --git-dir) || {
-        errorrc "Failed to get git dir in $PWD";
-        return
-    }
-    local t=${gitdir%/*}
-    case "$t" in
-        */worktrees) 
-            if [ -f "${t%worktrees}/config" ]; then
-                gitdir="${t%worktrees}"
-            fi
-    esac
-
-    # attempt to get branch name.
-    commitish=$(git rev-parse --abbrev-ref HEAD) || {
-        errorrc "Failed git rev-parse --abbrev-ref HEAD"
-        return
-    }
-    if [ "$commitish" = "HEAD" ]; then
-        # detached head
-        commitish=$(git rev-parse HEAD) || {
-            errorrc "failed git rev-parse HEAD"
-            return
-        }
-    fi
-
-    local local_changes=false
-    if ! git diff --quiet "$commitish"; then
-        # there are local changes not committed.
-        local_changes=true
-        if [ "$dirty" = "false" ]; then
-            error "WARNING: You had uncommitted changes.  Those changes will "
-            error "be put into 'local-changes.diff' inside the container. "
-            error "To test these changes you must pass --dirty."
-        fi
-    fi
-
-    debug 1 "collecting ${gitdir} ($dname) into user $user in $name."
-    tar -C "${gitdir}" -cpf - . |
-        inside_as "$name" "$user" sh -ec '
-            dname=$1
-            commitish=$2
-            rm -Rf "$dname"
-            mkdir -p $dname/.git
-            cd $dname/.git
-            tar -xpf -
-            cd ..
-            git config core.bare false
-            out=$(git checkout $commitish 2>&1) ||
-                { echo "failed git checkout $commitish: $out" 1>&2; exit 1; }
-            out=$(git checkout . 2>&1) ||
-                { echo "failed git checkout .: $out" 1>&2; exit 1; }
-            ' extract "$dname" "$commitish"
-    [ "${PIPESTATUS[*]}" = "0 0" ] || {
-        error "Failed to push tarball of '$gitdir' into $name" \
-            " for user $user (dname=$dname)"
-        return 1
-    }
 
-    echo "local_changes=$local_changes dirty=$dirty"
-    if [ "$local_changes" = "true" ]; then
-        git diff "$commitish" |
-            inside_as "$name" "$user" sh -exc '
-                cd "$1"
-                if [ "$2" = "true" ]; then
-                    git apply
-                else
-                    cat > local-changes.diff
-                fi
-                ' insert_changes "$dname" "$dirty"
-        [ "${PIPESTATUS[*]}" = "0 0" ] || {
-            error "Failed to apply local changes."
-            return 1
-        }
-    fi
-
-    return 0
-}
-
-prep() {
-    # we need some very basic things not present in the container.
-    #  - git
-    #  - tar (CentOS 6 lxc container does not have it)
-    #  - python-argparse (or python3)
-    local needed="" pair="" pkg="" cmd="" needed=""
-    for pair in tar:tar git:git; do
-        pkg=${pair#*:}
-        cmd=${pair%%:*}
-        command -v $cmd >/dev/null 2>&1 || needed="${needed} $pkg"
-    done
-    if ! command -v python3; then
-        python -c "import argparse" >/dev/null 2>&1 ||
-            needed="${needed} python-argparse"
-    fi
-    needed=${needed# }
-    if [ -z "$needed" ]; then
-        error "No prep packages needed"
-        return 0
+main() {
+    if [ "$1" = "-h" -o "$1" == "--help" ]; then
+        Usage 1>&2;
+        exit 0;
     fi
-    error "Installing prep packages: ${needed}"
-    set -- $needed
-    local n max r
-    n=0; max=10;
-    bcmd="yum install --downloadonly --assumeyes --setopt=keepcache=1"
-    while n=$(($n+1)); do
-       error ":: running $bcmd $* [$n/$max]"
-       $bcmd "$@"
-       r=$?
-       [ $r -eq 0 ] && break
-       [ $n -ge $max ] && { error "gave up on $bcmd"; exit $r; }
-       nap=$(($n*5))
-       error ":: failed [$r] ($n/$max). sleeping $nap."
-       sleep $nap
-    done
-    error ":: running yum install --cacheonly --assumeyes $*"
-    yum install --cacheonly --assumeyes "$@"
-}
-
-start_container() {
-    local src="$1" name="$2"
-    debug 1 "starting container $name from '$src'"
-    lxc launch "$src" "$name" || {
-        errorrc "Failed to start container '$name' from '$src'";
+    local pt="" mydir=$(dirname "$0")
+    local run_container="$mydir/run-container"
+    if [ ! -x "$run_container" ]; then
+        bad_Usage "Could not find run-container."
         return
-    }
-    CONTAINER=$name
-
-    local out="" ret=""
-    debug 1 "waiting for networking"
-    out=$(inside "$name" sh -c '
-        i=0
-        while [ $i -lt 60 ]; do
-            getent hosts mirrorlist.centos.org && exit 0
-            sleep 2
-        done' 2>&1)
-    ret=$?
-    if [ $ret -ne 0 ]; then
-        error "Waiting for network in container '$name' failed. [$ret]"
-        error "$out"
-        return $ret
-    fi
-
-    if [ ! -z "${http_proxy-}" ]; then
-        debug 1 "configuring proxy ${http_proxy}"
-        inside "$name" sh -c "echo proxy=$http_proxy >> /etc/yum.conf"
-        inside "$name" sed -i s/enabled=1/enabled=0/ /etc/yum/pluginconf.d/fastestmirror.conf
     fi
-}
-
-delete_container() {
-    debug 1 "removing container $1 [--keep to keep]"
-    lxc delete --force "$1"
-}
-
-main() {
-    local short_opts="ahkrsuv"
-    local long_opts="artifact,dirty,help,keep,rpm,srpm,unittest,verbose"
-    local getopt_out=""
-    getopt_out=$(getopt --name "${0##*/}" \
-        --options "${short_opts}" --long "${long_opts}" -- "$@") &&
-        eval set -- "${getopt_out}" ||
-        { bad_Usage; return; }
-
-    local cur="" next=""
-    local artifact="" keep="" rpm="" srpm="" unittest="" version=""
-    local dirty=false
-
+
+    pt=( "$run_container" )
     while [ $# -ne 0 ]; do
         cur="${1:-}"; next="${2:-}";
         case "$cur" in
-            -a|--artifact) artifact=1;;
-               --dirty) dirty=true;;
-            -h|--help) Usage ; exit 0;;
-            -k|--keep) KEEP=true;;
-            -r|--rpm) rpm=1;;
-            -s|--srpm) srpm=1;;
-            -u|--unittest) unittest=1;;
-            -v|--verbose) VERBOSITY=$((${VERBOSITY}+1));;
-            --) shift; break;;
+            -r|--rpm) cur="--package";;
+            -s|--srpm) cur="--source-package";;
+            -a|--artifact) cur="--artifacts=.";;
+            6|7) cur="centos/$cur";;
         esac
+        pt[${#pt[@]}]="$cur"
         shift;
     done
-
-    [ $# -eq 1 ] || { bad_Usage "ERROR: Must provide version!"; return; }
-    version="$1"
-    case "$version" in
-        6|7) :;;
-        *) error "Expected version of 6 or 7, not '$version'"; return;;
-    esac
-
-    TEMP_D=$(mktemp -d "${TMPDIR:-/tmp}/${0##*/}.XXXXXX") ||
-        fail "failed to make tempdir"
-    trap cleanup EXIT
-
-    # program starts here
-    local uuid="" name="" user="ci-test" cdir=""
-    cdir="/home/$user/cloud-init"
-    uuid=$(uuidgen -t) || { error "no uuidgen"; return 1; }
-    name="cloud-init-centos-${uuid%%-*}"
-
-    start_container "images:centos/$version" "$name"
-
-    # prep the container (install very basic dependencies)
-    inside "$name" bash -s prep <"$0" ||
-        { errorrc "Failed to prep container $name"; return; }
-
-    # add the user
-    inside "$name" useradd "$user"
-
-    debug 1 "inserting cloud-init"
-    inject_cloud_init "$name" "$user" "$dirty" || {
-        errorrc "FAIL: injecting cloud-init into $name failed."
-        return
-    }
-
-    inside_as_cd "$name" root "$cdir" \
-        ./tools/read-dependencies --distro=centos --test-distro || {
-        errorrc "FAIL: failed to install dependencies with read-dependencies"
-        return
-    }
-
-    local errors=0
-    inside_as_cd "$name" "$user" "$cdir" \
-        sh -ec "git status" ||
-            { errorrc "git checkout failed."; errors=$(($errors+1)); }
-
-    if [ -n "$unittest" ]; then
-        debug 1 "running unit tests."
-        inside_as_cd "$name" "$user" "$cdir" \
-            nosetests tests/unittests cloudinit ||
-            { errorrc "nosetests failed."; errors=$(($errors+1)); }
-    fi
-
-    if [ -n "$srpm" ]; then
-        debug 1 "building srpm."
-        inside_as_cd "$name" "$user" "$cdir" ./packages/brpm --srpm ||
-            { errorrc "brpm --srpm."; errors=$(($errors+1)); }
-    fi
-
-    if [ -n "$rpm" ]; then
-        debug 1 "building rpm."
-        inside_as_cd "$name" "$user" "$cdir" ./packages/brpm ||
-            { errorrc "brpm failed."; errors=$(($errors+1)); }
-    fi
-
-    if [ -n "$artifact" ]; then
-        for built_rpm in $(inside "$name" sh -c "echo $cdir/*.rpm"); do
-            lxc file pull "$name/$built_rpm" .
-        done
-    fi
-
-    if [ "$errors" != "0" ]; then
-        error "there were $errors errors."
-        return 1
-    fi
-    return 0
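+    # Illustratively, a legacy invocation such as:
+    #   run-centos --rpm --unittest 7
+    # now execs:
+    #   run-container --package --unittest centos/7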
+    deprecated
+    exec "${pt[@]}"
 }
 
-if [ "${1:-}" = "prep" ]; then
-    shift
-    prep "$@"
-else
-    main "$@"
-fi
+main "$@"
+
 # vi: ts=4 expandtab
diff --git a/tools/run-container b/tools/run-container
new file mode 100755
index 0000000..499e85b
--- /dev/null
+++ b/tools/run-container
@@ -0,0 +1,590 @@
+#!/bin/bash
+# This file is part of cloud-init. See LICENSE file for license information.
+#
+# shellcheck disable=2015,2016,2039,2162,2166
+
+set -u
+
+VERBOSITY=0
+KEEP=false
+CONTAINER=""
+DEFAULT_WAIT_MAX=30
+
+error() { echo "$@" 1>&2; }
+fail() { [ $# -eq 0 ] || error "$@"; exit 1; }
+errorrc() { local r=$?; error "$@" "ret=$r"; return $r; }
+
+Usage() {
+    cat <<EOF
+Usage: ${0##*/} [ options ] [images:]image-ref
+
+    This utility makes it easier to run tests and to build binary and
+        source packages inside an LXC container of the specified image.
+
+    To see images available, run 'lxc image list images:'
+    Example input:
+       centos/7
+       opensuse/42.3
+       debian/10
+
+    options:
+      -a | --artifacts DIR   copy build artifacts out to DIR.
+                             by default artifacts are not copied out.
+           --dirty           apply local changes before running tests.
+                             If not provided, a clean checkout of the branch
+                             is tested.  Inside the container, changes are
+                             saved to local-changes.diff.
+      -k | --keep            keep container after tests
+           --pyexe V         python version to use.  Default=auto.
+                             Should be name of an executable.
+                             ('python2' or 'python3')
+      -p | --package         build a binary package (.deb or .rpm)
+      -s | --source-package  build source package (debuild -S or srpm)
+      -u | --unittest        run unit tests
+
+    Example:
+      * ${0##*/} --package --source-package --unittest centos/6
+EOF
+}
+
+bad_Usage() { Usage 1>&2; [ $# -eq 0 ] || error "$@"; return 1; }
+cleanup() {
+    if [ -n "$CONTAINER" ]; then
+        if [ "$KEEP" = "true" ]; then
+            error "not deleting container '$CONTAINER' due to --keep"
+        else
+            delete_container "$CONTAINER"
+        fi
+    fi
+}
+
+debug() {
+    local level=${1}; shift;
+    [ "${level}" -gt "${VERBOSITY}" ] && return
+    error "${@}"
+}
+
+
+inside_as() {
+    # inside_as(container_name, user, cmd[, args])
+    # executes cmd with args inside container as user in users home dir.
+    local name="$1" user="$2"
+    shift 2
+    if [ "$user" = "root" ]; then
+        inside "$name" "$@"
+        return
+    fi
+    local stuffed="" b64=""
+    stuffed=$(getopt --shell sh --options "" -- -- "$@")
+    stuffed=${stuffed# -- }
+    b64=$(printf "%s\n" "$stuffed" | base64 --wrap=0)
+    inside "$name" su "$user" -c \
+        'cd; eval set -- "$(echo '"$b64"' | base64 --decode)" && exec "$@"';
+}
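+# Example (hypothetical container and user names):
+#   inside_as ci-test ci-test ls -l "a b c"
+# The argv is getopt-quoted then base64-encoded so arguments with spaces
+# survive the extra shell level that `su -c` introduces.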
+
+inside_as_cd() {
+    local name="$1" user="$2" dir="$3"
+    shift 3
+    inside_as "$name" "$user" sh -c 'cd "$0" && exec "$@"' "$dir" "$@"
+}
+
+inside() {
+    local name="$1"
+    shift
+    lxc exec "$name" -- "$@"
+}
+
+inject_cloud_init(){
+    # take current cloud-init git dir and put it inside $name at
+    # ~$user/cloud-init.
+    local name="$1" user="$2" dirty="$3"
+    local dname="cloud-init" gitdir="" commitish=""
+    gitdir=$(git rev-parse --git-dir) || {
+        errorrc "Failed to get git dir in $PWD";
+        return
+    }
+    local t=${gitdir%/*}
+    case "$t" in
+        */worktrees)
+            if [ -f "${t%worktrees}/config" ]; then
+                gitdir="${t%worktrees}"
+            fi
+    esac
+
+    # attempt to get branch name.
+    commitish=$(git rev-parse --abbrev-ref HEAD) || {
+        errorrc "Failed git rev-parse --abbrev-ref HEAD"
+        return
+    }
+    if [ "$commitish" = "HEAD" ]; then
+        # detached head
+        commitish=$(git rev-parse HEAD) || {
+            errorrc "failed git rev-parse HEAD"
+            return
+        }
+    fi
+
+    local local_changes=false
+    if ! git diff --quiet "$commitish"; then
+        # there are local changes not committed.
+        local_changes=true
+        if [ "$dirty" = "false" ]; then
+            error "WARNING: You had uncommitted changes.  Those changes will "
+            error "be put into 'local-changes.diff' inside the container. "
+            error "To test these changes you must pass --dirty."
+        fi
+    fi
+
+    debug 1 "collecting ${gitdir} ($dname) into user $user in $name."
+    tar -C "${gitdir}" -cpf - . |
+        inside_as "$name" "$user" sh -ec '
+            dname=$1
+            commitish=$2
+            rm -Rf "$dname"
+            mkdir -p $dname/.git
+            cd $dname/.git
+            tar -xpf -
+            cd ..
+            git config core.bare false
+            out=$(git checkout $commitish 2>&1) ||
+                { echo "failed git checkout $commitish: $out" 1>&2; exit 1; }
+            out=$(git checkout . 2>&1) ||
+                { echo "failed git checkout .: $out" 1>&2; exit 1; }
+            ' extract "$dname" "$commitish"
+    [ "${PIPESTATUS[*]}" = "0 0" ] || {
+        error "Failed to push tarball of '$gitdir' into $name" \
+            " for user $user (dname=$dname)"
+        return 1
+    }
+
+    echo "local_changes=$local_changes dirty=$dirty"
+    if [ "$local_changes" = "true" ]; then
+        git diff "$commitish" |
+            inside_as "$name" "$user" sh -exc '
+                cd "$1"
+                if [ "$2" = "true" ]; then
+                    git apply
+                else
+                    cat > local-changes.diff
+                fi
+                ' insert_changes "$dname" "$dirty"
+        [ "${PIPESTATUS[*]}" = "0 0" ] || {
+            error "Failed to apply local changes."
+            return 1
+        }
+    fi
+
+    return 0
+}
+
+get_os_info_in() {
+    # get_os_info_in(container_name): set OS_NAME, OS_VERSION for container.
+    local name="$1" data=""
+    [ -n "${OS_VERSION:-}" -a -n "${OS_NAME:-}" ] && return 0
+    data=$(run_self_inside "$name" os_info) ||
+        { errorrc "Failed to get os-info in container $name"; return; }
+    eval "$data" && [ -n "${OS_VERSION:-}" -a -n "${OS_NAME:-}" ] || return
+    debug 1 "determined $name is $OS_NAME/$OS_VERSION"
+}
+
+os_info() {
+    get_os_info || return
+    echo "OS_NAME=$OS_NAME"
+    echo "OS_VERSION=$OS_VERSION"
+}
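+# A sketch of the round-trip that get_os_info_in relies on (assuming a
+# centos/7 container): os_info prints "OS_NAME=centos" and "OS_VERSION=7"
+# on stdout, and eval of that output sets both variables in the caller.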
+
+get_os_info() {
+    # run inside container, set OS_NAME, OS_VERSION
+    # example OS_NAME are centos, debian, opensuse
+    [ -n "${OS_NAME:-}" -a -n "${OS_VERSION:-}" ] && return 0
+    if [ -f /etc/os-release ]; then
+        OS_NAME=$(sh -c '. /etc/os-release; echo $ID')
+        OS_VERSION=$(sh -c '. /etc/os-release; echo $VERSION_ID')
+        if [ -z "$OS_VERSION" ]; then
+            local pname=""
+            pname=$(sh -c '. /etc/os-release; echo $PRETTY_NAME')
+            case "$pname" in
+                *buster*) OS_VERSION=10;;
+                *sid*) OS_VERSION="sid";;
+            esac
+        fi
+    elif [ -f /etc/centos-release ]; then
+        local line=""
+        read line < /etc/centos-release
+        case "$line" in
+            CentOS\ *\ 6.*) OS_VERSION="6"; OS_NAME="centos";;
+        esac
+    fi
+    [ -n "${OS_NAME:-}" -a -n "${OS_VERSION:-}" ] ||
+        { error "Unable to determine OS_NAME/OS_VERSION"; return 1; }
+}
+
+yum_install() {
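+    # Download-only first, with retries (mirrors can be flaky), then
+    # install from the populated cache so a flaky mirror cannot leave a
+    # partially installed set.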
+    local n=0 max=10 ret
+    bcmd="yum install --downloadonly --assumeyes --setopt=keepcache=1"
+    while n=$((n+1)); do
+       error ":: running $bcmd $* [$n/$max]"
+       $bcmd "$@"
+       ret=$?
+       [ $ret -eq 0 ] && break
+       [ $n -ge $max ] && { error "gave up on $bcmd"; exit $ret; }
+       nap=$((n*5))
+       error ":: failed [$ret] ($n/$max). sleeping $nap."
+       sleep $nap
+    done
+    error ":: running yum install --cacheonly --assumeyes $*"
+    yum install --cacheonly --assumeyes "$@"
+}
+
+zypper_install() {
+    local pkgs="$*"
+    set -- zypper --non-interactive --gpg-auto-import-keys install \
+        --auto-agree-with-licenses "$@"
+    debug 1 ":: installing $pkgs with zypper: $*"
+    "$@"
+}
+
+apt_install() {
+    apt-get update -q && apt-get install --no-install-recommends "$@"
+}
+
+install_packages() {
+    get_os_info || return
+    case "$OS_NAME" in
+        centos) yum_install "$@";;
+        opensuse) zypper_install "$@";;
+        debian|ubuntu) apt_install "$@";;
+        *) error "Do not know how to install packages on ${OS_NAME}";
+           return 1;;
+    esac
+}
+
+prep() {
+    # we need some very basic things not present in the container.
+    #  - git
+    #  - tar (CentOS 6 lxc container does not have it)
+    #  - python-argparse (or python3)
+    local needed="" pair="" pkg="" cmd="" needed=""
+    local pairs="tar:tar git:git"
+    local pyexe="$1"
+    get_os_info
+    local py2pkg="python2" py3pkg="python3"
+    case "$OS_NAME" in
+        opensuse)
+            py2pkg="python-base"
+            py3pkg="python3-base";;
+    esac
+
+    case "$pyexe" in
+        python2) pairs="$pairs python2:$py2pkg";;
+        python3) pairs="$pairs python3:$py3pkg";;
+    esac
+
+    for pair in $pairs; do
+        pkg=${pair#*:}
+        cmd=${pair%%:*}
+        command -v "$cmd" >/dev/null 2>&1 || needed="${needed} $pkg"
+    done
+    if [ "$OS_NAME" = "centos" -a "$pyexe" = "python2" ]; then
+        python -c "import argparse" >/dev/null 2>&1 ||
+            needed="${needed} python-argparse"
+    fi
+    needed=${needed# }
+    if [ -z "$needed" ]; then
+        error "No prep packages needed"
+        return 0
+    fi
+    error "Installing prep packages: ${needed}"
+    # shellcheck disable=SC2086
+    set -- $needed
+    install_packages "$@"
+}
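+# prep runs inside the container; main() drives it like (hypothetical
+# container name):
+#   run_self_inside ci-test prep python3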
+
+nose() {
+    local pyexe="$1" cmd=""
+    shift
+    get_os_info
+    if [ "$OS_NAME/$OS_VERSION" = "centos/6" ]; then
+        cmd="nosetests"
+    else
+        cmd="$pyexe -m nose"
+    fi
+    ${cmd} "$@"
+}
+
+is_done_cloudinit() {
+    _RET=""
+    [ -e "/run/cloud-init/result.json" ]
+}
+
+is_done_systemd() {
+    local s="" num="$1"
+    s=$(systemctl is-system-running 2>&1);
+    _RET="$? $s"
+    case "$s" in
+        initializing|starting) return 1;;
+        *[Ff]ailed*connect*bus*)
+            # warn if not the first run.
+            [ "$num" -lt 5 ] ||
+                error "Failed to connect to systemd bus [${_RET%% *}]";
+            return 1;;
+    esac
+    return 0
+}
+
+is_done_other() {
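+    # neither systemd nor cloud-init is present: treat working name
+    # resolution as "boot finished".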
+    local out=""
+    out=$(getent hosts ubuntu.com 2>&1)
+    return
+}
+
+wait_inside() {
+    local name="$1" max="${2:-${DEFAULT_WAIT_MAX}}" debug=${3:-0}
+    local i=0 check="is_done_other";
+    if [ -e /run/systemd ]; then
+        check=is_done_systemd
+    elif [ -x /usr/bin/cloud-init ]; then
+        check=is_done_cloudinit
+    fi
+    [ "$debug" != "0" ] && debug 1 "check=$check"
+    while ! $check $i && i=$((i+1)); do
+        [ "$i" -ge "$max" ] && exit 1
+        [ "$debug" = "0" ] || echo -n .
+        sleep 1
+    done
+    if [ "$debug" != "0" ]; then
+        read up _ </proc/uptime
+        debug 1 "[$name ${i:+done after $i }up=$up${_RET:+ ${_RET}}]"
+    fi
+}
+
+wait_for_boot() {
+    local name="$1"
+    local out="" ret="" wtime=$DEFAULT_WAIT_MAX
+    get_os_info_in "$name"
+    [ "$OS_NAME" = "debian" ] && wtime=300 &&
+        debug 1 "on debian we wait for ${wtime}s"
+    debug 1 "waiting for boot of $name"
+    run_self_inside "$name" wait_inside "$name" "$wtime" "$VERBOSITY" ||
+        { errorrc "wait inside $name failed."; return; }
+
+    if [ ! -z "${http_proxy-}" ]; then
+        if [ "$OS_NAME" = "centos" ]; then
+            debug 1 "configuring proxy ${http_proxy}"
+            inside "$name" sh -c "echo proxy=$http_proxy >> /etc/yum.conf"
+            inside "$name" sed -i s/enabled=1/enabled=0/ \
+                /etc/yum/pluginconf.d/fastestmirror.conf
+        else
+            debug 1 "do not know how to configure proxy on $OS_NAME"
+        fi
+    fi
+}
+
+start_container() {
+    local src="$1" name="$2"
+    debug 1 "starting container $name from '$src'"
+    lxc launch "$src" "$name" || {
+        errorrc "Failed to start container '$name' from '$src'";
+        return
+    }
+    CONTAINER=$name
+    wait_for_boot "$name"
+}
+
+delete_container() {
+    debug 1 "removing container $1 [--keep to keep]"
+    lxc delete --force "$1"
+}
+
+run_self_inside() {
+    # run_self_inside(container, args)
+    local name="$1"
+    shift
+    inside "$name" bash -s "$@" <"$0"
+}
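+# Self-invocation works because the script is fed to 'bash -s' in the
+# container and the case dispatch at the bottom of this file runs the
+# named function there.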
+
+run_self_inside_as_cd() {
+    local name="$1" user="$2" dir="$3"
+    shift 3
+    inside_as_cd "$name" "$user" "$dir" bash -s "$@" <"$0"
+}
+
+main() {
+    local short_opts="a:hknpsuv"
+    local long_opts="artifacts:,dirty,help,keep,name:,pyexe:,package,source-package,unittest,verbose"
+    local getopt_out=""
+    getopt_out=$(getopt --name "${0##*/}" \
+        --options "${short_opts}" --long "${long_opts}" -- "$@") &&
+        eval set -- "${getopt_out}" ||
+        { bad_Usage; return; }
+
+    local cur="" next=""
+    local package="" source_package="" unittest="" name=""
+    local dirty=false pyexe="auto" artifact_d="."
+
+    while [ $# -ne 0 ]; do
+        cur="${1:-}"; next="${2:-}";
+        case "$cur" in
+            -a|--artifacts) artifact_d="$next";;
+               --dirty) dirty=true;;
+            -h|--help) Usage ; exit 0;;
+            -k|--keep) KEEP=true;;
+            -n|--name) name="$next"; shift;;
+               --pyexe) pyexe=$next; shift;;
+            -p|--package) package=1;;
+            -s|--source-package) source_package=1;;
+            -u|--unittest) unittest=1;;
+            -v|--verbose) VERBOSITY=$((VERBOSITY+1));;
+            --) shift; break;;
+        esac
+        shift;
+    done
+
+    [ $# -eq 1 ] || { bad_Usage "Expected 1 arg, got $# ($*)"; return; }
+    local img_ref_in="$1" img_ref=""
+    case "${img_ref_in}" in
+        *:*) img_ref="${img_ref_in}";;
+        *) img_ref="images:${img_ref_in}";;
+    esac
+
+    # program starts here
+    local out="" user="ci-test" cdir="" home=""
+    home="/home/$user"
+    cdir="$home/cloud-init"
+    if [ -z "$name" ]; then
+        if out=$(petname 2>&1); then
+            name="ci-${out}"
+        elif out=$(uuidgen -t 2>&1); then
+            name="ci-${out%%-*}"
+        else
+            error "Must provide name or have petname or uuidgen"
+            return 1
+        fi
+    fi
+
+    trap cleanup EXIT
+
+    start_container "$img_ref" "$name" ||
+        { errorrc "Failed to start container for $img_ref"; return; }
+
+    get_os_info_in "$name" ||
+        { errorrc "failed to get os_info in $name"; return; }
+
+    if [ "$pyexe" = "auto" ]; then
+        case "$OS_NAME/$OS_VERSION" in
+            centos/*|opensuse/*) pyexe=python2;;
+            *) pyexe=python3;;
+        esac
+        debug 1 "set pyexe=$pyexe for $OS_NAME/$OS_VERSION"
+    fi
+
+    # prep the container (install very basic dependencies)
+    run_self_inside "$name" prep "$pyexe" ||
+        { errorrc "Failed to prep container $name"; return; }
+
+    # add the user
+    inside "$name" useradd "$user" --create-home "--home-dir=$home" ||
+        { errorrc "Failed to add user '$user' in '$name'"; return 1; }
+
+    debug 1 "inserting cloud-init"
+    inject_cloud_init "$name" "$user" "$dirty" || {
+        errorrc "FAIL: injecting cloud-init into $name failed."
+        return
+    }
+
+    inside_as_cd "$name" root "$cdir" \
+        $pyexe ./tools/read-dependencies "--distro=${OS_NAME}" \
+            --test-distro || {
+        errorrc "FAIL: failed to install dependencies with read-dependencies"
+        return
+    }
+
+    local errors=( )
+    inside_as_cd "$name" "$user" "$cdir" git status || {
+        errorrc "git checkout failed."
+        errors[${#errors[@]}]="git checkout";
+    }
+
+    if [ -n "$unittest" ]; then
+        debug 1 "running unit tests."
+        run_self_inside_as_cd "$name" "$user" "$cdir" nose "$pyexe" \
+            tests/unittests cloudinit/ || {
+                errorrc "nosetests failed.";
+                errors[${#errors[@]}]="nosetests"
+            }
+    fi
+
+    local build_pkg="" build_srcpkg="" pkg_ext="" distflag=""
+    case "$OS_NAME" in
+        centos) distflag="--distro=redhat";;
+        opensuse) distflag="--distro=suse";;
+    esac
+
+    case "$OS_NAME" in
+        debian|ubuntu)
+            build_pkg="./packages/bddeb -d" 
+            build_srcpkg="./packages/bddeb -S -d"
+            pkg_ext=".deb";;
+        centos|opensuse)
+            build_pkg="./packages/brpm $distflag"
+            build_srcpkg="./packages/brpm $distflag --srpm"
+            pkg_ext=".rpm";;
+    esac
+    if [ -n "$source_package" ]; then
+        [ -n "$build_pkg" ] || {
+            error "Unknown package command for $OS_NAME"
+            return 1
+        }
+        debug 1 "building source package with $build_srcpkg."
+        # shellcheck disable=SC2086
+        inside_as_cd "$name" "$user" "$cdir" $pyexe $build_srcpkg || {
+            errorrc "failed: $build_srcpkg";
+            errors[${#errors[@]}]="source package"
+        }
+    fi
+
+    if [ -n "$package" ]; then
+        [ -n "$build_srcpkg" ] || {
+            error "Unknown build source command for $OS_NAME"
+            return 1
+        }
+        debug 1 "building binary package with $build_pkg."
+        inside_as_cd "$name" "$user" "$cdir" $pyexe $build_pkg || {
+            errorrc "failed: $build_pkg";
+            errors[${#errors[@]}]="binary package"
+        }
+    fi
+
+    if [ -n "$artifact_d" ]; then
+        local art=""
+        artifact_d="${artifact_d%/}/"
+        [ -d "${artifact_d}" ] || mkdir -p "$artifact_d" || {
+            errorrc "failed to create artifact dir '$artifact_d'"
+            return
+        }
+
+        for art in $(inside "$name" sh -c "echo $cdir/*${pkg_ext}"); do
+            lxc file pull "$name/$art" "$artifact_d" || {
+                errorrc "Failed to pull '$name/$art' to ${artifact_d}"
+                errors[${#errors[@]}]="artifact copy: $art"
+                continue
+            }
+            debug 1 "wrote ${artifact_d}${art##*/}"
+        done
+    fi
+
+    if [ "${#errors[@]}" != "0" ]; then
+        local e=""
+        error "there were ${#errors[@]} errors."
+        for e in "${errors[@]}"; do
+            error "  $e"
+        done
+        return 1
+    fi
+    return 0
+}
+
+case "${1:-}" in
+    prep|os_info|wait_inside|nose) _n=$1; shift; "$_n" "$@";;
+    *) main "$@";;
+esac
+
+# vi: ts=4 expandtab
diff --git a/tox.ini b/tox.ini
index 818ade3..2fb3209 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,5 +1,5 @@
 [tox]
-envlist = py27, py3, flake8, xenial, pylint
+envlist = py27, py3, xenial, pycodestyle, pyflakes, pylint
 recreate = True
 
 [testenv]
@@ -7,14 +7,11 @@ commands = python -m nose {posargs:tests/unittests cloudinit}
 setenv =
     LC_ALL = en_US.utf-8
 
-[testenv:flake8]
+[testenv:pycodestyle]
 basepython = python3
 deps =
-    pycodestyle==2.3.1
-    pyflakes==1.5.0
-    flake8==3.3.0
-    hacking==0.13.0
-commands = {envpython} -m flake8 {posargs:cloudinit/ tests/ tools/}
+    pycodestyle==2.4.0
+commands = {envpython} -m pycodestyle {posargs:cloudinit/ tests/ tools/}
 
 # https://github.com/gabrielfalcao/HTTPretty/issues/223
 setenv =
@@ -118,6 +115,11 @@ deps =
 commands = {envpython} -m pycodestyle {posargs:cloudinit/ tests/ tools/}
 deps = pycodestyle
 
+[testenv:pyflakes]
+commands = {envpython} -m pyflakes {posargs:cloudinit/ tests/ tools/}
+deps =
+    pyflakes==1.6.0
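+# this environment can be run on its own with: tox -e pyflakes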
+
 [testenv:tip-pyflakes]
 commands = {envpython} -m pyflakes {posargs:cloudinit/ tests/ tools/}
 deps = pyflakes
