cloud-init-dev team mailing list archive

[Merge] ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic

Chad Smith has proposed merging ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic.

Commit message:
New upstream snapshot for SRU into bionic per SRU process.

LP: #1795953

Requested reviews:
  cloud-init commiters (cloud-init-dev)
Related bugs:
  Bug #1795741 in cloud-init: "Release 18.4"
  https://bugs.launchpad.net/cloud-init/+bug/1795741

For more details, see:
https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/356093
-- 
The attached diff has been truncated due to its size.
Your team cloud-init commiters is requested to review the proposed merge of ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic.
diff --git a/.pylintrc b/.pylintrc
index 3bfa0c8..e376b48 100644
--- a/.pylintrc
+++ b/.pylintrc
@@ -61,7 +61,8 @@ ignored-modules=
 # List of class names for which member attributes should not be checked (useful
 # for classes with dynamically set attributes). This supports the use of
 # qualified names.
-ignored-classes=optparse.Values,thread._local
+# argparse.Namespace from https://github.com/PyCQA/pylint/issues/2413
+ignored-classes=argparse.Namespace,optparse.Values,thread._local
 
 # List of members which are set dynamically and missed by pylint inference
 # system, and so shouldn't trigger E1101 when accessed. Python regular
diff --git a/ChangeLog b/ChangeLog
index 72c5287..9c043b0 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,86 @@
+18.4:
+ - add rtd example docs about new standardized keys
+ - use ds._crawled_metadata instance attribute if set when writing
+   instance-data.json
+ - ec2: update crawled metadata. add standardized keys
+ - tests: allow skipping an entire cloud_test without running.
+ - tests: disable lxd tests on cosmic
+ - cii-tests: use unittest2.SkipTest in ntp_chrony due to new deps
+ - lxd: adjust to snap installed lxd.
+ - docs: surface experimental doc in instance-data.json
+ - tests: fix ec2 integration tests. process meta_data instead of meta-data
+ - Add support for Infiniband network interfaces (IPoIB). [Mark Goddard]
+ - cli: add cloud-init query subcommand to query instance metadata
+ - tools/tox-venv: update for new features.
+ - pylint: ignore warning assignment-from-no-return for _write_network
+ - stages: Fix bug causing datasource to have incorrect sys_cfg.
+   (LP: #1787459)
+ - Remove dead-code _write_network distro implementations.
+ - net_util: ensure static configs have netmask in translate_network result
+   [Thomas Berger] (LP: #1792454)
+ - Fall back to root:root on syslog permissions if other options fail.
+   [Robert Schweikert]
+ - tests: Add mock for util.get_hostname. [Robert Schweikert] (LP: #1792799)
+ - ds-identify: doc string cleanup.
+ - OpenStack: Support setting mac address on bond.
+   [Fabian Wiesel] (LP: #1682064)
+ - bash_completion/cloud-init: fix shell syntax error.
+ - EphemeralIPv4Network: Be more explicit when adding default route.
+   (LP: #1792415)
+ - OpenStack: support reading of newer versions of metadata.
+ - OpenStack: fix bug causing 'latest' version to be used from network.
+   (LP: #1792157)
+ - user-data: jinja template to render instance-data.json in cloud-config
+   (LP: #1791781)
+ - config: disable ssh access to a configured user account
+ - tests: print failed testname instead of docstring upon failure
+ - tests: Disallow use of util.subp except for where needed.
+ - sysconfig: refactor sysconfig to accept distro specific templates paths
+ - Add unit tests for config/cc_ssh.py [Francis Ginther]
+ - Fix the built-in cloudinit/tests/helpers:skipIf
+ - read-version: enhance error message [Joshua Powers]
+ - hyperv_reporting_handler: simplify threaded publisher
+ - VMWare: Fix a network config bug in vm with static IPv4 and no gateway.
+   [Pengpeng Sun] (LP: #1766538)
+ - logging: Add logging config type hyperv for reporting via Azure KVP
+   [Andy Liu]
+ - tests: disable other snap test as well [Joshua Powers]
+ - tests: disable snap, fix write_files binary [Joshua Powers]
+ - Add datasource Oracle Compute Infrastructure (OCI).
+ - azure: allow azure to generate network configuration from IMDS per boot.
+ - Scaleway: Add network configuration to the DataSource [Louis Bouchard]
+ - docs: Fix example cloud-init analyze command to match output.
+   [Wesley Gao]
+ - netplan: Correctly render macaddress on a bonds and bridges when
+   provided. (LP: #1784699)
+ - tools: Add 'net-convert' subcommand to 'cloud-init devel'.
+ - redhat: remove ssh keys on new instance. (LP: #1781094)
+ - Use typeset or local in profile.d scripts. (LP: #1784713)
+ - OpenNebula: Fix null gateway6 [Akihiko Ota] (LP: #1768547)
+ - oracle: fix detect_openstack to report True on OracleCloud.com DMI data
+   (LP: #1784685)
+ - tests: improve LXDInstance trying to workaround or catch bug.
+ - update_metadata re-config on every boot comments and tests not quite
+   right [Mike Gerdts]
+ - tests: Collect build_info from system if available.
+ - pylint: Fix pylint warnings reported in pylint 2.0.0.
+ - get_linux_distro: add support for rhel via redhat-release.
+ - get_linux_distro: add support for centos6 and rawhide flavors of redhat
+   (LP: #1781229)
+ - tools: add '--debug' to tools/net-convert.py
+ - tests: bump the version of paramiko to 2.4.1.
+ - docs: note in rtd about avoiding /tmp when writing files (LP: #1727876)
+ - ubuntu,centos,debian: get_linux_distro to align with platform.dist
+   (LP: #1780481)
+ - Fix boothook docs on environment variable name (INSTANCE_I ->
+   INSTANCE_ID) [Marc Tamsky]
+ - update_metadata: a datasource can support network re-config every boot
+ - tests: drop salt-minion integration test (LP: #1778737)
+ - Retry on failed import of gpg receive keys.
+ - tools: Fix run-container when neither source nor binary package requested.
+ - docs: Fix a small spelling error. [Oz N Tiram]
+ - tox: use simplestreams from git repository rather than bzr.
+
 18.3:
  - docs: represent sudo:false in docs for user_groups config module
  - Explicitly prevent `sudo` access for user module
diff --git a/bash_completion/cloud-init b/bash_completion/cloud-init
index 581432c..8c25032 100644
--- a/bash_completion/cloud-init
+++ b/bash_completion/cloud-init
@@ -10,7 +10,7 @@ _cloudinit_complete()
     cur_word="${COMP_WORDS[COMP_CWORD]}"
     prev_word="${COMP_WORDS[COMP_CWORD-1]}"
 
-    subcmds="analyze clean collect-logs devel dhclient-hook features init modules single status"
+    subcmds="analyze clean collect-logs devel dhclient-hook features init modules query single status"
     base_params="--help --file --version --debug --force"
     case ${COMP_CWORD} in
         1)
@@ -28,7 +28,7 @@ _cloudinit_complete()
                     COMPREPLY=($(compgen -W "--help --tarfile --include-userdata" -- $cur_word))
                     ;;
                 devel)
-                    COMPREPLY=($(compgen -W "--help schema" -- $cur_word))
+                    COMPREPLY=($(compgen -W "--help schema net-convert" -- $cur_word))
                     ;;
                 dhclient-hook|features)
                     COMPREPLY=($(compgen -W "--help" -- $cur_word))
@@ -40,6 +40,8 @@ _cloudinit_complete()
                     COMPREPLY=($(compgen -W "--help --mode" -- $cur_word))
                     ;;
 
+                query)
+                    COMPREPLY=($(compgen -W "--all --help --instance-data --list-keys --user-data --vendor-data --debug" -- $cur_word));;
                 single)
                     COMPREPLY=($(compgen -W "--help --name --frequency --report" -- $cur_word))
                     ;;
@@ -59,6 +61,11 @@ _cloudinit_complete()
                 --frequency)
                     COMPREPLY=($(compgen -W "--help instance always once" -- $cur_word))
                     ;;
+                net-convert)
+                    COMPREPLY=($(compgen -W "--help --network-data --kind --directory --output-kind" -- $cur_word))
+                    ;;
+                render)
+                    COMPREPLY=($(compgen -W "--help --instance-data --debug" -- $cur_word));;
                 schema)
                     COMPREPLY=($(compgen -W "--help --config-file --doc --annotate" -- $cur_word))
                     ;;
@@ -74,4 +81,4 @@ _cloudinit_complete()
 }
 complete -F _cloudinit_complete cloud-init
 
-# vi: syntax=bash expandtab
+# vi: syntax=sh expandtab
diff --git a/cloudinit/analyze/tests/test_dump.py b/cloudinit/analyze/tests/test_dump.py
index f4c4284..db2a667 100644
--- a/cloudinit/analyze/tests/test_dump.py
+++ b/cloudinit/analyze/tests/test_dump.py
@@ -5,8 +5,8 @@ from textwrap import dedent
 
 from cloudinit.analyze.dump import (
     dump_events, parse_ci_logline, parse_timestamp)
-from cloudinit.util import subp, write_file
-from cloudinit.tests.helpers import CiTestCase
+from cloudinit.util import which, write_file
+from cloudinit.tests.helpers import CiTestCase, mock, skipIf
 
 
 class TestParseTimestamp(CiTestCase):
@@ -15,21 +15,9 @@ class TestParseTimestamp(CiTestCase):
         """Logs with cloud-init detailed formats will be properly parsed."""
         trusty_fmt = '%Y-%m-%d %H:%M:%S,%f'
         trusty_stamp = '2016-09-12 14:39:20,839'
-
-        parsed = parse_timestamp(trusty_stamp)
-
-        # convert ourselves
         dt = datetime.strptime(trusty_stamp, trusty_fmt)
-        expected = float(dt.strftime('%s.%f'))
-
-        # use date(1)
-        out, _err = subp(['date', '+%s.%3N', '-d', trusty_stamp])
-        timestamp = out.strip()
-        date_ts = float(timestamp)
-
-        self.assertEqual(expected, parsed)
-        self.assertEqual(expected, date_ts)
-        self.assertEqual(date_ts, parsed)
+        self.assertEqual(
+            float(dt.strftime('%s.%f')), parse_timestamp(trusty_stamp))
 
     def test_parse_timestamp_handles_syslog_adding_year(self):
         """Syslog timestamps lack a year. Add year and properly parse."""
@@ -39,17 +27,9 @@ class TestParseTimestamp(CiTestCase):
         # convert stamp ourselves by adding the missing year value
         year = datetime.now().year
         dt = datetime.strptime(syslog_stamp + " " + str(year), syslog_fmt)
-        expected = float(dt.strftime('%s.%f'))
-        parsed = parse_timestamp(syslog_stamp)
-
-        # use date(1)
-        out, _ = subp(['date', '+%s.%3N', '-d', syslog_stamp])
-        timestamp = out.strip()
-        date_ts = float(timestamp)
-
-        self.assertEqual(expected, parsed)
-        self.assertEqual(expected, date_ts)
-        self.assertEqual(date_ts, parsed)
+        self.assertEqual(
+            float(dt.strftime('%s.%f')),
+            parse_timestamp(syslog_stamp))
 
     def test_parse_timestamp_handles_journalctl_format_adding_year(self):
         """Journalctl precise timestamps lack a year. Add year and parse."""
@@ -59,37 +39,22 @@ class TestParseTimestamp(CiTestCase):
         # convert stamp ourselves by adding the missing year value
         year = datetime.now().year
         dt = datetime.strptime(journal_stamp + " " + str(year), journal_fmt)
-        expected = float(dt.strftime('%s.%f'))
-        parsed = parse_timestamp(journal_stamp)
-
-        # use date(1)
-        out, _ = subp(['date', '+%s.%6N', '-d', journal_stamp])
-        timestamp = out.strip()
-        date_ts = float(timestamp)
-
-        self.assertEqual(expected, parsed)
-        self.assertEqual(expected, date_ts)
-        self.assertEqual(date_ts, parsed)
+        self.assertEqual(
+            float(dt.strftime('%s.%f')), parse_timestamp(journal_stamp))
 
+    @skipIf(not which("date"), "'date' command not available.")
     def test_parse_unexpected_timestamp_format_with_date_command(self):
-        """Dump sends unexpected timestamp formats to data for processing."""
+        """Dump sends unexpected timestamp formats to date for processing."""
         new_fmt = '%H:%M %m/%d %Y'
         new_stamp = '17:15 08/08'
-
         # convert stamp ourselves by adding the missing year value
         year = datetime.now().year
         dt = datetime.strptime(new_stamp + " " + str(year), new_fmt)
-        expected = float(dt.strftime('%s.%f'))
-        parsed = parse_timestamp(new_stamp)
 
         # use date(1)
-        out, _ = subp(['date', '+%s.%6N', '-d', new_stamp])
-        timestamp = out.strip()
-        date_ts = float(timestamp)
-
-        self.assertEqual(expected, parsed)
-        self.assertEqual(expected, date_ts)
-        self.assertEqual(date_ts, parsed)
+        with self.allow_subp(["date"]):
+            self.assertEqual(
+                float(dt.strftime('%s.%f')), parse_timestamp(new_stamp))
 
 
 class TestParseCILogLine(CiTestCase):
@@ -135,7 +100,9 @@ class TestParseCILogLine(CiTestCase):
             'timestamp': timestamp}
         self.assertEqual(expected, parse_ci_logline(line))
 
-    def test_parse_logline_returns_event_for_finish_events(self):
+    @mock.patch("cloudinit.analyze.dump.parse_timestamp_from_date")
+    def test_parse_logline_returns_event_for_finish_events(self,
+                                                           m_parse_from_date):
         """parse_ci_logline returns a finish event for a parsed log line."""
         line = ('2016-08-30 21:53:25.972325+00:00 y1 [CLOUDINIT]'
                 ' handlers.py[DEBUG]: finish: modules-final: SUCCESS: running'
@@ -147,7 +114,10 @@ class TestParseCILogLine(CiTestCase):
             'origin': 'cloudinit',
             'result': 'SUCCESS',
             'timestamp': 1472594005.972}
+        m_parse_from_date.return_value = "1472594005.972"
         self.assertEqual(expected, parse_ci_logline(line))
+        m_parse_from_date.assert_has_calls(
+            [mock.call("2016-08-30 21:53:25.972325+00:00")])
 
 
 SAMPLE_LOGS = dedent("""\
@@ -162,10 +132,16 @@ Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]:\
 class TestDumpEvents(CiTestCase):
     maxDiff = None
 
-    def test_dump_events_with_rawdata(self):
+    @mock.patch("cloudinit.analyze.dump.parse_timestamp_from_date")
+    def test_dump_events_with_rawdata(self, m_parse_from_date):
         """Rawdata is split and parsed into a tuple of events and data"""
+        m_parse_from_date.return_value = "1472594005.972"
         events, data = dump_events(rawdata=SAMPLE_LOGS)
         expected_data = SAMPLE_LOGS.splitlines()
+        self.assertEqual(
+            [mock.call("2016-08-30 21:53:25.972325+00:00")],
+            m_parse_from_date.call_args_list)
+        self.assertEqual(expected_data, data)
         year = datetime.now().year
         dt1 = datetime.strptime(
             'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y')
@@ -183,12 +159,14 @@ class TestDumpEvents(CiTestCase):
             'result': 'SUCCESS',
             'timestamp': 1472594005.972}]
         self.assertEqual(expected_events, events)
-        self.assertEqual(expected_data, data)
 
-    def test_dump_events_with_cisource(self):
+    @mock.patch("cloudinit.analyze.dump.parse_timestamp_from_date")
+    def test_dump_events_with_cisource(self, m_parse_from_date):
         """Cisource file is read and parsed into a tuple of events and data."""
         tmpfile = self.tmp_path('logfile')
         write_file(tmpfile, SAMPLE_LOGS)
+        m_parse_from_date.return_value = 1472594005.972
+
         events, data = dump_events(cisource=open(tmpfile))
         year = datetime.now().year
         dt1 = datetime.strptime(
@@ -208,3 +186,5 @@ class TestDumpEvents(CiTestCase):
             'timestamp': 1472594005.972}]
         self.assertEqual(expected_events, events)
         self.assertEqual(SAMPLE_LOGS.splitlines(), [d.strip() for d in data])
+        m_parse_from_date.assert_has_calls(
+            [mock.call("2016-08-30 21:53:25.972325+00:00")])
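The rewritten tests above follow the new CiTestCase policy ("tests:
Disallow use of util.subp except for where needed"): subprocess calls must
be explicitly whitelisted. A sketch of the pattern, assuming the helpers
behave as they do in this diff (hypothetical test, not from the tree):

    from cloudinit.tests.helpers import CiTestCase, skipIf
    from cloudinit.util import subp, which

    class ExampleSubpTest(CiTestCase):
        @skipIf(not which('date'), "'date' command not available.")
        def test_date_runs(self):
            # util.subp raises unless the command is whitelisted here.
            with self.allow_subp(['date']):
                out, _err = subp(['date', '+%s'])
            self.assertTrue(out.strip().isdigit())
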
diff --git a/cloudinit/apport.py b/cloudinit/apport.py
index 130ff26..22cb7fd 100644
--- a/cloudinit/apport.py
+++ b/cloudinit/apport.py
@@ -30,6 +30,7 @@ KNOWN_CLOUD_NAMES = [
     'NoCloud',
     'OpenNebula',
     'OpenStack',
+    'Oracle',
     'OVF',
     'OpenTelekomCloud',
     'Scaleway',
diff --git a/cloudinit/cloud.py b/cloudinit/cloud.py
index 6d12c43..7ae98e1 100644
--- a/cloudinit/cloud.py
+++ b/cloudinit/cloud.py
@@ -47,7 +47,7 @@ class Cloud(object):
 
     @property
     def cfg(self):
-        # Ensure that not indirectly modified
+        # Ensure that cfg is not indirectly modified
         return copy.deepcopy(self._cfg)
 
     def run(self, name, functor, args, freq=None, clear_on_fail=False):
@@ -61,7 +61,7 @@ class Cloud(object):
             return None
         return fn
 
-    # The rest of thes are just useful proxies
+    # The rest of these are just useful proxies
     def get_userdata(self, apply_filter=True):
         return self.datasource.get_userdata(apply_filter)
 
diff --git a/cloudinit/cmd/devel/__init__.py b/cloudinit/cmd/devel/__init__.py
index e69de29..3ae28b6 100644
--- a/cloudinit/cmd/devel/__init__.py
+++ b/cloudinit/cmd/devel/__init__.py
@@ -0,0 +1,25 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Common cloud-init devel commandline utility functions."""
+
+
+import logging
+
+from cloudinit import log
+from cloudinit.stages import Init
+
+
+def addLogHandlerCLI(logger, log_level):
+    """Add a commandline logging handler to emit messages to stderr."""
+    formatter = logging.Formatter('%(levelname)s: %(message)s')
+    log.setupBasicLogging(log_level, formatter=formatter)
+    return logger
+
+
+def read_cfg_paths():
+    """Return a Paths object based on the system configuration on disk."""
+    init = Init(ds_deps=[])
+    init.read_cfg()
+    return init.paths
+
+# vi: ts=4 expandtab
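A short sketch of how these two helpers are meant to be consumed by the
devel subcommands below (illustrative only; read_cfg_paths needs a readable
cloud-init configuration on disk):

    from cloudinit import log
    from cloudinit.cmd.devel import addLogHandlerCLI, read_cfg_paths

    LOG = log.getLogger('example')        # hypothetical logger name
    addLogHandlerCLI(LOG, log.DEBUG)      # CLI messages go to stderr
    paths = read_cfg_paths()              # Paths built from system config
    LOG.debug('run_dir is %s', paths.run_dir)   # e.g. /run/cloud-init
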
diff --git a/tools/net-convert.py b/cloudinit/cmd/devel/net_convert.py
index 68559cb..a0f58a0 100755
--- a/tools/net-convert.py
+++ b/cloudinit/cmd/devel/net_convert.py
@@ -1,42 +1,70 @@
-#!/usr/bin/python3
 # This file is part of cloud-init. See LICENSE file for license information.
 
+"""Debug network config format conversions."""
 import argparse
 import json
 import os
+import sys
 import yaml
 
 from cloudinit.sources.helpers import openstack
+from cloudinit.sources import DataSourceAzure as azure
 
-from cloudinit.net import eni
-from cloudinit.net import netplan
-from cloudinit.net import network_state
-from cloudinit.net import sysconfig
+from cloudinit import distros
+from cloudinit.net import eni, netplan, network_state, sysconfig
+from cloudinit import log
 
+NAME = 'net-convert'
 
-def main():
-    parser = argparse.ArgumentParser()
-    parser.add_argument("--network-data", "-p", type=open,
+
+def get_parser(parser=None):
+    """Build or extend and arg parser for net-convert utility.
+
+    @param parser: Optional existing ArgumentParser instance representing the
+        subcommand which will be extended to support the args of this utility.
+
+    @returns: ArgumentParser with proper argument configuration.
+    """
+    if not parser:
+        parser = argparse.ArgumentParser(prog=NAME, description=__doc__)
+    parser.add_argument("-p", "--network-data", type=open,
                         metavar="PATH", required=True)
-    parser.add_argument("--kind", "-k",
-                        choices=['eni', 'network_data.json', 'yaml'],
+    parser.add_argument("-k", "--kind",
+                        choices=['eni', 'network_data.json', 'yaml',
+                                 'azure-imds'],
                         required=True)
     parser.add_argument("-d", "--directory",
                         metavar="PATH",
                         help="directory to place output in",
                         required=True)
+    parser.add_argument("-D", "--distro",
+                        choices=[item for sublist in
+                                 distros.OSFAMILIES.values()
+                                 for item in sublist],
+                        required=True)
     parser.add_argument("-m", "--mac",
                         metavar="name,mac",
                         action='append',
                         help="interface name to mac mapping")
-    parser.add_argument("--output-kind", "-ok",
+    parser.add_argument("--debug", action='store_true',
+                        help='enable debug logging to stderr.')
+    parser.add_argument("-O", "--output-kind",
                         choices=['eni', 'netplan', 'sysconfig'],
                         required=True)
-    args = parser.parse_args()
+    return parser
+
+
+def handle_args(name, args):
+    if not args.directory.endswith("/"):
+        args.directory += "/"
 
     if not os.path.isdir(args.directory):
         os.makedirs(args.directory)
 
+    if args.debug:
+        log.setupBasicLogging(level=log.DEBUG)
+    else:
+        log.setupBasicLogging(level=log.WARN)
     if args.mac:
         known_macs = {}
         for item in args.mac:
@@ -53,32 +81,52 @@ def main():
         pre_ns = yaml.load(net_data)
         if 'network' in pre_ns:
             pre_ns = pre_ns.get('network')
-        print("Input YAML")
-        print(yaml.dump(pre_ns, default_flow_style=False, indent=4))
+        if args.debug:
+            sys.stderr.write('\n'.join(
+                ["Input YAML",
+                 yaml.dump(pre_ns, default_flow_style=False, indent=4), ""]))
         ns = network_state.parse_net_config_data(pre_ns)
-    else:
+    elif args.kind == 'network_data.json':
         pre_ns = openstack.convert_net_json(
             json.loads(net_data), known_macs=known_macs)
         ns = network_state.parse_net_config_data(pre_ns)
+    elif args.kind == 'azure-imds':
+        pre_ns = azure.parse_network_config(json.loads(net_data))
+        ns = network_state.parse_net_config_data(pre_ns)
 
     if not ns:
         raise RuntimeError("No valid network_state object created from"
                            "input data")
 
-    print("\nInternal State")
-    print(yaml.dump(ns, default_flow_style=False, indent=4))
+    if args.debug:
+        sys.stderr.write('\n'.join([
+            "", "Internal State",
+            yaml.dump(ns, default_flow_style=False, indent=4), ""]))
+    distro_cls = distros.fetch(args.distro)
+    distro = distro_cls(args.distro, {}, None)
+    config = {}
     if args.output_kind == "eni":
         r_cls = eni.Renderer
+        config = distro.renderer_configs.get('eni')
     elif args.output_kind == "netplan":
         r_cls = netplan.Renderer
+        config = distro.renderer_configs.get('netplan')
     else:
         r_cls = sysconfig.Renderer
+        config = distro.renderer_configs.get('sysconfig')
 
-    r = r_cls()
+    r = r_cls(config=config)
+    sys.stderr.write(''.join([
+        "Read input format '%s' from '%s'.\n" % (
+            args.kind, args.network_data.name),
+        "Wrote output format '%s' to '%s'\n" % (
+            args.output_kind, args.directory)]) + "\n")
     r.render_network_state(network_state=ns, target=args.directory)
 
 
 if __name__ == '__main__':
-    main()
+    args = get_parser().parse_args()
+    handle_args(NAME, args)
+
 
 # vi: ts=4 expandtab
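For reference, one way to exercise the relocated net-convert code from
Python, mirroring 'cloud-init devel net-convert'. This is a hedged sketch:
config.yaml is a hypothetical YAML network config that must exist in the
working directory:

    from cloudinit.cmd.devel import net_convert

    args = net_convert.get_parser().parse_args([
        '--network-data', 'config.yaml', '--kind', 'yaml',
        '--directory', './out', '--distro', 'ubuntu',
        '--output-kind', 'netplan'])
    # Parses the YAML config and renders netplan output into ./out/.
    net_convert.handle_args(net_convert.NAME, args)
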
diff --git a/cloudinit/cmd/devel/parser.py b/cloudinit/cmd/devel/parser.py
index acacc4e..99a234c 100644
--- a/cloudinit/cmd/devel/parser.py
+++ b/cloudinit/cmd/devel/parser.py
@@ -5,8 +5,10 @@
 """Define 'devel' subcommand argument parsers to include in cloud-init cmd."""
 
 import argparse
-from cloudinit.config.schema import (
-    get_parser as schema_parser, handle_schema_args)
+from cloudinit.config import schema
+
+from . import net_convert
+from . import render
 
 
 def get_parser(parser=None):
@@ -17,10 +19,17 @@ def get_parser(parser=None):
     subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand')
     subparsers.required = True
 
-    parser_schema = subparsers.add_parser(
-        'schema', help='Validate cloud-config files or document schema')
-    # Construct schema subcommand parser
-    schema_parser(parser_schema)
-    parser_schema.set_defaults(action=('schema', handle_schema_args))
+    subcmds = [
+        ('schema', 'Validate cloud-config files for document schema',
+         schema.get_parser, schema.handle_schema_args),
+        (net_convert.NAME, net_convert.__doc__,
+         net_convert.get_parser, net_convert.handle_args),
+        (render.NAME, render.__doc__,
+         render.get_parser, render.handle_args)
+    ]
+    for (subcmd, helpmsg, get_parser, handler) in subcmds:
+        parser = subparsers.add_parser(subcmd, help=helpmsg)
+        get_parser(parser)
+        parser.set_defaults(action=(subcmd, handler))
 
     return parser
diff --git a/cloudinit/cmd/devel/render.py b/cloudinit/cmd/devel/render.py
new file mode 100755
index 0000000..2ba6b68
--- /dev/null
+++ b/cloudinit/cmd/devel/render.py
@@ -0,0 +1,85 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Debug jinja template rendering of user-data."""
+
+import argparse
+import os
+import sys
+
+from cloudinit.handlers.jinja_template import render_jinja_payload_from_file
+from cloudinit import log
+from cloudinit.sources import INSTANCE_JSON_FILE
+from . import addLogHandlerCLI, read_cfg_paths
+
+NAME = 'render'
+DEFAULT_INSTANCE_DATA = '/run/cloud-init/instance-data.json'
+
+LOG = log.getLogger(NAME)
+
+
+def get_parser(parser=None):
+    """Build or extend and arg parser for jinja render utility.
+
+    @param parser: Optional existing ArgumentParser instance representing the
+        subcommand which will be extended to support the args of this utility.
+
+    @returns: ArgumentParser with proper argument configuration.
+    """
+    if not parser:
+        parser = argparse.ArgumentParser(prog=NAME, description=__doc__)
+    parser.add_argument(
+        'user_data', type=str, help='Path to the user-data file to render')
+    parser.add_argument(
+        '-i', '--instance-data', type=str,
+        help=('Optional path to instance-data.json file. Defaults to'
+              ' /run/cloud-init/instance-data.json'))
+    parser.add_argument('-d', '--debug', action='store_true', default=False,
+                        help='Add verbose messages during template render')
+    return parser
+
+
+def handle_args(name, args):
+    """Render the provided user-data template file using instance-data values.
+
+    Also setup CLI log handlers to report to stderr since this is a development
+    utility which should be run by a human on the CLI.
+
+    @return 0 on success, 1 on failure.
+    """
+    addLogHandlerCLI(LOG, log.DEBUG if args.debug else log.WARNING)
+    if not args.instance_data:
+        paths = read_cfg_paths()
+        instance_data_fn = os.path.join(
+            paths.run_dir, INSTANCE_JSON_FILE)
+    else:
+        instance_data_fn = args.instance_data
+    if not os.path.exists(instance_data_fn):
+        LOG.error('Missing instance-data.json file: %s', instance_data_fn)
+        return 1
+    try:
+        with open(args.user_data) as stream:
+            user_data = stream.read()
+    except IOError:
+        LOG.error('Missing user-data file: %s', args.user_data)
+        return 1
+    rendered_payload = render_jinja_payload_from_file(
+        payload=user_data, payload_fn=args.user_data,
+        instance_data_file=instance_data_fn,
+        debug=True if args.debug else False)
+    if not rendered_payload:
+        LOG.error('Unable to render user-data file: %s', args.user_data)
+        return 1
+    sys.stdout.write(rendered_payload)
+    return 0
+
+
+def main():
+    args = get_parser().parse_args()
+    return handle_args(NAME, args)
+
+
+if __name__ == '__main__':
+    sys.exit(main())
+
+
+# vi: ts=4 expandtab
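A minimal usage sketch for the new render subcommand, equivalent to
'cloud-init devel render user-data.tmpl -i instance-data.json' (both file
names are hypothetical):

    import sys

    from cloudinit.cmd.devel import render

    args = render.get_parser().parse_args(
        ['user-data.tmpl', '--instance-data', 'instance-data.json'])
    sys.exit(render.handle_args(render.NAME, args))  # 0 on success
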
diff --git a/cloudinit/cmd/devel/tests/test_render.py b/cloudinit/cmd/devel/tests/test_render.py
new file mode 100644
index 0000000..fc5d2c0
--- /dev/null
+++ b/cloudinit/cmd/devel/tests/test_render.py
@@ -0,0 +1,101 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from six import StringIO
+import os
+
+from collections import namedtuple
+from cloudinit.cmd.devel import render
+from cloudinit.helpers import Paths
+from cloudinit.sources import INSTANCE_JSON_FILE
+from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJinja
+from cloudinit.util import ensure_dir, write_file
+
+
+class TestRender(CiTestCase):
+
+    with_logs = True
+
+    args = namedtuple('renderargs', 'user_data instance_data debug')
+
+    def setUp(self):
+        super(TestRender, self).setUp()
+        self.tmp = self.tmp_dir()
+
+    def test_handle_args_error_on_missing_user_data(self):
+        """When user_data file path does not exist, log an error."""
+        absent_file = self.tmp_path('user-data', dir=self.tmp)
+        instance_data = self.tmp_path('instance-data', dir=self.tmp)
+        write_file(instance_data, '{}')
+        args = self.args(
+            user_data=absent_file, instance_data=instance_data, debug=False)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            self.assertEqual(1, render.handle_args('anyname', args))
+        self.assertIn(
+            'Missing user-data file: %s' % absent_file,
+            self.logs.getvalue())
+
+    def test_handle_args_error_on_missing_instance_data(self):
+        """When instance_data file path does not exist, log an error."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        absent_file = self.tmp_path('instance-data', dir=self.tmp)
+        args = self.args(
+            user_data=user_data, instance_data=absent_file, debug=False)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            self.assertEqual(1, render.handle_args('anyname', args))
+        self.assertIn(
+            'Missing instance-data.json file: %s' % absent_file,
+            self.logs.getvalue())
+
+    def test_handle_args_defaults_instance_data(self):
+        """When no instance_data argument, default to configured run_dir."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        args = self.args(
+            user_data=user_data, instance_data=None, debug=False)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            self.assertEqual(1, render.handle_args('anyname', args))
+        json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
+        self.assertIn(
+            'Missing instance-data.json file: %s' % json_file,
+            self.logs.getvalue())
+
+    @skipUnlessJinja()
+    def test_handle_args_renders_instance_data_vars_in_template(self):
+        """If user_data file is a jinja template render instance-data vars."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        write_file(user_data, '##template: jinja\nrendering: {{ my_var }}')
+        instance_data = self.tmp_path('instance-data', dir=self.tmp)
+        write_file(instance_data, '{"my-var": "jinja worked"}')
+        args = self.args(
+            user_data=user_data, instance_data=instance_data, debug=True)
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_console_err:
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                self.assertEqual(0, render.handle_args('anyname', args))
+        self.assertIn(
+            'DEBUG: Converted jinja variables\n{', self.logs.getvalue())
+        self.assertIn(
+            'DEBUG: Converted jinja variables\n{', m_console_err.getvalue())
+        self.assertEqual('rendering: jinja worked', m_stdout.getvalue())
+
+    @skipUnlessJinja()
+    def test_handle_args_warns_and_gives_up_on_invalid_jinja_operation(self):
+        """If user_data file has invalid jinja operations log warnings."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        write_file(user_data, '##template: jinja\nrendering: {{ my-var }}')
+        instance_data = self.tmp_path('instance-data', dir=self.tmp)
+        write_file(instance_data, '{"my-var": "jinja worked"}')
+        args = self.args(
+            user_data=user_data, instance_data=instance_data, debug=True)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            self.assertEqual(1, render.handle_args('anyname', args))
+        self.assertIn(
+            'WARNING: Ignoring jinja template for %s: Undefined jinja'
+            ' variable: "my-var". Jinja tried subtraction. Perhaps you meant'
+            ' "my_var"?' % user_data,
+            self.logs.getvalue())
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
index d6ba90f..5a43702 100644
--- a/cloudinit/cmd/main.py
+++ b/cloudinit/cmd/main.py
@@ -315,7 +315,7 @@ def main_init(name, args):
                 existing = "trust"
 
         init.purge_cache()
-        # Delete the non-net file as well
+        # Delete the no-net file as well
         util.del_file(os.path.join(path_helper.get_cpath("data"), "no-net"))
 
     # Stage 5
@@ -339,7 +339,7 @@ def main_init(name, args):
                               " Likely bad things to come!"))
         if not args.force:
             init.apply_network_config(bring_up=not args.local)
-            LOG.debug("[%s] Exiting without datasource in local mode", mode)
+            LOG.debug("[%s] Exiting without datasource", mode)
             if mode == sources.DSMODE_LOCAL:
                 return (None, [])
             else:
@@ -348,6 +348,7 @@ def main_init(name, args):
             LOG.debug("[%s] barreling on in force mode without datasource",
                       mode)
 
+    _maybe_persist_instance_data(init)
     # Stage 6
     iid = init.instancify()
     LOG.debug("[%s] %s will now be targeting instance id: %s. new=%s",
@@ -490,6 +491,7 @@ def main_modules(action_name, args):
         print_exc(msg)
         if not args.force:
             return [(msg)]
+    _maybe_persist_instance_data(init)
     # Stage 3
     mods = stages.Modules(init, extract_fns(args), reporter=args.reporter)
     # Stage 4
@@ -541,6 +543,7 @@ def main_single(name, args):
                    " likely bad things to come!"))
         if not args.force:
             return 1
+    _maybe_persist_instance_data(init)
     # Stage 3
     mods = stages.Modules(init, extract_fns(args), reporter=args.reporter)
     mod_args = args.module_args
@@ -688,6 +691,15 @@ def status_wrapper(name, args, data_d=None, link_d=None):
     return len(v1[mode]['errors'])
 
 
+def _maybe_persist_instance_data(init):
+    """Write instance-data.json file if absent and datasource is restored."""
+    if init.ds_restored:
+        instance_data_file = os.path.join(
+            init.paths.run_dir, sources.INSTANCE_JSON_FILE)
+        if not os.path.exists(instance_data_file):
+            init.datasource.persist_instance_data()
+
+
 def _maybe_set_hostname(init, stage, retry_stage):
     """Call set-hostname if metadata, vendordata or userdata provides it.
 
@@ -779,6 +791,10 @@ def main(sysv_args=None):
                                      ' pass to this module'))
     parser_single.set_defaults(action=('single', main_single))
 
+    parser_query = subparsers.add_parser(
+        'query',
+        help='Query standardized instance metadata from the command line.')
+
     parser_dhclient = subparsers.add_parser('dhclient-hook',
                                             help=('run the dhclient hook '
                                                   'to record network info'))
@@ -830,6 +846,12 @@ def main(sysv_args=None):
             clean_parser(parser_clean)
             parser_clean.set_defaults(
                 action=('clean', handle_clean_args))
+        elif sysv_args[0] == 'query':
+            from cloudinit.cmd.query import (
+                get_parser as query_parser, handle_args as handle_query_args)
+            query_parser(parser_query)
+            parser_query.set_defaults(
+                action=('query', handle_query_args))
         elif sysv_args[0] == 'status':
             from cloudinit.cmd.status import (
                 get_parser as status_parser, handle_status_args)
@@ -877,14 +899,18 @@ def main(sysv_args=None):
         rname, rdesc, reporting_enabled=report_on)
 
     with args.reporter:
-        return util.log_time(
+        retval = util.log_time(
             logfunc=LOG.debug, msg="cloud-init mode '%s'" % name,
             get_uptime=True, func=functor, args=(name, args))
+        reporting.flush_events()
+        return retval
 
 
 if __name__ == '__main__':
     if 'TZ' not in os.environ:
         os.environ['TZ'] = ":/etc/localtime"
-    main(sys.argv)
+    return_value = main(sys.argv)
+    if return_value:
+        sys.exit(return_value)
 
 # vi: ts=4 expandtab
diff --git a/cloudinit/cmd/query.py b/cloudinit/cmd/query.py
new file mode 100644
index 0000000..7d2d4fe
--- /dev/null
+++ b/cloudinit/cmd/query.py
@@ -0,0 +1,155 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Query standardized instance metadata from the command line."""
+
+import argparse
+import os
+import six
+import sys
+
+from cloudinit.handlers.jinja_template import (
+    convert_jinja_instance_data, render_jinja_payload)
+from cloudinit.cmd.devel import addLogHandlerCLI, read_cfg_paths
+from cloudinit import log
+from cloudinit.sources import (
+    INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE, REDACT_SENSITIVE_VALUE)
+from cloudinit import util
+
+NAME = 'query'
+LOG = log.getLogger(NAME)
+
+
+def get_parser(parser=None):
+    """Build or extend an arg parser for query utility.
+
+    @param parser: Optional existing ArgumentParser instance representing the
+        query subcommand which will be extended to support the args of
+        this utility.
+
+    @returns: ArgumentParser with proper argument configuration.
+    """
+    if not parser:
+        parser = argparse.ArgumentParser(
+            prog=NAME, description='Query cloud-init instance data')
+    parser.add_argument(
+        '-d', '--debug', action='store_true', default=False,
+        help='Add verbose messages during template render')
+    parser.add_argument(
+        '-i', '--instance-data', type=str,
+        help=('Path to instance-data.json file. Default is /run/cloud-init/%s'
+              % INSTANCE_JSON_FILE))
+    parser.add_argument(
+        '-l', '--list-keys', action='store_true', default=False,
+        help=('List query keys available at the provided instance-data'
+              ' <varname>.'))
+    parser.add_argument(
+        '-u', '--user-data', type=str,
+        help=('Path to user-data file. Default is'
+              ' /var/lib/cloud/instance/user-data.txt'))
+    parser.add_argument(
+        '-v', '--vendor-data', type=str,
+        help=('Path to vendor-data file. Default is'
+              ' /var/lib/cloud/instance/vendor-data.txt'))
+    parser.add_argument(
+        'varname', type=str, nargs='?',
+        help=('A dot-delimited instance data variable to query from'
+              ' instance-data. For example: v2.local_hostname'))
+    parser.add_argument(
+        '-a', '--all', action='store_true', default=False, dest='dump_all',
+        help='Dump all available instance-data')
+    parser.add_argument(
+        '-f', '--format', type=str, dest='format',
+        help=('Optionally specify a custom output format string. Any'
+              ' instance-data variable can be specified between double-curly'
+              ' braces. For example -f "{{ v2.cloud_name }}"'))
+    return parser
+
+
+def handle_args(name, args):
+    """Handle calls to 'cloud-init query' as a subcommand."""
+    paths = None
+    addLogHandlerCLI(LOG, log.DEBUG if args.debug else log.WARNING)
+    if not any([args.list_keys, args.varname, args.format, args.dump_all]):
+        LOG.error(
+            'Expected one of the options: --all, --format,'
+            ' --list-keys or varname')
+        get_parser().print_help()
+        return 1
+
+    uid = os.getuid()
+    if not all([args.instance_data, args.user_data, args.vendor_data]):
+        paths = read_cfg_paths()
+    if not args.instance_data:
+        if uid == 0:
+            default_json_fn = INSTANCE_JSON_SENSITIVE_FILE
+        else:
+            default_json_fn = INSTANCE_JSON_FILE  # World readable
+        instance_data_fn = os.path.join(paths.run_dir, default_json_fn)
+    else:
+        instance_data_fn = args.instance_data
+    if not args.user_data:
+        user_data_fn = os.path.join(paths.instance_link, 'user-data.txt')
+    else:
+        user_data_fn = args.user_data
+    if not args.vendor_data:
+        vendor_data_fn = os.path.join(paths.instance_link, 'vendor-data.txt')
+    else:
+        vendor_data_fn = args.vendor_data
+
+    try:
+        instance_json = util.load_file(instance_data_fn)
+    except IOError:
+        LOG.error('Missing instance-data.json file: %s', instance_data_fn)
+        return 1
+
+    instance_data = util.load_json(instance_json)
+    if uid != 0:
+        instance_data['userdata'] = (
+            '<%s> file:%s' % (REDACT_SENSITIVE_VALUE, user_data_fn))
+        instance_data['vendordata'] = (
+            '<%s> file:%s' % (REDACT_SENSITIVE_VALUE, vendor_data_fn))
+    else:
+        instance_data['userdata'] = util.load_file(user_data_fn)
+        instance_data['vendordata'] = util.load_file(vendor_data_fn)
+    if args.format:
+        payload = '## template: jinja\n{fmt}'.format(fmt=args.format)
+        rendered_payload = render_jinja_payload(
+            payload=payload, payload_fn='query commandline',
+            instance_data=instance_data,
+            debug=True if args.debug else False)
+        if rendered_payload:
+            print(rendered_payload)
+            return 0
+        return 1
+
+    response = convert_jinja_instance_data(instance_data)
+    if args.varname:
+        try:
+            for var in args.varname.split('.'):
+                response = response[var]
+        except KeyError:
+            LOG.error('Undefined instance-data key %s', args.varname)
+            return 1
+        if args.list_keys:
+            if not isinstance(response, dict):
+                LOG.error("--list-keys provided but '%s' is not a dict", var)
+                return 1
+            response = '\n'.join(sorted(response.keys()))
+    elif args.list_keys:
+        response = '\n'.join(sorted(response.keys()))
+    if not isinstance(response, six.string_types):
+        response = util.json_dumps(response)
+    print(response)
+    return 0
+
+
+def main():
+    """Tool to query specific instance-data values."""
+    parser = get_parser()
+    sys.exit(handle_args(NAME, parser.parse_args()))
+
+
+if __name__ == '__main__':
+    main()
+
+# vi: ts=4 expandtab
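And a matching sketch for the query subcommand, equivalent to running
'cloud-init query v1.cloud_name' on a booted instance (the default
user-data/vendor-data paths live under /var/lib/cloud/instance, so this is
illustrative only):

    from cloudinit.cmd import query

    args = query.get_parser().parse_args(['v1.cloud_name'])
    # Prints the requested key and returns 0, or logs an error and
    # returns 1 when the key is undefined in instance-data.
    exit_code = query.handle_args(query.NAME, args)
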
diff --git a/cloudinit/cmd/tests/test_main.py b/cloudinit/cmd/tests/test_main.py
index e2c54ae..a1e534f 100644
--- a/cloudinit/cmd/tests/test_main.py
+++ b/cloudinit/cmd/tests/test_main.py
@@ -125,7 +125,9 @@ class TestMain(FilesystemMockingTestCase):
             updated_cfg.update(
                 {'def_log_file': '/var/log/cloud-init.log',
                  'log_cfgs': [],
-                 'syslog_fix_perms': ['syslog:adm', 'root:adm', 'root:wheel'],
+                 'syslog_fix_perms': [
+                     'syslog:adm', 'root:adm', 'root:wheel', 'root:root'
+                 ],
                  'vendor_data': {'enabled': True, 'prefix': []}})
             updated_cfg.pop('system_info')
 
diff --git a/cloudinit/cmd/tests/test_query.py b/cloudinit/cmd/tests/test_query.py
new file mode 100644
index 0000000..fb87c6a
--- /dev/null
+++ b/cloudinit/cmd/tests/test_query.py
@@ -0,0 +1,193 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from six import StringIO
+from textwrap import dedent
+import os
+
+from collections import namedtuple
+from cloudinit.cmd import query
+from cloudinit.helpers import Paths
+from cloudinit.sources import REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE
+from cloudinit.tests.helpers import CiTestCase, mock
+from cloudinit.util import ensure_dir, write_file
+
+
+class TestQuery(CiTestCase):
+
+    with_logs = True
+
+    args = namedtuple(
+        'queryargs',
+        ('debug dump_all format instance_data list_keys user_data vendor_data'
+         ' varname'))
+
+    def setUp(self):
+        super(TestQuery, self).setUp()
+        self.tmp = self.tmp_dir()
+        self.instance_data = self.tmp_path('instance-data', dir=self.tmp)
+
+    def test_handle_args_error_on_missing_param(self):
+        """Error when missing required parameters and print usage."""
+        args = self.args(
+            debug=False, dump_all=False, format=None, instance_data=None,
+            list_keys=False, user_data=None, vendor_data=None, varname=None)
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                self.assertEqual(1, query.handle_args('anyname', args))
+        expected_error = (
+            'ERROR: Expected one of the options: --all, --format, --list-keys'
+            ' or varname\n')
+        self.assertIn(expected_error, self.logs.getvalue())
+        self.assertIn('usage: query', m_stdout.getvalue())
+        self.assertIn(expected_error, m_stderr.getvalue())
+
+    def test_handle_args_error_on_missing_instance_data(self):
+        """When instance_data file path does not exist, log an error."""
+        absent_fn = self.tmp_path('absent', dir=self.tmp)
+        args = self.args(
+            debug=False, dump_all=True, format=None, instance_data=absent_fn,
+            list_keys=False, user_data='ud', vendor_data='vd', varname=None)
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            self.assertEqual(1, query.handle_args('anyname', args))
+        self.assertIn(
+            'ERROR: Missing instance-data.json file: %s' % absent_fn,
+            self.logs.getvalue())
+        self.assertIn(
+            'ERROR: Missing instance-data.json file: %s' % absent_fn,
+            m_stderr.getvalue())
+
+    def test_handle_args_defaults_instance_data(self):
+        """When no instance_data argument, default to configured run_dir."""
+        args = self.args(
+            debug=False, dump_all=True, format=None, instance_data=None,
+            list_keys=False, user_data=None, vendor_data=None, varname=None)
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            self.assertEqual(1, query.handle_args('anyname', args))
+        json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
+        self.assertIn(
+            'ERROR: Missing instance-data.json file: %s' % json_file,
+            self.logs.getvalue())
+        self.assertIn(
+            'ERROR: Missing instance-data.json file: %s' % json_file,
+            m_stderr.getvalue())
+
+    def test_handle_args_dumps_all_instance_data(self):
+        """When --all is specified query will dump all instance data vars."""
+        write_file(self.instance_data, '{"my-var": "it worked"}')
+        args = self.args(
+            debug=False, dump_all=True, format=None,
+            instance_data=self.instance_data, list_keys=False,
+            user_data='ud', vendor_data='vd', varname=None)
+        with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+            self.assertEqual(0, query.handle_args('anyname', args))
+        self.assertEqual(
+            '{\n "my_var": "it worked",\n "userdata": "<%s> file:ud",\n'
+            ' "vendordata": "<%s> file:vd"\n}\n' % (
+                REDACT_SENSITIVE_VALUE, REDACT_SENSITIVE_VALUE),
+            m_stdout.getvalue())
+
+    def test_handle_args_returns_top_level_varname(self):
+        """When the argument varname is passed, report its value."""
+        write_file(self.instance_data, '{"my-var": "it worked"}')
+        args = self.args(
+            debug=False, dump_all=True, format=None,
+            instance_data=self.instance_data, list_keys=False,
+            user_data='ud', vendor_data='vd', varname='my_var')
+        with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+            self.assertEqual(0, query.handle_args('anyname', args))
+        self.assertEqual('it worked\n', m_stdout.getvalue())
+
+    def test_handle_args_returns_nested_varname(self):
+        """If user_data file is a jinja template render instance-data vars."""
+        write_file(self.instance_data,
+                   '{"v1": {"key-2": "value-2"}, "my-var": "it worked"}')
+        args = self.args(
+            debug=False, dump_all=False, format=None,
+            instance_data=self.instance_data, user_data='ud', vendor_data='vd',
+            list_keys=False, varname='v1.key_2')
+        with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+            self.assertEqual(0, query.handle_args('anyname', args))
+        self.assertEqual('value-2\n', m_stdout.getvalue())
+
+    def test_handle_args_returns_standardized_vars_to_top_level_aliases(self):
+        """Any standardized vars under v# are promoted as top-level aliases."""
+        write_file(
+            self.instance_data,
+            '{"v1": {"v1_1": "val1.1"}, "v2": {"v2_2": "val2.2"},'
+            ' "top": "gun"}')
+        expected = dedent("""\
+            {
+             "top": "gun",
+             "userdata": "<redacted for non-root user> file:ud",
+             "v1": {
+              "v1_1": "val1.1"
+             },
+             "v1_1": "val1.1",
+             "v2": {
+              "v2_2": "val2.2"
+             },
+             "v2_2": "val2.2",
+             "vendordata": "<redacted for non-root user> file:vd"
+            }
+        """)
+        args = self.args(
+            debug=False, dump_all=True, format=None,
+            instance_data=self.instance_data, user_data='ud', vendor_data='vd',
+            list_keys=False, varname=None)
+        with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+            self.assertEqual(0, query.handle_args('anyname', args))
+        self.assertEqual(expected, m_stdout.getvalue())
+
+    def test_handle_args_list_keys_sorts_top_level_keys_when_no_varname(self):
+        """Sort all top-level keys when only --list-keys provided."""
+        write_file(
+            self.instance_data,
+            '{"v1": {"v1_1": "val1.1"}, "v2": {"v2_2": "val2.2"},'
+            ' "top": "gun"}')
+        expected = 'top\nuserdata\nv1\nv1_1\nv2\nv2_2\nvendordata\n'
+        args = self.args(
+            debug=False, dump_all=False, format=None,
+            instance_data=self.instance_data, list_keys=True, user_data='ud',
+            vendor_data='vd', varname=None)
+        with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+            self.assertEqual(0, query.handle_args('anyname', args))
+        self.assertEqual(expected, m_stdout.getvalue())
+
+    def test_handle_args_list_keys_sorts_nested_keys_when_varname(self):
+        """Sort all nested keys of varname object when --list-keys provided."""
+        write_file(
+            self.instance_data,
+            '{"v1": {"v1_1": "val1.1", "v1_2": "val1.2"}, "v2":' +
+            ' {"v2_2": "val2.2"}, "top": "gun"}')
+        expected = 'v1_1\nv1_2\n'
+        args = self.args(
+            debug=False, dump_all=False, format=None,
+            instance_data=self.instance_data, list_keys=True,
+            user_data='ud', vendor_data='vd', varname='v1')
+        with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+            self.assertEqual(0, query.handle_args('anyname', args))
+        self.assertEqual(expected, m_stdout.getvalue())
+
+    def test_handle_args_list_keys_errors_when_varname_is_not_a_dict(self):
+        """Raise an error when --list-keys and varname specify a non-list."""
+        write_file(
+            self.instance_data,
+            '{"v1": {"v1_1": "val1.1", "v1_2": "val1.2"}, "v2": ' +
+            '{"v2_2": "val2.2"}, "top": "gun"}')
+        expected_error = "ERROR: --list-keys provided but 'top' is not a dict"
+        args = self.args(
+            debug=False, dump_all=False, format=None,
+            instance_data=self.instance_data, list_keys=True, user_data='ud',
+            vendor_data='vd', varname='top')
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                self.assertEqual(1, query.handle_args('anyname', args))
+        self.assertEqual('', m_stdout.getvalue())
+        self.assertIn(expected_error, m_stderr.getvalue())
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/cmd/tests/test_status.py b/cloudinit/cmd/tests/test_status.py
index 37a8993..aded858 100644
--- a/cloudinit/cmd/tests/test_status.py
+++ b/cloudinit/cmd/tests/test_status.py
@@ -39,7 +39,8 @@ class TestStatus(CiTestCase):
         ensure_file(self.disable_file)  # Create the ignored disable file
         (is_disabled, reason) = wrap_and_call(
             'cloudinit.cmd.status',
-            {'uses_systemd': False},
+            {'uses_systemd': False,
+             'get_cmdline': "root=/dev/my-root not-important"},
             status._is_cloudinit_disabled, self.disable_file, self.paths)
         self.assertFalse(
             is_disabled, 'expected enabled cloud-init on sysvinit')
@@ -50,7 +51,8 @@ class TestStatus(CiTestCase):
         ensure_file(self.disable_file)  # Create observed disable file
         (is_disabled, reason) = wrap_and_call(
             'cloudinit.cmd.status',
-            {'uses_systemd': True},
+            {'uses_systemd': True,
+             'get_cmdline': "root=/dev/my-root not-important"},
             status._is_cloudinit_disabled, self.disable_file, self.paths)
         self.assertTrue(is_disabled, 'expected disabled cloud-init')
         self.assertEqual(
diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
index ac72ac4..24a8ebe 100644
--- a/cloudinit/config/cc_lxd.py
+++ b/cloudinit/config/cc_lxd.py
@@ -104,6 +104,7 @@ def handle(name, cfg, cloud, log, args):
             'network_address', 'network_port', 'storage_backend',
             'storage_create_device', 'storage_create_loop',
             'storage_pool', 'trust_password')
+        util.subp(['lxd', 'waitready', '--timeout=300'])
         cmd = ['lxd', 'init', '--auto']
         for k in init_keys:
             if init_cfg.get(k):
@@ -260,7 +261,9 @@ def bridge_to_cmd(bridge_cfg):
 
 
 def _lxc(cmd):
-    env = {'LC_ALL': 'C'}
+    env = {'LC_ALL': 'C',
+           'HOME': os.environ.get('HOME', '/root'),
+           'USER': os.environ.get('USER', 'root')}
     util.subp(['lxc'] + list(cmd) + ["--force-local"], update_env=env)
 
 
@@ -276,27 +279,27 @@ def maybe_cleanup_default(net_name, did_init, create, attach,
     if net_name != _DEFAULT_NETWORK_NAME or not did_init:
         return
 
-    fail_assume_enoent = " failed. Assuming it did not exist."
-    succeeded = " succeeded."
+    fail_assume_enoent = "failed. Assuming it did not exist."
+    succeeded = "succeeded."
     if create:
-        msg = "Deletion of lxd network '%s'" % net_name
+        msg = "Deletion of lxd network '%s' %s"
         try:
             _lxc(["network", "delete", net_name])
-            LOG.debug(msg + succeeded)
+            LOG.debug(msg, net_name, succeeded)
         except util.ProcessExecutionError as e:
             if e.exit_code != 1:
                 raise e
-            LOG.debug(msg + fail_assume_enoent)
+            LOG.debug(msg, net_name, fail_assume_enoent)
 
     if attach:
-        msg = "Removal of device '%s' from profile '%s'" % (nic_name, profile)
+        msg = "Removal of device '%s' from profile '%s' %s"
         try:
             _lxc(["profile", "device", "remove", profile, nic_name])
-            LOG.debug(msg + succeeded)
+            LOG.debug(msg, nic_name, profile, succeeded)
         except util.ProcessExecutionError as e:
             if e.exit_code != 1:
                 raise e
-            LOG.debug(msg + fail_assume_enoent)
+            LOG.debug(msg, nic_name, profile, fail_assume_enoent)
 
 
 # vi: ts=4 expandtab
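The logging rework above replaces string concatenation with deferred
%-interpolation, so arguments are only formatted when a DEBUG record is
actually emitted; the general shape:

    import logging

    LOG = logging.getLogger(__name__)
    msg = "Deletion of lxd network '%s' %s"
    # No string formatting happens unless DEBUG records are emitted.
    LOG.debug(msg, 'lxdbr0', 'succeeded.')
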
diff --git a/cloudinit/config/cc_rh_subscription.py b/cloudinit/config/cc_rh_subscription.py
index 1c67943..edee01e 100644
--- a/cloudinit/config/cc_rh_subscription.py
+++ b/cloudinit/config/cc_rh_subscription.py
@@ -126,7 +126,6 @@ class SubscriptionManager(object):
         self.enable_repo = self.rhel_cfg.get('enable-repo')
         self.disable_repo = self.rhel_cfg.get('disable-repo')
         self.servicelevel = self.rhel_cfg.get('service-level')
-        self.subman = ['subscription-manager']
 
     def log_success(self, msg):
         '''Simple wrapper for logging info messages. Useful for unittests'''
@@ -173,21 +172,12 @@ class SubscriptionManager(object):
         cmd = ['identity']
 
         try:
-            self._sub_man_cli(cmd)
+            _sub_man_cli(cmd)
         except util.ProcessExecutionError:
             return False
 
         return True
 
-    def _sub_man_cli(self, cmd, logstring_val=False):
-        '''
-        Uses the prefered cloud-init subprocess def of util.subp
-        and runs subscription-manager.  Breaking this to a
-        separate function for later use in mocking and unittests
-        '''
-        cmd = self.subman + cmd
-        return util.subp(cmd, logstring=logstring_val)
-
     def rhn_register(self):
         '''
         Registers the system by userid and password or activation key
@@ -209,7 +199,7 @@ class SubscriptionManager(object):
                 cmd.append("--serverurl={0}".format(self.server_hostname))
 
             try:
-                return_out = self._sub_man_cli(cmd, logstring_val=True)[0]
+                return_out = _sub_man_cli(cmd, logstring_val=True)[0]
             except util.ProcessExecutionError as e:
                 if e.stdout == "":
                     self.log_warn("Registration failed due "
@@ -232,7 +222,7 @@ class SubscriptionManager(object):
 
             # Attempting to register the system only
             try:
-                return_out = self._sub_man_cli(cmd, logstring_val=True)[0]
+                return_out = _sub_man_cli(cmd, logstring_val=True)[0]
             except util.ProcessExecutionError as e:
                 if e.stdout == "":
                     self.log_warn("Registration failed due "
@@ -255,7 +245,7 @@ class SubscriptionManager(object):
                .format(self.servicelevel)]
 
         try:
-            return_out = self._sub_man_cli(cmd)[0]
+            return_out = _sub_man_cli(cmd)[0]
         except util.ProcessExecutionError as e:
             if e.stdout.rstrip() != '':
                 for line in e.stdout.split("\n"):
@@ -273,7 +263,7 @@ class SubscriptionManager(object):
     def _set_auto_attach(self):
         cmd = ['attach', '--auto']
         try:
-            return_out = self._sub_man_cli(cmd)[0]
+            return_out = _sub_man_cli(cmd)[0]
         except util.ProcessExecutionError as e:
             self.log_warn("Auto-attach failed with: {0}".format(e))
             return False
@@ -292,12 +282,12 @@ class SubscriptionManager(object):
 
         # Get all available pools
         cmd = ['list', '--available', '--pool-only']
-        results = self._sub_man_cli(cmd)[0]
+        results = _sub_man_cli(cmd)[0]
         available = (results.rstrip()).split("\n")
 
         # Get all consumed pools
         cmd = ['list', '--consumed', '--pool-only']
-        results = self._sub_man_cli(cmd)[0]
+        results = _sub_man_cli(cmd)[0]
         consumed = (results.rstrip()).split("\n")
 
         return available, consumed
@@ -309,14 +299,14 @@ class SubscriptionManager(object):
         '''
 
         cmd = ['repos', '--list-enabled']
-        return_out = self._sub_man_cli(cmd)[0]
+        return_out = _sub_man_cli(cmd)[0]
         active_repos = []
         for repo in return_out.split("\n"):
             if "Repo ID:" in repo:
                 active_repos.append((repo.split(':')[1]).strip())
 
         cmd = ['repos', '--list-disabled']
-        return_out = self._sub_man_cli(cmd)[0]
+        return_out = _sub_man_cli(cmd)[0]
 
         inactive_repos = []
         for repo in return_out.split("\n"):
@@ -346,7 +336,7 @@ class SubscriptionManager(object):
         if len(pool_list) > 0:
             cmd.extend(pool_list)
             try:
-                self._sub_man_cli(cmd)
+                _sub_man_cli(cmd)
                 self.log.debug("Attached the following pools to your "
                                "system: %s", (", ".join(pool_list))
                                .replace('--pool=', ''))
@@ -423,7 +413,7 @@ class SubscriptionManager(object):
             cmd.extend(enable_list)
 
         try:
-            self._sub_man_cli(cmd)
+            _sub_man_cli(cmd)
         except util.ProcessExecutionError as e:
             self.log_warn("Unable to alter repos due to {0}".format(e))
             return False
@@ -439,4 +429,15 @@ class SubscriptionManager(object):
     def is_configured(self):
         return bool((self.userid and self.password) or self.activation_key)
 
+
+def _sub_man_cli(cmd, logstring_val=False):
+    '''
+    Uses the preferred cloud-init subprocess wrapper util.subp
+    to run subscription-manager. Broken out into a separate
+    module-level function for ease of mocking in unittests.
+    '''
+    return util.subp(['subscription-manager'] + cmd,
+                     logstring=logstring_val)
+
+
 # vi: ts=4 expandtab
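Promoting _sub_man_cli to a module-level function keeps the mocking story in
its docstring simple: tests can patch one module attribute rather than an
instance method. A hypothetical sketch of such a test:

    from cloudinit.config import cc_rh_subscription
    from cloudinit.tests.helpers import mock

    with mock.patch.object(cc_rh_subscription, '_sub_man_cli') as m_cli:
        m_cli.return_value = ('pool-1\npool-2\n', '')
        # SubscriptionManager methods now run without shelling out to
        # subscription-manager.
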
diff --git a/cloudinit/config/cc_ssh.py b/cloudinit/config/cc_ssh.py
index 45204a0..f8f7cb3 100755
--- a/cloudinit/config/cc_ssh.py
+++ b/cloudinit/config/cc_ssh.py
@@ -101,10 +101,6 @@ from cloudinit.distros import ug_util
 from cloudinit import ssh_util
 from cloudinit import util
 
-DISABLE_ROOT_OPTS = (
-    "no-port-forwarding,no-agent-forwarding,"
-    "no-X11-forwarding,command=\"echo \'Please login as the user \\\"$USER\\\""
-    " rather than the user \\\"root\\\".\';echo;sleep 10\"")
 
 GENERATE_KEY_NAMES = ['rsa', 'dsa', 'ecdsa', 'ed25519']
 KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key'
@@ -185,7 +181,7 @@ def handle(_name, cfg, cloud, log, _args):
         (user, _user_config) = ug_util.extract_default(users)
         disable_root = util.get_cfg_option_bool(cfg, "disable_root", True)
         disable_root_opts = util.get_cfg_option_str(cfg, "disable_root_opts",
-                                                    DISABLE_ROOT_OPTS)
+                                                    ssh_util.DISABLE_USER_OPTS)
 
         keys = cloud.get_public_ssh_keys() or []
         if "ssh_authorized_keys" in cfg:
@@ -207,6 +203,7 @@ def apply_credentials(keys, user, disable_root, disable_root_opts):
         if not user:
             user = "NONE"
         key_prefix = disable_root_opts.replace('$USER', user)
+        key_prefix = key_prefix.replace('$DISABLE_USER', 'root')
     else:
         key_prefix = ''
 
diff --git a/cloudinit/config/cc_users_groups.py b/cloudinit/config/cc_users_groups.py
index c95bdaa..c32a743 100644
--- a/cloudinit/config/cc_users_groups.py
+++ b/cloudinit/config/cc_users_groups.py
@@ -52,8 +52,17 @@ config keys for an entry in ``users`` are as follows:
       associated with the address, username and SSH keys will be requested from
       there. Default: none
     - ``ssh_authorized_keys``: Optional. List of ssh keys to add to user's
-      authkeys file. Default: none
-    - ``ssh_import_id``: Optional. SSH id to import for user. Default: none
+      authkeys file. Default: none. This key cannot be combined with
+      ``ssh_redirect_user``.
+    - ``ssh_import_id``: Optional. SSH id to import for user. Default: none.
+      This key cannot be combined with ``ssh_redirect_user``.
+    - ``ssh_redirect_user``: Optional. Boolean set to true to disable SSH
+      logins for this user. When specified, all cloud meta-data public ssh
+      keys will be set up in a disabled state for this username. Any ssh
+      login as this username will time out with a message to log in instead
+      as the configured <default_username> for this instance. Default: false.
+      This key cannot be combined with ``ssh_import_id`` or
+      ``ssh_authorized_keys``.
     - ``sudo``: Optional. Sudo rule to use, list of sudo rules to use or False.
       Default: none. An absence of sudo key, or a value of none or false
       will result in no sudo rules being written for the user.
@@ -101,6 +110,7 @@ config keys for an entry in ``users`` are as follows:
           selinux_user: <selinux username>
           shell: <shell path>
           snapuser: <email>
+          ssh_redirect_user: <true/false>
           ssh_authorized_keys:
               - <key>
               - <key>
@@ -114,17 +124,44 @@ config keys for an entry in ``users`` are as follows:
 # since the module attribute 'distros'
 # is a list of distros that are supported, not a sub-module
 from cloudinit.distros import ug_util
+from cloudinit import log as logging
 
 from cloudinit.settings import PER_INSTANCE
 
+LOG = logging.getLogger(__name__)
+
 frequency = PER_INSTANCE
 
 
 def handle(name, cfg, cloud, _log, _args):
     (users, groups) = ug_util.normalize_users_groups(cfg, cloud.distro)
+    (default_user, _user_config) = ug_util.extract_default(users)
+    cloud_keys = cloud.get_public_ssh_keys() or []
     for (name, members) in groups.items():
         cloud.distro.create_group(name, members)
     for (user, config) in users.items():
+        ssh_redirect_user = config.pop("ssh_redirect_user", False)
+        if ssh_redirect_user:
+            if 'ssh_authorized_keys' in config or 'ssh_import_id' in config:
+                raise ValueError(
+                    'Not creating user %s. ssh_redirect_user cannot be'
+                    ' provided with ssh_import_id or ssh_authorized_keys' %
+                    user)
+            if ssh_redirect_user not in (True, 'default'):
+                raise ValueError(
+                    'Not creating user %s. Invalid value of'
+                    ' ssh_redirect_user: %s. Expected values: true, default'
+                    ' or false.' % (user, ssh_redirect_user))
+            if default_user is None:
+                LOG.warning(
+                    'Ignoring ssh_redirect_user: %s for %s.'
+                    ' No default_user defined.'
+                    ' Perhaps missing cloud configuration users: '
+                    ' [default, ..].',
+                    ssh_redirect_user, user)
+            else:
+                config['ssh_redirect_user'] = default_user
+                config['cloud_public_ssh_keys'] = cloud_keys
         cloud.distro.create_user(user, **config)
 
 # vi: ts=4 expandtab
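For reference, a minimal cloud-config exercising the new key per the
documented schema above:

    #cloud-config
    users:
      - default
      - name: me2
        ssh_redirect_user: true
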
diff --git a/cloudinit/config/tests/test_snap.py b/cloudinit/config/tests/test_snap.py
index 34c80f1..3c47289 100644
--- a/cloudinit/config/tests/test_snap.py
+++ b/cloudinit/config/tests/test_snap.py
@@ -162,6 +162,7 @@ class TestAddAssertions(CiTestCase):
 class TestRunCommands(CiTestCase):
 
     with_logs = True
+    allowed_subp = [CiTestCase.SUBP_SHELL_TRUE]
 
     def setUp(self):
         super(TestRunCommands, self).setUp()
@@ -424,8 +425,10 @@ class TestHandle(CiTestCase):
             'snap': {'commands': ['echo "HI" >> %s' % outfile,
                                   'echo "MOM" >> %s' % outfile]}}
         mock_path = 'cloudinit.config.cc_snap.sys.stderr'
-        with mock.patch(mock_path, new_callable=StringIO):
-            handle('snap', cfg=cfg, cloud=None, log=self.logger, args=None)
+        with self.allow_subp([CiTestCase.SUBP_SHELL_TRUE]):
+            with mock.patch(mock_path, new_callable=StringIO):
+                handle('snap', cfg=cfg, cloud=None, log=self.logger, args=None)
+
         self.assertEqual('HI\nMOM\n', util.load_file(outfile))
 
     @mock.patch('cloudinit.config.cc_snap.util.subp')
diff --git a/cloudinit/config/tests/test_ssh.py b/cloudinit/config/tests/test_ssh.py
new file mode 100644
index 0000000..c8a4271
--- /dev/null
+++ b/cloudinit/config/tests/test_ssh.py
@@ -0,0 +1,151 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+
+from cloudinit.config import cc_ssh
+from cloudinit import ssh_util
+from cloudinit.tests.helpers import CiTestCase, mock
+
+MODPATH = "cloudinit.config.cc_ssh."
+
+
+@mock.patch(MODPATH + "ssh_util.setup_user_keys")
+class TestHandleSsh(CiTestCase):
+    """Test cc_ssh handling of ssh config."""
+
+    def test_apply_credentials_with_user(self, m_setup_keys):
+        """Apply keys for the given user and root."""
+        keys = ["key1"]
+        user = "clouduser"
+        cc_ssh.apply_credentials(keys, user, False, ssh_util.DISABLE_USER_OPTS)
+        self.assertEqual([mock.call(set(keys), user),
+                          mock.call(set(keys), "root", options="")],
+                         m_setup_keys.call_args_list)
+
+    def test_apply_credentials_with_no_user(self, m_setup_keys):
+        """Apply keys for root only."""
+        keys = ["key1"]
+        user = None
+        cc_ssh.apply_credentials(keys, user, False, ssh_util.DISABLE_USER_OPTS)
+        self.assertEqual([mock.call(set(keys), "root", options="")],
+                         m_setup_keys.call_args_list)
+
+    def test_apply_credentials_with_user_disable_root(self, m_setup_keys):
+        """Apply keys for the given user and disable root ssh."""
+        keys = ["key1"]
+        user = "clouduser"
+        options = ssh_util.DISABLE_USER_OPTS
+        cc_ssh.apply_credentials(keys, user, True, options)
+        options = options.replace("$USER", user)
+        options = options.replace("$DISABLE_USER", "root")
+        self.assertEqual([mock.call(set(keys), user),
+                          mock.call(set(keys), "root", options=options)],
+                         m_setup_keys.call_args_list)
+
+    def test_apply_credentials_with_no_user_disable_root(self, m_setup_keys):
+        """Apply keys no user and disable root ssh."""
+        keys = ["key1"]
+        user = None
+        options = ssh_util.DISABLE_USER_OPTS
+        cc_ssh.apply_credentials(keys, user, True, options)
+        options = options.replace("$USER", "NONE")
+        options = options.replace("$DISABLE_USER", "root")
+        self.assertEqual([mock.call(set(keys), "root", options=options)],
+                         m_setup_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_no_cfg(self, m_path_exists, m_nug,
+                           m_glob, m_setup_keys):
+        """Test handle with no config does not regenerate existing keyfiles."""
+        cfg = {}
+        keys = ["key1"]
+        m_glob.return_value = []  # Return no matching keys to prevent removal
+        # Mock os.path.exists to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ([], {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        options = ssh_util.DISABLE_USER_OPTS.replace("$USER", "NONE")
+        options = options.replace("$DISABLE_USER", "root")
+        m_glob.assert_called_once_with('/etc/ssh/ssh_host_*key*')
+        self.assertIn(
+            [mock.call('/etc/ssh/ssh_host_rsa_key'),
+             mock.call('/etc/ssh/ssh_host_dsa_key'),
+             mock.call('/etc/ssh/ssh_host_ecdsa_key'),
+             mock.call('/etc/ssh/ssh_host_ed25519_key')],
+            m_path_exists.call_args_list)
+        self.assertEqual([mock.call(set(keys), "root", options=options)],
+                         m_setup_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_no_cfg_and_default_root(self, m_path_exists, m_nug,
+                                            m_glob, m_setup_keys):
+        """Test handle with no config and a default distro user."""
+        cfg = {}
+        keys = ["key1"]
+        user = "clouduser"
+        m_glob.return_value = []  # Return no matching keys to prevent removal
+        # Mock os.path.exists to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cc_ssh.handle("name", cfg, cloud, None, None)
+
+        options = ssh_util.DISABLE_USER_OPTS.replace("$USER", user)
+        options = options.replace("$DISABLE_USER", "root")
+        self.assertEqual([mock.call(set(keys), user),
+                          mock.call(set(keys), "root", options=options)],
+                         m_setup_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_cfg_with_explicit_disable_root(self, m_path_exists, m_nug,
+                                                   m_glob, m_setup_keys):
+        """Test handle with explicit disable_root and a default distro user."""
+        # This test is identical to test_handle_no_cfg_and_default_root,
+        # except this uses an explicit cfg value
+        cfg = {"disable_root": True}
+        keys = ["key1"]
+        user = "clouduser"
+        m_glob.return_value = []  # Return no matching keys to prevent removal
+        # Mock os.path.exists to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cc_ssh.handle("name", cfg, cloud, None, None)
+
+        options = ssh_util.DISABLE_USER_OPTS.replace("$USER", user)
+        options = options.replace("$DISABLE_USER", "root")
+        self.assertEqual([mock.call(set(keys), user),
+                          mock.call(set(keys), "root", options=options)],
+                         m_setup_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_cfg_without_disable_root(self, m_path_exists, m_nug,
+                                             m_glob, m_setup_keys):
+        """Test handle with disable_root == False."""
+        # When disable_root == False, the ssh redirect for root is skipped
+        cfg = {"disable_root": False}
+        keys = ["key1"]
+        user = "clouduser"
+        m_glob.return_value = []  # Return no matching keys to prevent removal
+        # Mock os.path.exists to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.get_public_ssh_keys = mock.Mock(return_value=keys)
+        cc_ssh.handle("name", cfg, cloud, None, None)
+
+        self.assertEqual([mock.call(set(keys), user),
+                          mock.call(set(keys), "root", options="")],
+                         m_setup_keys.call_args_list)
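A note on mock argument order in the tests above: method-level @mock.patch
decorators apply bottom-up, so the patch nearest the def (os.path.exists) is
passed first and the class-level setup_user_keys patch arrives last:

    @mock.patch(MODPATH + "glob.glob")                       # third arg
    @mock.patch(MODPATH + "ug_util.normalize_users_groups")  # second arg
    @mock.patch(MODPATH + "os.path.exists")                  # first arg
    def test_example(self, m_path_exists, m_nug, m_glob, m_setup_keys):
        pass  # m_setup_keys comes from the class-level decorator
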
diff --git a/cloudinit/config/tests/test_ubuntu_advantage.py b/cloudinit/config/tests/test_ubuntu_advantage.py
index f1beeff..b7cf9be 100644
--- a/cloudinit/config/tests/test_ubuntu_advantage.py
+++ b/cloudinit/config/tests/test_ubuntu_advantage.py
@@ -23,6 +23,7 @@ class FakeCloud(object):
 class TestRunCommands(CiTestCase):
 
     with_logs = True
+    allowed_subp = [CiTestCase.SUBP_SHELL_TRUE]
 
     def setUp(self):
         super(TestRunCommands, self).setUp()
@@ -234,8 +235,10 @@ class TestHandle(CiTestCase):
             'ubuntu-advantage': {'commands': ['echo "HI" >> %s' % outfile,
                                               'echo "MOM" >> %s' % outfile]}}
         mock_path = '%s.sys.stderr' % MPATH
-        with mock.patch(mock_path, new_callable=StringIO):
-            handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None)
+        with self.allow_subp([CiTestCase.SUBP_SHELL_TRUE]):
+            with mock.patch(mock_path, new_callable=StringIO):
+                handle('nomatter', cfg=cfg, cloud=None, log=self.logger,
+                       args=None)
         self.assertEqual('HI\nMOM\n', util.load_file(outfile))
 
 
diff --git a/cloudinit/config/tests/test_users_groups.py b/cloudinit/config/tests/test_users_groups.py
new file mode 100644
index 0000000..ba0afae
--- /dev/null
+++ b/cloudinit/config/tests/test_users_groups.py
@@ -0,0 +1,144 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+
+from cloudinit.config import cc_users_groups
+from cloudinit.tests.helpers import CiTestCase, mock
+
+MODPATH = "cloudinit.config.cc_users_groups"
+
+
+@mock.patch('cloudinit.distros.ubuntu.Distro.create_group')
+@mock.patch('cloudinit.distros.ubuntu.Distro.create_user')
+class TestHandleUsersGroups(CiTestCase):
+    """Test cc_users_groups handling of config."""
+
+    with_logs = True
+
+    def test_handle_no_cfg_creates_no_users_or_groups(self, m_user, m_group):
+        """Test handle with no config will not create users or groups."""
+        cfg = {}  # merged cloud-config
+        # System config defines a default user for the distro.
+        sys_cfg = {'default_user': {'name': 'ubuntu', 'lock_passwd': True,
+                                    'groups': ['lxd', 'sudo'],
+                                    'shell': '/bin/bash'}}
+        metadata = {}
+        cloud = self.tmp_cloud(
+            distro='ubuntu', sys_cfg=sys_cfg, metadata=metadata)
+        cc_users_groups.handle('modulename', cfg, cloud, None, None)
+        m_user.assert_not_called()
+        m_group.assert_not_called()
+
+    def test_handle_users_in_cfg_calls_create_users(self, m_user, m_group):
+        """When users in config, create users with distro.create_user."""
+        cfg = {'users': ['default', {'name': 'me2'}]}  # merged cloud-config
+        # System config defines a default user for the distro.
+        sys_cfg = {'default_user': {'name': 'ubuntu', 'lock_passwd': True,
+                                    'groups': ['lxd', 'sudo'],
+                                    'shell': '/bin/bash'}}
+        metadata = {}
+        cloud = self.tmp_cloud(
+            distro='ubuntu', sys_cfg=sys_cfg, metadata=metadata)
+        cc_users_groups.handle('modulename', cfg, cloud, None, None)
+        self.assertItemsEqual(
+            m_user.call_args_list,
+            [mock.call('ubuntu', groups='lxd,sudo', lock_passwd=True,
+                       shell='/bin/bash'),
+             mock.call('me2', default=False)])
+        m_group.assert_not_called()
+
+    def test_users_with_ssh_redirect_user_passes_keys(self, m_user, m_group):
+        """When ssh_redirect_user is True, pass default user and cloud keys."""
+        cfg = {
+            'users': ['default', {'name': 'me2', 'ssh_redirect_user': True}]}
+        # System config defines a default user for the distro.
+        sys_cfg = {'default_user': {'name': 'ubuntu', 'lock_passwd': True,
+                                    'groups': ['lxd', 'sudo'],
+                                    'shell': '/bin/bash'}}
+        metadata = {'public-keys': ['key1']}
+        cloud = self.tmp_cloud(
+            distro='ubuntu', sys_cfg=sys_cfg, metadata=metadata)
+        cc_users_groups.handle('modulename', cfg, cloud, None, None)
+        self.assertItemsEqual(
+            m_user.call_args_list,
+            [mock.call('ubuntu', groups='lxd,sudo', lock_passwd=True,
+                       shell='/bin/bash'),
+             mock.call('me2', cloud_public_ssh_keys=['key1'], default=False,
+                       ssh_redirect_user='ubuntu')])
+        m_group.assert_not_called()
+
+    def test_users_with_ssh_redirect_user_default_str(self, m_user, m_group):
+        """When ssh_redirect_user is 'default', pass default username."""
+        cfg = {
+            'users': ['default', {'name': 'me2',
+                                  'ssh_redirect_user': 'default'}]}
+        # System config defines a default user for the distro.
+        sys_cfg = {'default_user': {'name': 'ubuntu', 'lock_passwd': True,
+                                    'groups': ['lxd', 'sudo'],
+                                    'shell': '/bin/bash'}}
+        metadata = {'public-keys': ['key1']}
+        cloud = self.tmp_cloud(
+            distro='ubuntu', sys_cfg=sys_cfg, metadata=metadata)
+        cc_users_groups.handle('modulename', cfg, cloud, None, None)
+        self.assertItemsEqual(
+            m_user.call_args_list,
+            [mock.call('ubuntu', groups='lxd,sudo', lock_passwd=True,
+                       shell='/bin/bash'),
+             mock.call('me2', cloud_public_ssh_keys=['key1'], default=False,
+                       ssh_redirect_user='ubuntu')])
+        m_group.assert_not_called()
+
+    def test_users_with_ssh_redirect_user_non_default(self, m_user, m_group):
+        """Warn when ssh_redirect_user is not 'default'."""
+        cfg = {
+            'users': ['default', {'name': 'me2',
+                                  'ssh_redirect_user': 'snowflake'}]}
+        # System config defines a default user for the distro.
+        sys_cfg = {'default_user': {'name': 'ubuntu', 'lock_passwd': True,
+                                    'groups': ['lxd', 'sudo'],
+                                    'shell': '/bin/bash'}}
+        metadata = {'public-keys': ['key1']}
+        cloud = self.tmp_cloud(
+            distro='ubuntu', sys_cfg=sys_cfg, metadata=metadata)
+        with self.assertRaises(ValueError) as context_manager:
+            cc_users_groups.handle('modulename', cfg, cloud, None, None)
+        m_group.assert_not_called()
+        self.assertEqual(
+            'Not creating user me2. Invalid value of ssh_redirect_user:'
+            ' snowflake. Expected values: true, default or false.',
+            str(context_manager.exception))
+
+    def test_users_with_ssh_redirect_user_default_false(self, m_user, m_group):
+        """When unspecified, ssh_redirect_user is false and not set up."""
+        cfg = {'users': ['default', {'name': 'me2'}]}
+        # System config defines a default user for the distro.
+        sys_cfg = {'default_user': {'name': 'ubuntu', 'lock_passwd': True,
+                                    'groups': ['lxd', 'sudo'],
+                                    'shell': '/bin/bash'}}
+        metadata = {'public-keys': ['key1']}
+        cloud = self.tmp_cloud(
+            distro='ubuntu', sys_cfg=sys_cfg, metadata=metadata)
+        cc_users_groups.handle('modulename', cfg, cloud, None, None)
+        self.assertItemsEqual(
+            m_user.call_args_list,
+            [mock.call('ubuntu', groups='lxd,sudo', lock_passwd=True,
+                       shell='/bin/bash'),
+             mock.call('me2', default=False)])
+        m_group.assert_not_called()
+
+    def test_users_ssh_redirect_user_and_no_default(self, m_user, m_group):
+        """Warn when ssh_redirect_user is True and no default user present."""
+        cfg = {
+            'users': ['default', {'name': 'me2', 'ssh_redirect_user': True}]}
+        # System config defines *no* default user for the distro.
+        sys_cfg = {}
+        metadata = {}  # no public-keys defined
+        cloud = self.tmp_cloud(
+            distro='ubuntu', sys_cfg=sys_cfg, metadata=metadata)
+        cc_users_groups.handle('modulename', cfg, cloud, None, None)
+        m_user.assert_called_once_with('me2', default=False)
+        m_group.assert_not_called()
+        self.assertEqual(
+            'WARNING: Ignoring ssh_redirect_user: True for me2. No'
+            ' default_user defined. Perhaps missing'
+            ' cloud configuration users:  [default, ..].\n',
+            self.logs.getvalue())
diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
old mode 100755
new mode 100644
index ab0b077..ef618c2
--- a/cloudinit/distros/__init__.py
+++ b/cloudinit/distros/__init__.py
@@ -74,11 +74,10 @@ class Distro(object):
     def install_packages(self, pkglist):
         raise NotImplementedError()
 
-    @abc.abstractmethod
     def _write_network(self, settings):
-        # In the future use the http://fedorahosted.org/netcf/
-        # to write this blob out in a distro format
-        raise NotImplementedError()
+        raise RuntimeError(
+            "Legacy function '_write_network' was called in distro '%s'.\n"
+            "_write_network_config needs implementation.\n" % self.name)
 
     def _write_network_config(self, settings):
         raise NotImplementedError()
@@ -91,7 +90,7 @@ class Distro(object):
         LOG.debug("Selected renderer '%s' from priority list: %s",
                   name, priority)
         renderer = render_cls(config=self.renderer_configs.get(name))
-        renderer.render_network_config(network_config=network_config)
+        renderer.render_network_config(network_config)
         return []
 
     def _find_tz_file(self, tz):
@@ -144,7 +143,11 @@ class Distro(object):
         # this applies network where 'settings' is interfaces(5) style
         # it is obsolete compared to apply_network_config
         # Write it out
+
+        # pylint: disable=assignment-from-no-return
+        # We have implementations in arch, freebsd and gentoo still
         dev_names = self._write_network(settings)
+        # pylint: enable=assignment-from-no-return
         # Now try to bring them up
         if bring_up:
             return self._bring_up_interfaces(dev_names)
@@ -157,7 +160,7 @@ class Distro(object):
                     distro)
         header = '\n'.join([
             "# Converted from network_config for distro %s" % distro,
-            "# Implmentation of _write_network_config is needed."
+            "# Implementation of _write_network_config is needed."
         ])
         ns = network_state.parse_net_config_data(netconfig)
         contents = eni.network_state_to_eni(
@@ -381,6 +384,9 @@ class Distro(object):
         """
         Add a user to the system using standard GNU tools
         """
+        # XXX need to make add_user idempotent somehow as we
+        # still want to add groups or modify ssh keys on pre-existing
+        # users in the image.
         if util.is_user(name):
             LOG.info("User %s already exists, skipping.", name)
             return
@@ -547,10 +553,24 @@ class Distro(object):
                     LOG.warning("Invalid type '%s' detected for"
                                 " 'ssh_authorized_keys', expected list,"
                                 " string, dict, or set.", type(keys))
+                    keys = []
                 else:
                     keys = set(keys) or []
-                    ssh_util.setup_user_keys(keys, name, options=None)
-
+            ssh_util.setup_user_keys(set(keys), name)
+        if 'ssh_redirect_user' in kwargs:
+            cloud_keys = kwargs.get('cloud_public_ssh_keys', [])
+            if not cloud_keys:
+                LOG.warning(
+                    'Unable to disable ssh logins for %s given'
+                    ' ssh_redirect_user: %s. No cloud public-keys present.',
+                    name, kwargs['ssh_redirect_user'])
+            else:
+                redirect_user = kwargs['ssh_redirect_user']
+                disable_option = ssh_util.DISABLE_USER_OPTS
+                disable_option = disable_option.replace('$USER', redirect_user)
+                disable_option = disable_option.replace('$DISABLE_USER', name)
+                ssh_util.setup_user_keys(
+                    set(cloud_keys), name, options=disable_option)
         return True
 
     def lock_passwd(self, name):
diff --git a/cloudinit/distros/debian.py b/cloudinit/distros/debian.py
index 33cc0bf..d517fb8 100644
--- a/cloudinit/distros/debian.py
+++ b/cloudinit/distros/debian.py
@@ -109,11 +109,6 @@ class Distro(distros.Distro):
         self.update_package_sources()
         self.package_command('install', pkgs=pkglist)
 
-    def _write_network(self, settings):
-        # this is a legacy method, it will always write eni
-        util.write_file(self.network_conf_fn["eni"], settings)
-        return ['all']
-
     def _write_network_config(self, netconfig):
         _maybe_remove_legacy_eth0()
         return self._supported_write_network_config(netconfig)
diff --git a/cloudinit/distros/net_util.py b/cloudinit/distros/net_util.py
index 1ce1aa7..edfcd99 100644
--- a/cloudinit/distros/net_util.py
+++ b/cloudinit/distros/net_util.py
@@ -67,6 +67,10 @@
 #     }
 # }
 
+from cloudinit.net.network_state import (
+    net_prefix_to_ipv4_mask, mask_and_ipv4_to_bcast_addr)
+
+
 def translate_network(settings):
     # Get the standard cmd, args from the ubuntu format
     entries = []
@@ -134,6 +138,21 @@ def translate_network(settings):
                     val = info[k].strip().lower()
                     if val:
                         iface_info[k] = val
+            # handle static ip configurations using
+            # ipaddress/prefix-length format
+            if 'address' in iface_info:
+                if 'netmask' not in iface_info:
+                    # check if the address has a network prefix
+                    addr, _, prefix = iface_info['address'].partition('/')
+                    if prefix:
+                        iface_info['netmask'] = (
+                            net_prefix_to_ipv4_mask(prefix))
+                        iface_info['address'] = addr
+                        # if we set the netmask, we also can set the broadcast
+                        iface_info['broadcast'] = (
+                            mask_and_ipv4_to_bcast_addr(
+                                iface_info['netmask'], addr))
+
             # Name server info provided??
             if 'dns-nameservers' in info:
                 iface_info['dns-nameservers'] = info['dns-nameservers'].split()
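The new branch lets interfaces(5)-style configs carry an address in
address/prefix-length form, with the helpers imported from network_state
filling in the derived values. Roughly:

    from cloudinit.net.network_state import (
        net_prefix_to_ipv4_mask, mask_and_ipv4_to_bcast_addr)

    addr, _, prefix = '192.168.1.5/24'.partition('/')
    netmask = net_prefix_to_ipv4_mask(prefix)            # '255.255.255.0'
    bcast = mask_and_ipv4_to_bcast_addr(netmask, addr)   # '192.168.1.255'
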
diff --git a/cloudinit/distros/opensuse.py b/cloudinit/distros/opensuse.py
index 9f90e95..1bfe047 100644
--- a/cloudinit/distros/opensuse.py
+++ b/cloudinit/distros/opensuse.py
@@ -16,7 +16,6 @@ from cloudinit import helpers
 from cloudinit import log as logging
 from cloudinit import util
 
-from cloudinit.distros import net_util
 from cloudinit.distros import rhel_util as rhutil
 from cloudinit.settings import PER_INSTANCE
 
@@ -28,13 +27,23 @@ class Distro(distros.Distro):
     hostname_conf_fn = '/etc/HOSTNAME'
     init_cmd = ['service']
     locale_conf_fn = '/etc/sysconfig/language'
-    network_conf_fn = '/etc/sysconfig/network'
+    network_conf_fn = '/etc/sysconfig/network/config'
     network_script_tpl = '/etc/sysconfig/network/ifcfg-%s'
     resolve_conf_fn = '/etc/resolv.conf'
     route_conf_tpl = '/etc/sysconfig/network/ifroute-%s'
     systemd_hostname_conf_fn = '/etc/hostname'
     systemd_locale_conf_fn = '/etc/locale.conf'
     tz_local_fn = '/etc/localtime'
+    renderer_configs = {
+        'sysconfig': {
+            'control': 'etc/sysconfig/network/config',
+            'iface_templates': '%(base)s/network/ifcfg-%(name)s',
+            'route_templates': {
+                'ipv4': '%(base)s/network/ifroute-%(name)s',
+                'ipv6': '%(base)s/network/ifroute-%(name)s',
+            }
+        }
+    }
 
     def __init__(self, name, cfg, paths):
         distros.Distro.__init__(self, name, cfg, paths)
@@ -162,51 +171,8 @@ class Distro(distros.Distro):
             conf.set_hostname(hostname)
             util.write_file(out_fn, str(conf), 0o644)
 
-    def _write_network(self, settings):
-        # Convert debian settings to ifcfg format
-        entries = net_util.translate_network(settings)
-        LOG.debug("Translated ubuntu style network settings %s into %s",
-                  settings, entries)
-        # Make the intermediate format as the suse format...
-        nameservers = []
-        searchservers = []
-        dev_names = entries.keys()
-        for (dev, info) in entries.items():
-            net_fn = self.network_script_tpl % (dev)
-            route_fn = self.route_conf_tpl % (dev)
-            mode = None
-            if info.get('auto', None):
-                mode = 'auto'
-            else:
-                mode = 'manual'
-            bootproto = info.get('bootproto', None)
-            gateway = info.get('gateway', None)
-            net_cfg = {
-                'BOOTPROTO': bootproto,
-                'BROADCAST': info.get('broadcast'),
-                'GATEWAY': gateway,
-                'IPADDR': info.get('address'),
-                'LLADDR': info.get('hwaddress'),
-                'NETMASK': info.get('netmask'),
-                'STARTMODE': mode,
-                'USERCONTROL': 'no'
-            }
-            if dev != 'lo':
-                net_cfg['ETHTOOL_OPTIONS'] = ''
-            else:
-                net_cfg['FIREWALL'] = 'no'
-            rhutil.update_sysconfig_file(net_fn, net_cfg, True)
-            if gateway and bootproto == 'static':
-                default_route = 'default    %s' % gateway
-                util.write_file(route_fn, default_route, 0o644)
-            if 'dns-nameservers' in info:
-                nameservers.extend(info['dns-nameservers'])
-            if 'dns-search' in info:
-                searchservers.extend(info['dns-search'])
-        if nameservers or searchservers:
-            rhutil.update_resolve_conf_file(self.resolve_conf_fn,
-                                            nameservers, searchservers)
-        return dev_names
+    def _write_network_config(self, netconfig):
+        return self._supported_write_network_config(netconfig)
 
     @property
     def preferred_ntp_clients(self):
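The renderer_configs entries are %-style templates; the sysconfig renderer
presumably interpolates the target base directory and interface name, along
the lines of:

    iface_template = '%(base)s/network/ifcfg-%(name)s'
    iface_template % {'base': '/etc/sysconfig', 'name': 'eth0'}
    # -> '/etc/sysconfig/network/ifcfg-eth0'
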
diff --git a/cloudinit/distros/rhel.py b/cloudinit/distros/rhel.py
index 1fecb61..f55d96f 100644
--- a/cloudinit/distros/rhel.py
+++ b/cloudinit/distros/rhel.py
@@ -13,7 +13,6 @@ from cloudinit import helpers
 from cloudinit import log as logging
 from cloudinit import util
 
-from cloudinit.distros import net_util
 from cloudinit.distros import rhel_util
 from cloudinit.settings import PER_INSTANCE
 
@@ -39,6 +38,16 @@ class Distro(distros.Distro):
     resolve_conf_fn = "/etc/resolv.conf"
     tz_local_fn = "/etc/localtime"
     usr_lib_exec = "/usr/libexec"
+    renderer_configs = {
+        'sysconfig': {
+            'control': 'etc/sysconfig/network',
+            'iface_templates': '%(base)s/network-scripts/ifcfg-%(name)s',
+            'route_templates': {
+                'ipv4': '%(base)s/network-scripts/route-%(name)s',
+                'ipv6': '%(base)s/network-scripts/route6-%(name)s'
+            }
+        }
+    }
 
     def __init__(self, name, cfg, paths):
         distros.Distro.__init__(self, name, cfg, paths)
@@ -55,54 +64,6 @@ class Distro(distros.Distro):
     def _write_network_config(self, netconfig):
         return self._supported_write_network_config(netconfig)
 
-    def _write_network(self, settings):
-        # TODO(harlowja) fix this... since this is the ubuntu format
-        entries = net_util.translate_network(settings)
-        LOG.debug("Translated ubuntu style network settings %s into %s",
-                  settings, entries)
-        # Make the intermediate format as the rhel format...
-        nameservers = []
-        searchservers = []
-        dev_names = entries.keys()
-        use_ipv6 = False
-        for (dev, info) in entries.items():
-            net_fn = self.network_script_tpl % (dev)
-            net_cfg = {
-                'DEVICE': dev,
-                'NETMASK': info.get('netmask'),
-                'IPADDR': info.get('address'),
-                'BOOTPROTO': info.get('bootproto'),
-                'GATEWAY': info.get('gateway'),
-                'BROADCAST': info.get('broadcast'),
-                'MACADDR': info.get('hwaddress'),
-                'ONBOOT': _make_sysconfig_bool(info.get('auto')),
-            }
-            if info.get('inet6'):
-                use_ipv6 = True
-                net_cfg.update({
-                    'IPV6INIT': _make_sysconfig_bool(True),
-                    'IPV6ADDR': info.get('ipv6').get('address'),
-                    'IPV6_DEFAULTGW': info.get('ipv6').get('gateway'),
-                })
-            rhel_util.update_sysconfig_file(net_fn, net_cfg)
-            if 'dns-nameservers' in info:
-                nameservers.extend(info['dns-nameservers'])
-            if 'dns-search' in info:
-                searchservers.extend(info['dns-search'])
-        if nameservers or searchservers:
-            rhel_util.update_resolve_conf_file(self.resolve_conf_fn,
-                                               nameservers, searchservers)
-        if dev_names:
-            net_cfg = {
-                'NETWORKING': _make_sysconfig_bool(True),
-            }
-            # If IPv6 interface present, enable ipv6 networking
-            if use_ipv6:
-                net_cfg['NETWORKING_IPV6'] = _make_sysconfig_bool(True)
-                net_cfg['IPV6_AUTOCONF'] = _make_sysconfig_bool(False)
-            rhel_util.update_sysconfig_file(self.network_conf_fn, net_cfg)
-        return dev_names
-
     def apply_locale(self, locale, out_fn=None):
         if self.uses_systemd():
             if not out_fn:
diff --git a/cloudinit/handlers/__init__.py b/cloudinit/handlers/__init__.py
index c3576c0..0db75af 100644
--- a/cloudinit/handlers/__init__.py
+++ b/cloudinit/handlers/__init__.py
@@ -41,7 +41,7 @@ PART_HANDLER_FN_TMPL = 'part-handler-%03d'
 # For parts without filenames
 PART_FN_TPL = 'part-%03d'
 
-# Different file beginnings to there content type
+# Different file beginnings to their content type
 INCLUSION_TYPES_MAP = {
     '#include': 'text/x-include-url',
     '#include-once': 'text/x-include-once-url',
@@ -52,6 +52,7 @@ INCLUSION_TYPES_MAP = {
     '#cloud-boothook': 'text/cloud-boothook',
     '#cloud-config-archive': 'text/cloud-config-archive',
     '#cloud-config-jsonp': 'text/cloud-config-jsonp',
+    '## template: jinja': 'text/jinja2',
 }
 
 # Sorted longest first
@@ -69,9 +70,13 @@ class Handler(object):
     def __repr__(self):
         return "%s: [%s]" % (type_utils.obj_name(self), self.list_types())
 
-    @abc.abstractmethod
     def list_types(self):
-        raise NotImplementedError()
+        # Each subclass must define the supported content prefixes it handles.
+        if not hasattr(self, 'prefixes'):
+            raise NotImplementedError('Missing prefixes subclass attribute')
+        else:
+            return [INCLUSION_TYPES_MAP[prefix]
+                    for prefix in getattr(self, 'prefixes')]
 
     @abc.abstractmethod
     def handle_part(self, *args, **kwargs):
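With list_types now derived from a class-level prefixes attribute, a handler
subclass only declares which content prefixes it accepts; each prefix must be
a key in INCLUSION_TYPES_MAP. A minimal hypothetical subclass:

    from cloudinit import handlers
    from cloudinit.settings import PER_ALWAYS

    class ExamplePartHandler(handlers.Handler):

        prefixes = ['#cloud-boothook']  # illustrative choice

        def __init__(self, paths, **_kwargs):
            handlers.Handler.__init__(self, PER_ALWAYS)

        def handle_part(self, data, ctype, filename, payload, frequency):
            pass  # consume the part
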
diff --git a/cloudinit/handlers/boot_hook.py b/cloudinit/handlers/boot_hook.py
index 057b4db..dca50a4 100644
--- a/cloudinit/handlers/boot_hook.py
+++ b/cloudinit/handlers/boot_hook.py
@@ -17,10 +17,13 @@ from cloudinit import util
 from cloudinit.settings import (PER_ALWAYS)
 
 LOG = logging.getLogger(__name__)
-BOOTHOOK_PREFIX = "#cloud-boothook"
 
 
 class BootHookPartHandler(handlers.Handler):
+
+    # The content prefixes this handler understands.
+    prefixes = ['#cloud-boothook']
+
     def __init__(self, paths, datasource, **_kwargs):
         handlers.Handler.__init__(self, PER_ALWAYS)
         self.boothook_dir = paths.get_ipath("boothooks")
@@ -28,16 +31,11 @@ class BootHookPartHandler(handlers.Handler):
         if datasource:
             self.instance_id = datasource.get_instance_id()
 
-    def list_types(self):
-        return [
-            handlers.type_from_starts_with(BOOTHOOK_PREFIX),
-        ]
-
     def _write_part(self, payload, filename):
         filename = util.clean_filename(filename)
         filepath = os.path.join(self.boothook_dir, filename)
         contents = util.strip_prefix_suffix(util.dos2unix(payload),
-                                            prefix=BOOTHOOK_PREFIX)
+                                            prefix=self.prefixes[0])
         util.write_file(filepath, contents.lstrip(), 0o700)
         return filepath
 
diff --git a/cloudinit/handlers/cloud_config.py b/cloudinit/handlers/cloud_config.py
index 178a5b9..99bf0e6 100644
--- a/cloudinit/handlers/cloud_config.py
+++ b/cloudinit/handlers/cloud_config.py
@@ -42,14 +42,12 @@ DEF_MERGERS = mergers.string_extract_mergers('dict(replace)+list()+str()')
 CLOUD_PREFIX = "#cloud-config"
 JSONP_PREFIX = "#cloud-config-jsonp"
 
-# The file header -> content types this module will handle.
-CC_TYPES = {
-    JSONP_PREFIX: handlers.type_from_starts_with(JSONP_PREFIX),
-    CLOUD_PREFIX: handlers.type_from_starts_with(CLOUD_PREFIX),
-}
-
 
 class CloudConfigPartHandler(handlers.Handler):
+
+    # The content prefixes this handler understands.
+    prefixes = [CLOUD_PREFIX, JSONP_PREFIX]
+
     def __init__(self, paths, **_kwargs):
         handlers.Handler.__init__(self, PER_ALWAYS, version=3)
         self.cloud_buf = None
@@ -58,9 +56,6 @@ class CloudConfigPartHandler(handlers.Handler):
             self.cloud_fn = paths.get_ipath(_kwargs["cloud_config_path"])
         self.file_names = []
 
-    def list_types(self):
-        return list(CC_TYPES.values())
-
     def _write_cloud_config(self):
         if not self.cloud_fn:
             return
@@ -138,7 +133,7 @@ class CloudConfigPartHandler(handlers.Handler):
             # First time through, merge with an empty dict...
             if self.cloud_buf is None or not self.file_names:
                 self.cloud_buf = {}
-            if ctype == CC_TYPES[JSONP_PREFIX]:
+            if ctype == handlers.INCLUSION_TYPES_MAP[JSONP_PREFIX]:
                 self._merge_patch(payload)
             else:
                 self._merge_part(payload, headers)
diff --git a/cloudinit/handlers/jinja_template.py b/cloudinit/handlers/jinja_template.py
new file mode 100644
index 0000000..3fa4097
--- /dev/null
+++ b/cloudinit/handlers/jinja_template.py
@@ -0,0 +1,137 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import os
+import re
+
+try:
+    from jinja2.exceptions import UndefinedError as JUndefinedError
+except ImportError:
+    # No jinja2 dependency
+    JUndefinedError = Exception
+
+from cloudinit import handlers
+from cloudinit import log as logging
+from cloudinit.sources import INSTANCE_JSON_FILE
+from cloudinit.templater import render_string, MISSING_JINJA_PREFIX
+from cloudinit.util import b64d, load_file, load_json, json_dumps
+
+from cloudinit.settings import PER_ALWAYS
+
+LOG = logging.getLogger(__name__)
+
+
+class JinjaTemplatePartHandler(handlers.Handler):
+
+    prefixes = ['## template: jinja']
+
+    def __init__(self, paths, **_kwargs):
+        handlers.Handler.__init__(self, PER_ALWAYS, version=3)
+        self.paths = paths
+        self.sub_handlers = {}
+        for handler in _kwargs.get('sub_handlers', []):
+            for ctype in handler.list_types():
+                self.sub_handlers[ctype] = handler
+
+    def handle_part(self, data, ctype, filename, payload, frequency, headers):
+        if ctype in handlers.CONTENT_SIGNALS:
+            return
+        jinja_json_file = os.path.join(self.paths.run_dir, INSTANCE_JSON_FILE)
+        rendered_payload = render_jinja_payload_from_file(
+            payload, filename, jinja_json_file)
+        if not rendered_payload:
+            return
+        subtype = handlers.type_from_starts_with(rendered_payload)
+        sub_handler = self.sub_handlers.get(subtype)
+        if not sub_handler:
+            LOG.warning(
+                'Ignoring jinja template for %s. Could not find supported'
+                ' sub-handler for type %s', filename, subtype)
+            return
+        if sub_handler.handler_version == 3:
+            sub_handler.handle_part(
+                data, ctype, filename, rendered_payload, frequency, headers)
+        elif sub_handler.handler_version == 2:
+            sub_handler.handle_part(
+                data, ctype, filename, rendered_payload, frequency)
+
+
+def render_jinja_payload_from_file(
+        payload, payload_fn, instance_data_file, debug=False):
+    """Render a jinja template payload sourcing variables from instance_data_file.
+
+    @param payload: String of jinja template content. Should begin with
+        ## template: jinja\n.
+    @param payload_fn: String representing the filename from which the payload
+        was read, used in error reporting. Generally in part-handling this is
+        'part-##'.
+    @param instance_data_file: A path to a json file containing variables that
+        will be used as jinja template variables.
+
+    @return: A string of jinja-rendered content with the jinja header removed.
+        Returns None on error.
+    """
+    instance_data = {}
+    rendered_payload = None
+    if not os.path.exists(instance_data_file):
+        raise RuntimeError(
+            'Cannot render jinja template vars. Instance data not yet'
+            ' present at %s' % instance_data_file)
+    instance_data = load_json(load_file(instance_data_file))
+    rendered_payload = render_jinja_payload(
+        payload, payload_fn, instance_data, debug)
+    if not rendered_payload:
+        return None
+    return rendered_payload
+
+
+def render_jinja_payload(payload, payload_fn, instance_data, debug=False):
+    instance_jinja_vars = convert_jinja_instance_data(
+        instance_data,
+        decode_paths=instance_data.get('base64-encoded-keys', []))
+    if debug:
+        LOG.debug('Converted jinja variables\n%s',
+                  json_dumps(instance_jinja_vars))
+    try:
+        rendered_payload = render_string(payload, instance_jinja_vars)
+    except (TypeError, JUndefinedError) as e:
+        LOG.warning(
+            'Ignoring jinja template for %s: %s', payload_fn, str(e))
+        return None
+    warnings = [
+        "'%s'" % var.replace(MISSING_JINJA_PREFIX, '')
+        for var in re.findall(
+            r'%s[^\s]+' % MISSING_JINJA_PREFIX, rendered_payload)]
+    if warnings:
+        LOG.warning(
+            "Could not render jinja template variables in file '%s': %s",
+            payload_fn, ', '.join(warnings))
+    return rendered_payload
+
+
+def convert_jinja_instance_data(data, prefix='', sep='/', decode_paths=()):
+    """Process instance-data.json dict for use in jinja templates.
+
+    Replace hyphens with underscores for jinja templates and decode any
+    base64_encoded_keys.
+    """
+    result = {}
+    decode_paths = [path.replace('-', '_') for path in decode_paths]
+    for key, value in sorted(data.items()):
+        if '-' in key:
+            # Standardize keys for use in #cloud-config/shell templates
+            key = key.replace('-', '_')
+        key_path = '{0}{1}{2}'.format(prefix, sep, key) if prefix else key
+        if key_path in decode_paths:
+            value = b64d(value)
+        if isinstance(value, dict):
+            result[key] = convert_jinja_instance_data(
+                value, key_path, sep=sep, decode_paths=decode_paths)
+            if re.match(r'v\d+', key):
+                # Copy values to top-level aliases
+                for subkey, subvalue in result[key].items():
+                    result[subkey] = subvalue
+        else:
+            result[key] = value
+    return result
+
+# vi: ts=4 expandtab
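To illustrate convert_jinja_instance_data: hyphens become underscores, and
the values of any v<N> dict are aliased to the top level so templates can
reference {{ local_hostname }} instead of {{ v1.local_hostname }}:

    data = {'v1': {'local-hostname': 'me'}, 'base64-encoded-keys': []}
    convert_jinja_instance_data(data)
    # -> {'base64_encoded_keys': [],
    #     'v1': {'local_hostname': 'me'},
    #     'local_hostname': 'me'}
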
diff --git a/cloudinit/handlers/shell_script.py b/cloudinit/handlers/shell_script.py
index e4945a2..214714b 100644
--- a/cloudinit/handlers/shell_script.py
+++ b/cloudinit/handlers/shell_script.py
@@ -17,21 +17,18 @@ from cloudinit import util
 from cloudinit.settings import (PER_ALWAYS)
 
 LOG = logging.getLogger(__name__)
-SHELL_PREFIX = "#!"
 
 
 class ShellScriptPartHandler(handlers.Handler):
+
+    prefixes = ['#!']
+
     def __init__(self, paths, **_kwargs):
         handlers.Handler.__init__(self, PER_ALWAYS)
         self.script_dir = paths.get_ipath_cur('scripts')
         if 'script_path' in _kwargs:
             self.script_dir = paths.get_ipath_cur(_kwargs['script_path'])
 
-    def list_types(self):
-        return [
-            handlers.type_from_starts_with(SHELL_PREFIX),
-        ]
-
     def handle_part(self, data, ctype, filename, payload, frequency):
         if ctype in handlers.CONTENT_SIGNALS:
             # TODO(harlowja): maybe delete existing things here
diff --git a/cloudinit/handlers/upstart_job.py b/cloudinit/handlers/upstart_job.py
index dc33876..83fb072 100644
--- a/cloudinit/handlers/upstart_job.py
+++ b/cloudinit/handlers/upstart_job.py
@@ -18,19 +18,16 @@ from cloudinit import util
 from cloudinit.settings import (PER_INSTANCE)
 
 LOG = logging.getLogger(__name__)
-UPSTART_PREFIX = "#upstart-job"
 
 
 class UpstartJobPartHandler(handlers.Handler):
+
+    prefixes = ['#upstart-job']
+
     def __init__(self, paths, **_kwargs):
         handlers.Handler.__init__(self, PER_INSTANCE)
         self.upstart_dir = paths.upstart_conf_d
 
-    def list_types(self):
-        return [
-            handlers.type_from_starts_with(UPSTART_PREFIX),
-        ]
-
     def handle_part(self, data, ctype, filename, payload, frequency):
         if ctype in handlers.CONTENT_SIGNALS:
             return
diff --git a/cloudinit/helpers.py b/cloudinit/helpers.py
index 1979cd9..dcd2645 100644
--- a/cloudinit/helpers.py
+++ b/cloudinit/helpers.py
@@ -239,6 +239,10 @@ class ConfigMerger(object):
             if cc_fn and os.path.isfile(cc_fn):
                 try:
                     i_cfgs.append(util.read_conf(cc_fn))
+                except PermissionError:
+                    LOG.debug(
+                        'Skipped loading cloud-config from %s due to'
+                        ' non-root.', cc_fn)
                 except Exception:
                     util.logexc(LOG, 'Failed loading of cloud-config from %s',
                                 cc_fn)
@@ -449,4 +453,8 @@ class DefaultingConfigParser(RawConfigParser):
             contents = '\n'.join([header, contents, ''])
         return contents
 
+
+def identity(object):
+    return object
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/log.py b/cloudinit/log.py
index 1d75c9f..5ae312b 100644
--- a/cloudinit/log.py
+++ b/cloudinit/log.py
@@ -38,10 +38,18 @@ DEF_CON_FORMAT = '%(asctime)s - %(filename)s[%(levelname)s]: %(message)s'
 logging.Formatter.converter = time.gmtime
 
 
-def setupBasicLogging(level=DEBUG):
+def setupBasicLogging(level=DEBUG, formatter=None):
+    if not formatter:
+        formatter = logging.Formatter(DEF_CON_FORMAT)
     root = logging.getLogger()
+    for handler in root.handlers:
+        if hasattr(handler, 'stream') and hasattr(handler.stream, 'name'):
+            if handler.stream.name == '<stderr>':
+                handler.setLevel(level)
+                return
+    # Didn't have an existing stderr handler; create a new handler
     console = logging.StreamHandler(sys.stderr)
-    console.setFormatter(logging.Formatter(DEF_CON_FORMAT))
+    console.setFormatter(formatter)
     console.setLevel(level)
     root.addHandler(console)
     root.setLevel(level)
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index 3ffde52..f83d368 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -569,6 +569,20 @@ def get_interface_mac(ifname):
     return read_sys_net_safe(ifname, path)
 
 
+def get_ib_interface_hwaddr(ifname, ethernet_format):
+    """Returns the string value of an Infiniband interface's hardware
+    address. If ethernet_format is True, an Ethernet MAC-style 6 byte
+    representation of the address will be returned.
+    """
+    # Type 32 is Infiniband.
+    if read_sys_net_safe(ifname, 'type') == '32':
+        mac = get_interface_mac(ifname)
+        if mac and ethernet_format:
+            # Use bytes 13-15 and 18-20 of the hardware address.
+            mac = mac[36:-14] + mac[51:]
+        return mac
+
+
 def get_interfaces_by_mac():
     """Build a dictionary of tuples {mac: name}.
 
@@ -580,6 +594,15 @@ def get_interfaces_by_mac():
                 "duplicate mac found! both '%s' and '%s' have mac '%s'" %
                 (name, ret[mac], mac))
         ret[mac] = name
+        # Try to get an Infiniband hardware address (in 6 byte Ethernet format)
+        # for the interface.
+        ib_mac = get_ib_interface_hwaddr(name, True)
+        if ib_mac:
+            if ib_mac in ret:
+                raise RuntimeError(
+                    "duplicate mac found! both '%s' and '%s' have mac '%s'" %
+                    (name, ret[ib_mac], ib_mac))
+            ret[ib_mac] = name
     return ret
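The slicing in get_ib_interface_hwaddr above assumes the 20-byte
(59-character) IPoIB address string exposed by sysfs, keeping bytes 13-15 and
18-20 to form a 6-byte Ethernet-style address:

    ib = '80:00:00:48:fe:80:00:00:00:00:00:00:00:11:22:33:44:55:66:77'
    eth_style = ib[36:-14] + ib[51:]
    # '00:11:22:' + '55:66:77' -> '00:11:22:55:66:77'
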
 
 
@@ -607,6 +630,21 @@ def get_interfaces():
     return ret
 
 
+def get_ib_hwaddrs_by_interface():
+    """Build a dictionary mapping Infiniband interface names to their hardware
+    address."""
+    ret = {}
+    for name, _, _, _ in get_interfaces():
+        ib_mac = get_ib_interface_hwaddr(name, False)
+        if ib_mac:
+            if ib_mac in ret:
+                raise RuntimeError(
+                    "duplicate mac found! both '%s' and '%s' have mac '%s'" %
+                    (name, ret[ib_mac], ib_mac))
+            ret[name] = ib_mac
+    return ret
+
+
 class EphemeralIPv4Network(object):
     """Context manager which sets up temporary static network configuration.
 
@@ -698,6 +736,13 @@ class EphemeralIPv4Network(object):
                 self.interface, out.strip())
             return
         util.subp(
+            ['ip', '-4', 'route', 'add', self.router, 'dev', self.interface,
+             'src', self.ip], capture=True)
+        self.cleanup_cmds.insert(
+            0,
+            ['ip', '-4', 'route', 'del', self.router, 'dev', self.interface,
+             'src', self.ip])
+        util.subp(
             ['ip', '-4', 'route', 'add', 'default', 'via', self.router,
              'dev', self.interface], capture=True)
         self.cleanup_cmds.insert(
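
For reference: an IPoIB hardware address is 20 bytes (59 characters once
colon-separated), and get_ib_interface_hwaddr() keeps bytes 13-15 and 18-20
to form the Ethernet-style 6-byte MAC. A quick check of that slicing with an
illustrative address:

    ib_hwaddr = ('80:00:00:48:fe:80:00:00:00:00:'
                 '00:00:00:11:22:33:44:56:78:9a')
    assert len(ib_hwaddr) == 59
    eth_mac = ib_hwaddr[36:-14] + ib_hwaddr[51:]
    assert eth_mac == '00:11:22:56:78:9a'
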
diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py
index bd20a36..c6f631a 100644
--- a/cloudinit/net/eni.py
+++ b/cloudinit/net/eni.py
@@ -247,8 +247,15 @@ def _parse_deb_config_data(ifaces, contents, src_dir, src_path):
                 ifaces[currif]['bridge']['ports'] = []
                 for iface in split[1:]:
                     ifaces[currif]['bridge']['ports'].append(iface)
-            elif option == "bridge_hw" and split[1].lower() == "mac":
-                ifaces[currif]['bridge']['mac'] = split[2]
+            elif option == "bridge_hw":
+                # The docs are confusing, so some write a literal 'MAC':
+                #    bridge_hw MAC <address>
+                # but the correct form is:
+                #    bridge_hw <address>
+                if split[1].lower() == "mac":
+                    ifaces[currif]['bridge']['mac'] = split[2]
+                else:
+                    ifaces[currif]['bridge']['mac'] = split[1]
             elif option == "bridge_pathcost":
                 if 'pathcost' not in ifaces[currif]['bridge']:
                     ifaces[currif]['bridge']['pathcost'] = {}
@@ -473,7 +480,7 @@ class Renderer(renderer.Renderer):
 
         return '\n\n'.join(['\n'.join(s) for s in sections]) + "\n"
 
-    def render_network_state(self, network_state, target=None):
+    def render_network_state(self, network_state, templates=None, target=None):
         fpeni = util.target_path(target, self.eni_path)
         util.ensure_dir(os.path.dirname(fpeni))
         header = self.eni_header if self.eni_header else ""
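
With the bridge_hw change, both spellings parse to the same bridge MAC. A
self-contained sketch of the two accepted forms (address illustrative):

    for line in ('bridge_hw MAC 52:54:00:12:34:56',
                 'bridge_hw 52:54:00:12:34:56'):
        split = line.split()
        mac = split[2] if split[1].lower() == 'mac' else split[1]
        assert mac == '52:54:00:12:34:56'
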
diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py
index 4014363..bc1087f 100644
--- a/cloudinit/net/netplan.py
+++ b/cloudinit/net/netplan.py
@@ -189,7 +189,7 @@ class Renderer(renderer.Renderer):
         self._postcmds = config.get('postcmds', False)
         self.clean_default = config.get('clean_default', True)
 
-    def render_network_state(self, network_state, target):
+    def render_network_state(self, network_state, templates=None, target=None):
         # check network state for version
         # if v2, then extract network_state.config
         # else render_v2_from_state
@@ -291,6 +291,8 @@ class Renderer(renderer.Renderer):
 
                 if len(bond_config) > 0:
                     bond.update({'parameters': bond_config})
+                if ifcfg.get('mac_address'):
+                    bond['macaddress'] = ifcfg.get('mac_address').lower()
                 slave_interfaces = ifcfg.get('bond-slaves')
                 if slave_interfaces == 'none':
                     _extract_bond_slaves_by_name(interfaces, bond, ifname)
@@ -327,6 +329,8 @@ class Renderer(renderer.Renderer):
 
                 if len(br_config) > 0:
                     bridge.update({'parameters': br_config})
+                if ifcfg.get('mac_address'):
+                    bridge['macaddress'] = ifcfg.get('mac_address').lower()
                 _extract_addresses(ifcfg, bridge, ifname)
                 bridges.update({ifname: bridge})
 
diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
index 72c803e..f76e508 100644
--- a/cloudinit/net/network_state.py
+++ b/cloudinit/net/network_state.py
@@ -483,6 +483,10 @@ class NetworkStateInterpreter(object):
 
         interfaces.update({iface['name']: iface})
 
+    @ensure_command_keys(['name'])
+    def handle_infiniband(self, command):
+        self.handle_physical(command)
+
     @ensure_command_keys(['address'])
     def handle_nameserver(self, command):
         dns = self._network_state.get('dns')
diff --git a/cloudinit/net/renderer.py b/cloudinit/net/renderer.py
index 57652e2..5f32e90 100644
--- a/cloudinit/net/renderer.py
+++ b/cloudinit/net/renderer.py
@@ -45,11 +45,14 @@ class Renderer(object):
         return content.getvalue()
 
     @abc.abstractmethod
-    def render_network_state(self, network_state, target=None):
+    def render_network_state(self, network_state, templates=None,
+                             target=None):
         """Render network state."""
 
-    def render_network_config(self, network_config, target=None):
+    def render_network_config(self, network_config, templates=None,
+                              target=None):
         return self.render_network_state(
-            network_state=parse_net_config_data(network_config), target=target)
+            network_state=parse_net_config_data(network_config),
+            templates=templates, target=target)
 
 # vi: ts=4 expandtab
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 3d71923..9c16d3a 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -91,19 +91,20 @@ class ConfigMap(object):
 class Route(ConfigMap):
     """Represents a route configuration."""
 
-    route_fn_tpl_ipv4 = '%(base)s/network-scripts/route-%(name)s'
-    route_fn_tpl_ipv6 = '%(base)s/network-scripts/route6-%(name)s'
-
-    def __init__(self, route_name, base_sysconf_dir):
+    def __init__(self, route_name, base_sysconf_dir,
+                 ipv4_tpl, ipv6_tpl):
         super(Route, self).__init__()
         self.last_idx = 1
         self.has_set_default_ipv4 = False
         self.has_set_default_ipv6 = False
         self._route_name = route_name
         self._base_sysconf_dir = base_sysconf_dir
+        self.route_fn_tpl_ipv4 = ipv4_tpl
+        self.route_fn_tpl_ipv6 = ipv6_tpl
 
     def copy(self):
-        r = Route(self._route_name, self._base_sysconf_dir)
+        r = Route(self._route_name, self._base_sysconf_dir,
+                  self.route_fn_tpl_ipv4, self.route_fn_tpl_ipv6)
         r._conf = self._conf.copy()
         r.last_idx = self.last_idx
         r.has_set_default_ipv4 = self.has_set_default_ipv4
@@ -169,18 +170,23 @@ class Route(ConfigMap):
 class NetInterface(ConfigMap):
     """Represents a sysconfig/networking-script (and its config + children)."""
 
-    iface_fn_tpl = '%(base)s/network-scripts/ifcfg-%(name)s'
-
     iface_types = {
         'ethernet': 'Ethernet',
         'bond': 'Bond',
         'bridge': 'Bridge',
+        'infiniband': 'InfiniBand',
     }
 
-    def __init__(self, iface_name, base_sysconf_dir, kind='ethernet'):
+    def __init__(self, iface_name, base_sysconf_dir, templates,
+                 kind='ethernet'):
         super(NetInterface, self).__init__()
         self.children = []
-        self.routes = Route(iface_name, base_sysconf_dir)
+        self.templates = templates
+        route_tpl = self.templates.get('route_templates')
+        self.routes = Route(iface_name, base_sysconf_dir,
+                            ipv4_tpl=route_tpl.get('ipv4'),
+                            ipv6_tpl=route_tpl.get('ipv6'))
+        self.iface_fn_tpl = self.templates.get('iface_templates')
         self.kind = kind
 
         self._iface_name = iface_name
@@ -213,7 +219,8 @@ class NetInterface(ConfigMap):
                                      'name': self.name})
 
     def copy(self, copy_children=False, copy_routes=False):
-        c = NetInterface(self.name, self._base_sysconf_dir, kind=self._kind)
+        c = NetInterface(self.name, self._base_sysconf_dir,
+                         self.templates, kind=self._kind)
         c._conf = self._conf.copy()
         if copy_children:
             c.children = list(self.children)
@@ -251,6 +258,8 @@ class Renderer(renderer.Renderer):
         ('bridge_bridgeprio', 'PRIO'),
     ])
 
+    templates = {}
+
     def __init__(self, config=None):
         if not config:
             config = {}
@@ -261,6 +270,11 @@ class Renderer(renderer.Renderer):
         nm_conf_path = 'etc/NetworkManager/conf.d/99-cloud-init.conf'
         self.networkmanager_conf_path = config.get('networkmanager_conf_path',
                                                    nm_conf_path)
+        self.templates = {
+            'control': config.get('control'),
+            'iface_templates': config.get('iface_templates'),
+            'route_templates': config.get('route_templates'),
+        }
 
     @classmethod
     def _render_iface_shared(cls, iface, iface_cfg):
@@ -512,7 +526,7 @@ class Renderer(renderer.Renderer):
         return content_str
 
     @staticmethod
-    def _render_networkmanager_conf(network_state):
+    def _render_networkmanager_conf(network_state, templates=None):
         content = networkmanager_conf.NetworkManagerConf("")
 
         # If DNS server information is provided, configure
@@ -556,20 +570,36 @@ class Renderer(renderer.Renderer):
             cls._render_subnet_routes(iface_cfg, route_cfg, iface_subnets)
 
     @classmethod
-    def _render_sysconfig(cls, base_sysconf_dir, network_state):
+    def _render_ib_interfaces(cls, network_state, iface_contents):
+        ib_filter = renderer.filter_by_type('infiniband')
+        for iface in network_state.iter_interfaces(ib_filter):
+            iface_name = iface['name']
+            iface_cfg = iface_contents[iface_name]
+            iface_cfg.kind = 'infiniband'
+            iface_subnets = iface.get("subnets", [])
+            route_cfg = iface_cfg.routes
+            cls._render_subnets(iface_cfg, iface_subnets)
+            cls._render_subnet_routes(iface_cfg, route_cfg, iface_subnets)
+
+    @classmethod
+    def _render_sysconfig(cls, base_sysconf_dir, network_state,
+                          templates=None):
         '''Given state, return /etc/sysconfig files + contents'''
+        if not templates:
+            templates = cls.templates
         iface_contents = {}
         for iface in network_state.iter_interfaces():
             if iface['type'] == "loopback":
                 continue
             iface_name = iface['name']
-            iface_cfg = NetInterface(iface_name, base_sysconf_dir)
+            iface_cfg = NetInterface(iface_name, base_sysconf_dir, templates)
             cls._render_iface_shared(iface, iface_cfg)
             iface_contents[iface_name] = iface_cfg
         cls._render_physical_interfaces(network_state, iface_contents)
         cls._render_bond_interfaces(network_state, iface_contents)
         cls._render_vlan_interfaces(network_state, iface_contents)
         cls._render_bridge_interfaces(network_state, iface_contents)
+        cls._render_ib_interfaces(network_state, iface_contents)
         contents = {}
         for iface_name, iface_cfg in iface_contents.items():
             if iface_cfg or iface_cfg.children:
@@ -578,17 +608,21 @@ class Renderer(renderer.Renderer):
                     if iface_cfg:
                         contents[iface_cfg.path] = iface_cfg.to_string()
             if iface_cfg.routes:
-                contents[iface_cfg.routes.path_ipv4] = \
-                    iface_cfg.routes.to_string("ipv4")
-                contents[iface_cfg.routes.path_ipv6] = \
-                    iface_cfg.routes.to_string("ipv6")
+                for cpath, proto in zip([iface_cfg.routes.path_ipv4,
+                                         iface_cfg.routes.path_ipv6],
+                                        ["ipv4", "ipv6"]):
+                    if cpath not in contents:
+                        contents[cpath] = iface_cfg.routes.to_string(proto)
         return contents
 
-    def render_network_state(self, network_state, target=None):
+    def render_network_state(self, network_state, templates=None, target=None):
+        if not templates:
+            templates = self.templates
         file_mode = 0o644
         base_sysconf_dir = util.target_path(target, self.sysconf_dir)
         for path, data in self._render_sysconfig(base_sysconf_dir,
-                                                 network_state).items():
+                                                 network_state,
+                                                 templates=templates).items():
             util.write_file(path, data, file_mode)
         if self.dns_path:
             dns_path = util.target_path(target, self.dns_path)
@@ -598,7 +632,8 @@ class Renderer(renderer.Renderer):
         if self.networkmanager_conf_path:
             nm_conf_path = util.target_path(target,
                                             self.networkmanager_conf_path)
-            nm_conf_content = self._render_networkmanager_conf(network_state)
+            nm_conf_content = self._render_networkmanager_conf(network_state,
+                                                               templates)
             if nm_conf_content:
                 util.write_file(nm_conf_path, nm_conf_content, file_mode)
         if self.netrules_path:
@@ -606,13 +641,16 @@ class Renderer(renderer.Renderer):
             netrules_path = util.target_path(target, self.netrules_path)
             util.write_file(netrules_path, netrules_content, file_mode)
 
-        # always write /etc/sysconfig/network configuration
-        sysconfig_path = util.target_path(target, "etc/sysconfig/network")
-        netcfg = [_make_header(), 'NETWORKING=yes']
-        if network_state.use_ipv6:
-            netcfg.append('NETWORKING_IPV6=yes')
-            netcfg.append('IPV6_AUTOCONF=no')
-        util.write_file(sysconfig_path, "\n".join(netcfg) + "\n", file_mode)
+        sysconfig_path = util.target_path(target, templates.get('control'))
+        # Distros configuring /etc/sysconfig/network as a file, e.g. CentOS
+        if sysconfig_path.endswith('network'):
+            util.ensure_dir(os.path.dirname(sysconfig_path))
+            netcfg = [_make_header(), 'NETWORKING=yes']
+            if network_state.use_ipv6:
+                netcfg.append('NETWORKING_IPV6=yes')
+                netcfg.append('IPV6_AUTOCONF=no')
+            util.write_file(sysconfig_path,
+                            "\n".join(netcfg) + "\n", file_mode)
 
 
 def available(target=None):
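
The sysconfig renderer no longer hardcodes Red Hat paths; distros pass them
in via config. A sketch of a template mapping that reproduces the previous
built-in behaviour (paths taken from the class attributes removed above; key
names per the new Renderer.__init__):

    sysconfig_templates = {
        'control': 'etc/sysconfig/network',
        'iface_templates': '%(base)s/network-scripts/ifcfg-%(name)s',
        'route_templates': {
            'ipv4': '%(base)s/network-scripts/route-%(name)s',
            'ipv6': '%(base)s/network-scripts/route6-%(name)s',
        },
    }
    # e.g. Renderer(config=sysconfig_templates)
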
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 5c017d1..58e0a59 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -199,6 +199,8 @@ class TestGenerateFallbackConfig(CiTestCase):
         self.sysdir = self.tmp_dir() + '/'
         self.m_sys_path.return_value = self.sysdir
         self.addCleanup(sys_mock.stop)
+        self.add_patch('cloudinit.net.util.is_container', 'm_is_container',
+                       return_value=False)
         self.add_patch('cloudinit.net.util.udevadm_settle', 'm_settle')
 
     def test_generate_fallback_finds_connected_eth_with_mac(self):
@@ -513,12 +515,17 @@ class TestEphemeralIPV4Network(CiTestCase):
                 capture=True),
             mock.call(
                 ['ip', 'route', 'show', '0.0.0.0/0'], capture=True),
+            mock.call(['ip', '-4', 'route', 'add', '192.168.2.1',
+                       'dev', 'eth0', 'src', '192.168.2.2'], capture=True),
             mock.call(
                 ['ip', '-4', 'route', 'add', 'default', 'via',
                  '192.168.2.1', 'dev', 'eth0'], capture=True)]
-        expected_teardown_calls = [mock.call(
-            ['ip', '-4', 'route', 'del', 'default', 'dev', 'eth0'],
-            capture=True)]
+        expected_teardown_calls = [
+            mock.call(['ip', '-4', 'route', 'del', 'default', 'dev', 'eth0'],
+                      capture=True),
+            mock.call(['ip', '-4', 'route', 'del', '192.168.2.1',
+                       'dev', 'eth0', 'src', '192.168.2.2'], capture=True),
+        ]
 
         with net.EphemeralIPv4Network(**params):
             self.assertEqual(expected_setup_calls, m_subp.call_args_list)
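
The updated expectations encode an ordering constraint: the explicit host
route to the router must exist before the default route via that router can
be added, and teardown reverses the order. The command sequence, as lists of
argv (addresses as in the test):

    setup = [
        ['ip', '-4', 'route', 'add', '192.168.2.1', 'dev', 'eth0',
         'src', '192.168.2.2'],
        ['ip', '-4', 'route', 'add', 'default', 'via', '192.168.2.1',
         'dev', 'eth0'],
    ]
    teardown = [
        ['ip', '-4', 'route', 'del', 'default', 'dev', 'eth0'],
        ['ip', '-4', 'route', 'del', '192.168.2.1', 'dev', 'eth0',
         'src', '192.168.2.2'],
    ]
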
diff --git a/cloudinit/reporting/__init__.py b/cloudinit/reporting/__init__.py
index 1ed2b48..ed5c703 100644
--- a/cloudinit/reporting/__init__.py
+++ b/cloudinit/reporting/__init__.py
@@ -18,7 +18,7 @@ DEFAULT_CONFIG = {
 
 
 def update_configuration(config):
-    """Update the instanciated_handler_registry.
+    """Update the instantiated_handler_registry.
 
     :param config:
         The dictionary containing changes to apply.  If a key is given
@@ -37,6 +37,12 @@ def update_configuration(config):
         instantiated_handler_registry.register_item(handler_name, instance)
 
 
+def flush_events():
+    for _, handler in instantiated_handler_registry.registered_items.items():
+        if hasattr(handler, 'flush'):
+            handler.flush()
+
+
 instantiated_handler_registry = DictRegistry()
 update_configuration(DEFAULT_CONFIG)
 
diff --git a/cloudinit/reporting/handlers.py b/cloudinit/reporting/handlers.py
index 4066076..6d23558 100644
--- a/cloudinit/reporting/handlers.py
+++ b/cloudinit/reporting/handlers.py
@@ -1,17 +1,32 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import abc
+import fcntl
 import json
 import six
+import os
+import re
+import struct
+import threading
+import time
 
 from cloudinit import log as logging
 from cloudinit.registry import DictRegistry
 from cloudinit import (url_helper, util)
+from datetime import datetime
 
+if six.PY2:
+    from multiprocessing.queues import JoinableQueue as JQueue
+else:
+    from queue import Queue as JQueue
 
 LOG = logging.getLogger(__name__)
 
 
+class ReportException(Exception):
+    pass
+
+
 @six.add_metaclass(abc.ABCMeta)
 class ReportingHandler(object):
     """Base class for report handlers.
@@ -24,6 +39,10 @@ class ReportingHandler(object):
     def publish_event(self, event):
         """Publish an event."""
 
+    def flush(self):
+        """Ensure ReportingHandler has published all events"""
+        pass
+
 
 class LogHandler(ReportingHandler):
     """Publishes events to the cloud-init log at the ``DEBUG`` log level."""
@@ -85,9 +104,236 @@ class WebHookHandler(ReportingHandler):
             LOG.warning("failed posting event: %s", event.as_string())
 
 
+class HyperVKvpReportingHandler(ReportingHandler):
+    """
+    Reports events to a Hyper-V host using the Key-Value-Pair exchange
+    protocol; it can be used to obtain high-level diagnostics from the host.
+
+    To use this facility, the KVP user-space daemon (hv_kvp_daemon) has to be
+    running. It reads the kvp_file when the host requests the guest to
+    enumerate the KVPs.
+
+    This reporter collates all events for a module (origin|name) into a
+    single json string in the dictionary.
+
+    For more information, see
+    https://technet.microsoft.com/en-us/library/dn798287.aspx#Linux%20guests
+    """
+    HV_KVP_EXCHANGE_MAX_VALUE_SIZE = 2048
+    HV_KVP_EXCHANGE_MAX_KEY_SIZE = 512
+    HV_KVP_RECORD_SIZE = (HV_KVP_EXCHANGE_MAX_KEY_SIZE +
+                          HV_KVP_EXCHANGE_MAX_VALUE_SIZE)
+    EVENT_PREFIX = 'CLOUD_INIT'
+    MSG_KEY = 'msg'
+    RESULT_KEY = 'result'
+    DESC_IDX_KEY = 'msg_i'
+    JSON_SEPARATORS = (',', ':')
+    KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1'
+
+    def __init__(self,
+                 kvp_file_path=KVP_POOL_FILE_GUEST,
+                 event_types=None):
+        super(HyperVKvpReportingHandler, self).__init__()
+        self._kvp_file_path = kvp_file_path
+        self._event_types = event_types
+        self.q = JQueue()
+        self.kvp_file = None
+        self.incarnation_no = self._get_incarnation_no()
+        self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX,
+                                                  self.incarnation_no)
+        self._current_offset = 0
+        self.publish_thread = threading.Thread(
+                target=self._publish_event_routine)
+        self.publish_thread.daemon = True
+        self.publish_thread.start()
+
+    def _get_incarnation_no(self):
+        """
+        Use the boot time as the incarnation number.
+        The incarnation number is used to distinguish old data
+        stored in kvp from new data.
+        """
+        uptime_str = util.uptime()
+        try:
+            return int(time.time() - float(uptime_str))
+        except ValueError:
+            LOG.warning("uptime '%s' not in correct format.", uptime_str)
+            return 0
+
+    def _iterate_kvps(self, offset):
+        """iterate the kvp file from the current offset."""
+        try:
+            with open(self._kvp_file_path, 'rb+') as f:
+                self.kvp_file = f
+                fcntl.flock(f, fcntl.LOCK_EX)
+                f.seek(offset)
+                record_data = f.read(self.HV_KVP_RECORD_SIZE)
+                while len(record_data) == self.HV_KVP_RECORD_SIZE:
+                    self._current_offset += self.HV_KVP_RECORD_SIZE
+                    kvp_item = self._decode_kvp_item(record_data)
+                    yield kvp_item
+                    record_data = f.read(self.HV_KVP_RECORD_SIZE)
+                fcntl.flock(f, fcntl.LOCK_UN)
+        finally:
+            self.kvp_file = None
+
+    def _event_key(self, event):
+        """
+        the event key format is:
+        CLOUD_INIT|<incarnation number>|<event_type>|<event_name>
+        """
+        return u"{0}|{1}|{2}".format(self.event_key_prefix,
+                                     event.event_type, event.name)
+
+    def _encode_kvp_item(self, key, value):
+        data = (struct.pack("%ds%ds" % (
+                self.HV_KVP_EXCHANGE_MAX_KEY_SIZE,
+                self.HV_KVP_EXCHANGE_MAX_VALUE_SIZE),
+                key.encode('utf-8'), value.encode('utf-8')))
+        return data
+
+    def _decode_kvp_item(self, record_data):
+        record_data_len = len(record_data)
+        if record_data_len != self.HV_KVP_RECORD_SIZE:
+            raise ReportException(
+                "record_data len not correct: got {0}, expected {1}."
+                .format(record_data_len, self.HV_KVP_RECORD_SIZE))
+        k = (record_data[0:self.HV_KVP_EXCHANGE_MAX_KEY_SIZE].decode('utf-8')
+                                                             .strip('\x00'))
+        v = (
+            record_data[
+                self.HV_KVP_EXCHANGE_MAX_KEY_SIZE:self.HV_KVP_RECORD_SIZE
+                ].decode('utf-8').strip('\x00'))
+
+        return {'key': k, 'value': v}
+
+    def _update_kvp_item(self, record_data):
+        if self.kvp_file is None:
+            raise ReportException(
+                "kvp file '{0}' not opened."
+                .format(self._kvp_file_path))
+        self.kvp_file.seek(-self.HV_KVP_RECORD_SIZE, 1)
+        self.kvp_file.write(record_data)
+
+    def _append_kvp_item(self, record_data):
+        with open(self._kvp_file_path, 'rb+') as f:
+            fcntl.flock(f, fcntl.LOCK_EX)
+            # seek to end of the file
+            f.seek(0, 2)
+            f.write(record_data)
+            f.flush()
+            fcntl.flock(f, fcntl.LOCK_UN)
+            self._current_offset = f.tell()
+
+    def _break_down(self, key, meta_data, description):
+        del meta_data[self.MSG_KEY]
+        des_in_json = json.dumps(description)
+        des_in_json = des_in_json[1:(len(des_in_json) - 1)]
+        i = 0
+        result_array = []
+        message_place_holder = "\"" + self.MSG_KEY + "\":\"\""
+        while True:
+            meta_data[self.DESC_IDX_KEY] = i
+            meta_data[self.MSG_KEY] = ''
+            data_without_desc = json.dumps(meta_data,
+                                           separators=self.JSON_SEPARATORS)
+            room_for_desc = (
+                self.HV_KVP_EXCHANGE_MAX_VALUE_SIZE -
+                len(data_without_desc) - 8)
+            value = data_without_desc.replace(
+                message_place_holder,
+                '"{key}":"{desc}"'.format(
+                    key=self.MSG_KEY, desc=des_in_json[:room_for_desc]))
+            result_array.append(self._encode_kvp_item(key, value))
+            i += 1
+            des_in_json = des_in_json[room_for_desc:]
+            if len(des_in_json) == 0:
+                break
+        return result_array
+
+    def _encode_event(self, event):
+        """
+        Encode the event into kvp data bytes.
+        If the event content exceeds the maximum kvp value length,
+        it is split into multiple slices.
+        """
+        key = self._event_key(event)
+        meta_data = {
+                "name": event.name,
+                "type": event.event_type,
+                "ts": (datetime.utcfromtimestamp(event.timestamp)
+                       .isoformat() + 'Z'),
+                }
+        if hasattr(event, self.RESULT_KEY):
+            meta_data[self.RESULT_KEY] = event.result
+        meta_data[self.MSG_KEY] = event.description
+        value = json.dumps(meta_data, separators=self.JSON_SEPARATORS)
+        # If it exceeds the maximum length of a kvp value,
+        # break it down into slices.
+        # This should be a rare corner case.
+        if len(value) > self.HV_KVP_EXCHANGE_MAX_VALUE_SIZE:
+            return self._break_down(key, meta_data, event.description)
+        else:
+            data = self._encode_kvp_item(key, value)
+            return [data]
+
+    def _publish_event_routine(self):
+        while True:
+            try:
+                event = self.q.get(block=True)
+                need_append = True
+                try:
+                    if not os.path.exists(self._kvp_file_path):
+                        LOG.warning(
+                            "skip writing events %s to %s. file not present.",
+                            event.as_string(),
+                            self._kvp_file_path)
+                    encoded_event = self._encode_event(event)
+                    # Write each encoded chunk of the event.
+                    for encoded_data in encoded_event:
+                        for kvp in self._iterate_kvps(self._current_offset):
+                            match = (
+                                re.match(
+                                    r"^{0}\|(\d+)\|.+"
+                                    .format(self.EVENT_PREFIX),
+                                    kvp['key']
+                                ))
+                            if match:
+                                match_groups = match.groups(0)
+                                if int(match_groups[0]) < self.incarnation_no:
+                                    need_append = False
+                                    self._update_kvp_item(encoded_data)
+                                    continue
+                        if need_append:
+                            self._append_kvp_item(encoded_data)
+                except IOError as e:
+                    LOG.warning(
+                        "failed posting event to kvp: %s e:%s",
+                        event.as_string(), e)
+                finally:
+                    self.q.task_done()
+
+            # when the main process exits, q.get() will raise EOFError,
+            # indicating we should exit this thread.
+            except EOFError:
+                return
+
+    # Since saving to the kvp pool can be time-consuming when the pool
+    # already contains a chunk of data, defer the write to another
+    # thread.
+    def publish_event(self, event):
+        if (not self._event_types or event.event_type in self._event_types):
+            self.q.put(event)
+
+    def flush(self):
+        LOG.debug('HyperVReportingHandler flushing remaining events')
+        self.q.join()
+
+
 available_handlers = DictRegistry()
 available_handlers.register_item('log', LogHandler)
 available_handlers.register_item('print', PrintHandler)
 available_handlers.register_item('webhook', WebHookHandler)
+available_handlers.register_item('hyperv', HyperVKvpReportingHandler)
 
 # vi: ts=4 expandtab
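
Each KVP record is a fixed-size, NUL-padded key/value pair, which is what
lets _iterate_kvps() seek in HV_KVP_RECORD_SIZE steps. A standalone
round-trip sketch using the same sizes (key string illustrative):

    import struct

    KEY_SIZE, VALUE_SIZE = 512, 2048  # HV_KVP_EXCHANGE_MAX_* above

    def encode(key, value):
        return struct.pack('%ds%ds' % (KEY_SIZE, VALUE_SIZE),
                           key.encode('utf-8'), value.encode('utf-8'))

    record = encode('CLOUD_INIT|0|init|check-cache', '{"msg":""}')
    assert len(record) == KEY_SIZE + VALUE_SIZE
    key = record[:KEY_SIZE].decode('utf-8').strip('\x00')
    assert key == 'CLOUD_INIT|0|init|check-cache'
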
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index dde5749..b1ebaad 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -38,12 +38,13 @@ CFG_BUILTIN = {
         'Scaleway',
         'Hetzner',
         'IBMCloud',
+        'Oracle',
         # At the end to act as a 'catch' when none of the above work...
         'None',
     ],
     'def_log_file': '/var/log/cloud-init.log',
     'log_cfgs': [],
-    'syslog_fix_perms': ['syslog:adm', 'root:adm', 'root:wheel'],
+    'syslog_fix_perms': ['syslog:adm', 'root:adm', 'root:wheel', 'root:root'],
     'system_info': {
         'paths': {
             'cloud_dir': '/var/lib/cloud',
diff --git a/cloudinit/sources/DataSourceAltCloud.py b/cloudinit/sources/DataSourceAltCloud.py
index 24fd65f..8cd312d 100644
--- a/cloudinit/sources/DataSourceAltCloud.py
+++ b/cloudinit/sources/DataSourceAltCloud.py
@@ -181,27 +181,18 @@ class DataSourceAltCloud(sources.DataSource):
 
         # modprobe floppy
         try:
-            cmd = CMD_PROBE_FLOPPY
-            (cmd_out, _err) = util.subp(cmd)
-            LOG.debug('Command: %s\nOutput%s', ' '.join(cmd), cmd_out)
+            modprobe_floppy()
         except ProcessExecutionError as e:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
-            return False
-        except OSError as e:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
+            util.logexc(LOG, 'Failed modprobe: %s', e)
             return False
 
         floppy_dev = '/dev/fd0'
 
         # udevadm settle for floppy device
         try:
-            (cmd_out, _err) = util.udevadm_settle(exists=floppy_dev, timeout=5)
-            LOG.debug('Command: %s\nOutput%s', ' '.join(cmd), cmd_out)
-        except ProcessExecutionError as e:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
-            return False
-        except OSError as e:
-            util.logexc(LOG, 'Failed command: %s\n%s', ' '.join(cmd), e)
+            util.udevadm_settle(exists=floppy_dev, timeout=5)
+        except (ProcessExecutionError, OSError) as e:
+            util.logexc(LOG, 'Failed udevadm_settle: %s', e)
             return False
 
         try:
@@ -258,6 +249,11 @@ class DataSourceAltCloud(sources.DataSource):
             return False
 
 
+def modprobe_floppy():
+    out, _err = util.subp(CMD_PROBE_FLOPPY)
+    LOG.debug('Command: %s\nOutput: %s', ' '.join(CMD_PROBE_FLOPPY), out)
+
+
 # Used to match classes to dependencies
 # Source DataSourceAltCloud does not really depend on networking.
 # In the future 'dsmode' like behavior can be added to offer user
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 7007d9e..783445e 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -8,6 +8,7 @@ import base64
 import contextlib
 import crypt
 from functools import partial
+import json
 import os
 import os.path
 import re
@@ -17,6 +18,7 @@ import xml.etree.ElementTree as ET
 
 from cloudinit import log as logging
 from cloudinit import net
+from cloudinit.event import EventType
 from cloudinit.net.dhcp import EphemeralDHCPv4
 from cloudinit import sources
 from cloudinit.sources.helpers.azure import get_metadata_from_fabric
@@ -49,7 +51,17 @@ DEFAULT_FS = 'ext4'
 AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77'
 REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds"
 REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready"
-IMDS_URL = "http://169.254.169.254/metadata/reprovisiondata"
+AGENT_SEED_DIR = '/var/lib/waagent'
+IMDS_URL = "http://169.254.169.254/metadata/"
+
+# List of static scripts and network config artifacts created by
+# stock ubuntu suported images.
+UBUNTU_EXTENDED_NETWORK_SCRIPTS = [
+    '/etc/netplan/90-azure-hotplug.yaml',
+    '/usr/local/sbin/ephemeral_eth.sh',
+    '/etc/udev/rules.d/10-net-device-added.rules',
+    '/run/network/interfaces.ephemeral.d',
+]
 
 
 def find_storvscid_from_sysctl_pnpinfo(sysctl_out, deviceid):
@@ -185,7 +197,7 @@ if util.is_FreeBSD():
 
 BUILTIN_DS_CONFIG = {
     'agent_command': AGENT_START_BUILTIN,
-    'data_dir': "/var/lib/waagent",
+    'data_dir': AGENT_SEED_DIR,
     'set_hostname': True,
     'hostname_bounce': {
         'interface': DEFAULT_PRIMARY_NIC,
@@ -252,6 +264,7 @@ class DataSourceAzure(sources.DataSource):
 
     dsname = 'Azure'
     _negotiated = False
+    _metadata_imds = sources.UNSET
 
     def __init__(self, sys_cfg, distro, paths):
         sources.DataSource.__init__(self, sys_cfg, distro, paths)
@@ -263,6 +276,8 @@ class DataSourceAzure(sources.DataSource):
             BUILTIN_DS_CONFIG])
         self.dhclient_lease_file = self.ds_cfg.get('dhclient_lease_file')
         self._network_config = None
+        # Regenerate network config on new-instance boot and on every boot
+        self.update_events['network'].add(EventType.BOOT)
 
     def __str__(self):
         root = sources.DataSource.__str__(self)
@@ -336,15 +351,17 @@ class DataSourceAzure(sources.DataSource):
         metadata['public-keys'] = key_value or pubkeys_from_crt_files(fp_files)
         return metadata
 
-    def _get_data(self):
+    def crawl_metadata(self):
+        """Walk all instance metadata sources returning a dict on success.
+
+        @return: A dictionary of any metadata content for this instance.
+        @raise: InvalidMetaDataException when the expected metadata service is
+            unavailable, broken or disabled.
+        """
+        crawled_data = {}
         # azure removes/ejects the cdrom containing the ovf-env.xml
         # file on reboot.  So, in order to successfully reboot we
         # need to look in the datadir and consider that valid
-        asset_tag = util.read_dmi_data('chassis-asset-tag')
-        if asset_tag != AZURE_CHASSIS_ASSET_TAG:
-            LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag)
-            return False
-
         ddir = self.ds_cfg['data_dir']
 
         candidates = [self.seed_dir]
@@ -373,41 +390,84 @@ class DataSourceAzure(sources.DataSource):
             except NonAzureDataSource:
                 continue
             except BrokenAzureDataSource as exc:
-                raise exc
+                msg = 'BrokenAzureDataSource: %s' % exc
+                raise sources.InvalidMetaDataException(msg)
             except util.MountFailedError:
                 LOG.warning("%s was not mountable", cdev)
                 continue
 
             if reprovision or self._should_reprovision(ret):
                 ret = self._reprovision()
-            (md, self.userdata_raw, cfg, files) = ret
+            imds_md = get_metadata_from_imds(
+                self.fallback_interface, retries=3)
+            (md, userdata_raw, cfg, files) = ret
             self.seed = cdev
-            self.metadata = util.mergemanydict([md, DEFAULT_METADATA])
-            self.cfg = util.mergemanydict([cfg, BUILTIN_CLOUD_CONFIG])
+            crawled_data.update({
+                'cfg': cfg,
+                'files': files,
+                'metadata': util.mergemanydict(
+                    [md, {'imds': imds_md}]),
+                'userdata_raw': userdata_raw})
             found = cdev
 
             LOG.debug("found datasource in %s", cdev)
             break
 
         if not found:
-            return False
+            raise sources.InvalidMetaDataException('No Azure metadata found')
 
         if found == ddir:
             LOG.debug("using files cached in %s", ddir)
 
         seed = _get_random_seed()
         if seed:
-            self.metadata['random_seed'] = seed
+            crawled_data['metadata']['random_seed'] = seed
+        crawled_data['metadata']['instance-id'] = util.read_dmi_data(
+            'system-uuid')
+        return crawled_data
+
+    def _is_platform_viable(self):
+        """Check platform environment to report if this datasource may run."""
+        return _is_platform_viable(self.seed_dir)
+
+    def clear_cached_attrs(self, attr_defaults=()):
+        """Reset any cached class attributes to defaults."""
+        super(DataSourceAzure, self).clear_cached_attrs(attr_defaults)
+        self._metadata_imds = sources.UNSET
+
+    def _get_data(self):
+        """Crawl and process datasource metadata caching metadata as attrs.
+
+        @return: True on success, False on error, invalid or disabled
+            datasource.
+        """
+        if not self._is_platform_viable():
+            return False
+        try:
+            crawled_data = util.log_time(
+                        logfunc=LOG.debug, msg='Crawl of metadata service',
+                        func=self.crawl_metadata)
+        except sources.InvalidMetaDataException as e:
+            LOG.warning('Could not crawl Azure metadata: %s', e)
+            return False
+        if self.distro and self.distro.name == 'ubuntu':
+            maybe_remove_ubuntu_network_config_scripts()
+
+        # Process crawled data and augment with various config defaults
+        self.cfg = util.mergemanydict(
+            [crawled_data['cfg'], BUILTIN_CLOUD_CONFIG])
+        self._metadata_imds = crawled_data['metadata']['imds']
+        self.metadata = util.mergemanydict(
+            [crawled_data['metadata'], DEFAULT_METADATA])
+        self.userdata_raw = crawled_data['userdata_raw']
 
         user_ds_cfg = util.get_cfg_by_path(self.cfg, DS_CFG_PATH, {})
         self.ds_cfg = util.mergemanydict([user_ds_cfg, self.ds_cfg])
 
         # walinux agent writes files world readable, but expects
         # the directory to be protected.
-        write_files(ddir, files, dirmode=0o700)
-
-        self.metadata['instance-id'] = util.read_dmi_data('system-uuid')
-
+        write_files(
+            self.ds_cfg['data_dir'], crawled_data['files'], dirmode=0o700)
         return True
 
     def device_name_to_device(self, name):
@@ -436,7 +496,7 @@ class DataSourceAzure(sources.DataSource):
     def _poll_imds(self):
         """Poll IMDS for the new provisioning data until we get a valid
         response. Then return the returned JSON object."""
-        url = IMDS_URL + "?api-version=2017-04-02"
+        url = IMDS_URL + "reprovisiondata?api-version=2017-04-02"
         headers = {"Metadata": "true"}
         report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
         LOG.debug("Start polling IMDS")
@@ -487,7 +547,7 @@ class DataSourceAzure(sources.DataSource):
         jump back into the polling loop in order to retrieve the ovf_env."""
         if not ret:
             return False
-        (_md, self.userdata_raw, cfg, _files) = ret
+        (_md, _userdata_raw, cfg, _files) = ret
         path = REPROVISION_MARKER_FILE
         if (cfg.get('PreprovisionedVm') is True or
                 os.path.isfile(path)):
@@ -543,22 +603,15 @@ class DataSourceAzure(sources.DataSource):
     @property
     def network_config(self):
         """Generate a network config like net.generate_fallback_network() with
-           the following execptions.
+           the following exceptions.
 
            1. Probe the drivers of the net-devices present and inject them in
               the network configuration under params: driver: <driver> value
            2. Generate a fallback network config that does not include any of
               the blacklisted devices.
         """
-        blacklist = ['mlx4_core']
         if not self._network_config:
-            LOG.debug('Azure: generating fallback configuration')
-            # generate a network config, blacklist picking any mlx4_core devs
-            netconfig = net.generate_fallback_config(
-                blacklist_drivers=blacklist, config_driver=True)
-
-            self._network_config = netconfig
-
+            self._network_config = parse_network_config(self._metadata_imds)
         return self._network_config
 
 
@@ -1025,6 +1078,151 @@ def load_azure_ds_dir(source_dir):
     return (md, ud, cfg, {'ovf-env.xml': contents})
 
 
+def parse_network_config(imds_metadata):
+    """Convert imds_metadata dictionary to network v2 configuration.
+
+    Parses network configuration from imds metadata if present or generate
+    fallback network config excluding mlx4_core devices.
+
+    @param: imds_metadata: Dict of content read from IMDS network service.
+    @return: Dictionary containing network version 2 standard configuration.
+    """
+    if imds_metadata != sources.UNSET and imds_metadata:
+        netconfig = {'version': 2, 'ethernets': {}}
+        LOG.debug('Azure: generating network configuration from IMDS')
+        network_metadata = imds_metadata['network']
+        for idx, intf in enumerate(network_metadata['interface']):
+            nicname = 'eth{idx}'.format(idx=idx)
+            dev_config = {}
+            for addr4 in intf['ipv4']['ipAddress']:
+                privateIpv4 = addr4['privateIpAddress']
+                if privateIpv4:
+                    if dev_config.get('dhcp4', False):
+                        # Append static address config for nic > 1
+                        netPrefix = intf['ipv4']['subnet'][0].get(
+                            'prefix', '24')
+                        if not dev_config.get('addresses'):
+                            dev_config['addresses'] = []
+                        dev_config['addresses'].append(
+                            '{ip}/{prefix}'.format(
+                                ip=privateIpv4, prefix=netPrefix))
+                    else:
+                        dev_config['dhcp4'] = True
+            for addr6 in intf['ipv6']['ipAddress']:
+                privateIpv6 = addr6['privateIpAddress']
+                if privateIpv6:
+                    dev_config['dhcp6'] = True
+                    break
+            if dev_config:
+                mac = ':'.join(re.findall(r'..', intf['macAddress']))
+                dev_config.update(
+                    {'match': {'macaddress': mac.lower()},
+                     'set-name': nicname})
+                netconfig['ethernets'][nicname] = dev_config
+    else:
+        blacklist = ['mlx4_core']
+        LOG.debug('Azure: generating fallback configuration')
+        # generate a network config, blacklist picking mlx4_core devs
+        netconfig = net.generate_fallback_config(
+            blacklist_drivers=blacklist, config_driver=True)
+    return netconfig
+
+
+def get_metadata_from_imds(fallback_nic, retries):
+    """Query Azure's network metadata service, returning a dictionary.
+
+    If network is not up, setup ephemeral dhcp on fallback_nic to talk to the
+    IMDS. For more info on IMDS:
+        https://docs.microsoft.com/en-us/azure/virtual-machines/windows/instance-metadata-service
+
+    @param fallback_nic: String. The name of the nic which requires active
+        network in order to query IMDS.
+    @param retries: The number of retries of the IMDS_URL.
+
+    @return: A dict of instance metadata containing compute and network
+        info.
+    """
+    kwargs = {'logfunc': LOG.debug,
+              'msg': 'Crawl of Azure Instance Metadata Service (IMDS)',
+              'func': _get_metadata_from_imds, 'args': (retries,)}
+    if net.is_up(fallback_nic):
+        return util.log_time(**kwargs)
+    else:
+        with EphemeralDHCPv4(fallback_nic):
+            return util.log_time(**kwargs)
+
+
+def _get_metadata_from_imds(retries):
+
+    def retry_on_url_error(msg, exception):
+        if isinstance(exception, UrlError) and exception.code == 404:
+            return True  # Continue retries
+        return False  # Stop retries on all other exceptions
+
+    url = IMDS_URL + "instance?api-version=2017-12-01"
+    headers = {"Metadata": "true"}
+    try:
+        response = readurl(
+            url, timeout=1, headers=headers, retries=retries,
+            exception_cb=retry_on_url_error)
+    except Exception as e:
+        LOG.debug('Ignoring IMDS instance metadata: %s', e)
+        return {}
+    try:
+        return util.load_json(str(response))
+    except json.decoder.JSONDecodeError:
+        LOG.warning(
+            'Ignoring non-json IMDS instance metadata: %s', str(response))
+    return {}
+
+
+def maybe_remove_ubuntu_network_config_scripts(paths=None):
+    """Remove Azure-specific ubuntu network config for non-primary nics.
+
+    @param paths: List of networking scripts or directories to remove when
+        present.
+
+    In certain supported ubuntu images, static udev rules or netplan yaml
+    config is delivered in the base ubuntu image to support dhcp on any
+    additional interfaces which get attached by a customer at some point
+    after initial boot. Since the Azure datasource can now regenerate
+    network configuration as metadata reports these new devices, we no longer
+    want the udev rules or netplan's 90-azure-hotplug.yaml to configure
+    networking on eth1 or greater as it might collide with cloud-init's
+    configuration.
+
+    Remove any existing extended network scripts if the datasource is
+    enabled to write network configuration per-boot.
+    """
+    if not paths:
+        paths = UBUNTU_EXTENDED_NETWORK_SCRIPTS
+    logged = False
+    for path in paths:
+        if os.path.exists(path):
+            if not logged:
+                LOG.info(
+                    'Removing Ubuntu extended network scripts because'
+                    ' cloud-init updates Azure network configuration on the'
+                    ' following event: %s.',
+                    EventType.BOOT)
+                logged = True
+            if os.path.isdir(path):
+                util.del_dir(path)
+            else:
+                util.del_file(path)
+
+
+def _is_platform_viable(seed_dir):
+    """Check platform environment to report if this datasource may run."""
+    asset_tag = util.read_dmi_data('chassis-asset-tag')
+    if asset_tag == AZURE_CHASSIS_ASSET_TAG:
+        return True
+    LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag)
+    if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')):
+        return True
+    return False
+
+
 class BrokenAzureDataSource(Exception):
     pass
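
To make the new Azure behaviour concrete: parse_network_config() consumes
IMDS 'network' metadata and emits a netplan v2 style dict. A sketch of the
input shape and the approximate output (field values illustrative, shapes
per the code above):

    imds_md = {'network': {'interface': [{
        'macAddress': '000D3A047598',
        'ipv4': {'subnet': [{'prefix': '24', 'address': '10.0.0.0'}],
                 'ipAddress': [{'privateIpAddress': '10.0.0.4',
                                'publicIpAddress': ''}]},
        'ipv6': {'ipAddress': []},
    }]}}
    # parse_network_config(imds_md) would yield approximately:
    expected = {'version': 2, 'ethernets': {'eth0': {
        'dhcp4': True,
        'match': {'macaddress': '00:0d:3a:04:75:98'},
        'set-name': 'eth0'}}}
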
 
diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py
index 4cb2897..664dc4b 100644
--- a/cloudinit/sources/DataSourceConfigDrive.py
+++ b/cloudinit/sources/DataSourceConfigDrive.py
@@ -196,7 +196,7 @@ def on_first_boot(data, distro=None, network=True):
         net_conf = data.get("network_config", '')
         if net_conf and distro:
             LOG.warning("Updating network interfaces from config drive")
-            distro.apply_network(net_conf)
+            distro.apply_network_config(eni.convert_eni_data(net_conf))
     write_injected_files(data.get('files'))
 
 
diff --git a/cloudinit/sources/DataSourceIBMCloud.py b/cloudinit/sources/DataSourceIBMCloud.py
index 01106ec..a535814 100644
--- a/cloudinit/sources/DataSourceIBMCloud.py
+++ b/cloudinit/sources/DataSourceIBMCloud.py
@@ -295,7 +295,7 @@ def read_md():
             results = metadata_from_dir(path)
         else:
             results = util.mount_cb(path, metadata_from_dir)
-    except BrokenMetadata as e:
+    except sources.BrokenMetadata as e:
         raise RuntimeError(
             "Failed reading IBM config disk (platform=%s path=%s): %s" %
             (platform, path, e))
@@ -304,10 +304,6 @@ def read_md():
     return ret
 
 
-class BrokenMetadata(IOError):
-    pass
-
-
 def metadata_from_dir(source_dir):
     """Walk source_dir extracting standardized metadata.
 
@@ -352,12 +348,13 @@ def metadata_from_dir(source_dir):
             try:
                 data = transl(raw)
             except Exception as e:
-                raise BrokenMetadata("Failed decoding %s: %s" % (path, e))
+                raise sources.BrokenMetadata(
+                    "Failed decoding %s: %s" % (path, e))
 
         results[name] = data
 
     if results.get('metadata_raw') is None:
-        raise BrokenMetadata(
+        raise sources.BrokenMetadata(
             "%s missing required file 'meta_data.json'" % source_dir)
 
     results['metadata'] = {}
@@ -368,7 +365,7 @@ def metadata_from_dir(source_dir):
         try:
             md['random_seed'] = base64.b64decode(md_raw['random_seed'])
         except (ValueError, TypeError) as e:
-            raise BrokenMetadata(
+            raise sources.BrokenMetadata(
                 "Badly formatted metadata random_seed entry: %s" % e)
 
     renames = (
diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
index 16c1078..77ccd12 100644
--- a/cloudinit/sources/DataSourceOpenNebula.py
+++ b/cloudinit/sources/DataSourceOpenNebula.py
@@ -232,7 +232,7 @@ class OpenNebulaNetwork(object):
 
             # Set IPv6 default gateway
             gateway6 = self.get_gateway6(c_dev)
-            if gateway:
+            if gateway6:
                 devconf['gateway6'] = gateway6
 
             # Set DNS servers and search domains
diff --git a/cloudinit/sources/DataSourceOpenStack.py b/cloudinit/sources/DataSourceOpenStack.py
index 365af96..4a01524 100644
--- a/cloudinit/sources/DataSourceOpenStack.py
+++ b/cloudinit/sources/DataSourceOpenStack.py
@@ -13,6 +13,7 @@ from cloudinit import url_helper
 from cloudinit import util
 
 from cloudinit.sources.helpers import openstack
+from cloudinit.sources import DataSourceOracle as oracle
 
 LOG = logging.getLogger(__name__)
 
@@ -121,8 +122,10 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
             False when unable to contact metadata service or when metadata
             format is invalid or disabled.
         """
-        if not detect_openstack():
+        oracle_considered = 'Oracle' in self.sys_cfg.get('datasource_list')
+        if not detect_openstack(accept_oracle=not oracle_considered):
             return False
+
         if self.perform_dhcp_setup:  # Setup networking in init-local stage.
             try:
                 with EphemeralDHCPv4(self.fallback_interface):
@@ -214,7 +217,7 @@ def read_metadata_service(base_url, ssl_details=None,
     return reader.read_v2()
 
 
-def detect_openstack():
+def detect_openstack(accept_oracle=False):
     """Return True when a potential OpenStack platform is detected."""
     if not util.is_x86():
         return True  # Non-Intel cpus don't properly report dmi product names
@@ -223,6 +226,8 @@ def detect_openstack():
         return True
     elif util.read_dmi_data('chassis-asset-tag') in VALID_DMI_ASSET_TAGS:
         return True
+    elif accept_oracle and oracle._is_platform_viable():
+        return True
     elif util.get_proc_env(1).get('product_name') == DMI_PRODUCT_NOVA:
         return True
     return False
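
The accept_oracle gate means Oracle probing is only done on OpenStack's
behalf when 'Oracle' is absent from the configured datasource_list, so an
explicitly enabled Oracle datasource keeps precedence. In sketch form
(sys_cfg shape assumed):

    sys_cfg = {'datasource_list': ['OpenStack', 'Oracle', 'None']}
    oracle_considered = 'Oracle' in sys_cfg.get('datasource_list')
    # detect_openstack(accept_oracle=not oracle_considered) then refuses
    # to claim an Oracle host for OpenStack.
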
diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py
new file mode 100644
index 0000000..fab39af
--- /dev/null
+++ b/cloudinit/sources/DataSourceOracle.py
@@ -0,0 +1,233 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+"""Datasource for Oracle (OCI/Oracle Cloud Infrastructure)
+
+OCI provides an OpenStack-like metadata service which provides only
+'2013-10-17' and 'latest' versions.
+
+Notes:
+ * This datasource does not support OCI-Classic. OCI-Classic
+   provides an EC2 lookalike metadata service.
+ * The uuid provided in DMI data is not the same as the meta-data provided
+   instance-id, but has an equivalent lifespan.
+ * We do need to support upgrade from an instance that cloud-init
+   identified as OpenStack.
+ * Both bare-metal and VMs use iSCSI root.
+ * Both bare-metal and VMs provide a chassis-asset-tag of OracleCloud.com.
+"""
+
+from cloudinit.url_helper import combine_url, readurl, UrlError
+from cloudinit.net import dhcp
+from cloudinit import net
+from cloudinit import sources
+from cloudinit import util
+from cloudinit.net import cmdline
+from cloudinit import log as logging
+
+import json
+import re
+
+LOG = logging.getLogger(__name__)
+
+CHASSIS_ASSET_TAG = "OracleCloud.com"
+METADATA_ENDPOINT = "http://169.254.169.254/openstack/"
+
+
+class DataSourceOracle(sources.DataSource):
+
+    dsname = 'Oracle'
+    system_uuid = None
+    vendordata_pure = None
+    _network_config = sources.UNSET
+
+    def _is_platform_viable(self):
+        """Check platform environment to report if this datasource may run."""
+        return _is_platform_viable()
+
+    def _get_data(self):
+        if not self._is_platform_viable():
+            return False
+
+        # The network may already be configured when using iscsi root.
+        # In that case read_kernel_cmdline_config will return non-None.
+        if _is_iscsi_root():
+            data = self.crawl_metadata()
+        else:
+            with dhcp.EphemeralDHCPv4(net.find_fallback_nic()):
+                data = self.crawl_metadata()
+
+        self._crawled_metadata = data
+        vdata = data['2013-10-17']
+
+        self.userdata_raw = vdata.get('user_data')
+        self.system_uuid = vdata['system_uuid']
+
+        vd = vdata.get('vendor_data')
+        if vd:
+            self.vendordata_pure = vd
+            try:
+                self.vendordata_raw = sources.convert_vendordata(vd)
+            except ValueError as e:
+                LOG.warning("Invalid content in vendor-data: %s", e)
+                self.vendordata_raw = None
+
+        mdcopies = ('public_keys',)
+        md = dict([(k, vdata['meta_data'].get(k))
+                   for k in mdcopies if k in vdata['meta_data']])
+
+        mdtrans = (
+            # oracle meta_data.json name, cloudinit.datasource.metadata name
+            ('availability_zone', 'availability-zone'),
+            ('hostname', 'local-hostname'),
+            ('launch_index', 'launch-index'),
+            ('uuid', 'instance-id'),
+        )
+        for dsname, ciname in mdtrans:
+            if dsname in vdata['meta_data']:
+                md[ciname] = vdata['meta_data'][dsname]
+
+        self.metadata = md
+        return True
+
+    def crawl_metadata(self):
+        return read_metadata()
+
+    def check_instance_id(self, sys_cfg):
+        """quickly check (local only) if self.instance_id is still valid
+
+        On Oracle, the dmi-provided system uuid differs from the instance-id
+        but has the same life-span."""
+        return sources.instance_id_matches_system_uuid(self.system_uuid)
+
+    def get_public_ssh_keys(self):
+        return sources.normalize_pubkey_data(self.metadata.get('public_keys'))
+
+    @property
+    def network_config(self):
+        """Network config is read from initramfs-provided files, if present;
+        otherwise we fall back to the generated fallback configuration.
+
+        One thing to note here is that this method is not currently
+        consulted at all if there is kernel/initramfs-provided
+        data.  In that case, stages considers that the cmdline data
+        overrides datasource-provided data and does not consult here.
+
+        We nonetheless return the cmdline-provided config if present,
+        and otherwise generate the fallback config."""
+        if self._network_config == sources.UNSET:
+            cmdline_cfg = cmdline.read_kernel_cmdline_config()
+            if cmdline_cfg:
+                self._network_config = cmdline_cfg
+            else:
+                self._network_config = self.distro.generate_fallback_config()
+        return self._network_config
+
+
+def _read_system_uuid():
+    sys_uuid = util.read_dmi_data('system-uuid')
+    return None if sys_uuid is None else sys_uuid.lower()
+
+
+def _is_platform_viable():
+    asset_tag = util.read_dmi_data('chassis-asset-tag')
+    return asset_tag == CHASSIS_ASSET_TAG
+
+
+def _is_iscsi_root():
+    return bool(cmdline.read_kernel_cmdline_config())
+
+
+def _load_index(content):
+    """Return a list entries parsed from content.
+
+    OpenStack's metadata service returns a newline delimited list
+    of items.  Oracle's implementation has an html-formatted list of links.
+    The parser here just grabs targets from <a href="target">
+    and throws away "../".
+
+    Oracle has acknowledged this as buggy and may fix it in the future
+    to instead return a '\n' delimited plain text list.  This function
+    will continue to work if that change is made."""
+    if not content.lower().startswith("<html>"):
+        return content.splitlines()
+    items = re.findall(
+        r'href="(?P<target>[^"]*)"', content, re.MULTILINE | re.IGNORECASE)
+    return [i for i in items if not i.startswith(".")]
+
+
+def read_metadata(endpoint_base=METADATA_ENDPOINT, sys_uuid=None,
+                  version='2013-10-17'):
+    """Read metadata, return a dictionary.
+
+    Each path listed in the index will be represented in the dictionary.
+    If the path ends in .json, then the content will be decoded and
+    populated into the dictionary.
+
+    The system uuid (/sys/class/dmi/id/product_uuid) is also populated.
+    Example: given paths = ('user_data', 'meta_data.json')
+    This would return:
+      {version: {'user_data': b'blob', 'meta_data': json.loads(blob.decode()),
+                 'system_uuid': '3b54f2e0-3ab2-458d-b770-af9926eee3b2'}}
+    """
+    endpoint = combine_url(endpoint_base, version) + "/"
+    if sys_uuid is None:
+        sys_uuid = _read_system_uuid()
+    if not sys_uuid:
+        raise sources.BrokenMetadata("Failed to read system uuid.")
+
+    try:
+        resp = readurl(endpoint)
+        if not resp.ok():
+            raise sources.BrokenMetadata(
+                "Bad response from %s: %s" % (endpoint, resp.code))
+    except UrlError as e:
+        raise sources.BrokenMetadata(
+            "Failed to read index at %s: %s" % (endpoint, e))
+
+    entries = _load_index(resp.contents.decode('utf-8'))
+    LOG.debug("index url %s contained: %s", endpoint, entries)
+
+    # meta_data.json is required.
+    mdj = 'meta_data.json'
+    if mdj not in entries:
+        raise sources.BrokenMetadata(
+            "Required field '%s' missing in index at %s" % (mdj, endpoint))
+
+    ret = {'system_uuid': sys_uuid}
+    for path in entries:
+        response = readurl(combine_url(endpoint, path))
+        if path.endswith(".json"):
+            ret[path.rpartition(".")[0]] = (
+                json.loads(response.contents.decode('utf-8')))
+        else:
+            ret[path] = response.contents
+
+    return {version: ret}
+
+
+# Used to match classes to dependencies
+datasources = [
+    (DataSourceOracle, (sources.DEP_FILESYSTEM,)),
+]
+
+
+# Return a list of data sources that match this set of dependencies
+def get_datasource_list(depends):
+    return sources.list_from_depends(depends, datasources)
+
+
+if __name__ == "__main__":
+    import argparse
+    import os
+
+    parser = argparse.ArgumentParser(description='Query Oracle Cloud Metadata')
+    parser.add_argument("--endpoint", metavar="URL",
+                        help="The url of the metadata service.",
+                        default=METADATA_ENDPOINT)
+    args = parser.parse_args()
+    sys_uuid = "uuid-not-available-not-root" if os.geteuid() != 0 else None
+
+    data = read_metadata(endpoint_base=args.endpoint, sys_uuid=sys_uuid)
+    data['is_platform_viable'] = _is_platform_viable()
+    print(util.json_dumps(data))
+
+# vi: ts=4 expandtab
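
A quick illustration of the index parsing above (the sample strings are
hypothetical; the parser body is copied from _load_index in this file):

    import re

    def _load_index(content):
        if not content.lower().startswith("<html>"):
            return content.splitlines()
        items = re.findall(
            r'href="(?P<target>[^"]*)"', content,
            re.MULTILINE | re.IGNORECASE)
        return [i for i in items if not i.startswith(".")]

    html = ('<html><body><a href="../">../</a>\n'
            '<a href="meta_data.json">meta_data.json</a></body></html>')
    assert _load_index(html) == ['meta_data.json']
    # The plain newline-delimited form keeps working if Oracle fixes
    # the service to return it:
    assert _load_index("meta_data.json\nuser_data") == [
        'meta_data.json', 'user_data']
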
diff --git a/cloudinit/sources/DataSourceScaleway.py b/cloudinit/sources/DataSourceScaleway.py
index e2502b0..9dc4ab2 100644
--- a/cloudinit/sources/DataSourceScaleway.py
+++ b/cloudinit/sources/DataSourceScaleway.py
@@ -29,7 +29,9 @@ from cloudinit import log as logging
 from cloudinit import sources
 from cloudinit import url_helper
 from cloudinit import util
-
+from cloudinit import net
+from cloudinit.net.dhcp import EphemeralDHCPv4, NoDHCPLeaseError
+from cloudinit.event import EventType
 
 LOG = logging.getLogger(__name__)
 
@@ -168,8 +170,8 @@ def query_data_api(api_type, api_address, retries, timeout):
 
 
 class DataSourceScaleway(sources.DataSource):
-
     dsname = "Scaleway"
+    update_events = {'network': [EventType.BOOT_NEW_INSTANCE, EventType.BOOT]}
 
     def __init__(self, sys_cfg, distro, paths):
         super(DataSourceScaleway, self).__init__(sys_cfg, distro, paths)
@@ -185,11 +187,10 @@ class DataSourceScaleway(sources.DataSource):
 
         self.retries = int(self.ds_cfg.get('retries', DEF_MD_RETRIES))
         self.timeout = int(self.ds_cfg.get('timeout', DEF_MD_TIMEOUT))
+        self._fallback_interface = None
+        self._network_config = None
 
-    def _get_data(self):
-        if not on_scaleway():
-            return False
-
+    def _crawl_metadata(self):
         resp = url_helper.readurl(self.metadata_address,
                                   timeout=self.timeout,
                                   retries=self.retries)
@@ -203,9 +204,48 @@ class DataSourceScaleway(sources.DataSource):
             'vendor-data', self.vendordata_address,
             self.retries, self.timeout
         )
+
+    def _get_data(self):
+        if not on_scaleway():
+            return False
+
+        if self._fallback_interface is None:
+            self._fallback_interface = net.find_fallback_nic()
+        try:
+            with EphemeralDHCPv4(self._fallback_interface):
+                util.log_time(
+                    logfunc=LOG.debug, msg='Crawl of metadata service',
+                    func=self._crawl_metadata)
+        except (NoDHCPLeaseError) as e:
+            util.logexc(LOG, str(e))
+            return False
         return True
 
     @property
+    def network_config(self):
+        """
+        Configure networking according to data received from the
+        metadata API.
+        """
+        if self._network_config:
+            return self._network_config
+
+        if self._fallback_interface is None:
+            self._fallback_interface = net.find_fallback_nic()
+
+        netcfg = {'type': 'physical', 'name': '%s' % self._fallback_interface}
+        subnets = [{'type': 'dhcp4'}]
+        if self.metadata['ipv6']:
+            subnets += [{'type': 'static',
+                         'address': '%s' % self.metadata['ipv6']['address'],
+                         'gateway': '%s' % self.metadata['ipv6']['gateway'],
+                         'netmask': '%s' % self.metadata['ipv6']['netmask'],
+                         }]
+        netcfg['subnets'] = subnets
+        self._network_config = {'version': 1, 'config': [netcfg]}
+        return self._network_config
+
+    @property
     def launch_index(self):
         return None
 
@@ -228,7 +268,7 @@ class DataSourceScaleway(sources.DataSource):
 
 
 datasources = [
-    (DataSourceScaleway, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
+    (DataSourceScaleway, (sources.DEP_FILESYSTEM,)),
 ]
 
 
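
For reference, a sketch of the version 1 network config the new
network_config property emits, assuming the fallback NIC is 'ens2' and the
metadata API returned an 'ipv6' block (all values hypothetical):

    expected = {
        'version': 1,
        'config': [{
            'type': 'physical',
            'name': 'ens2',
            'subnets': [
                {'type': 'dhcp4'},
                {'type': 'static',
                 'address': '2001:db8::42',  # metadata['ipv6']['address']
                 'gateway': '2001:db8::1',   # metadata['ipv6']['gateway']
                 'netmask': '64'},           # metadata['ipv6']['netmask']
            ],
        }],
    }
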
diff --git a/cloudinit/sources/DataSourceSmartOS.py b/cloudinit/sources/DataSourceSmartOS.py
index f92e8b5..593ac91 100644
--- a/cloudinit/sources/DataSourceSmartOS.py
+++ b/cloudinit/sources/DataSourceSmartOS.py
@@ -564,7 +564,7 @@ class JoyentMetadataSerialClient(JoyentMetadataClient):
                     continue
                 LOG.warning('Unexpected response "%s" during flush', response)
             except JoyentMetadataTimeoutException:
-                LOG.warning('Timeout while initializing metadata client. ' +
+                LOG.warning('Timeout while initializing metadata client. '
                             'Is the host metadata service running?')
         LOG.debug('Got "invalid command".  Flush complete.')
         self.fp.timeout = timeout
@@ -683,6 +683,18 @@ def jmc_client_factory(
     raise ValueError("Unknown value for smartos_type: %s" % smartos_type)
 
 
+def identify_file(content_f):
+    cmd = ["file", "--brief", "--mime-type", content_f]
+    f_type = None
+    try:
+        (f_type, _err) = util.subp(cmd)
+        LOG.debug("script %s mime type is %s", content_f, f_type)
+    except util.ProcessExecutionError as e:
+        util.logexc(
+            LOG, ("Failed to identify script type for %s" % content_f, e))
+    return None if f_type is None else f_type.strip()
+
+
 def write_boot_content(content, content_f, link=None, shebang=False,
                        mode=0o400):
     """
@@ -715,18 +727,11 @@ def write_boot_content(content, content_f, link=None, shebang=False,
     util.write_file(content_f, content, mode=mode)
 
     if shebang and not content.startswith("#!"):
-        try:
-            cmd = ["file", "--brief", "--mime-type", content_f]
-            (f_type, _err) = util.subp(cmd)
-            LOG.debug("script %s mime type is %s", content_f, f_type)
-            if f_type.strip() == "text/plain":
-                new_content = "\n".join(["#!/bin/bash", content])
-                util.write_file(content_f, new_content, mode=mode)
-                LOG.debug("added shebang to file %s", content_f)
-
-        except Exception as e:
-            util.logexc(LOG, ("Failed to identify script type for %s" %
-                              content_f, e))
+        f_type = identify_file(content_f)
+        if f_type == "text/plain":
+            util.write_file(
+                content_f, "\n".join(["#!/bin/bash", content]), mode=mode)
+            LOG.debug("added shebang to file %s", content_f)
 
     if link:
         try:
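
A short sketch of how the extracted identify_file helper is consumed by
write_boot_content above (the path is hypothetical):

    # identify_file shells out to file(1) and returns the stripped mime
    # type, or None if the subprocess fails.
    content = "echo hi"
    f_type = identify_file('/var/tmp/user-script')  # hypothetical path
    if f_type == 'text/plain':
        # write_boot_content prepends a shebang for plain-text payloads
        content = "\n".join(["#!/bin/bash", content])
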
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index f424316..5ac9882 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -38,8 +38,17 @@ DEP_FILESYSTEM = "FILESYSTEM"
 DEP_NETWORK = "NETWORK"
 DS_PREFIX = 'DataSource'
 
-# File in which instance meta-data, user-data and vendor-data is written
+EXPERIMENTAL_TEXT = (
+    "EXPERIMENTAL: The structure and format of content scoped under the 'ds'"
+    " key may change in subsequent releases of cloud-init.")
+
+
+# File in which publicly available instance meta-data is written
+# security-sensitive key values are redacted from this world-readable file
 INSTANCE_JSON_FILE = 'instance-data.json'
+# security-sensitive key values are present in this root-readable file
+INSTANCE_JSON_SENSITIVE_FILE = 'instance-data-sensitive.json'
+REDACT_SENSITIVE_VALUE = 'redacted for non-root user'
 
 # Key which can provide a cloud's official product name to cloud-init
 METADATA_CLOUD_NAME_KEY = 'cloud-name'
@@ -58,26 +67,55 @@ class InvalidMetaDataException(Exception):
     pass
 
 
-def process_base64_metadata(metadata, key_path=''):
-    """Strip ci-b64 prefix and return metadata with base64-encoded-keys set."""
+def process_instance_metadata(metadata, key_path='', sensitive_keys=()):
+    """Process all instance metadata cleaning it up for persisting as json.
+
+    Strip ci-b64 prefix and catalog any 'base64_encoded_keys' as a list
+
+    @return Dict copy of processed metadata.
+    """
     md_copy = copy.deepcopy(metadata)
-    md_copy['base64-encoded-keys'] = []
+    md_copy['base64_encoded_keys'] = []
+    md_copy['sensitive_keys'] = []
     for key, val in metadata.items():
         if key_path:
             sub_key_path = key_path + '/' + key
         else:
             sub_key_path = key
+        if key in sensitive_keys or sub_key_path in sensitive_keys:
+            md_copy['sensitive_keys'].append(sub_key_path)
         if isinstance(val, str) and val.startswith('ci-b64:'):
-            md_copy['base64-encoded-keys'].append(sub_key_path)
+            md_copy['base64_encoded_keys'].append(sub_key_path)
             md_copy[key] = val.replace('ci-b64:', '')
         if isinstance(val, dict):
-            return_val = process_base64_metadata(val, sub_key_path)
-            md_copy['base64-encoded-keys'].extend(
-                return_val.pop('base64-encoded-keys'))
+            return_val = process_instance_metadata(
+                val, sub_key_path, sensitive_keys)
+            md_copy['base64_encoded_keys'].extend(
+                return_val.pop('base64_encoded_keys'))
+            md_copy['sensitive_keys'].extend(
+                return_val.pop('sensitive_keys'))
             md_copy[key] = return_val
     return md_copy
 
 
+def redact_sensitive_keys(metadata, redact_value=REDACT_SENSITIVE_VALUE):
+    """Redact any sensitive keys from to provided metadata dictionary.
+
+    Replace any keys values listed in 'sensitive_keys' with redact_value.
+    """
+    if not metadata.get('sensitive_keys', []):
+        return metadata
+    md_copy = copy.deepcopy(metadata)
+    for key_path in metadata.get('sensitive_keys'):
+        path_parts = key_path.split('/')
+        obj = md_copy
+        for path in path_parts:
+            if isinstance(obj[path], dict) and path != path_parts[-1]:
+                obj = obj[path]
+        obj[path] = redact_value
+    return md_copy
+
+
 URLParams = namedtuple(
     'URLParms', ['max_wait_seconds', 'timeout_seconds', 'num_retries'])
 
@@ -103,14 +141,14 @@ class DataSource(object):
     url_timeout = 10    # timeout for each metadata url read attempt
     url_retries = 5     # number of times to retry url upon 404
 
-    # The datasource defines a list of supported EventTypes during which
+    # The datasource defines a set of supported EventTypes during which
     # the datasource can react to changes in metadata and regenerate
     # network configuration on metadata changes.
     # A datasource which supports writing network config on each system boot
-    # would set update_events = {'network': [EventType.BOOT]}
+    # would call update_events['network'].add(EventType.BOOT).
 
     # Default: generate network config on new instance id (first boot).
-    update_events = {'network': [EventType.BOOT_NEW_INSTANCE]}
+    update_events = {'network': set([EventType.BOOT_NEW_INSTANCE])}
 
     # N-tuple listing default values for any metadata-related class
 # attributes cached on an instance by a process_data run. These attribute
@@ -122,6 +160,10 @@ class DataSource(object):
 
     _dirty_cache = False
 
+    # N-tuple of keypaths or keynames to redact from instance-data.json for
+    # non-root users
+    sensitive_metadata_keys = ('security-credentials',)
+
     def __init__(self, sys_cfg, distro, paths, ud_proc=None):
         self.sys_cfg = sys_cfg
         self.distro = distro
@@ -147,12 +189,24 @@ class DataSource(object):
 
     def _get_standardized_metadata(self):
         """Return a dictionary of standardized metadata keys."""
-        return {'v1': {
-            'local-hostname': self.get_hostname(),
-            'instance-id': self.get_instance_id(),
-            'cloud-name': self.cloud_name,
-            'region': self.region,
-            'availability-zone': self.availability_zone}}
+        local_hostname = self.get_hostname()
+        instance_id = self.get_instance_id()
+        availability_zone = self.availability_zone
+        cloud_name = self.cloud_name
+        # When adding new standard keys prefer underscore-delimited instead
+        # of hyphen-delimited to support simple variable references in jinja
+        # templates.
+        return {
+            'v1': {
+                'availability-zone': availability_zone,
+                'availability_zone': availability_zone,
+                'cloud-name': cloud_name,
+                'cloud_name': cloud_name,
+                'instance-id': instance_id,
+                'instance_id': instance_id,
+                'local-hostname': local_hostname,
+                'local_hostname': local_hostname,
+                'region': self.region}}
 
     def clear_cached_attrs(self, attr_defaults=()):
         """Reset any cached metadata attributes to datasource defaults.
@@ -180,15 +234,22 @@ class DataSource(object):
         """
         self._dirty_cache = True
         return_value = self._get_data()
-        json_file = os.path.join(self.paths.run_dir, INSTANCE_JSON_FILE)
         if not return_value:
             return return_value
+        self.persist_instance_data()
+        return return_value
+
+    def persist_instance_data(self):
+        """Process and write INSTANCE_JSON_FILE with all instance metadata.
 
+        Replace any hyphens with underscores in key names for use in template
+        processing.
+
+        @return True on successful write, False otherwise.
+        """
         instance_data = {
-            'ds': {
-                'meta-data': self.metadata,
-                'user-data': self.get_userdata_raw(),
-                'vendor-data': self.get_vendordata_raw()}}
+            'ds': {'_doc': EXPERIMENTAL_TEXT,
+                   'meta_data': self.metadata}}
         if hasattr(self, 'network_json'):
             network_json = getattr(self, 'network_json')
             if network_json != UNSET:
@@ -202,16 +263,23 @@ class DataSource(object):
         try:
             # Process content base64encoding unserializable values
             content = util.json_dumps(instance_data)
-            # Strip base64: prefix and return base64-encoded-keys
-            processed_data = process_base64_metadata(json.loads(content))
+            # Strip base64: prefix and set base64_encoded_keys list.
+            processed_data = process_instance_metadata(
+                json.loads(content),
+                sensitive_keys=self.sensitive_metadata_keys)
         except TypeError as e:
             LOG.warning('Error persisting instance-data.json: %s', str(e))
-            return return_value
+            return False
         except UnicodeDecodeError as e:
             LOG.warning('Error persisting instance-data.json: %s', str(e))
-            return return_value
-        write_json(json_file, processed_data, mode=0o600)
-        return return_value
+            return False
+        json_file = os.path.join(self.paths.run_dir, INSTANCE_JSON_FILE)
+        write_json(json_file, processed_data)  # World readable
+        json_sensitive_file = os.path.join(self.paths.run_dir,
+                                           INSTANCE_JSON_SENSITIVE_FILE)
+        write_json(json_sensitive_file,
+                   redact_sensitive_keys(processed_data), mode=0o600)
+        return True
 
     def _get_data(self):
         """Walk metadata sources, process crawled data and save attributes."""
@@ -475,8 +543,8 @@ class DataSource(object):
             for update_scope, update_events in self.update_events.items():
                 if event in update_events:
                     if not supported_events.get(update_scope):
-                        supported_events[update_scope] = []
-                    supported_events[update_scope].append(event)
+                        supported_events[update_scope] = set()
+                    supported_events[update_scope].add(event)
         for scope, matched_events in supported_events.items():
             LOG.debug(
                 "Update datasource metadata and %s config due to events: %s",
@@ -490,6 +558,8 @@ class DataSource(object):
             result = self.get_data()
             if result:
                 return True
+        LOG.debug("Datasource %s not updated for events: %s", self,
+                  ', '.join(source_event_types))
         return False
 
     def check_instance_id(self, sys_cfg):
@@ -669,6 +739,10 @@ def convert_vendordata(data, recurse=True):
     raise ValueError("Unknown data type for vendordata: %s" % type(data))
 
 
+class BrokenMetadata(IOError):
+    pass
+
+
 # 'depends' is a list of dependencies (DEP_FILESYSTEM)
 # ds_list is a list of 2 item lists
 # ds_list = [
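
To make the new redaction flow concrete, a minimal sketch using the two
helpers added above (toy metadata; the key name mirrors the
sensitive_metadata_keys default):

    from cloudinit.sources import (
        process_instance_metadata, redact_sensitive_keys)

    md = {'ds': {'meta_data': {
        'some': {'security-credentials': {'cred1': 'sekret'}}}}}
    processed = process_instance_metadata(
        md, sensitive_keys=('security-credentials',))
    # processed['sensitive_keys'] ==
    #     ['ds/meta_data/some/security-credentials']
    redacted = redact_sensitive_keys(processed)
    # redacted['ds']['meta_data']['some']['security-credentials'] ==
    #     'redacted for non-root user'
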
diff --git a/cloudinit/sources/helpers/openstack.py b/cloudinit/sources/helpers/openstack.py
index a4cf066..9c29cea 100644
--- a/cloudinit/sources/helpers/openstack.py
+++ b/cloudinit/sources/helpers/openstack.py
@@ -21,6 +21,8 @@ from cloudinit import sources
 from cloudinit import url_helper
 from cloudinit import util
 
+from cloudinit.sources import BrokenMetadata
+
 # See https://docs.openstack.org/user-guide/cli-config-drive.html
 
 LOG = logging.getLogger(__name__)
@@ -36,21 +38,38 @@ KEY_COPIES = (
     ('local-hostname', 'hostname', False),
     ('instance-id', 'uuid', True),
 )
+
+# Versions and names taken from nova source nova/api/metadata/base.py
 OS_LATEST = 'latest'
 OS_FOLSOM = '2012-08-10'
 OS_GRIZZLY = '2013-04-04'
 OS_HAVANA = '2013-10-17'
 OS_LIBERTY = '2015-10-15'
+# NEWTON_ONE adds 'devices' to md (sriov-pf-passthrough-neutron-port-vlan)
+OS_NEWTON_ONE = '2016-06-30'
+# NEWTON_TWO adds vendor_data2.json (vendordata-reboot)
+OS_NEWTON_TWO = '2016-10-06'
+# OS_OCATA adds 'vif' field to devices (sriov-pf-passthrough-neutron-port-vlan)
+OS_OCATA = '2017-02-22'
+# OS_ROCKY adds a vf_trusted field to devices (sriov-trusted-vfs)
+OS_ROCKY = '2018-08-27'
+
+
 # keep this in chronological order. new supported versions go at the end.
 OS_VERSIONS = (
     OS_FOLSOM,
     OS_GRIZZLY,
     OS_HAVANA,
     OS_LIBERTY,
+    OS_NEWTON_ONE,
+    OS_NEWTON_TWO,
+    OS_OCATA,
+    OS_ROCKY,
 )
 
 PHYSICAL_TYPES = (
     None,
+    'bgpovs',  # not present in OpenStack upstream but used on OVH cloud.
     'bridge',
     'dvs',
     'ethernet',
@@ -68,10 +87,6 @@ class NonReadable(IOError):
     pass
 
 
-class BrokenMetadata(IOError):
-    pass
-
-
 class SourceMixin(object):
     def _ec2_name_to_device(self, name):
         if not self.ec2_metadata:
@@ -441,7 +456,7 @@ class MetadataReader(BaseReader):
             return self._versions
         found = []
         version_path = self._path_join(self.base_path, "openstack")
-        content = self._path_read(version_path)
+        content = self._path_read(version_path, decode=True)
         for line in content.splitlines():
             line = line.strip()
             if not line:
@@ -589,6 +604,8 @@ def convert_net_json(network_json=None, known_macs=None):
             cfg.update({'type': 'physical', 'mac_address': link_mac_addr})
         elif link['type'] in ['bond']:
             params = {}
+            if link_mac_addr:
+                params['mac_address'] = link_mac_addr
             for k, v in link.items():
                 if k == 'bond_links':
                     continue
@@ -658,6 +675,17 @@ def convert_net_json(network_json=None, known_macs=None):
             else:
                 cfg[key] = fmt % link_id_info[target]['name']
 
+    # Infiniband interfaces may be referenced in network_data.json by a 6 byte
+    # Ethernet MAC-style address, and we use that address to look up the
+    # interface name above. Now ensure that the hardware address is set to the
+    # full 20 byte address.
+    ib_known_hwaddrs = net.get_ib_hwaddrs_by_interface()
+    if ib_known_hwaddrs:
+        for cfg in config:
+            if cfg['name'] in ib_known_hwaddrs:
+                cfg['mac_address'] = ib_known_hwaddrs[cfg['name']]
+                cfg['type'] = 'infiniband'
+
     for service in services:
         cfg = service
         cfg.update({'type': 'nameserver'})
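
A hypothetical before/after for the Infiniband pass above, assuming
get_ib_hwaddrs_by_interface() reports the 20-byte address for 'ib0':

    # network_data.json referenced the port by a 6-byte Ethernet-style
    # MAC; the pass rewrites it to the full 20-byte address.
    ib_known_hwaddrs = {
        'ib0': 'a0:00:02:20:fe:80:00:00:00:00:00:00:'
               'f4:52:14:03:00:7b:cb:a1'}
    cfg = {'name': 'ib0', 'type': 'physical',
           'mac_address': 'f4:52:14:7b:cb:a1'}
    if cfg['name'] in ib_known_hwaddrs:
        cfg['mac_address'] = ib_known_hwaddrs[cfg['name']]
        cfg['type'] = 'infiniband'
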
diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py
index 3ef8c62..e1890e2 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_nic.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py
@@ -164,7 +164,7 @@ class NicConfigurator(object):
             return ([subnet], route_list)
 
         # Add routes if there is no primary nic
-        if not self._primaryNic:
+        if not self._primaryNic and v4.gateways:
             route_list.extend(self.gen_ipv4_route(nic,
                                                   v4.gateways,
                                                   v4.netmask))
diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py
index dcd221b..8082019 100644
--- a/cloudinit/sources/tests/test_init.py
+++ b/cloudinit/sources/tests/test_init.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import copy
 import inspect
 import os
 import six
@@ -9,7 +10,8 @@ from cloudinit.event import EventType
 from cloudinit.helpers import Paths
 from cloudinit import importer
 from cloudinit.sources import (
-    INSTANCE_JSON_FILE, DataSource, UNSET)
+    EXPERIMENTAL_TEXT, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE,
+    REDACT_SENSITIVE_VALUE, UNSET, DataSource, redact_sensitive_keys)
 from cloudinit.tests.helpers import CiTestCase, skipIf, mock
 from cloudinit.user_data import UserDataProcessor
 from cloudinit import util
@@ -20,24 +22,30 @@ class DataSourceTestSubclassNet(DataSource):
     dsname = 'MyTestSubclass'
     url_max_wait = 55
 
-    def __init__(self, sys_cfg, distro, paths, custom_userdata=None):
+    def __init__(self, sys_cfg, distro, paths, custom_metadata=None,
+                 custom_userdata=None, get_data_retval=True):
         super(DataSourceTestSubclassNet, self).__init__(
             sys_cfg, distro, paths)
         self._custom_userdata = custom_userdata
+        self._custom_metadata = custom_metadata
+        self._get_data_retval = get_data_retval
 
     def _get_cloud_name(self):
         return 'SubclassCloudName'
 
     def _get_data(self):
-        self.metadata = {'availability_zone': 'myaz',
-                         'local-hostname': 'test-subclass-hostname',
-                         'region': 'myregion'}
+        if self._custom_metadata:
+            self.metadata = self._custom_metadata
+        else:
+            self.metadata = {'availability_zone': 'myaz',
+                             'local-hostname': 'test-subclass-hostname',
+                             'region': 'myregion'}
         if self._custom_userdata:
             self.userdata_raw = self._custom_userdata
         else:
             self.userdata_raw = 'userdata_raw'
         self.vendordata_raw = 'vendordata_raw'
-        return True
+        return self._get_data_retval
 
 
 class InvalidDataSourceTestSubclassNet(DataSource):
@@ -264,8 +272,19 @@ class TestDataSource(CiTestCase):
                 self.assertEqual('fqdnhostname.domain.com',
                                  datasource.get_hostname(fqdn=True))
 
-    def test_get_data_write_json_instance_data(self):
-        """get_data writes INSTANCE_JSON_FILE to run_dir as readonly root."""
+    def test_get_data_does_not_write_instance_data_on_failure(self):
+        """get_data does not write INSTANCE_JSON_FILE on get_data False."""
+        tmp = self.tmp_dir()
+        datasource = DataSourceTestSubclassNet(
+            self.sys_cfg, self.distro, Paths({'run_dir': tmp}),
+            get_data_retval=False)
+        self.assertFalse(datasource.get_data())
+        json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
+        self.assertFalse(
+            os.path.exists(json_file), 'Found unexpected file %s' % json_file)
+
+    def test_get_data_writes_json_instance_data_on_success(self):
+        """get_data writes INSTANCE_JSON_FILE to run_dir as world readable."""
         tmp = self.tmp_dir()
         datasource = DataSourceTestSubclassNet(
             self.sys_cfg, self.distro, Paths({'run_dir': tmp}))
@@ -273,40 +292,126 @@ class TestDataSource(CiTestCase):
         json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
         content = util.load_file(json_file)
         expected = {
-            'base64-encoded-keys': [],
+            'base64_encoded_keys': [],
+            'sensitive_keys': [],
             'v1': {
                 'availability-zone': 'myaz',
+                'availability_zone': 'myaz',
                 'cloud-name': 'subclasscloudname',
+                'cloud_name': 'subclasscloudname',
                 'instance-id': 'iid-datasource',
+                'instance_id': 'iid-datasource',
                 'local-hostname': 'test-subclass-hostname',
+                'local_hostname': 'test-subclass-hostname',
                 'region': 'myregion'},
             'ds': {
-                'meta-data': {'availability_zone': 'myaz',
+                '_doc': EXPERIMENTAL_TEXT,
+                'meta_data': {'availability_zone': 'myaz',
                               'local-hostname': 'test-subclass-hostname',
-                              'region': 'myregion'},
-                'user-data': 'userdata_raw',
-                'vendor-data': 'vendordata_raw'}}
+                              'region': 'myregion'}}}
         self.assertEqual(expected, util.load_json(content))
         file_stat = os.stat(json_file)
+        self.assertEqual(0o644, stat.S_IMODE(file_stat.st_mode))
+        self.assertEqual(expected, util.load_json(content))
+
+    def test_get_data_writes_json_instance_data_sensitive(self):
+        """get_data writes INSTANCE_JSON_SENSITIVE_FILE as readonly root."""
+        tmp = self.tmp_dir()
+        datasource = DataSourceTestSubclassNet(
+            self.sys_cfg, self.distro, Paths({'run_dir': tmp}),
+            custom_metadata={
+                'availability_zone': 'myaz',
+                'local-hostname': 'test-subclass-hostname',
+                'region': 'myregion',
+                'some': {'security-credentials': {
+                    'cred1': 'sekret', 'cred2': 'othersekret'}}})
+        self.assertEqual(
+            ('security-credentials',), datasource.sensitive_metadata_keys)
+        datasource.get_data()
+        json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
+        sensitive_json_file = self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, tmp)
+        redacted = util.load_json(util.load_file(json_file))
+        self.assertEqual(
+            {'cred1': 'sekret', 'cred2': 'othersekret'},
+            redacted['ds']['meta_data']['some']['security-credentials'])
+        content = util.load_file(sensitive_json_file)
+        expected = {
+            'base64_encoded_keys': [],
+            'sensitive_keys': ['ds/meta_data/some/security-credentials'],
+            'v1': {
+                'availability-zone': 'myaz',
+                'availability_zone': 'myaz',
+                'cloud-name': 'subclasscloudname',
+                'cloud_name': 'subclasscloudname',
+                'instance-id': 'iid-datasource',
+                'instance_id': 'iid-datasource',
+                'local-hostname': 'test-subclass-hostname',
+                'local_hostname': 'test-subclass-hostname',
+                'region': 'myregion'},
+            'ds': {
+                '_doc': EXPERIMENTAL_TEXT,
+                'meta_data': {
+                    'availability_zone': 'myaz',
+                    'local-hostname': 'test-subclass-hostname',
+                    'region': 'myregion',
+                    'some': {'security-credentials': REDACT_SENSITIVE_VALUE}}}
+        }
+        self.maxDiff = None
+        self.assertEqual(expected, util.load_json(content))
+        file_stat = os.stat(sensitive_json_file)
         self.assertEqual(0o600, stat.S_IMODE(file_stat.st_mode))
+        self.assertEqual(expected, util.load_json(content))
 
     def test_get_data_handles_redacted_unserializable_content(self):
         """get_data warns unserializable content in INSTANCE_JSON_FILE."""
         tmp = self.tmp_dir()
         datasource = DataSourceTestSubclassNet(
             self.sys_cfg, self.distro, Paths({'run_dir': tmp}),
-            custom_userdata={'key1': 'val1', 'key2': {'key2.1': self.paths}})
-        self.assertTrue(datasource.get_data())
+            custom_metadata={'key1': 'val1', 'key2': {'key2.1': self.paths}})
+        datasource.get_data()
         json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
         content = util.load_file(json_file)
-        expected_userdata = {
+        expected_metadata = {
             'key1': 'val1',
             'key2': {
                 'key2.1': "Warning: redacted unserializable type <class"
                           " 'cloudinit.helpers.Paths'>"}}
         instance_json = util.load_json(content)
         self.assertEqual(
-            expected_userdata, instance_json['ds']['user-data'])
+            expected_metadata, instance_json['ds']['meta_data'])
+
+    def test_persist_instance_data_writes_ec2_metadata_when_set(self):
+        """When ec2_metadata class attribute is set, persist to json."""
+        tmp = self.tmp_dir()
+        datasource = DataSourceTestSubclassNet(
+            self.sys_cfg, self.distro, Paths({'run_dir': tmp}))
+        datasource.ec2_metadata = UNSET
+        datasource.get_data()
+        json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
+        instance_data = util.load_json(util.load_file(json_file))
+        self.assertNotIn('ec2_metadata', instance_data['ds'])
+        datasource.ec2_metadata = {'ec2stuff': 'is good'}
+        datasource.persist_instance_data()
+        instance_data = util.load_json(util.load_file(json_file))
+        self.assertEqual(
+            {'ec2stuff': 'is good'},
+            instance_data['ds']['ec2_metadata'])
+
+    def test_persist_instance_data_writes_network_json_when_set(self):
+        """When network_data.json class attribute is set, persist to json."""
+        tmp = self.tmp_dir()
+        datasource = DataSourceTestSubclassNet(
+            self.sys_cfg, self.distro, Paths({'run_dir': tmp}))
+        datasource.get_data()
+        json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
+        instance_data = util.load_json(util.load_file(json_file))
+        self.assertNotIn('network_json', instance_data['ds'])
+        datasource.network_json = {'network_json': 'is good'}
+        datasource.persist_instance_data()
+        instance_data = util.load_json(util.load_file(json_file))
+        self.assertEqual(
+            {'network_json': 'is good'},
+            instance_data['ds']['network_json'])
 
     @skipIf(not six.PY3, "json serialization on <= py2.7 handles bytes")
     def test_get_data_base64encodes_unserializable_bytes(self):
@@ -314,17 +419,17 @@ class TestDataSource(CiTestCase):
         tmp = self.tmp_dir()
         datasource = DataSourceTestSubclassNet(
             self.sys_cfg, self.distro, Paths({'run_dir': tmp}),
-            custom_userdata={'key1': 'val1', 'key2': {'key2.1': b'\x123'}})
+            custom_metadata={'key1': 'val1', 'key2': {'key2.1': b'\x123'}})
         self.assertTrue(datasource.get_data())
         json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
         content = util.load_file(json_file)
         instance_json = util.load_json(content)
-        self.assertEqual(
-            ['ds/user-data/key2/key2.1'],
-            instance_json['base64-encoded-keys'])
+        self.assertItemsEqual(
+            ['ds/meta_data/key2/key2.1'],
+            instance_json['base64_encoded_keys'])
         self.assertEqual(
             {'key1': 'val1', 'key2': {'key2.1': 'EjM='}},
-            instance_json['ds']['user-data'])
+            instance_json['ds']['meta_data'])
 
     @skipIf(not six.PY2, "json serialization on <= py2.7 handles bytes")
     def test_get_data_handles_bytes_values(self):
@@ -332,15 +437,15 @@ class TestDataSource(CiTestCase):
         tmp = self.tmp_dir()
         datasource = DataSourceTestSubclassNet(
             self.sys_cfg, self.distro, Paths({'run_dir': tmp}),
-            custom_userdata={'key1': 'val1', 'key2': {'key2.1': b'\x123'}})
+            custom_metadata={'key1': 'val1', 'key2': {'key2.1': b'\x123'}})
         self.assertTrue(datasource.get_data())
         json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
         content = util.load_file(json_file)
         instance_json = util.load_json(content)
-        self.assertEqual([], instance_json['base64-encoded-keys'])
+        self.assertEqual([], instance_json['base64_encoded_keys'])
         self.assertEqual(
             {'key1': 'val1', 'key2': {'key2.1': '\x123'}},
-            instance_json['ds']['user-data'])
+            instance_json['ds']['meta_data'])
 
     @skipIf(not six.PY2, "Only python2 hits UnicodeDecodeErrors on non-utf8")
     def test_non_utf8_encoding_logs_warning(self):
@@ -348,7 +453,7 @@ class TestDataSource(CiTestCase):
         tmp = self.tmp_dir()
         datasource = DataSourceTestSubclassNet(
             self.sys_cfg, self.distro, Paths({'run_dir': tmp}),
-            custom_userdata={'key1': 'val1', 'key2': {'key2.1': b'ab\xaadef'}})
+            custom_metadata={'key1': 'val1', 'key2': {'key2.1': b'ab\xaadef'}})
         self.assertTrue(datasource.get_data())
         json_file = self.tmp_path(INSTANCE_JSON_FILE, tmp)
         self.assertFalse(os.path.exists(json_file))
@@ -429,8 +534,9 @@ class TestDataSource(CiTestCase):
 
     def test_update_metadata_only_acts_on_supported_update_events(self):
         """update_metadata won't get_data on unsupported update events."""
+        self.datasource.update_events['network'].discard(EventType.BOOT)
         self.assertEqual(
-            {'network': [EventType.BOOT_NEW_INSTANCE]},
+            {'network': set([EventType.BOOT_NEW_INSTANCE])},
             self.datasource.update_events)
 
         def fake_get_data():
@@ -461,4 +567,36 @@ class TestDataSource(CiTestCase):
             self.logs.getvalue())
 
 
+class TestRedactSensitiveData(CiTestCase):
+
+    def test_redact_sensitive_data_noop_when_no_sensitive_keys_present(self):
+        """When sensitive_keys is absent or empty from metadata do nothing."""
+        md = {'my': 'data'}
+        self.assertEqual(
+            md, redact_sensitive_keys(md, redact_value='redacted'))
+        md['sensitive_keys'] = []
+        self.assertEqual(
+            md, redact_sensitive_keys(md, redact_value='redacted'))
+
+    def test_redact_sensitive_data_redacts_exact_match_name(self):
+        """Only exact matched sensitive_keys are redacted from metadata."""
+        md = {'sensitive_keys': ['md/secure'],
+              'md': {'secure': 's3kr1t', 'insecure': 'publik'}}
+        secure_md = copy.deepcopy(md)
+        secure_md['md']['secure'] = 'redacted'
+        self.assertEqual(
+            secure_md,
+            redact_sensitive_keys(md, redact_value='redacted'))
+
+    def test_redact_sensitive_data_does_redacts_with_default_string(self):
+        """When redact_value is absent, REDACT_SENSITIVE_VALUE is used."""
+        md = {'sensitive_keys': ['md/secure'],
+              'md': {'secure': 's3kr1t', 'insecure': 'publik'}}
+        secure_md = copy.deepcopy(md)
+        secure_md['md']['secure'] = 'redacted for non-root user'
+        self.assertEqual(
+            secure_md,
+            redact_sensitive_keys(md))
+
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py
new file mode 100644
index 0000000..7599126
--- /dev/null
+++ b/cloudinit/sources/tests/test_oracle.py
@@ -0,0 +1,331 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.sources import DataSourceOracle as oracle
+from cloudinit.sources import BrokenMetadata
+from cloudinit import helpers
+
+from cloudinit.tests import helpers as test_helpers
+
+from textwrap import dedent
+import argparse
+import httpretty
+import json
+import mock
+import os
+import six
+import uuid
+
+DS_PATH = "cloudinit.sources.DataSourceOracle"
+MD_VER = "2013-10-17"
+
+
+class TestDataSourceOracle(test_helpers.CiTestCase):
+    """Test datasource DataSourceOracle."""
+
+    ds_class = oracle.DataSourceOracle
+
+    my_uuid = str(uuid.uuid4())
+    my_md = {"uuid": "ocid1.instance.oc1.phx.abyhqlj",
+             "name": "ci-vm1", "availability_zone": "phx-ad-3",
+             "hostname": "ci-vm1hostname",
+             "launch_index": 0, "files": [],
+             "public_keys": {"0": "ssh-rsa AAAAB3N...== user@host"},
+             "meta": {}}
+
+    def _patch_instance(self, inst, patches):
+        """Patch an instance of a class 'inst'.
+        for each name, kwargs in patches:
+             inst.name = mock.Mock(**kwargs)
+        returns a namespace object that has
+             namespace.name = mock.Mock(**kwargs)
+        Do not bother with cleanup as instance is assumed transient."""
+        mocks = argparse.Namespace()
+        for name, kwargs in patches.items():
+            imock = mock.Mock(name=name, spec=getattr(inst, name), **kwargs)
+            setattr(mocks, name, imock)
+            setattr(inst, name, imock)
+        return mocks
+
+    def _get_ds(self, sys_cfg=None, distro=None, paths=None, ud_proc=None,
+                patches=None):
+        if sys_cfg is None:
+            sys_cfg = {}
+        if patches is None:
+            patches = {}
+        if paths is None:
+            tmpd = self.tmp_dir()
+            dirs = {'cloud_dir': self.tmp_path('cloud_dir', tmpd),
+                    'run_dir': self.tmp_path('run_dir')}
+            for d in dirs.values():
+                os.mkdir(d)
+            paths = helpers.Paths(dirs)
+
+        ds = self.ds_class(sys_cfg=sys_cfg, distro=distro,
+                           paths=paths, ud_proc=ud_proc)
+
+        return ds, self._patch_instance(ds, patches)
+
+    def test_platform_not_viable_returns_false(self):
+        ds, mocks = self._get_ds(
+            patches={'_is_platform_viable': {'return_value': False}})
+        self.assertFalse(ds._get_data())
+        mocks._is_platform_viable.assert_called_once_with()
+
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_without_userdata(self, m_is_iscsi_root):
+        """If no user-data is provided, it should not be in return dict."""
+        ds, mocks = self._get_ds(patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        self.assertTrue(ds._get_data())
+        mocks._is_platform_viable.assert_called_once_with()
+        mocks.crawl_metadata.assert_called_once_with()
+        self.assertEqual(self.my_uuid, ds.system_uuid)
+        self.assertEqual(self.my_md['availability_zone'], ds.availability_zone)
+        self.assertIn(self.my_md["public_keys"]["0"], ds.get_public_ssh_keys())
+        self.assertEqual(self.my_md['uuid'], ds.get_instance_id())
+        self.assertIsNone(ds.userdata_raw)
+
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_with_vendordata(self, m_is_iscsi_root):
+        """Test with vendor data."""
+        vd = {'cloud-init': '#cloud-config\nkey: value'}
+        ds, mocks = self._get_ds(patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md,
+                             'vendor_data': vd}}}})
+        self.assertTrue(ds._get_data())
+        mocks._is_platform_viable.assert_called_once_with()
+        mocks.crawl_metadata.assert_called_once_with()
+        self.assertEqual(vd, ds.vendordata_pure)
+        self.assertEqual(vd['cloud-init'], ds.vendordata_raw)
+
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_with_userdata(self, m_is_iscsi_root):
+        """Ensure user-data is populated if present and is binary."""
+        my_userdata = b'abcdefg'
+        ds, mocks = self._get_ds(patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md,
+                             'user_data': my_userdata}}}})
+        self.assertTrue(ds._get_data())
+        mocks._is_platform_viable.assert_called_once_with()
+        mocks.crawl_metadata.assert_called_once_with()
+        self.assertEqual(self.my_uuid, ds.system_uuid)
+        self.assertIn(self.my_md["public_keys"]["0"], ds.get_public_ssh_keys())
+        self.assertEqual(self.my_md['uuid'], ds.get_instance_id())
+        self.assertEqual(my_userdata, ds.userdata_raw)
+
+    @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_network_cmdline(self, m_is_iscsi_root, m_cmdline_config):
+        """network_config should read kernel cmdline."""
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        ncfg = {'version': 1, 'config': [{'a': 'b'}]}
+        m_cmdline_config.return_value = ncfg
+        self.assertTrue(ds._get_data())
+        self.assertEqual(ncfg, ds.network_config)
+        m_cmdline_config.assert_called_once_with()
+        self.assertFalse(distro.generate_fallback_config.called)
+
+    @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_network_fallback(self, m_is_iscsi_root, m_cmdline_config):
+        """test that fallback network is generated if no kernel cmdline."""
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        ncfg = {'version': 1, 'config': [{'a': 'b'}]}
+        m_cmdline_config.return_value = None
+        self.assertTrue(ds._get_data())
+        ncfg = {'version': 1, 'config': [{'distro1': 'value'}]}
+        distro.generate_fallback_config.return_value = ncfg
+        self.assertEqual(ncfg, ds.network_config)
+        m_cmdline_config.assert_called_once_with()
+        distro.generate_fallback_config.assert_called_once_with()
+        self.assertEqual(1, m_cmdline_config.call_count)
+
+        # test that the result got cached, and the methods not re-called.
+        self.assertEqual(ncfg, ds.network_config)
+        self.assertEqual(1, m_cmdline_config.call_count)
+
+
+@mock.patch(DS_PATH + "._read_system_uuid", return_value=str(uuid.uuid4()))
+class TestReadMetaData(test_helpers.HttprettyTestCase):
+    """Test the read_metadata which interacts with http metadata service."""
+
+    mdurl = oracle.METADATA_ENDPOINT
+    my_md = {"uuid": "ocid1.instance.oc1.phx.abyhqlj",
+             "name": "ci-vm1", "availability_zone": "phx-ad-3",
+             "hostname": "ci-vm1hostname",
+             "launch_index": 0, "files": [],
+             "public_keys": {"0": "ssh-rsa AAAAB3N...== user@host"},
+             "meta": {}}
+
+    def populate_md(self, data):
+        """call httppretty.register_url for each item dict 'data',
+           including valid indexes. Text values converted to bytes."""
+        httpretty.register_uri(
+            httpretty.GET, self.mdurl + MD_VER + "/",
+            '\n'.join(data.keys()).encode('utf-8'))
+        for k, v in data.items():
+            httpretty.register_uri(
+                httpretty.GET, self.mdurl + MD_VER + "/" + k,
+                v if not isinstance(v, six.text_type) else v.encode('utf-8'))
+
+    def test_broken_no_sys_uuid(self, m_read_system_uuid):
+        """Datasource requires ability to read system_uuid and true return."""
+        m_read_system_uuid.return_value = None
+        self.assertRaises(BrokenMetadata, oracle.read_metadata)
+
+    def test_broken_no_metadata_json(self, m_read_system_uuid):
+        """Datasource requires meta_data.json."""
+        httpretty.register_uri(
+            httpretty.GET, self.mdurl + MD_VER + "/",
+            '\n'.join(['user_data']).encode('utf-8'))
+        with self.assertRaises(BrokenMetadata) as cm:
+            oracle.read_metadata()
+        self.assertIn("Required field 'meta_data.json' missing",
+                      str(cm.exception))
+
+    def test_with_userdata(self, m_read_system_uuid):
+        data = {'user_data': b'#!/bin/sh\necho hi world\n',
+                'meta_data.json': json.dumps(self.my_md)}
+        self.populate_md(data)
+        result = oracle.read_metadata()[MD_VER]
+        self.assertEqual(data['user_data'], result['user_data'])
+        self.assertEqual(self.my_md, result['meta_data'])
+
+    def test_without_userdata(self, m_read_system_uuid):
+        data = {'meta_data.json': json.dumps(self.my_md)}
+        self.populate_md(data)
+        result = oracle.read_metadata()[MD_VER]
+        self.assertNotIn('user_data', result)
+        self.assertEqual(self.my_md, result['meta_data'])
+
+    def test_unknown_fields_included(self, m_read_system_uuid):
+        """Unknown fields listed in index should be included.
+        And those ending in .json should be decoded."""
+        some_data = {'key1': 'data1', 'subk1': {'subd1': 'subv'}}
+        some_vendor_data = {'cloud-init': 'foo'}
+        data = {'meta_data.json': json.dumps(self.my_md),
+                'some_data.json': json.dumps(some_data),
+                'vendor_data.json': json.dumps(some_vendor_data),
+                'other_blob': b'this is blob'}
+        self.populate_md(data)
+        result = oracle.read_metadata()[MD_VER]
+        self.assertNotIn('user_data', result)
+        self.assertEqual(self.my_md, result['meta_data'])
+        self.assertEqual(some_data, result['some_data'])
+        self.assertEqual(some_vendor_data, result['vendor_data'])
+        self.assertEqual(data['other_blob'], result['other_blob'])
+
+
+class TestIsPlatformViable(test_helpers.CiTestCase):
+    @mock.patch(DS_PATH + ".util.read_dmi_data",
+                return_value=oracle.CHASSIS_ASSET_TAG)
+    def test_expected_viable(self, m_read_dmi_data):
+        """System with known chassis tag is viable."""
+        self.assertTrue(oracle._is_platform_viable())
+        m_read_dmi_data.assert_has_calls([mock.call('chassis-asset-tag')])
+
+    @mock.patch(DS_PATH + ".util.read_dmi_data", return_value=None)
+    def test_expected_not_viable_dmi_data_none(self, m_read_dmi_data):
+        """System without known chassis tag is not viable."""
+        self.assertFalse(oracle._is_platform_viable())
+        m_read_dmi_data.assert_has_calls([mock.call('chassis-asset-tag')])
+
+    @mock.patch(DS_PATH + ".util.read_dmi_data", return_value="LetsGoCubs")
+    def test_expected_not_viable_other(self, m_read_dmi_data):
+        """System with unnown chassis tag is not viable."""
+        self.assertFalse(oracle._is_platform_viable())
+        m_read_dmi_data.assert_has_calls([mock.call('chassis-asset-tag')])
+
+
+class TestLoadIndex(test_helpers.CiTestCase):
+    """_load_index handles parsing of an index into a proper list.
+    The tests here guarantee correct parsing of the html version or
+    a fixed version.  See the function docstring for more detail."""
+
+    _known_html_api_versions = dedent("""\
+        <html>
+        <head><title>Index of /openstack/</title></head>
+        <body bgcolor="white">
+        <h1>Index of /openstack/</h1><hr><pre><a href="../">../</a>
+        <a href="2013-10-17/">2013-10-17/</a>   27-Jun-2018 12:22  -
+        <a href="latest/">latest/</a>           27-Jun-2018 12:22  -
+        </pre><hr></body>
+        </html>""")
+
+    _known_html_contents = dedent("""\
+        <html>
+        <head><title>Index of /openstack/2013-10-17/</title></head>
+        <body bgcolor="white">
+        <h1>Index of /openstack/2013-10-17/</h1><hr><pre><a href="../">../</a>
+        <a href="meta_data.json">meta_data.json</a>  27-Jun-2018 12:22  679
+        <a href="user_data">user_data</a>            27-Jun-2018 12:22  146
+        </pre><hr></body>
+        </html>""")
+
+    def test_parse_html(self):
+        """Test parsing of lower case html."""
+        self.assertEqual(
+            ['2013-10-17/', 'latest/'],
+            oracle._load_index(self._known_html_api_versions))
+        self.assertEqual(
+            ['meta_data.json', 'user_data'],
+            oracle._load_index(self._known_html_contents))
+
+    def test_parse_html_upper(self):
+        """Test parsing of upper case html, although known content is lower."""
+        def _toupper(data):
+            return data.replace("<a", "<A").replace("html>", "HTML>")
+
+        self.assertEqual(
+            ['2013-10-17/', 'latest/'],
+            oracle._load_index(_toupper(self._known_html_api_versions)))
+        self.assertEqual(
+            ['meta_data.json', 'user_data'],
+            oracle._load_index(_toupper(self._known_html_contents)))
+
+    def test_parse_newline_list_with_endl(self):
+        """Test parsing of newline separated list with ending newline."""
+        self.assertEqual(
+            ['2013-10-17/', 'latest/'],
+            oracle._load_index("\n".join(["2013-10-17/", "latest/", ""])))
+        self.assertEqual(
+            ['meta_data.json', 'user_data'],
+            oracle._load_index("\n".join(["meta_data.json", "user_data", ""])))
+
+    def test_parse_newline_list_without_endl(self):
+        """Test parsing of newline separated list with no ending newline.
+
+        Actual openstack implementation does not include trailing newline."""
+        self.assertEqual(
+            ['2013-10-17/', 'latest/'],
+            oracle._load_index("\n".join(["2013-10-17/", "latest/"])))
+        self.assertEqual(
+            ['meta_data.json', 'user_data'],
+            oracle._load_index("\n".join(["meta_data.json", "user_data"])))
+
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/ssh_util.py b/cloudinit/ssh_util.py
index 73c3177..3f99b58 100644
--- a/cloudinit/ssh_util.py
+++ b/cloudinit/ssh_util.py
@@ -41,6 +41,12 @@ VALID_KEY_TYPES = (
 )
 
 
+DISABLE_USER_OPTS = (
+    "no-port-forwarding,no-agent-forwarding,"
+    "no-X11-forwarding,command=\"echo \'Please login as the user \\\"$USER\\\""
+    " rather than the user \\\"$DISABLE_USER\\\".\';echo;sleep 10\"")
+
+
 class AuthKeyLine(object):
     def __init__(self, source, keytype=None, base64=None,
                  comment=None, options=None):
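
A sketch of how DISABLE_USER_OPTS is meant to be consumed; the
substitution step below is an assumption, with $USER and $DISABLE_USER
filled in before the options prefix a disabled account's authorized_keys
entry:

    from cloudinit.ssh_util import DISABLE_USER_OPTS

    opts = DISABLE_USER_OPTS.replace('$USER', 'ubuntu')
    opts = opts.replace('$DISABLE_USER', 'root')
    # Prefixing a public key with these options makes sshd print the
    # "Please login as ..." message and close the connection.
    key_line = opts + ' ssh-rsa AAAAB3N...== user@host'
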
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index c132b57..8a06412 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -17,10 +17,11 @@ from cloudinit.settings import (
 from cloudinit import handlers
 
 # Default handlers (used if not overridden)
-from cloudinit.handlers import boot_hook as bh_part
-from cloudinit.handlers import cloud_config as cc_part
-from cloudinit.handlers import shell_script as ss_part
-from cloudinit.handlers import upstart_job as up_part
+from cloudinit.handlers.boot_hook import BootHookPartHandler
+from cloudinit.handlers.cloud_config import CloudConfigPartHandler
+from cloudinit.handlers.jinja_template import JinjaTemplatePartHandler
+from cloudinit.handlers.shell_script import ShellScriptPartHandler
+from cloudinit.handlers.upstart_job import UpstartJobPartHandler
 
 from cloudinit.event import EventType
 
@@ -87,7 +88,7 @@ class Init(object):
             # from whatever it was to a new set...
             if self.datasource is not NULL_DATA_SOURCE:
                 self.datasource.distro = self._distro
-                self.datasource.sys_cfg = system_config
+                self.datasource.sys_cfg = self.cfg
         return self._distro
 
     @property
@@ -413,12 +414,17 @@ class Init(object):
             'datasource': self.datasource,
         })
         # TODO(harlowja) Hmmm, should we dynamically import these??
+        cloudconfig_handler = CloudConfigPartHandler(**opts)
+        shellscript_handler = ShellScriptPartHandler(**opts)
         def_handlers = [
-            cc_part.CloudConfigPartHandler(**opts),
-            ss_part.ShellScriptPartHandler(**opts),
-            bh_part.BootHookPartHandler(**opts),
-            up_part.UpstartJobPartHandler(**opts),
+            cloudconfig_handler,
+            shellscript_handler,
+            BootHookPartHandler(**opts),
+            UpstartJobPartHandler(**opts),
         ]
+        opts.update(
+            {'sub_handlers': [cloudconfig_handler, shellscript_handler]})
+        def_handlers.append(JinjaTemplatePartHandler(**opts))
         return def_handlers
 
     def _default_userdata_handlers(self):
@@ -510,7 +516,7 @@ class Init(object):
                 # The default frequency if handlers don't have one
                 'frequency': frequency,
                 # This will be used when new handlers are found
-                # to help write there contents to files with numbered
+                # to help write their contents to files with numbered
                 # names...
                 'handlercount': 0,
                 'excluded': excluded,
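
For context on the JinjaTemplatePartHandler registration above: a
user-data part opts into jinja rendering via its first line (matched by
TYPE_MATCHER in templater.py) and can then reference instance-data keys.
A minimal hypothetical part:

    ## template: jinja
    #cloud-config
    runcmd:
      - echo rendered on {{ v1.cloud_name }}
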
diff --git a/cloudinit/templater.py b/cloudinit/templater.py
index 7e7acb8..b668674 100644
--- a/cloudinit/templater.py
+++ b/cloudinit/templater.py
@@ -13,6 +13,7 @@
 import collections
 import re
 
+
 try:
     from Cheetah.Template import Template as CTemplate
     CHEETAH_AVAILABLE = True
@@ -20,23 +21,44 @@ except (ImportError, AttributeError):
     CHEETAH_AVAILABLE = False
 
 try:
-    import jinja2
+    from jinja2.runtime import implements_to_string
     from jinja2 import Template as JTemplate
+    from jinja2 import DebugUndefined as JUndefined
     JINJA_AVAILABLE = True
 except (ImportError, AttributeError):
+    from cloudinit.helpers import identity
+    implements_to_string = identity
     JINJA_AVAILABLE = False
+    JUndefined = object
 
 from cloudinit import log as logging
 from cloudinit import type_utils as tu
 from cloudinit import util
 
+
 LOG = logging.getLogger(__name__)
 TYPE_MATCHER = re.compile(r"##\s*template:(.*)", re.I)
 BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)')
+MISSING_JINJA_PREFIX = u'CI_MISSING_JINJA_VAR/'
+
+
+@implements_to_string   # Needed for python2.7. Otherwise cached super.__str__
+class UndefinedJinjaVariable(JUndefined):
+    """Class used to represent any undefined jinja template varible."""
+
+    def __str__(self):
+        return u'%s%s' % (MISSING_JINJA_PREFIX, self._undefined_name)
+
+    def __sub__(self, other):
+        other = str(other).replace(MISSING_JINJA_PREFIX, '')
+        raise TypeError(
+            'Undefined jinja variable: "{this}-{other}". Jinja tried'
+            ' subtraction. Perhaps you meant "{this}_{other}"?'.format(
+                this=self._undefined_name, other=other))
 
 
 def basic_render(content, params):
-    """This does simple replacement of bash variable like templates.
+    """This does sumple replacement of bash variable like templates.
 
     It identifies patterns like ${a} or $a and can also identify patterns like
     ${a.b} or $a.b which will look for a key 'b' in the dictionary rooted
@@ -82,7 +104,7 @@ def detect_template(text):
         # keep_trailing_newline is in jinja2 2.7+, not 2.6
         add = "\n" if content.endswith("\n") else ""
         return JTemplate(content,
-                         undefined=jinja2.StrictUndefined,
+                         undefined=UndefinedJinjaVariable,
                          trim_blocks=True).render(**params) + add
 
     if text.find("\n") != -1:
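
With UndefinedJinjaVariable in place, rendering no longer hard-fails on an undefined name; the rendered output carries a visible marker instead. A quick sketch of the behavior (assumes the jinja2 package is installed):

    from cloudinit.templater import render_string

    user_data = (
        '## template: jinja\n'
        'host: {{ v1.local_hostname }} id: {{ not_present }}')
    print(render_string(user_data, {'v1': {'local_hostname': 'myhost'}}))
    # host: myhost id: CI_MISSING_JINJA_VAR/not_present
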
diff --git a/cloudinit/tests/helpers.py b/cloudinit/tests/helpers.py
index 5bfe7fa..2eb7b0c 100644
--- a/cloudinit/tests/helpers.py
+++ b/cloudinit/tests/helpers.py
@@ -10,16 +10,16 @@ import shutil
 import sys
 import tempfile
 import time
-import unittest
 
 import mock
 import six
 import unittest2
+from unittest2.util import strclass
 
 try:
-    from contextlib import ExitStack
+    from contextlib import ExitStack, contextmanager
 except ImportError:
-    from contextlib2 import ExitStack
+    from contextlib2 import ExitStack, contextmanager
 
 try:
     from configparser import ConfigParser
@@ -28,11 +28,18 @@ except ImportError:
 
 from cloudinit.config.schema import (
     SchemaValidationError, validate_cloudconfig_schema)
+from cloudinit import cloud
+from cloudinit import distros
 from cloudinit import helpers as ch
+from cloudinit.sources import DataSourceNone
+from cloudinit.templater import JINJA_AVAILABLE
 from cloudinit import util
 
+_real_subp = util.subp
+
 # Used for skipping tests
 SkipTest = unittest2.SkipTest
+skipIf = unittest2.skipIf
 
 # Used for detecting different python versions
 PY2 = False
@@ -112,6 +119,9 @@ class TestCase(unittest2.TestCase):
         super(TestCase, self).setUp()
         self.reset_global_state()
 
+    def shortDescription(self):
+        return strclass(self.__class__) + '.' + self._testMethodName
+
     def add_patch(self, target, attr, *args, **kwargs):
         """Patches specified target object and sets it as attr on test
         instance also schedules cleanup"""
@@ -140,6 +150,17 @@ class CiTestCase(TestCase):
     # Subclass overrides for specific test behavior
     # Whether or not a unit test needs logfile setup
     with_logs = False
+    allowed_subp = False
+    SUBP_SHELL_TRUE = "shell=true"
+
+    @contextmanager
+    def allow_subp(self, allowed_subp):
+        orig = self.allowed_subp
+        try:
+            self.allowed_subp = allowed_subp
+            yield
+        finally:
+            self.allowed_subp = orig
 
     def setUp(self):
         super(CiTestCase, self).setUp()
@@ -152,11 +173,41 @@ class CiTestCase(TestCase):
             handler.setFormatter(formatter)
             self.old_handlers = self.logger.handlers
             self.logger.handlers = [handler]
+        if self.allowed_subp is True:
+            util.subp = _real_subp
+        else:
+            util.subp = self._fake_subp
+
+    def _fake_subp(self, *args, **kwargs):
+        if 'args' in kwargs:
+            cmd = kwargs['args']
+        else:
+            cmd = args[0]
+
+        if not isinstance(cmd, six.string_types):
+            cmd = cmd[0]
+        pass_through = False
+        if not isinstance(self.allowed_subp, (list, bool)):
+            raise TypeError("self.allowed_subp supports list or bool.")
+        if isinstance(self.allowed_subp, bool):
+            pass_through = self.allowed_subp
+        else:
+            pass_through = (
+                (cmd in self.allowed_subp) or
+                (self.SUBP_SHELL_TRUE in self.allowed_subp and
+                 kwargs.get('shell')))
+        if pass_through:
+            return _real_subp(*args, **kwargs)
+        raise Exception(
+            "called subp. set self.allowed_subp=True to allow\n subp(%s)" %
+            ', '.join([str(repr(a)) for a in args] +
+                      ["%s=%s" % (k, repr(v)) for k, v in kwargs.items()]))
 
     def tearDown(self):
         if self.with_logs:
             # Remove the handler we setup
             logging.getLogger().handlers = self.old_handlers
+        util.subp = _real_subp
         super(CiTestCase, self).tearDown()
 
     def tmp_dir(self, dir=None, cleanup=True):
@@ -187,6 +238,29 @@ class CiTestCase(TestCase):
         """
         raise SystemExit(code)
 
+    def tmp_cloud(self, distro, sys_cfg=None, metadata=None):
+        """Create a cloud with tmp working directory paths.
+
+        @param distro: Name of the distro to attach to the cloud.
+        @param sys_cfg: Optional system configuration dict to use.
+        @param metadata: Optional metadata to set on the datasource.
+
+        @return: The built cloud instance.
+        """
+        self.new_root = self.tmp_dir()
+        if not sys_cfg:
+            sys_cfg = {}
+        tmp_paths = {}
+        for var in ['templates_dir', 'run_dir', 'cloud_dir']:
+            tmp_paths[var] = self.tmp_path(var, dir=self.new_root)
+            util.ensure_dir(tmp_paths[var])
+        self.paths = ch.Paths(tmp_paths)
+        cls = distros.fetch(distro)
+        mydist = cls(distro, sys_cfg, self.paths)
+        myds = DataSourceNone.DataSourceNone(sys_cfg, mydist, self.paths)
+        if metadata:
+            myds.metadata.update(metadata)
+        return cloud.Cloud(myds, self.paths, sys_cfg, mydist, None)
+
 
 class ResourceUsingTestCase(CiTestCase):
 
@@ -300,6 +374,13 @@ class FilesystemMockingTestCase(ResourceUsingTestCase):
         self.patchOS(root)
         return root
 
+    @contextmanager
+    def reRooted(self, root=None):
+        try:
+            yield self.reRoot(root)
+        finally:
+            self.patched_funcs.close()
+
 
 class HttprettyTestCase(CiTestCase):
     # necessary as http_proxy gets in the way of httpretty
@@ -426,21 +507,6 @@ def readResource(name, mode='r'):
 
 
 try:
-    skipIf = unittest.skipIf
-except AttributeError:
-    # Python 2.6.  Doesn't have to be high fidelity.
-    def skipIf(condition, reason):
-        def decorator(func):
-            def wrapper(*args, **kws):
-                if condition:
-                    return func(*args, **kws)
-                else:
-                    print(reason, file=sys.stderr)
-            return wrapper
-        return decorator
-
-
-try:
     import jsonschema
     assert jsonschema  # avoid pyflakes error F401: import unused
     _missing_jsonschema_dep = False
@@ -453,6 +519,14 @@ def skipUnlessJsonSchema():
         _missing_jsonschema_dep, "No python-jsonschema dependency present.")
 
 
+def skipUnlessJinja():
+    return skipIf(not JINJA_AVAILABLE, "No jinja dependency present.")
+
+
+def skipIfJinja():
+    return skipIf(JINJA_AVAILABLE, "Jinja dependency present.")
+
+
 # older versions of mock do not have the useful 'assert_not_called'
 if not hasattr(mock.Mock, 'assert_not_called'):
     def __mock_assert_not_called(mmock):
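
With the helper changes above, every CiTestCase subclass now runs with util.subp stubbed out unless the test opts in, either via the allowed_subp class attribute or the allow_subp context manager. A hypothetical test sketch:

    from cloudinit import util
    from cloudinit.tests.helpers import CiTestCase


    class TestSubpGuard(CiTestCase):
        # Only shell=True invocations may reach the real util.subp.
        allowed_subp = [CiTestCase.SUBP_SHELL_TRUE]

        def test_shell_invocations_pass_through(self):
            out, _err = util.subp('echo hi', shell=True)
            self.assertEqual('hi\n', out)

        def test_other_invocations_raise(self):
            with self.allow_subp(False):
                self.assertRaises(Exception, util.subp, ['ls'])
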
diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
index 6a31e50..edb0c18 100644
--- a/cloudinit/tests/test_util.py
+++ b/cloudinit/tests/test_util.py
@@ -57,6 +57,34 @@ OS_RELEASE_CENTOS = dedent("""\
     REDHAT_SUPPORT_PRODUCT_VERSION="7"
 """)
 
+OS_RELEASE_REDHAT_7 = dedent("""\
+    NAME="Red Hat Enterprise Linux Server"
+    VERSION="7.5 (Maipo)"
+    ID="rhel"
+    ID_LIKE="fedora"
+    VARIANT="Server"
+    VARIANT_ID="server"
+    VERSION_ID="7.5"
+    PRETTY_NAME="Red Hat"
+    ANSI_COLOR="0;31"
+    CPE_NAME="cpe:/o:redhat:enterprise_linux:7.5:GA:server"
+    HOME_URL="https://www.redhat.com/";
+    BUG_REPORT_URL="https://bugzilla.redhat.com/";
+
+    REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
+    REDHAT_BUGZILLA_PRODUCT_VERSION=7.5
+    REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
+    REDHAT_SUPPORT_PRODUCT_VERSION="7.5"
+""")
+
+REDHAT_RELEASE_CENTOS_6 = "CentOS release 6.10 (Final)"
+REDHAT_RELEASE_CENTOS_7 = "CentOS Linux release 7.5.1804 (Core)"
+REDHAT_RELEASE_REDHAT_6 = (
+    "Red Hat Enterprise Linux Server release 6.10 (Santiago)")
+REDHAT_RELEASE_REDHAT_7 = (
+    "Red Hat Enterprise Linux Server release 7.5 (Maipo)")
+
+
 OS_RELEASE_DEBIAN = dedent("""\
     PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
     NAME="Debian GNU/Linux"
@@ -337,6 +365,12 @@ class TestGetLinuxDistro(CiTestCase):
         if path == '/etc/os-release':
             return 1
 
+    @classmethod
+    def redhat_release_exists(cls, path):
+        """Side effect reporting that only /etc/redhat-release exists."""
+        if path == '/etc/redhat-release':
+            return 1
+
     @mock.patch('cloudinit.util.load_file')
     def test_get_linux_distro_quoted_name(self, m_os_release, m_path_exists):
         """Verify we get the correct name if the os-release file has
@@ -356,8 +390,48 @@ class TestGetLinuxDistro(CiTestCase):
         self.assertEqual(('ubuntu', '16.04', 'xenial'), dist)
 
     @mock.patch('cloudinit.util.load_file')
-    def test_get_linux_centos(self, m_os_release, m_path_exists):
-        """Verify we get the correct name and release name on CentOS."""
+    def test_get_linux_centos6(self, m_os_release, m_path_exists):
+        """Verify we get the correct name and release name on CentOS 6."""
+        m_os_release.return_value = REDHAT_RELEASE_CENTOS_6
+        m_path_exists.side_effect = TestGetLinuxDistro.redhat_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('centos', '6.10', 'Final'), dist)
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_centos7_redhat_release(self, m_os_release, m_exists):
+        """Verify the correct release info on CentOS 7 without os-release."""
+        m_os_release.return_value = REDHAT_RELEASE_CENTOS_7
+        m_exists.side_effect = TestGetLinuxDistro.redhat_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('centos', '7.5.1804', 'Core'), dist)
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_redhat7_osrelease(self, m_os_release, m_path_exists):
+        """Verify redhat 7 read from os-release."""
+        m_os_release.return_value = OS_RELEASE_REDHAT_7
+        m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('redhat', '7.5', 'Maipo'), dist)
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_redhat7_rhrelease(self, m_os_release, m_path_exists):
+        """Verify redhat 7 read from redhat-release."""
+        m_os_release.return_value = REDHAT_RELEASE_REDHAT_7
+        m_path_exists.side_effect = TestGetLinuxDistro.redhat_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('redhat', '7.5', 'Maipo'), dist)
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_redhat6_rhrelease(self, m_os_release, m_path_exists):
+        """Verify redhat 6 read from redhat-release."""
+        m_os_release.return_value = REDHAT_RELEASE_REDHAT_6
+        m_path_exists.side_effect = TestGetLinuxDistro.redhat_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('redhat', '6.10', 'Santiago'), dist)
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_copr_centos(self, m_os_release, m_path_exists):
+        """Verify we get the correct name and release name on COPR CentOS."""
         m_os_release.return_value = OS_RELEASE_CENTOS
         m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
         dist = util.get_linux_distro()
diff --git a/cloudinit/util.py b/cloudinit/util.py
index d0b0e90..5068096 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -576,12 +576,42 @@ def get_cfg_option_int(yobj, key, default=0):
     return int(get_cfg_option_str(yobj, key, default=default))
 
 
+def _parse_redhat_release(release_file=None):
+    """Return a dictionary of distro info fields from /etc/redhat-release.
+
+    Dict keys will align with /etc/os-release keys:
+        ID, VERSION_ID, VERSION_CODENAME
+    """
+
+    if not release_file:
+        release_file = '/etc/redhat-release'
+    if not os.path.exists(release_file):
+        return {}
+    redhat_release = load_file(release_file)
+    redhat_regex = (
+        r'(?P<name>.+) release (?P<version>[\d\.]+) '
+        r'\((?P<codename>[^)]+)\)')
+    match = re.match(redhat_regex, redhat_release)
+    if match:
+        group = match.groupdict()
+        group['name'] = group['name'].lower().partition(' linux')[0]
+        if group['name'] == 'red hat enterprise':
+            group['name'] = 'redhat'
+        return {'ID': group['name'], 'VERSION_ID': group['version'],
+                'VERSION_CODENAME': group['codename']}
+    return {}
+
+
 def get_linux_distro():
     distro_name = ''
     distro_version = ''
     flavor = ''
+    os_release = {}
     if os.path.exists('/etc/os-release'):
         os_release = load_shell_content(load_file('/etc/os-release'))
+    if not os_release:
+        os_release = _parse_redhat_release()
+    if os_release:
         distro_name = os_release.get('ID', '')
         distro_version = os_release.get('VERSION_ID', '')
         if 'sles' in distro_name or 'suse' in distro_name:
@@ -594,9 +624,11 @@ def get_linux_distro():
             flavor = os_release.get('VERSION_CODENAME', '')
             if not flavor:
                 match = re.match(r'[^ ]+ \((?P<codename>[^)]+)\)',
-                                 os_release.get('VERSION'))
+                                 os_release.get('VERSION', ''))
                 if match:
                     flavor = match.groupdict()['codename']
+        if distro_name == 'rhel':
+            distro_name = 'redhat'
     else:
         dist = ('', '', '')
         try:
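
To see the new fallback at work, walking one of the test strings above through the same regex by hand (a sketch mirroring _parse_redhat_release):

    import re

    redhat_regex = (
        r'(?P<name>.+) release (?P<version>[\d\.]+) '
        r'\((?P<codename>[^)]+)\)')
    match = re.match(
        redhat_regex, "Red Hat Enterprise Linux Server release 7.5 (Maipo)")
    group = match.groupdict()
    # 'red hat enterprise linux server' -> 'red hat enterprise'
    name = group['name'].lower().partition(' linux')[0]
    assert (name, group['version'], group['codename']) == (
        'red hat enterprise', '7.5', 'Maipo')
    # _parse_redhat_release then maps that name to 'redhat', returning
    # {'ID': 'redhat', 'VERSION_ID': '7.5', 'VERSION_CODENAME': 'Maipo'}
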
diff --git a/cloudinit/version.py b/cloudinit/version.py
index 3b60fc4..844a02e 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "18.3"
+__VERSION__ = "18.4"
 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
 
 FEATURES = [
diff --git a/cloudinit/warnings.py b/cloudinit/warnings.py
index f9f7a63..1da90c4 100644
--- a/cloudinit/warnings.py
+++ b/cloudinit/warnings.py
@@ -130,7 +130,7 @@ def show_warning(name, cfg=None, sleep=None, mode=True, **kwargs):
         os.path.join(_get_warn_dir(cfg), name),
         topline + "\n".join(fmtlines) + "\n" + topline)
 
-    LOG.warning(topline + "\n".join(fmtlines) + "\n" + closeline)
+    LOG.warning("%s%s\n%s", topline, "\n".join(fmtlines), closeline)
 
     if sleep:
         LOG.debug("sleeping %d seconds for warning '%s'", sleep, name)
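
The warnings.py change swaps string concatenation for logging's lazy %-style arguments, so the message is only interpolated when a WARNING record is actually emitted. The equivalent pattern in isolation:

    import logging

    LOG = logging.getLogger(__name__)
    # Interpolation is deferred to the logging framework:
    LOG.warning("%s%s\n%s", "**** \n", "warning body", "****")
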
diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
index 5619de3..1fef133 100644
--- a/config/cloud.cfg.tmpl
+++ b/config/cloud.cfg.tmpl
@@ -24,8 +24,6 @@ disable_root: true
 {% if variant in ["centos", "fedora", "rhel"] %}
 mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
 resize_rootfs_tmp: /dev
-ssh_deletekeys:   0
-ssh_genkeytypes:  ~
 ssh_pwauth:   0
 
 {% endif %}
diff --git a/debian/changelog b/debian/changelog
index 6121991..2bb9520 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,73 @@
+cloud-init (18.4-0ubuntu1~18.04.1) bionic-proposed; urgency=medium
+
+  * drop the following cherry-picks now included:
+    + cpick-3cee0bf8-oracle-fix-detect_openstack-to-report-True-on
+  * refresh patches:
+   + debian/patches/openstack-no-network-config.patch
+  * New upstream release. (LP: #1795953)
+    - release 18.4
+    - tests: allow skipping an entire cloud_test without running.
+    - tests: disable lxd tests on cosmic
+    - cii-tests: use unittest2.SkipTest in ntp_chrony due to new deps
+    - lxd: adjust to snap installed lxd.
+    - docs: surface experimental doc in instance-data.json
+    - tests: fix ec2 integration tests. process meta_data instead of meta-data
+    - Add support for Infiniband network interfaces (IPoIB). [Mark Goddard]
+    - cli: add cloud-init query subcommand to query instance metadata
+    - tools/tox-venv: update for new features.
+    - pylint: ignore warning assignment-from-no-return for _write_network
+    - stages: Fix bug causing datasource to have incorrect sys_cfg.
+    - Remove dead-code _write_network distro implementations.
+    - net_util: ensure static configs have netmask in translate_network result
+      [Thomas Berger]
+    - Fall back to root:root on syslog permissions if other options fail.
+      [Robert Schweikert]
+    - tests: Add mock for util.get_hostname. [Robert Schweikert]
+    - ds-identify: doc string cleanup.
+    - OpenStack: Support setting mac address on bond. [Fabian Wiesel]
+    - bash_completion/cloud-init: fix shell syntax error.
+    - EphemeralIPv4Network: Be more explicit when adding default route.
+    - OpenStack: support reading of newer versions of metdata.
+    - OpenStack: fix bug causing 'latest' version to be used from network.
+    - user-data: jinja template to render instance-data.json in cloud-config
+    - config: disable ssh access to a configured user account
+    - tests: print failed testname instead of docstring upon failure
+    - tests: Disallow use of util.subp except for where needed.
+    - sysconfig: refactor sysconfig to accept distro specific templates paths
+    - Add unit tests for config/cc_ssh.py [Francis Ginther]
+    - Fix the built-in cloudinit/tests/helpers:skipIf
+    - read-version: enhance error message [Joshua Powers]
+    - hyperv_reporting_handler: simplify threaded publisher
+    - VMWare: Fix a network config bug in vm with static IPv4 and no gateway.
+      [Pengpeng Sun]
+    - logging: Add logging config type hyperv for reporting via Azure KVP
+      [Andy Liu]
+    - tests: disable other snap test as well [Joshua Powers]
+    - tests: disable snap, fix write_files binary [Joshua Powers]
+    - Add datasource Oracle Compute Infrastructure (OCI).
+    - azure: allow azure to generate network configuration from IMDS per boot.
+    - Scaleway: Add network configuration to the DataSource [Louis Bouchard]
+    - docs: Fix example cloud-init analyze command to match output.
+      [Wesley Gao]
+    - netplan: Correctly render macaddress on a bonds and bridges when
+      provided.
+    - tools: Add 'net-convert' subcommand command to 'cloud-init devel'.
+    - redhat: remove ssh keys on new instance.
+    - Use typeset or local in profile.d scripts.
+    - OpenNebula: Fix null gateway6 [Akihiko Ota]
+    - oracle: fix detect_openstack to report True on OracleCloud.com DMI data
+    - tests: improve LXDInstance trying to workaround or catch bug.
+    - update_metadata re-config on every boot comments and tests not quite
+      right [Mike Gerdts]
+    - tests: Collect build_info from system if available.
+    - pylint: Fix pylint warnings reported in pylint 2.0.0.
+    - get_linux_distro: add support for rhel via redhat-release.
+    - get_linux_distro: add support for centos6 and rawhide flavors of redhat
+    - tools: add '--debug' to tools/net-convert.py
+    - tests: bump the version of paramiko to 2.4.1.
+
+ -- Chad Smith <chad.smith@xxxxxxxxxxxxx>  Wed, 03 Oct 2018 12:12:13 -0600
+
 cloud-init (18.3-9-g2e62cb8a-0ubuntu1~18.04.2) bionic-proposed; urgency=medium
 
   * cherry-pick 3cee0bf8: oracle: fix detect_openstack to report True on
diff --git a/debian/patches/cpick-3cee0bf8-oracle-fix-detect_openstack-to-report-True-on b/debian/patches/cpick-3cee0bf8-oracle-fix-detect_openstack-to-report-True-on
deleted file mode 100644
index ea29c34..0000000
--- a/debian/patches/cpick-3cee0bf8-oracle-fix-detect_openstack-to-report-True-on
+++ /dev/null
@@ -1,66 +0,0 @@
-From 3cee0bf85fbf12d272422c8eeed63bf06e64570b Mon Sep 17 00:00:00 2001
-From: Chad Smith <chad.smith@xxxxxxxxxxxxx>
-Date: Tue, 31 Jul 2018 18:44:12 +0000
-Subject: [PATCH] oracle: fix detect_openstack to report True on
- OracleCloud.com DMI data
-
-The OpenStack datasource in 18.3 changed to detect data in the
-init-local stage instead of init-network and attempted to redetect
-OpenStackLocal datasource on Oracle across reboots. The function
-detect_openstack was added to quickly detect whether a platform is
-OpenStack based on dmi product_name or chassis_asset_tag and it was
-a bit too strict for Oracle in checking for 'OpenStack Nova'/'Compute'
-DMI product_name.
-
-Oracle's DMI product_name reports 'SAtandard PC (i440FX + PIIX, 1996)'
-and DMI chassis_asset_tag is 'OracleCloud.com'.
-
-detect_openstack function now adds 'OracleCloud.com' as a supported value
-'OracleCloud.com' to valid chassis-asset-tags for the OpenStack
-datasource.
-
-LP: #1784685
----
- cloudinit/sources/DataSourceOpenStack.py          |  3 ++-
- tests/unittests/test_datasource/test_openstack.py | 18 ++++++++++++++++++
- 2 files changed, 20 insertions(+), 1 deletion(-)
-
---- a/cloudinit/sources/DataSourceOpenStack.py
-+++ b/cloudinit/sources/DataSourceOpenStack.py
-@@ -28,7 +28,8 @@ DMI_PRODUCT_NOVA = 'OpenStack Nova'
- DMI_PRODUCT_COMPUTE = 'OpenStack Compute'
- VALID_DMI_PRODUCT_NAMES = [DMI_PRODUCT_NOVA, DMI_PRODUCT_COMPUTE]
- DMI_ASSET_TAG_OPENTELEKOM = 'OpenTelekomCloud'
--VALID_DMI_ASSET_TAGS = [DMI_ASSET_TAG_OPENTELEKOM]
-+DMI_ASSET_TAG_ORACLE_CLOUD = 'OracleCloud.com'
-+VALID_DMI_ASSET_TAGS = [DMI_ASSET_TAG_OPENTELEKOM, DMI_ASSET_TAG_ORACLE_CLOUD]
- 
- 
- class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource):
---- a/tests/unittests/test_datasource/test_openstack.py
-+++ b/tests/unittests/test_datasource/test_openstack.py
-@@ -511,6 +511,24 @@ class TestDetectOpenStack(test_helpers.C
-             ds.detect_openstack(),
-             'Expected detect_openstack == True on OpenTelekomCloud')
- 
-+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
-+    def test_detect_openstack_oraclecloud_chassis_asset_tag(self, m_dmi,
-+                                                            m_is_x86):
-+        """Return True on OpenStack reporting Oracle cloud asset-tag."""
-+        m_is_x86.return_value = True
-+
-+        def fake_dmi_read(dmi_key):
-+            if dmi_key == 'system-product-name':
-+                return 'Standard PC (i440FX + PIIX, 1996)'  # No match
-+            if dmi_key == 'chassis-asset-tag':
-+                return 'OracleCloud.com'
-+            assert False, 'Unexpected dmi read of %s' % dmi_key
-+
-+        m_dmi.side_effect = fake_dmi_read
-+        self.assertTrue(
-+            ds.detect_openstack(),
-+            'Expected detect_openstack == True on OracleCloud.com')
-+
-     @test_helpers.mock.patch(MOCK_PATH + 'util.get_proc_env')
-     @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
-     def test_detect_openstack_by_proc_1_environ(self, m_dmi, m_proc_env,
diff --git a/debian/patches/openstack-no-network-config.patch b/debian/patches/openstack-no-network-config.patch
index d6560f4..88449d1 100644
--- a/debian/patches/openstack-no-network-config.patch
+++ b/debian/patches/openstack-no-network-config.patch
@@ -15,7 +15,7 @@ Author: Chad Smith <chad.smith@xxxxxxxxxxxxx>
 
 --- a/cloudinit/sources/DataSourceOpenStack.py
 +++ b/cloudinit/sources/DataSourceOpenStack.py
-@@ -97,10 +97,9 @@ class DataSourceOpenStack(openstack.Sour
+@@ -98,10 +98,9 @@ class DataSourceOpenStack(openstack.Sour
          if self._network_config != sources.UNSET:
              return self._network_config
  
diff --git a/debian/patches/series b/debian/patches/series
index 3691601..2ce72fb 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -1,2 +1 @@
 openstack-no-network-config.patch
-cpick-3cee0bf8-oracle-fix-detect_openstack-to-report-True-on
diff --git a/doc/examples/cloud-config-user-groups.txt b/doc/examples/cloud-config-user-groups.txt
index 01ecad7..6a363b7 100644
--- a/doc/examples/cloud-config-user-groups.txt
+++ b/doc/examples/cloud-config-user-groups.txt
@@ -36,6 +36,8 @@ users:
       - <ssh pub key 1>
       - <ssh pub key 2>
   - snapuser: joe@xxxxxxxxxx
+  - name: nosshlogins
+    ssh_redirect_user: true
 
 # Valid Values:
 #   name: The user's login name
@@ -76,6 +78,13 @@ users:
 #   no_log_init: When set to true, do not initialize lastlog and faillog database.
 #   ssh_import_id: Optional. Import SSH ids
 #   ssh_authorized_keys: Optional. [list] Add keys to user's authorized keys file
+#   ssh_redirect_user: Optional. [bool] Set true to block ssh logins for cloud
+#       ssh public keys and emit a message redirecting logins to
+#       use <default_username> instead. This option only disables cloud
+#       provided public-keys. An error will be raised if ssh_authorized_keys
+#       or ssh_import_id is provided for the same user.
+#
 #   sudo: Defaults to none. Accepts a sudo rule string, a list of sudo rule
 #         strings or False to explicitly deny sudo usage. Examples:
 #
diff --git a/doc/examples/cloud-config.txt b/doc/examples/cloud-config.txt
index 774f66b..eb84dcf 100644
--- a/doc/examples/cloud-config.txt
+++ b/doc/examples/cloud-config.txt
@@ -232,9 +232,22 @@ disable_root: false
 # respective key in /root/.ssh/authorized_keys if disable_root is true
 # see 'man authorized_keys' for more information on what you can do here
 #
-# The string '$USER' will be replaced with the username of the default user
-#
-# disable_root_opts: no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"$USER\" rather than the user \"root\".';echo;sleep 10"
+# The string '$USER' will be replaced with the username of the default user.
+# The string '$DISABLE_USER' will be replaced with the username to disable.
+#
+# disable_root_opts: no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"$USER\" rather than the user \"$DISABLE_USER\".';echo;sleep 10"
+
+# Disable ssh access for non-root users.
+# To disable ssh access for non-root users, ssh_redirect_user: true can be
+# provided for any user in the 'users' list. This will reject any ssh login
+# attempts as that user, emitting a message like that in disable_root_opts
+# which redirects the person to log in as <default_username> instead.
+# This option cannot be combined with either ssh_authorized_keys or
+# ssh_import_id.
+users:
+ - default
+ - name: blockeduser
+   ssh_redirect_user: true
 
 
 # set the locale to a given locale
diff --git a/doc/rtd/index.rst b/doc/rtd/index.rst
index de67f36..20a99a3 100644
--- a/doc/rtd/index.rst
+++ b/doc/rtd/index.rst
@@ -31,6 +31,7 @@ initialization of a cloud instance.
    topics/capabilities.rst
    topics/availability.rst
    topics/format.rst
+   topics/instancedata.rst
    topics/dir_layout.rst
    topics/examples.rst
    topics/boot.rst
diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst
index 3e2c9e3..0d8b894 100644
--- a/doc/rtd/topics/capabilities.rst
+++ b/doc/rtd/topics/capabilities.rst
@@ -16,13 +16,15 @@ User configurability
 
 `Cloud-init`_ 's behavior can be configured via user-data.
 
-    User-data can be given by the user at instance launch time.
+    User-data can be given by the user at instance launch time. See
+    :ref:`user_data_formats` for acceptable user-data content.
+
 
 This is done via the ``--user-data`` or ``--user-data-file`` argument to
 ec2-run-instances for example.
 
-* Check your local clients documentation for how to provide a `user-data`
-  string or `user-data` file for usage by cloud-init on instance creation.
+* Check your local client's documentation for how to provide a `user-data`
+  string or `user-data` file to cloud-init on instance creation.
 
 
 Feature detection
@@ -51,10 +53,9 @@ system:
 
   % cloud-init --help
   usage: cloud-init [-h] [--version] [--file FILES]
-
                     [--debug] [--force]
-                    {init,modules,single,dhclient-hook,features,analyze,devel,collect-logs,clean,status}
-                    ...
+                    {init,modules,single,query,dhclient-hook,features,analyze,devel,collect-logs,clean,status}
+                    ...
 
   optional arguments:
     -h, --help            show this help message and exit
@@ -66,17 +67,19 @@ system:
                           your own risk)
 
   Subcommands:
-    {init,modules,single,dhclient-hook,features,analyze,devel,collect-logs,clean,status}
+    {init,modules,single,query,dhclient-hook,features,analyze,devel,collect-logs,clean,status}
       init                initializes cloud-init and performs initial modules
       modules             activates modules using a given configuration key
       single              run a single module
+      query               Query instance metadata from the command line
      dhclient-hook       run the dhclient hook to record network info
       features            list defined features
       analyze             Devel tool: Analyze cloud-init logs and data
       devel               Run development tools
       collect-logs        Collect and tar all cloud-init debug info
-      clean               Remove logs and artifacts so cloud-init can re-run.
-      status              Report cloud-init status or wait on completion.
+      clean               Remove logs and artifacts so cloud-init can re-run
+      status              Report cloud-init status or wait on completion
+
 
 CLI Subcommand details
 ======================
@@ -102,8 +105,8 @@ cloud-init status
 Report whether cloud-init is running, done, disabled or errored. Exits
 non-zero if an error is detected in cloud-init.
 
- * **--long**: Detailed status information.
- * **--wait**: Block until cloud-init completes.
+* **--long**: Detailed status information.
+* **--wait**: Block until cloud-init completes.
 
 .. code-block:: shell-session
 
@@ -141,6 +144,68 @@ Logs collected are:
  * journalctl output
  * /var/lib/cloud/instance/user-data.txt
 
+.. _cli_query:
+
+cloud-init query
+------------------
+Query standardized cloud instance metadata crawled by cloud-init and stored
+in ``/run/cloud-init/instance-data.json``. This is a convenience command-line
+interface to reference any cached configuration metadata that cloud-init
+crawls when booting the instance. See :ref:`instance_metadata` for more info.
+
+* **--all**: Dump all available instance data as json which can be queried.
+* **--instance-data**: Optional path to a different instance-data.json file to
+  source for queries.
+* **--list-keys**: List available query keys from cached instance data.
+
+.. code-block:: shell-session
+
+  # List all top-level query keys available (includes standardized aliases)
+  % cloud-init query --list-keys
+  availability_zone
+  base64_encoded_keys
+  cloud_name
+  ds
+  instance_id
+  local_hostname
+  region
+  v1
+
+* **<varname>**: A dot-delimited variable path into the instance-data.json
+  object.
+
+.. code-block:: shell-session
+
+  # Query cloud-init standardized metadata on any cloud
+  % cloud-init query v1.cloud_name
+  aws  # or openstack, azure, gce etc.
+
+  # Any standardized instance-data under a <v#> key is aliased as a top-level
+  # key for convenience.
+  % cloud-init query cloud_name
+  aws  # or openstack, azure, gce etc.
+
+  # Query datasource-specific metadata on EC2
+  % cloud-init query ds.meta_data.public_ipv4
+
+* **--format**: A string using jinja-template syntax which is rendered by
+  replacing any referenced instance-data variables.
+
+.. code-block:: shell-session
+
+  # Generate a custom hostname fqdn based on instance-id, cloud and region
+  % cloud-init query --format 'custom-{{instance_id}}.{{region}}.{{v1.cloud_name}}.com'
+  custom-i-0e91f69987f37ec74.us-east-2.aws.com
+
+
+.. note::
+  The standardized instance data keys under **v#** are guaranteed not to change
+  behavior or format. If using top-level convenience aliases for any
+  standardized instance data keys, the value of the highest **v#** version of
+  that key name is what is reported as the top-level value. So these aliases
+  act as a 'latest'.
+
+
 .. _cli_analyze:
 
 cloud-init analyze
@@ -148,10 +213,10 @@ cloud-init analyze
 Get detailed reports of where cloud-init spends most of its time. See
 :ref:`boot_time_analysis` for more info.
 
- * **blame** Report ordered by most costly operations.
- * **dump** Machine-readable JSON dump of all cloud-init tracked events.
- * **show** show time-ordered report of the cost of operations during each
-   boot stage.
+* **blame** Report ordered by most costly operations.
+* **dump** Machine-readable JSON dump of all cloud-init tracked events.
+* **show** show time-ordered report of the cost of operations during each
+  boot stage.
 
 .. _cli_devel:
 
@@ -166,6 +231,13 @@ likely be promoted to top-level subcommands when stable.
    validation is work in progress and supports a subset of cloud-config
    modules.
 
+ * ``cloud-init devel render``: Use cloud-init's jinja template render to
+   process **#cloud-config** or **custom-scripts**, injecting any variables
+   from ``/run/cloud-init/instance-data.json``. It accepts a user-data file
+   containing the jinja template header ``## template: jinja`` and renders
+   that content with any instance-data.json variables present.
+
+
 .. _cli_clean:
 
 cloud-init clean
@@ -173,8 +245,8 @@ cloud-init clean
 Remove cloud-init artifacts from /var/lib/cloud and optionally reboot the
 machine so cloud-init re-runs all stages as it did on first boot.
 
- * **--logs**: Optionally remove /var/log/cloud-init*log files.
- * **--reboot**: Reboot the system after removing artifacts.
+* **--logs**: Optionally remove /var/log/cloud-init*log files.
+* **--reboot**: Reboot the system after removing artifacts.
 
 .. _cli_init:
 
@@ -186,7 +258,7 @@ Can be run on the commandline, but is generally gated to run only once
 due to semaphores in **/var/lib/cloud/instance/sem/** and
 **/var/lib/cloud/sem**.
 
- * **--local**: Run *init-local* stage instead of *init*.
+* **--local**: Run *init-local* stage instead of *init*.
 
 .. _cli_modules:
 
@@ -201,8 +273,8 @@ declared to run in various boot stages in the file
 commandline, but each module is gated to run only once due to semaphores
 in ``/var/lib/cloud/``.
 
- * **--mode (init|config|final)**: Run *modules:init*, *modules:config* or
-   *modules:final* cloud-init stages. See :ref:`boot_stages` for more info.
+* **--mode (init|config|final)**: Run *modules:init*, *modules:config* or
+  *modules:final* cloud-init stages. See :ref:`boot_stages` for more info.
 
 .. _cli_single:
 
@@ -212,9 +284,9 @@ Attempt to run a single named cloud config module.  The following example
 re-runs the cc_set_hostname module ignoring the module default frequency
 of once-per-instance:
 
- * **--name**: The cloud-config module name to run
- * **--frequency**: Optionally override the declared module frequency
-   with one of (always|once-per-instance|once)
+* **--name**: The cloud-config module name to run
+* **--frequency**: Optionally override the declared module frequency
+  with one of (always|once-per-instance|once)
 
 .. code-block:: shell-session
 
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index 30e57d8..e34f145 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -17,99 +17,10 @@ own way) internally a datasource abstract class was created to allow for a
 single way to access the different cloud systems methods to provide this data
 through the typical usage of subclasses.
 
-
-instance-data
--------------
-For reference, cloud-init stores all the metadata, vendordata and userdata
-provided by a cloud in a json blob at ``/run/cloud-init/instance-data.json``.
-While the json contains datasource-specific keys and names, cloud-init will
-maintain a minimal set of standardized keys that will remain stable on any
-cloud. Standardized instance-data keys will be present under a "v1" key.
-Any datasource metadata cloud-init consumes will all be present under the
-"ds" key.
-
-Below is an instance-data.json example from an OpenStack instance:
-
-.. sourcecode:: json
-
-  {
-   "base64-encoded-keys": [
-    "ds/meta-data/random_seed",
-    "ds/user-data"
-   ],
-   "ds": {
-    "ec2_metadata": {
-     "ami-id": "ami-0000032f",
-     "ami-launch-index": "0",
-     "ami-manifest-path": "FIXME",
-     "block-device-mapping": {
-      "ami": "vda",
-      "ephemeral0": "/dev/vdb",
-      "root": "/dev/vda"
-     },
-     "hostname": "xenial-test.novalocal",
-     "instance-action": "none",
-     "instance-id": "i-0006e030",
-     "instance-type": "m1.small",
-     "local-hostname": "xenial-test.novalocal",
-     "local-ipv4": "10.5.0.6",
-     "placement": {
-      "availability-zone": "None"
-     },
-     "public-hostname": "xenial-test.novalocal",
-     "public-ipv4": "10.245.162.145",
-     "reservation-id": "r-fxm623oa",
-     "security-groups": "default"
-    },
-    "meta-data": {
-     "availability_zone": null,
-     "devices": [],
-     "hostname": "xenial-test.novalocal",
-     "instance-id": "3e39d278-0644-4728-9479-678f9212d8f0",
-     "launch_index": 0,
-     "local-hostname": "xenial-test.novalocal",
-     "name": "xenial-test",
-     "project_id": "e0eb2d2538814...",
-     "random_seed": "A6yPN...",
-     "uuid": "3e39d278-0644-4728-9479-678f92..."
-    },
-    "network_json": {
-     "links": [
-      {
-       "ethernet_mac_address": "fa:16:3e:7d:74:9b",
-       "id": "tap9ca524d5-6e",
-       "mtu": 8958,
-       "type": "ovs",
-       "vif_id": "9ca524d5-6e5a-4809-936a-6901..."
-      }
-     ],
-     "networks": [
-      {
-       "id": "network0",
-       "link": "tap9ca524d5-6e",
-       "network_id": "c6adfc18-9753-42eb-b3ea-18b57e6b837f",
-       "type": "ipv4_dhcp"
-      }
-     ],
-     "services": [
-      {
-       "address": "10.10.160.2",
-       "type": "dns"
-      }
-     ]
-    },
-    "user-data": "I2Nsb3VkLWNvbmZpZ...",
-    "vendor-data": null
-   },
-   "v1": {
-    "availability-zone": null,
-    "cloud-name": "openstack",
-    "instance-id": "3e39d278-0644-4728-9479-678f9212d8f0",
-    "local-hostname": "xenial-test",
-    "region": null
-   }
-  }
-
+Any metadata processed by cloud-init's datasources is persisted as
+``/run/cloud-init/instance-data.json``. Cloud-init provides tooling
+to quickly introspect some of that data. See :ref:`instance_metadata` for
+more information.
 
 
 Datasource API
@@ -149,14 +60,14 @@ The current interface that a datasource object must provide is the following:
     # or does not exist)
     def device_name_to_device(self, name)
 
-    # gets the locale string this instance should be applying 
+    # gets the locale string this instance should be applying
     # which typically used to adjust the instances locale settings files
     def get_locale(self)
 
     @property
     def availability_zone(self)
 
-    # gets the instance id that was assigned to this instance by the 
+    # gets the instance id that was assigned to this instance by the
     # cloud provider or when said instance id does not exist in the backing
     # metadata this will return 'iid-datasource'
     def get_instance_id(self)
@@ -189,6 +100,7 @@ Follow for more information.
    datasources/nocloud.rst
    datasources/opennebula.rst
    datasources/openstack.rst
+   datasources/oracle.rst
    datasources/ovf.rst
    datasources/smartos.rst
    datasources/fallback.rst
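
To make the datasource API shown above concrete, a minimal, purely hypothetical subclass (names are illustrative, not part of this tree) could look like:

    from cloudinit import sources


    class DataSourceExample(sources.DataSource):

        dsname = 'Example'  # hypothetical datasource name

        def _get_data(self):
            # Crawl the platform's metadata service here and populate the
            # standard attributes consumed by the rest of cloud-init.
            self.metadata = {'instance-id': 'iid-example01'}
            self.userdata_raw = '#cloud-config\n{}'
            return True
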
diff --git a/doc/rtd/topics/datasources/oracle.rst b/doc/rtd/topics/datasources/oracle.rst
new file mode 100644
index 0000000..f2383ce
--- /dev/null
+++ b/doc/rtd/topics/datasources/oracle.rst
@@ -0,0 +1,26 @@
+.. _datasource_oracle:
+
+Oracle
+======
+
+This datasource reads metadata, vendor-data and user-data from
+`Oracle Compute Infrastructure`_ (OCI).
+
+Oracle Platform
+---------------
+OCI provides bare metal and virtual machines.  In both cases, 
+the platform identifies itself via DMI data in the chassis asset tag
+with the string 'OracleCloud.com'.
+
+Oracle's platform provides a metadata service that mimics the 2013-10-17
+version of the OpenStack metadata service.  Initially, support for Oracle
+was provided via the OpenStack datasource.
+
+Cloud-init has a specific datasource for Oracle in order to:
+ a. allow and support future growth of the OCI platform.
+ b. address small differences between OpenStack and Oracle metadata
+    implementation.
+
+
+.. _Oracle Compute Infrastructure: https://cloud.oracle.com/
+.. vi: textwidth=78
diff --git a/doc/rtd/topics/debugging.rst b/doc/rtd/topics/debugging.rst
index cacc8a2..51363ea 100644
--- a/doc/rtd/topics/debugging.rst
+++ b/doc/rtd/topics/debugging.rst
@@ -45,7 +45,7 @@ subcommands default to reading /var/log/cloud-init.log.
 
 .. code-block:: shell-session
 
-    $ cloud-init analyze blame -i my-cloud-init.log
+    $ cloud-init analyze dump -i my-cloud-init.log
     [
      {
       "description": "running config modules",
diff --git a/doc/rtd/topics/format.rst b/doc/rtd/topics/format.rst
index 1b0ff36..15234d2 100644
--- a/doc/rtd/topics/format.rst
+++ b/doc/rtd/topics/format.rst
@@ -1,6 +1,8 @@
-*******
-Formats
-*******
+.. _user_data_formats:
+
+*****************
+User-Data Formats
+*****************
 
 User data that will be acted upon by cloud-init must be in one of the following types.
 
@@ -65,6 +67,11 @@ Typically used by those who just want to execute a shell script.
 
 Begins with: ``#!`` or ``Content-Type: text/x-shellscript`` when using a MIME archive.
 
+.. note::
+   New in cloud-init v. 18.4: User-data scripts can also render cloud instance
+   metadata variables using jinja templating. See
+   :ref:`instance_metadata` for more information.
+
 Example
 -------
 
@@ -103,12 +110,18 @@ These things include:
 - certain ssh keys should be imported
 - *and many more...*
 
-**Note:** The file must be valid yaml syntax.
+.. note::
+   This file must be valid yaml syntax.
 
 See the :ref:`yaml_examples` section for a commented set of examples of supported cloud config formats.
 
 Begins with: ``#cloud-config`` or ``Content-Type: text/cloud-config`` when using a MIME archive.
 
+.. note::
+   New in cloud-init v. 18.4: Cloud config data can also render cloud instance
+   metadata variables using jinja templating. See
+   :ref:`instance_metadata` for more information.
+
 Upstart Job
 ===========
 
diff --git a/doc/rtd/topics/instancedata.rst b/doc/rtd/topics/instancedata.rst
new file mode 100644
index 0000000..634e180
--- /dev/null
+++ b/doc/rtd/topics/instancedata.rst
@@ -0,0 +1,297 @@
+.. _instance_metadata:
+
+*****************
+Instance Metadata
+*****************
+
+What is instance data?
+========================
+
+Instance data is the collection of all configuration data that cloud-init
+processes to configure the instance. This configuration typically
+comes from any number of sources:
+
+* cloud-provided metadata services (aka metadata)
+* custom config-drive attached to the instance
+* cloud-config seed files in the booted cloud image or distribution
+* vendordata provided from files or cloud metadata services
+* userdata provided at instance creation
+
+Each cloud provider presents unique configuration metadata in different
+formats to the instance. Cloud-init provides a cache of any crawled metadata
+as well as a versioned set of standardized instance data keys which it makes
+available on all platforms.
+
+Cloud-init produces a simple json object in
+``/run/cloud-init/instance-data.json`` which represents standardized and
+versioned representation of the metadata it consumes during initial boot. The
+intent is to provide the following benefits to users or scripts on any system
+deployed with cloud-init:
+
+* simple static object to query to obtain an instance's metadata
+* speed: avoid costly network transactions for metadata that is already cached
+  on the filesystem
+* reduce need to recrawl metadata services for static metadata that is already
+  cached
+* leverage cloud-init's best practices for crawling cloud-metadata services
+* avoid rolling unique metadata crawlers on each cloud platform to get
+  metadata configuration values
+
+Cloud-init stores any instance data processed in the following files:
+
+* ``/run/cloud-init/instance-data.json``: world-readable json containing
+  standardized keys, sensitive keys redacted
+* ``/run/cloud-init/instance-data-sensitive.json``: root-readable unredacted
+  json blob
+* ``/var/lib/cloud/instance/user-data.txt``: root-readable sensitive raw
+  userdata
+* ``/var/lib/cloud/instance/vendor-data.txt``: root-readable sensitive raw
+  vendordata
+
+Cloud-init redacts any security-sensitive content from instance-data.json and
+stores ``/run/cloud-init/instance-data.json`` as a world-readable json file.
+Because user-data and vendor-data can contain passwords, both of those raw
+files are readable only by *root*. The *root* user can also read
+``/run/cloud-init/instance-data-sensitive.json``, which contains all instance
+data from instance-data.json plus the unredacted sensitive content.
+
+
+Format of instance-data.json
+============================
+
+The instance-data.json and instance-data-sensitive.json files are well-formed
+JSON and record the set of keys and values for any metadata processed by
+cloud-init. Cloud-init standardizes the format for this content so that it
+can be generalized across different cloud platforms.
+
+There are four basic top-level keys:
+
+* **base64_encoded_keys**: A list of forward-slash delimited key paths into
+  the instance-data.json object whose value is base64-encoded for json
+  compatibility. Values at these paths should be decoded to get the original
+  value.
+
+* **sensitive_keys**: A list of forward-slash delimited key paths into
+  the instance-data.json object whose value is considered by the datasource as
+  'security sensitive'. Only the keys listed here will be redacted from
+  instance-data.json for non-root users.
+
+* **ds**: Datasource-specific metadata crawled for the specific cloud
+  platform. It should closely represent the structure of the cloud metadata
+  crawled. The structure of content and details provided are entirely
+  cloud-dependent. Mileage will vary depending on what the cloud exposes.
+  The content exposed under the 'ds' key is currently **experimental** and
+  expected to change slightly in the upcoming cloud-init release.
+
+* **v1**: Standardized cloud-init metadata keys; these keys are guaranteed to
+  exist on all cloud platforms. They will also retain their current behavior
+  and format and will be carried forward even if cloud-init introduces a new
+  version of standardized keys with **v2**.
+
+The standardized keys present:
+
++----------------------+-----------------------------------------------+---------------------------+
+|  Key path            | Description                                   | Examples                  |
++======================+===============================================+===========================+
+| v1.cloud_name        | The name of the cloud provided by metadata    | aws, openstack, azure,    |
+|                      | key 'cloud-name' or the cloud-init datasource | configdrive, nocloud,     |
+|                      | name which was discovered.                    | ovf, etc.                 |
++----------------------+-----------------------------------------------+---------------------------+
+| v1.instance_id       | Unique instance_id allocated by the cloud     | i-<somehash>              |
++----------------------+-----------------------------------------------+---------------------------+
+| v1.local_hostname    | The internal or local hostname of the system  | ip-10-41-41-70,           |
+|                      |                                               | <user-provided-hostname>  |
++----------------------+-----------------------------------------------+---------------------------+
+| v1.region            | The physical region/datacenter in which the   | us-east-2                 |
+|                      | instance is deployed                          |                           |
++----------------------+-----------------------------------------------+---------------------------+
+| v1.availability_zone | The physical availability zone in which the   | us-east-2b, nova, null    |
+|                      | instance is deployed                          |                           |
++----------------------+-----------------------------------------------+---------------------------+
+
+
+Below is an example of ``/run/cloud-init/instance-data.json`` on an EC2
+instance:
+
+.. sourcecode:: json
+
+  {
+   "base64_encoded_keys": [],
+   "sensitive_keys": [],
+   "ds": {
+    "meta_data": {
+     "ami-id": "ami-014e1416b628b0cbf",
+     "ami-launch-index": "0",
+     "ami-manifest-path": "(unknown)",
+     "block-device-mapping": {
+      "ami": "/dev/sda1",
+      "ephemeral0": "sdb",
+      "ephemeral1": "sdc",
+      "root": "/dev/sda1"
+     },
+     "hostname": "ip-10-41-41-70.us-east-2.compute.internal",
+     "instance-action": "none",
+     "instance-id": "i-04fa31cfc55aa7976",
+     "instance-type": "t2.micro",
+     "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal",
+     "local-ipv4": "10.41.41.70",
+     "mac": "06:b6:92:dd:9d:24",
+     "metrics": {
+      "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
+     },
+     "network": {
+      "interfaces": {
+       "macs": {
+	"06:b6:92:dd:9d:24": {
+	 "device-number": "0",
+	 "interface-id": "eni-08c0c9fdb99b6e6f4",
+	 "ipv4-associations": {
+	  "18.224.22.43": "10.41.41.70"
+	 },
+	 "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal",
+	 "local-ipv4s": "10.41.41.70",
+	 "mac": "06:b6:92:dd:9d:24",
+	 "owner-id": "437526006925",
+	 "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com",
+	 "public-ipv4s": "18.224.22.43",
+	 "security-group-ids": "sg-828247e9",
+	 "security-groups": "Cloud-init integration test secgroup",
+	 "subnet-id": "subnet-282f3053",
+	 "subnet-ipv4-cidr-block": "10.41.41.0/24",
+	 "subnet-ipv6-cidr-blocks": "2600:1f16:b80:ad00::/64",
+	 "vpc-id": "vpc-252ef24d",
+	 "vpc-ipv4-cidr-block": "10.41.0.0/16",
+	 "vpc-ipv4-cidr-blocks": "10.41.0.0/16",
+	 "vpc-ipv6-cidr-blocks": "2600:1f16:b80:ad00::/56"
+	}
+       }
+      }
+     },
+     "placement": {
+      "availability-zone": "us-east-2b"
+     },
+     "profile": "default-hvm",
+     "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com",
+     "public-ipv4": "18.224.22.43",
+     "public-keys": {
+      "cloud-init-integration": [
+       "ssh-rsa
+  AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB
+  cloud-init-integration"
+      ]
+     },
+     "reservation-id": "r-06ab75e9346f54333",
+     "security-groups": "Cloud-init integration test secgroup",
+     "services": {
+      "domain": "amazonaws.com",
+      "partition": "aws"
+     }
+    }
+   },
+   "v1": {
+    "availability-zone": "us-east-2b",
+    "availability_zone": "us-east-2b",
+    "cloud-name": "aws",
+    "cloud_name": "aws",
+    "instance-id": "i-04fa31cfc55aa7976",
+    "instance_id": "i-04fa31cfc55aa7976",
+    "local-hostname": "ip-10-41-41-70",
+    "local_hostname": "ip-10-41-41-70",
+    "region": "us-east-2"
+   }
+  }
+
+
+Using instance-data
+===================
+
+As of cloud-init v. 18.4, any variables present in
+``/run/cloud-init/instance-data.json`` can be used in:
+
+* User-data scripts
+* Cloud config data
+* Command line interface via **cloud-init query** or
+  **cloud-init devel render**
+
+Many clouds allow users to provide user-data to an instance at
+the time the instance is launched. Cloud-init supports a number of
+:ref:`user_data_formats`.
+
+Both user-data scripts and **#cloud-config** data support jinja template
+rendering. When the first line of the provided user-data begins with
+**## template: jinja**, cloud-init will use jinja to render that file.
+All instance-data-sensitive.json variables are surfaced as dot-delimited
+jinja template variables because cloud-config modules are run as the
+'root' user.
+
+
+Below are some examples of providing these types of user-data:
+
+* Cloud config calling home with the ec2 public hostname and availability-zone
+
+.. code-block:: shell-session
+
+  ## template: jinja
+  #cloud-config
+  runcmd:
+      - echo 'EC2 public hostname allocated to instance: {{
+        ds.meta_data.public_hostname }}' > /tmp/instance_metadata
+      - echo 'EC2 availability zone: {{ v1.availability_zone }}' >>
+        /tmp/instance_metadata
+      - curl -X POST -d '{"hostname": "{{ds.meta_data.public_hostname }}",
+        "availability-zone": "{{ v1.availability_zone }}"}'
+        https://example.com
+
+* Custom user-data script performing different operations based on region
+
+.. code-block:: shell-session
+
+   ## template: jinja
+   #!/bin/bash
+   {% if v1.region == 'us-east-2' -%}
+   echo 'Installing custom proxies for {{ v1.region }}'
+   sudo apt-get install my-xtra-fast-stack
+   {%- endif %}
+   ...
+
+.. note::
+  Trying to reference jinja variables that don't exist in
+  instance-data.json will result in warnings in ``/var/log/cloud-init.log``
+  and the following string in your rendered user-data:
+  ``CI_MISSING_JINJA_VAR/<your_varname>``.
+
+Cloud-init also surfaces a commandline tool **cloud-init query** which can
+assist developers or scripts with obtaining instance metadata easily. See
+:ref:`cli_query` for more information.
+
+To cut down on keystrokes on the command line, cloud-init also provides
+top-level key aliases for any standardized ``v#`` keys present; the preceding
+``v1`` is not required of ``v1.var_name``. These aliases represent the
+value of the highest versioned standard key. For example, the ``cloud_name``
+value will come from ``v2.cloud_name`` if both ``v1`` and ``v2`` keys are
+present in instance-data.json.
+The **query** command also publishes ``userdata`` and ``vendordata`` keys to
+the root user which will contain the decoded user and vendor data provided to
+this instance. Non-root users referencing userdata or vendordata keys will
+see only redacted values.
+
+.. code-block:: shell-session
+
+ # List all top-level instance-data keys available
+ % cloud-init query --list-keys
+
+ # Find your EC2 ami-id
+ % cloud-init query ds.meta_data.ami_id
+
+ # Format your cloud_name and region using jinja template syntax
+ % cloud-init query --format 'cloud: {{ v1.cloud_name }} myregion: {{ v1.region }}'
+
+.. note::
+  To save time designing a user-data template for a specific cloud's
+  instance-data.json, use the 'render' cloud-init command on an
+  instance booted on your favorite cloud. See :ref:`cli_devel` for more
+  information.
+
+.. vi: textwidth=78
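
Beyond jinja templates and the query CLI documented above, the cached file can also be consumed directly from code; a short sketch:

    import json

    # World-readable cache of standardized instance metadata (see above).
    with open('/run/cloud-init/instance-data.json') as stream:
        instance_data = json.load(stream)

    # The v1 keys are guaranteed to exist on any cloud platform.
    print(instance_data['v1']['cloud_name'], instance_data['v1']['region'])
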
diff --git a/integration-requirements.txt b/integration-requirements.txt
index 01baebd..880d988 100644
--- a/integration-requirements.txt
+++ b/integration-requirements.txt
@@ -5,16 +5,17 @@
 # the packages/pkg-deps.json file as well.
 #
 
+unittest2
 # ec2 backend
 boto3==1.5.9
 
 # ssh communication
-paramiko==2.4.0
+paramiko==2.4.1
+
 
 # lxd backend
 # 04/03/2018: enables use of lxd 3.0
 git+https://github.com/lxc/pylxd.git@4b8ab1802f9aee4eb29cf7b119dae0aa47150779
 
-
 # finds latest image information
 git+https://git.launchpad.net/simplestreams
diff --git a/tests/cloud_tests/collect.py b/tests/cloud_tests/collect.py
index 75b5061..642745d 100644
--- a/tests/cloud_tests/collect.py
+++ b/tests/cloud_tests/collect.py
@@ -9,6 +9,7 @@ from cloudinit import util as c_util
 from tests.cloud_tests import (config, LOG, setup_image, util)
 from tests.cloud_tests.stage import (PlatformComponent, run_stage, run_single)
 from tests.cloud_tests import platforms
+from tests.cloud_tests.testcases import base, get_test_class
 
 
 def collect_script(instance, base_dir, script, script_name):
@@ -63,6 +64,7 @@ def collect_test_data(args, snapshot, os_name, test_name):
     res = ({}, 1)
 
     # load test config
+    test_name_in = test_name
     test_name = config.path_to_name(test_name)
     test_config = config.load_test_config(test_name)
     user_data = test_config['cloud_config']
@@ -75,6 +77,16 @@ def collect_test_data(args, snapshot, os_name, test_name):
         LOG.warning('test config %s is not enabled, skipping', test_name)
         return ({}, 0)
 
+    test_class = get_test_class(
+        config.name_to_module(test_name_in),
+        test_data={'platform': snapshot.platform_name, 'os_name': os_name},
+        test_conf=test_config['cloud_config'])
+    try:
+        test_class.maybeSkipTest()
+    except base.SkipTest as s:
+        LOG.warning('skipping test config %s: %s', test_name, s)
+        return ({}, 0)
+
     # if testcase requires a feature flag that the image does not support,
     # skip the testcase with a warning
     req_features = test_config.get('required_features', [])
diff --git a/tests/cloud_tests/platforms/instances.py b/tests/cloud_tests/platforms/instances.py
index 95bc3b1..529e79c 100644
--- a/tests/cloud_tests/platforms/instances.py
+++ b/tests/cloud_tests/platforms/instances.py
@@ -97,7 +97,8 @@ class Instance(TargetBase):
             return self._ssh_client
 
         if not self.ssh_ip or not self.ssh_port:
-            raise ValueError
+            raise ValueError("Cannot ssh_connect, ssh_ip=%s ssh_port=%s" %
+                             (self.ssh_ip, self.ssh_port))
 
         client = paramiko.SSHClient()
         client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
diff --git a/tests/cloud_tests/platforms/lxd/instance.py b/tests/cloud_tests/platforms/lxd/instance.py
index d396519..83c97ab 100644
--- a/tests/cloud_tests/platforms/lxd/instance.py
+++ b/tests/cloud_tests/platforms/lxd/instance.py
@@ -12,6 +12,8 @@ from tests.cloud_tests.util import PlatformError
 
 from ..instances import Instance
 
+from pylxd import exceptions as pylxd_exc
+
 
 class LXDInstance(Instance):
     """LXD container backed instance."""
@@ -30,6 +32,9 @@ class LXDInstance(Instance):
         @param config: image config
         @param features: supported feature flags
         """
+        if not pylxd_container:
+            raise ValueError("Invalid value pylxd_container: %s" %
+                             pylxd_container)
         self._pylxd_container = pylxd_container
         super(LXDInstance, self).__init__(
             platform, name, properties, config, features)
@@ -40,9 +45,19 @@ class LXDInstance(Instance):
     @property
     def pylxd_container(self):
         """Property function."""
+        if self._pylxd_container is None:
+            raise RuntimeError(
+                "%s: Attempted use of pylxd_container after deletion." % self)
         self._pylxd_container.sync()
         return self._pylxd_container
 
+    def __str__(self):
+        return (
+            '%s(name=%s) status=%s' %
+            (self.__class__.__name__, self.name,
+             ("deleted" if self._pylxd_container is None else
+              self.pylxd_container.status)))
+
     def _execute(self, command, stdin=None, env=None):
         if env is None:
             env = {}
@@ -165,10 +180,27 @@ class LXDInstance(Instance):
         self.shutdown(wait=wait)
         self.start(wait=wait)
 
-    def shutdown(self, wait=True):
+    def shutdown(self, wait=True, retry=1):
         """Shutdown instance."""
-        if self.pylxd_container.status != 'Stopped':
+        if self.pylxd_container.status == 'Stopped':
+            return
+
+        try:
+            LOG.debug("%s: shutting down (wait=%s)", self, wait)
             self.pylxd_container.stop(wait=wait)
+        except (pylxd_exc.LXDAPIException, pylxd_exc.NotFound) as e:
+            # An exception happens here sometimes (LP: #1783198)
+            # LOG it, and try again.
+            LOG.warning(
+                ("%s: shutdown(retry=%d) caught %s in shutdown "
+                 "(response=%s): %s"),
+                self, retry, e.__class__.__name__, e.response, e)
+            if isinstance(e, pylxd_exc.NotFound):
+                LOG.debug("container_exists(%s) == %s",
+                          self.name, self.platform.container_exists(self.name))
+            if retry == 0:
+                raise e
+            return self.shutdown(wait=wait, retry=retry - 1)
 
     def start(self, wait=True, wait_for_cloud_init=False):
         """Start instance."""
@@ -189,12 +221,14 @@ class LXDInstance(Instance):
 
     def destroy(self):
         """Clean up instance."""
+        LOG.debug("%s: deleting container.", self)
         self.unfreeze()
         self.shutdown()
         self.pylxd_container.delete(wait=True)
+        self._pylxd_container = None
+
         if self.platform.container_exists(self.name):
-            raise OSError('container {} was not properly removed'
-                          .format(self.name))
+            raise OSError('%s: container was not properly removed' % self)
         if self._console_log_file and os.path.exists(self._console_log_file):
             os.unlink(self._console_log_file)
         shutil.rmtree(self.tmpd)
diff --git a/tests/cloud_tests/setup_image.py b/tests/cloud_tests/setup_image.py
index 4e19570..39f4517 100644
--- a/tests/cloud_tests/setup_image.py
+++ b/tests/cloud_tests/setup_image.py
@@ -4,6 +4,7 @@
 
 from functools import partial
 import os
+import yaml
 
 from tests.cloud_tests import LOG
 from tests.cloud_tests import stage, util
@@ -220,7 +221,14 @@ def setup_image(args, image):
     calls = [partial(stage.run_single, desc, partial(func, args, image))
              for name, func, desc in handlers if getattr(args, name, None)]
 
-    LOG.info('setting up %s', image)
+    try:
+        data = yaml.load(image.read_data("/etc/cloud/build.info", decode=True))
+        info = ' '.join(["%s=%s" % (k, data.get(k))
+                         for k in ("build_name", "serial") if k in data])
+    except Exception as e:
+        info = "N/A (%s)" % e
+
+    LOG.info('setting up %s (%s)', image, info)
     res = stage.run_stage(
         'set up for {}'.format(image), calls, continue_after_error=False)
     return res
diff --git a/tests/cloud_tests/testcases.yaml b/tests/cloud_tests/testcases.yaml
index a16d1dd..fb9a5d2 100644
--- a/tests/cloud_tests/testcases.yaml
+++ b/tests/cloud_tests/testcases.yaml
@@ -27,6 +27,10 @@ base_test_data:
         package-versions: |
             #!/bin/sh
             dpkg-query --show
+        build.info: |
+            #!/bin/sh
+            binfo=/etc/cloud/build.info
+            [ -f "$binfo" ] && cat "$binfo" || echo "N/A"
         system.journal.gz: |
             #!/bin/sh
             [ -d /run/systemd ] || { echo "not systemd."; exit 0; }
diff --git a/tests/cloud_tests/testcases/__init__.py b/tests/cloud_tests/testcases/__init__.py
index bd548f5..6bb39f7 100644
--- a/tests/cloud_tests/testcases/__init__.py
+++ b/tests/cloud_tests/testcases/__init__.py
@@ -4,8 +4,7 @@
 
 import importlib
 import inspect
-import unittest
-from unittest.util import strclass
+import unittest2
 
 from cloudinit.util import read_conf
 
@@ -13,7 +12,7 @@ from tests.cloud_tests import config
 from tests.cloud_tests.testcases.base import CloudTestCase as base_test
 
 
-def discover_tests(test_name):
+def discover_test(test_name):
     """Discover tests in test file for 'testname'.
 
     @return_value: list of test classes
@@ -25,35 +24,48 @@ def discover_tests(test_name):
     except NameError:
         raise ValueError('no test verifier found at: {}'.format(testmod_name))
 
-    return [mod for name, mod in inspect.getmembers(testmod)
-            if inspect.isclass(mod) and base_test in inspect.getmro(mod) and
-            getattr(mod, '__test__', True)]
+    found = [mod for name, mod in inspect.getmembers(testmod)
+             if (inspect.isclass(mod)
+                 and base_test in inspect.getmro(mod)
+                 and getattr(mod, '__test__', True))]
+    if len(found) != 1:
+        raise RuntimeError(
+            "Unexpected situation, expected exactly one test class for %s,"
+            " found: %s" % (test_name, found))
 
+    return found
 
-def get_suite(test_name, data, conf):
-    """Get test suite with all tests for 'testname'.
 
-    @return_value: a test suite
-    """
-    suite = unittest.TestSuite()
-    for test_class in discover_tests(test_name):
+def get_test_class(test_name, test_data, test_conf):
+    test_class = discover_test(test_name)[0]
+
+    class DynamicTestSubclass(test_class):
 
-        class tmp(test_class):
+        _realclass = test_class
+        data = test_data
+        conf = test_conf
+        release_conf = read_conf(config.RELEASES_CONF)['releases']
 
-            _realclass = test_class
+        def __str__(self):
+            return "%s (%s)" % (self._testMethodName,
+                                unittest2.util.strclass(self._realclass))
 
-            def __str__(self):
-                return "%s (%s)" % (self._testMethodName,
-                                    strclass(self._realclass))
+        @classmethod
+        def setUpClass(cls):
+            cls.maybeSkipTest()
 
-            @classmethod
-            def setUpClass(cls):
-                cls.data = data
-                cls.conf = conf
-                cls.release_conf = read_conf(config.RELEASES_CONF)['releases']
+    return DynamicTestSubclass
 
-        suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(tmp))
 
+def get_suite(test_name, data, conf):
+    """Get test suite with all tests for 'testname'.
+
+    @return_value: a test suite
+    """
+    suite = unittest2.TestSuite()
+    suite.addTest(
+        unittest2.defaultTestLoader.loadTestsFromTestCase(
+            get_test_class(test_name, data, conf)))
     return suite
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/base.py b/tests/cloud_tests/testcases/base.py
index 696db8d..e18d601 100644
--- a/tests/cloud_tests/testcases/base.py
+++ b/tests/cloud_tests/testcases/base.py
@@ -5,15 +5,15 @@
 import crypt
 import json
 import re
-import unittest
+import unittest2
 
 
 from cloudinit import util as c_util
 
-SkipTest = unittest.SkipTest
+SkipTest = unittest2.SkipTest
 
 
-class CloudTestCase(unittest.TestCase):
+class CloudTestCase(unittest2.TestCase):
     """Base test class for verifiers."""
 
     # data gets populated in get_suite.setUpClass
@@ -31,6 +31,11 @@ class CloudTestCase(unittest.TestCase):
     def is_distro(self, distro_name):
         return self.os_cfg['os'] == distro_name
 
+    @classmethod
+    def maybeSkipTest(cls):
+        """Present to allow subclasses to override and raise a skipTest."""
+        pass
+
     def assertPackageInstalled(self, name, version=None):
         """Check dpkg-query --show output for matching package name.
 
@@ -167,11 +172,12 @@ class CloudTestCase(unittest.TestCase):
                 'Skipping instance-data.json test.'
                 ' OS: %s not bionic or newer' % self.os_name)
         instance_data = json.loads(out)
-        self.assertEqual(
-            ['ds/user-data'], instance_data['base64-encoded-keys'])
+        self.assertItemsEqual(
+            [],
+            instance_data['base64_encoded_keys'])
         ds = instance_data.get('ds', {})
         v1_data = instance_data.get('v1', {})
-        metadata = ds.get('meta-data', {})
+        metadata = ds.get('meta_data', {})
         macs = metadata.get(
             'network', {}).get('interfaces', {}).get('macs', {})
         if not macs:
@@ -187,10 +193,10 @@ class CloudTestCase(unittest.TestCase):
             metadata.get('placement', {}).get('availability-zone'),
             'Could not determine EC2 Availability zone placement')
         self.assertIsNotNone(
-            v1_data['availability-zone'], 'expected ec2 availability-zone')
-        self.assertEqual('aws', v1_data['cloud-name'])
-        self.assertIn('i-', v1_data['instance-id'])
-        self.assertIn('ip-', v1_data['local-hostname'])
+            v1_data['availability_zone'], 'expected ec2 availability_zone')
+        self.assertEqual('aws', v1_data['cloud_name'])
+        self.assertIn('i-', v1_data['instance_id'])
+        self.assertIn('ip-', v1_data['local_hostname'])
         self.assertIsNotNone(v1_data['region'], 'expected ec2 region')
 
     def test_instance_data_json_lxd(self):
@@ -213,16 +219,14 @@ class CloudTestCase(unittest.TestCase):
                 ' OS: %s not bionic or newer' % self.os_name)
         instance_data = json.loads(out)
         v1_data = instance_data.get('v1', {})
-        self.assertEqual(
-            ['ds/user-data', 'ds/vendor-data'],
-            sorted(instance_data['base64-encoded-keys']))
-        self.assertEqual('nocloud', v1_data['cloud-name'])
+        self.assertItemsEqual([], sorted(instance_data['base64_encoded_keys']))
+        self.assertEqual('nocloud', v1_data['cloud_name'])
         self.assertIsNone(
-            v1_data['availability-zone'],
-            'found unexpected lxd availability-zone %s' %
-            v1_data['availability-zone'])
-        self.assertIn('cloud-test', v1_data['instance-id'])
-        self.assertIn('cloud-test', v1_data['local-hostname'])
+            v1_data['availability_zone'],
+            'found unexpected lxd availability_zone %s' %
+            v1_data['availability_zone'])
+        self.assertIn('cloud-test', v1_data['instance_id'])
+        self.assertIn('cloud-test', v1_data['local_hostname'])
         self.assertIsNone(
             v1_data['region'],
             'found unexpected lxd region %s' % v1_data['region'])
@@ -248,18 +252,17 @@ class CloudTestCase(unittest.TestCase):
                 ' OS: %s not bionic or newer' % self.os_name)
         instance_data = json.loads(out)
         v1_data = instance_data.get('v1', {})
-        self.assertEqual(
-            ['ds/user-data'], instance_data['base64-encoded-keys'])
-        self.assertEqual('nocloud', v1_data['cloud-name'])
+        self.assertItemsEqual([], instance_data['base64_encoded_keys'])
+        self.assertEqual('nocloud', v1_data['cloud_name'])
         self.assertIsNone(
-            v1_data['availability-zone'],
-            'found unexpected kvm availability-zone %s' %
-            v1_data['availability-zone'])
+            v1_data['availability_zone'],
+            'found unexpected kvm availability_zone %s' %
+            v1_data['availability_zone'])
         self.assertIsNotNone(
             re.match(r'[\da-f]{8}(-[\da-f]{4}){3}-[\da-f]{12}',
-                     v1_data['instance-id']),
-            'kvm instance-id is not a UUID: %s' % v1_data['instance-id'])
-        self.assertIn('ubuntu', v1_data['local-hostname'])
+                     v1_data['instance_id']),
+            'kvm instance_id is not a UUID: %s' % v1_data['instance_id'])
+        self.assertIn('ubuntu', v1_data['local_hostname'])
         self.assertIsNone(
             v1_data['region'],
             'found unexpected lxd region %s' % v1_data['region'])
diff --git a/tests/cloud_tests/testcases/modules/lxd_bridge.py b/tests/cloud_tests/testcases/modules/lxd_bridge.py
index c0262ba..ea545e0 100644
--- a/tests/cloud_tests/testcases/modules/lxd_bridge.py
+++ b/tests/cloud_tests/testcases/modules/lxd_bridge.py
@@ -7,15 +7,25 @@ from tests.cloud_tests.testcases import base
 class TestLxdBridge(base.CloudTestCase):
     """Test LXD module."""
 
+    @classmethod
+    def maybeSkipTest(cls):
+        """Skip on cosmic for two reasons:
+        a.) LP: #1795036 - 'lxd init' fails on cosmic kernel.
+        b.) 'apt install lxd' installs via snap, which can be slow as it
+            downloads the core snap and lxd."""
+        os_name = cls.data.get('os_name', 'UNKNOWN')
+        if os_name == "cosmic":
+            raise base.SkipTest('Skipping test on cosmic (LP: #1795036).')
+
     def test_lxd(self):
         """Test lxd installed."""
         out = self.get_data_file('lxd')
-        self.assertIn('/usr/bin/lxd', out)
+        self.assertIn('/lxd', out)
 
     def test_lxc(self):
         """Test lxc installed."""
         out = self.get_data_file('lxc')
-        self.assertIn('/usr/bin/lxc', out)
+        self.assertIn('/lxc', out)
 
     def test_bridge(self):
         """Test bridge config."""
diff --git a/tests/cloud_tests/testcases/modules/lxd_dir.py b/tests/cloud_tests/testcases/modules/lxd_dir.py
index 1495674..797bafe 100644
--- a/tests/cloud_tests/testcases/modules/lxd_dir.py
+++ b/tests/cloud_tests/testcases/modules/lxd_dir.py
@@ -7,14 +7,24 @@ from tests.cloud_tests.testcases import base
 class TestLxdDir(base.CloudTestCase):
     """Test LXD module."""
 
+    @classmethod
+    def maybeSkipTest(cls):
+        """Skip on cosmic for two reasons:
+        a.) LP: #1795036 - 'lxd init' fails on cosmic kernel.
+        b.) 'apt install lxd' installs via snap, which can be slow as it
+            downloads the core snap and lxd."""
+        os_name = cls.data.get('os_name', 'UNKNOWN')
+        if os_name == "cosmic":
+            raise base.SkipTest('Skipping test on cosmic (LP: #1795036).')
+
     def test_lxd(self):
         """Test lxd installed."""
         out = self.get_data_file('lxd')
-        self.assertIn('/usr/bin/lxd', out)
+        self.assertIn('/lxd', out)
 
     def test_lxc(self):
         """Test lxc installed."""
         out = self.get_data_file('lxc')
-        self.assertIn('/usr/bin/lxc', out)
+        self.assertIn('/lxc', out)
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/testcases/modules/ntp_chrony.py b/tests/cloud_tests/testcases/modules/ntp_chrony.py
index 7d34177..0f4c3d0 100644
--- a/tests/cloud_tests/testcases/modules/ntp_chrony.py
+++ b/tests/cloud_tests/testcases/modules/ntp_chrony.py
@@ -1,7 +1,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 """cloud-init Integration Test Verify Script."""
-import unittest
+import unittest2
 
 from tests.cloud_tests.testcases import base
 
@@ -13,7 +13,7 @@ class TestNtpChrony(base.CloudTestCase):
         """Skip this suite of tests on lxd and artful or older."""
         if self.platform == 'lxd':
             if self.is_distro('ubuntu') and self.os_version_cmp('artful') <= 0:
-                raise unittest.SkipTest(
+                raise unittest2.SkipTest(
                     'No support for chrony on containers <= artful.'
                     ' LP: #1589780')
         return super(TestNtpChrony, self).setUp()
diff --git a/tests/cloud_tests/testcases/modules/snap.yaml b/tests/cloud_tests/testcases/modules/snap.yaml
index 44043f3..322199c 100644
--- a/tests/cloud_tests/testcases/modules/snap.yaml
+++ b/tests/cloud_tests/testcases/modules/snap.yaml
@@ -1,6 +1,9 @@
 #
 # Install snappy
 #
+# Aug 23, 2018: Disabled due to requiring a proxy for testing
+#    tests do not handle the proxy well at this time.
+enabled: False
 required_features:
   - snap
 cloud_config: |
diff --git a/tests/cloud_tests/testcases/modules/snappy.yaml b/tests/cloud_tests/testcases/modules/snappy.yaml
index 43f9329..8ac322a 100644
--- a/tests/cloud_tests/testcases/modules/snappy.yaml
+++ b/tests/cloud_tests/testcases/modules/snappy.yaml
@@ -1,6 +1,9 @@
 #
 # Install snappy
 #
+# Aug 17, 2018: Disabled due to requiring a proxy for testing
+#    tests do not handle the proxy well at this time.
+enabled: False
 required_features:
   - snap
 cloud_config: |
diff --git a/tests/cloud_tests/testcases/modules/write_files.py b/tests/cloud_tests/testcases/modules/write_files.py
index 7bd520f..526a2eb 100644
--- a/tests/cloud_tests/testcases/modules/write_files.py
+++ b/tests/cloud_tests/testcases/modules/write_files.py
@@ -14,8 +14,11 @@ class TestWriteFiles(base.CloudTestCase):
 
     def test_binary(self):
         """Test binary file reads as executable."""
-        out = self.get_data_file('file_binary')
-        self.assertIn('ELF 64-bit LSB executable, x86-64, version 1', out)
+        out = self.get_data_file('file_binary').strip()
+        md5 = "3801184b97bb8c6e63fa0e1eae2920d7"
+        sha256 = ("2c791c4037ea5bd7e928d6a87380f8ba7a803cd83d"
+                  "5e4f269e28f5090f0f2c9a")
+        self.assertIn(out, (md5 + "  -", sha256 + "  -"))
 
     def test_gzip(self):
         """Test gzip file shows up as a shell script."""
diff --git a/tests/cloud_tests/testcases/modules/write_files.yaml b/tests/cloud_tests/testcases/modules/write_files.yaml
index ce936b7..cc7ea4b 100644
--- a/tests/cloud_tests/testcases/modules/write_files.yaml
+++ b/tests/cloud_tests/testcases/modules/write_files.yaml
@@ -3,6 +3,13 @@
 #
 # NOTE: on trusty 'file' has an output formatting error for binary files and
 #       has 2 spaces in 'LSB  executable', which causes a failure here
+#
+# NOTE: the binary data can be any binary content, not only executables,
+#       and can be generated via the base64 command as such:
+#           $ base64 < hello > hello.txt
+#       the opposite is running:
+#           $ base64 -d < hello.txt > hello
+#
 required_features:
   - no_file_fmt_e
 cloud_config: |
@@ -19,9 +26,7 @@ cloud_config: |
           SMBDOPTIONS="-D"
       path: /root/file_text
   -   content: !!binary |
-          f0VMRgIBAQAAAAAAAAAAAAIAPgABAAAAwARAAAAAAABAAAAAAAAAAJAVAAAAAAAAAAAAAEAAOAAI
-          AEAAHgAdAAYAAAAFAAAAQAAAAAAAAABAAEAAAAAAAEAAQAAAAAAAwAEAAAAAAADAAQAAAAAAAAgA
-          AAAAAAAAAwAAAAQAAAAAAgAAAAAAAAACQAAAAAAAAAJAAAAAAAAcAAAAAAAAABwAAAAAAAAAAQAA
+          /Z/xrHR4WINT0UNoKPQKbuovp6+Js+JK
       path: /root/file_binary
       permissions: '0555'
   -   encoding: gzip
@@ -38,7 +43,9 @@ collect_scripts:
     file /root/file_text
   file_binary: |
     #!/bin/bash
-    file /root/file_binary
+    for hasher in md5sum sha256sum; do
+        $hasher </root/file_binary && break
+    done
   file_gzip: |
     #!/bin/bash
     file /root/file_gzip
diff --git a/tests/cloud_tests/verify.py b/tests/cloud_tests/verify.py
index bfb2744..9911ecf 100644
--- a/tests/cloud_tests/verify.py
+++ b/tests/cloud_tests/verify.py
@@ -3,7 +3,7 @@
 """Verify test results."""
 
 import os
-import unittest
+import unittest2
 
 from tests.cloud_tests import (config, LOG, util, testcases)
 
@@ -18,7 +18,7 @@ def verify_data(data_dir, platform, os_name, tests):
     @return_value: {<test_name>: {passed: True/False, failures: []}}
     """
     base_dir = os.sep.join((data_dir, platform, os_name))
-    runner = unittest.TextTestRunner(verbosity=util.current_verbosity())
+    runner = unittest2.TextTestRunner(verbosity=util.current_verbosity())
     res = {}
     for test_name in tests:
         LOG.debug('verifying test data for %s', test_name)
diff --git a/tests/unittests/test_builtin_handlers.py b/tests/unittests/test_builtin_handlers.py
index 9751ed9..abe820e 100644
--- a/tests/unittests/test_builtin_handlers.py
+++ b/tests/unittests/test_builtin_handlers.py
@@ -2,27 +2,34 @@
 
 """Tests of the built-in user data handlers."""
 
+import copy
 import os
 import shutil
 import tempfile
+from textwrap import dedent
 
-try:
-    from unittest import mock
-except ImportError:
-    import mock
 
-from cloudinit.tests import helpers as test_helpers
+from cloudinit.tests.helpers import (
+    FilesystemMockingTestCase, CiTestCase, mock, skipUnlessJinja)
 
 from cloudinit import handlers
 from cloudinit import helpers
 from cloudinit import util
 
-from cloudinit.handlers import upstart_job
+from cloudinit.handlers.cloud_config import CloudConfigPartHandler
+from cloudinit.handlers.jinja_template import (
+    JinjaTemplatePartHandler, convert_jinja_instance_data,
+    render_jinja_payload)
+from cloudinit.handlers.shell_script import ShellScriptPartHandler
+from cloudinit.handlers.upstart_job import UpstartJobPartHandler
 
 from cloudinit.settings import (PER_ALWAYS, PER_INSTANCE)
 
 
-class TestBuiltins(test_helpers.FilesystemMockingTestCase):
+class TestUpstartJobPartHandler(FilesystemMockingTestCase):
+
+    mpath = 'cloudinit.handlers.upstart_job.'
+
     def test_upstart_frequency_no_out(self):
         c_root = tempfile.mkdtemp()
         self.addCleanup(shutil.rmtree, c_root)
@@ -32,14 +39,13 @@ class TestBuiltins(test_helpers.FilesystemMockingTestCase):
             'cloud_dir': c_root,
             'upstart_dir': up_root,
         })
-        freq = PER_ALWAYS
-        h = upstart_job.UpstartJobPartHandler(paths)
+        h = UpstartJobPartHandler(paths)
         # No files should be written out when
         # the frequency is ! per-instance
         h.handle_part('', handlers.CONTENT_START,
                       None, None, None)
         h.handle_part('blah', 'text/upstart-job',
-                      'test.conf', 'blah', freq)
+                      'test.conf', 'blah', frequency=PER_ALWAYS)
         h.handle_part('', handlers.CONTENT_END,
                       None, None, None)
         self.assertEqual(0, len(os.listdir(up_root)))
@@ -48,7 +54,6 @@ class TestBuiltins(test_helpers.FilesystemMockingTestCase):
         # files should be written out when frequency is ! per-instance
         new_root = tempfile.mkdtemp()
         self.addCleanup(shutil.rmtree, new_root)
-        freq = PER_INSTANCE
 
         self.patchOS(new_root)
         self.patchUtils(new_root)
@@ -56,22 +61,297 @@ class TestBuiltins(test_helpers.FilesystemMockingTestCase):
             'upstart_dir': "/etc/upstart",
         })
 
-        upstart_job.SUITABLE_UPSTART = True
         util.ensure_dir("/run")
         util.ensure_dir("/etc/upstart")
 
-        with mock.patch.object(util, 'subp') as mockobj:
-            h = upstart_job.UpstartJobPartHandler(paths)
-            h.handle_part('', handlers.CONTENT_START,
-                          None, None, None)
-            h.handle_part('blah', 'text/upstart-job',
-                          'test.conf', 'blah', freq)
-            h.handle_part('', handlers.CONTENT_END,
-                          None, None, None)
+        with mock.patch(self.mpath + 'SUITABLE_UPSTART', return_value=True):
+            with mock.patch.object(util, 'subp') as m_subp:
+                h = UpstartJobPartHandler(paths)
+                h.handle_part('', handlers.CONTENT_START,
+                              None, None, None)
+                h.handle_part('blah', 'text/upstart-job',
+                              'test.conf', 'blah', frequency=PER_INSTANCE)
+                h.handle_part('', handlers.CONTENT_END,
+                              None, None, None)
 
-            self.assertEqual(len(os.listdir('/etc/upstart')), 1)
+        self.assertEqual(len(os.listdir('/etc/upstart')), 1)
 
-        mockobj.assert_called_once_with(
+        m_subp.assert_called_once_with(
             ['initctl', 'reload-configuration'], capture=False)
 
+
+class TestJinjaTemplatePartHandler(CiTestCase):
+
+    with_logs = True
+
+    mpath = 'cloudinit.handlers.jinja_template.'
+
+    def setUp(self):
+        super(TestJinjaTemplatePartHandler, self).setUp()
+        self.tmp = self.tmp_dir()
+        self.run_dir = os.path.join(self.tmp, 'run_dir')
+        util.ensure_dir(self.run_dir)
+        self.paths = helpers.Paths({
+            'cloud_dir': self.tmp, 'run_dir': self.run_dir})
+
+    def test_jinja_template_part_handler_defaults(self):
+        """On init, paths are saved and subhandler types are empty."""
+        h = JinjaTemplatePartHandler(self.paths)
+        self.assertEqual(['## template: jinja'], h.prefixes)
+        self.assertEqual(3, h.handler_version)
+        self.assertEqual(self.paths, h.paths)
+        self.assertEqual({}, h.sub_handlers)
+
+    def test_jinja_template_part_handler_looks_up_sub_handler_types(self):
+        """When sub_handlers are passed, init lists types of subhandlers."""
+        script_handler = ShellScriptPartHandler(self.paths)
+        cloudconfig_handler = CloudConfigPartHandler(self.paths)
+        h = JinjaTemplatePartHandler(
+            self.paths, sub_handlers=[script_handler, cloudconfig_handler])
+        self.assertItemsEqual(
+            ['text/cloud-config', 'text/cloud-config-jsonp',
+             'text/x-shellscript'],
+            h.sub_handlers)
+
+    def test_jinja_template_handle_noop_on_content_signals(self):
+        """Perform no part handling when content type is CONTENT_SIGNALS."""
+        script_handler = ShellScriptPartHandler(self.paths)
+
+        h = JinjaTemplatePartHandler(
+            self.paths, sub_handlers=[script_handler])
+        with mock.patch.object(script_handler, 'handle_part') as m_handle_part:
+            h.handle_part(
+                data='data', ctype=handlers.CONTENT_START, filename='part-1',
+                payload='## template: jinja\n#!/bin/bash\necho himom',
+                frequency='freq', headers='headers')
+        m_handle_part.assert_not_called()
+
+    @skipUnlessJinja()
+    def test_jinja_template_handle_subhandler_v2_with_clean_payload(self):
+        """Call version 2 subhandler.handle_part with stripped payload."""
+        script_handler = ShellScriptPartHandler(self.paths)
+        self.assertEqual(2, script_handler.handler_version)
+
+        # Create required instance-data.json file
+        instance_json = os.path.join(self.run_dir, 'instance-data.json')
+        instance_data = {'topkey': 'echo himom'}
+        util.write_file(instance_json, util.json_dumps(instance_data))
+        h = JinjaTemplatePartHandler(
+            self.paths, sub_handlers=[script_handler])
+        with mock.patch.object(script_handler, 'handle_part') as m_part:
+            # ctype with leading '!' not in handlers.CONTENT_SIGNALS
+            h.handle_part(
+                data='data', ctype="!" + handlers.CONTENT_START,
+                filename='part01',
+                payload='## template: jinja   \t \n#!/bin/bash\n{{ topkey }}',
+                frequency='freq', headers='headers')
+        m_part.assert_called_once_with(
+            'data', '!__begin__', 'part01', '#!/bin/bash\necho himom', 'freq')
+
+    @skipUnlessJinja()
+    def test_jinja_template_handle_subhandler_v3_with_clean_payload(self):
+        """Call version 3 subhandler.handle_part with stripped payload."""
+        cloudcfg_handler = CloudConfigPartHandler(self.paths)
+        self.assertEqual(3, cloudcfg_handler.handler_version)
+
+        # Create required instance-data.json file
+        instance_json = os.path.join(self.run_dir, 'instance-data.json')
+        instance_data = {'topkey': {'sub': 'runcmd: [echo hi]'}}
+        util.write_file(instance_json, util.json_dumps(instance_data))
+        h = JinjaTemplatePartHandler(
+            self.paths, sub_handlers=[cloudcfg_handler])
+        with mock.patch.object(cloudcfg_handler, 'handle_part') as m_part:
+            # ctype with leading '!' not in handlers.CONTENT_SIGNALS
+            h.handle_part(
+                data='data', ctype="!" + handlers.CONTENT_END,
+                filename='part01',
+                payload='## template: jinja\n#cloud-config\n{{ topkey.sub }}',
+                frequency='freq', headers='headers')
+        m_part.assert_called_once_with(
+            'data', '!__end__', 'part01', '#cloud-config\nruncmd: [echo hi]',
+            'freq', 'headers')
+
+    def test_jinja_template_handle_errors_on_missing_instance_data_json(self):
+        """If instance-data is absent, raise an error from handle_part."""
+        script_handler = ShellScriptPartHandler(self.paths)
+        h = JinjaTemplatePartHandler(
+            self.paths, sub_handlers=[script_handler])
+        with self.assertRaises(RuntimeError) as context_manager:
+            h.handle_part(
+                data='data', ctype="!" + handlers.CONTENT_START,
+                filename='part01',
+                payload='## template: jinja  \n#!/bin/bash\necho himom',
+                frequency='freq', headers='headers')
+        script_file = os.path.join(script_handler.script_dir, 'part01')
+        self.assertEqual(
+            'Cannot render jinja template vars. Instance data not yet present'
+            ' at {}/instance-data.json'.format(
+                self.run_dir), str(context_manager.exception))
+        self.assertFalse(
+            os.path.exists(script_file),
+            'Unexpected file created %s' % script_file)
+
+    @skipUnlessJinja()
+    def test_jinja_template_handle_renders_jinja_content(self):
+        """When present, render jinja variables from instance-data.json."""
+        script_handler = ShellScriptPartHandler(self.paths)
+        instance_json = os.path.join(self.run_dir, 'instance-data.json')
+        instance_data = {'topkey': {'subkey': 'echo himom'}}
+        util.write_file(instance_json, util.json_dumps(instance_data))
+        h = JinjaTemplatePartHandler(
+            self.paths, sub_handlers=[script_handler])
+        h.handle_part(
+            data='data', ctype="!" + handlers.CONTENT_START,
+            filename='part01',
+            payload=(
+                '## template: jinja  \n'
+                '#!/bin/bash\n'
+                '{{ topkey.subkey|default("nosubkey") }}'),
+            frequency='freq', headers='headers')
+        script_file = os.path.join(script_handler.script_dir, 'part01')
+        self.assertNotIn(
+            'Instance data not yet present at {}/instance-data.json'.format(
+                self.run_dir),
+            self.logs.getvalue())
+        self.assertEqual(
+            '#!/bin/bash\necho himom', util.load_file(script_file))
+
+    @skipUnlessJinja()
+    def test_jinja_template_handle_renders_jinja_content_missing_keys(self):
+        """When specified jinja variable is undefined, log a warning."""
+        script_handler = ShellScriptPartHandler(self.paths)
+        instance_json = os.path.join(self.run_dir, 'instance-data.json')
+        instance_data = {'topkey': {'subkey': 'echo himom'}}
+        util.write_file(instance_json, util.json_dumps(instance_data))
+        h = JinjaTemplatePartHandler(
+            self.paths, sub_handlers=[script_handler])
+        h.handle_part(
+            data='data', ctype="!" + handlers.CONTENT_START,
+            filename='part01',
+            payload='## template: jinja  \n#!/bin/bash\n{{ goodtry }}',
+            frequency='freq', headers='headers')
+        script_file = os.path.join(script_handler.script_dir, 'part01')
+        self.assertTrue(
+            os.path.exists(script_file),
+            'Missing expected file %s' % script_file)
+        self.assertIn(
+            "WARNING: Could not render jinja template variables in file"
+            " 'part01': 'goodtry'\n",
+            self.logs.getvalue())
+
+
+class TestConvertJinjaInstanceData(CiTestCase):
+
+    def test_convert_instance_data_hyphens_to_underscores(self):
+        """Replace hyphenated keys with underscores in instance-data."""
+        data = {'hyphenated-key': 'hyphenated-val',
+                'underscore_delim_key': 'underscore_delimited_val'}
+        expected_data = {'hyphenated_key': 'hyphenated-val',
+                         'underscore_delim_key': 'underscore_delimited_val'}
+        self.assertEqual(
+            expected_data,
+            convert_jinja_instance_data(data=data))
+
+    def test_convert_instance_data_promotes_versioned_keys_to_top_level(self):
+        """Any versioned keys are promoted as top-level keys
+
+        This exposes any cloud-init standardized keys at the top level to
+        allow ease of reference for users. Instead of v1.availability_zone,
+        the name availability_zone can be used in templates.
+        """
+        data = {'ds': {'dskey1': 1, 'dskey2': 2},
+                'v1': {'v1key1': 'v1.1'},
+                'v2': {'v2key1': 'v2.1'}}
+        expected_data = copy.deepcopy(data)
+        expected_data.update({'v1key1': 'v1.1', 'v2key1': 'v2.1'})
+
+        converted_data = convert_jinja_instance_data(data=data)
+        self.assertItemsEqual(
+            ['ds', 'v1', 'v2', 'v1key1', 'v2key1'], converted_data.keys())
+        self.assertEqual(
+            expected_data,
+            converted_data)
+
+    def test_convert_instance_data_most_recent_version_of_promoted_keys(self):
+        """The most-recent versioned key value is promoted to top-level."""
+        data = {'v1': {'key1': 'old v1 key1', 'key2': 'old v1 key2'},
+                'v2': {'key1': 'newer v2 key1', 'key3': 'newer v2 key3'},
+                'v3': {'key1': 'newest v3 key1'}}
+        expected_data = copy.deepcopy(data)
+        expected_data.update(
+            {'key1': 'newest v3 key1', 'key2': 'old v1 key2',
+             'key3': 'newer v2 key3'})
+
+        converted_data = convert_jinja_instance_data(data=data)
+        self.assertEqual(
+            expected_data,
+            converted_data)
+
+    def test_convert_instance_data_decodes_decode_paths(self):
+        """Any decode_paths provided are decoded by convert_instance_data."""
+        data = {'key1': {'subkey1': 'aGkgbW9t'}, 'key2': 'aGkgZGFk'}
+        expected_data = copy.deepcopy(data)
+        expected_data['key1']['subkey1'] = 'hi mom'
+
+        converted_data = convert_jinja_instance_data(
+            data=data, decode_paths=('key1/subkey1',))
+        self.assertEqual(
+            expected_data,
+            converted_data)
+
+
+class TestRenderJinjaPayload(CiTestCase):
+
+    with_logs = True
+
+    @skipUnlessJinja()
+    def test_render_jinja_payload_logs_jinja_vars_on_debug(self):
+        """When debug is True, log jinja varables available."""
+        payload = (
+            '## template: jinja\n#!/bin/sh\necho hi from {{ v1.hostname }}')
+        instance_data = {'v1': {'hostname': 'foo'}, 'instance-id': 'iid'}
+        expected_log = dedent("""\
+            DEBUG: Converted jinja variables
+            {
+             "hostname": "foo",
+             "instance_id": "iid",
+             "v1": {
+              "hostname": "foo"
+             }
+            }
+            """)
+        self.assertEqual(
+            render_jinja_payload(
+                payload=payload, payload_fn='myfile',
+                instance_data=instance_data, debug=True),
+            '#!/bin/sh\necho hi from foo')
+        self.assertEqual(expected_log, self.logs.getvalue())
+
+    @skipUnlessJinja()
+    def test_render_jinja_payload_replaces_missing_variables_and_warns(self):
+        """Warn on missing jinja variables and replace the absent variable."""
+        payload = (
+            '## template: jinja\n#!/bin/sh\necho hi from {{ NOTHERE }}')
+        instance_data = {'v1': {'hostname': 'foo'}, 'instance-id': 'iid'}
+        self.assertEqual(
+            render_jinja_payload(
+                payload=payload, payload_fn='myfile',
+                instance_data=instance_data),
+            '#!/bin/sh\necho hi from CI_MISSING_JINJA_VAR/NOTHERE')
+        expected_log = (
+            'WARNING: Could not render jinja template variables in file'
+            " 'myfile': 'NOTHERE'")
+        self.assertIn(expected_log, self.logs.getvalue())
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index 0c0f427..199d69b 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -208,8 +208,7 @@ class TestCLI(test_helpers.FilesystemMockingTestCase):
         for subcommand in expected_subcommands:
             self.assertIn(subcommand, error)
 
-    @mock.patch('cloudinit.config.schema.handle_schema_args')
-    def test_wb_devel_schema_subcommand_parser(self, m_schema):
+    def test_wb_devel_schema_subcommand_parser(self):
         """The subcommand cloud-init schema calls the correct subparser."""
         exit_code = self._call_main(['cloud-init', 'devel', 'schema'])
         self.assertEqual(1, exit_code)
diff --git a/tests/unittests/test_datasource/test_altcloud.py b/tests/unittests/test_datasource/test_altcloud.py
index 3253f3a..ff35904 100644
--- a/tests/unittests/test_datasource/test_altcloud.py
+++ b/tests/unittests/test_datasource/test_altcloud.py
@@ -262,64 +262,56 @@ class TestUserDataRhevm(CiTestCase):
     '''
     Test to exercise method: DataSourceAltCloud.user_data_rhevm()
     '''
-    cmd_pass = ['true']
-    cmd_fail = ['false']
-    cmd_not_found = ['bogus bad command']
-
     def setUp(self):
         '''Set up.'''
         self.paths = helpers.Paths({'cloud_dir': '/tmp'})
-        self.mount_dir = tempfile.mkdtemp()
+        self.mount_dir = self.tmp_dir()
         _write_user_data_files(self.mount_dir, 'test user data')
-
-    def tearDown(self):
-        # Reset
-
-        _remove_user_data_files(self.mount_dir)
-
-        # Attempt to remove the temp dir ignoring errors
-        try:
-            shutil.rmtree(self.mount_dir)
-        except OSError:
-            pass
-
-        dsac.CLOUD_INFO_FILE = '/etc/sysconfig/cloud-info'
-        dsac.CMD_PROBE_FLOPPY = ['modprobe', 'floppy']
-        dsac.CMD_UDEVADM_SETTLE = ['udevadm', 'settle',
-                                   '--quiet', '--timeout=5']
+        self.add_patch(
+            'cloudinit.sources.DataSourceAltCloud.modprobe_floppy',
+            'm_modprobe_floppy', return_value=None)
+        self.add_patch(
+            'cloudinit.sources.DataSourceAltCloud.util.udevadm_settle',
+            'm_udevadm_settle', return_value=('', ''))
+        self.add_patch(
+            'cloudinit.sources.DataSourceAltCloud.util.mount_cb',
+            'm_mount_cb')
 
     def test_mount_cb_fails(self):
         '''Test user_data_rhevm() where mount_cb fails.'''
 
-        dsac.CMD_PROBE_FLOPPY = self.cmd_pass
+        self.m_mount_cb.side_effect = util.MountFailedError("Failed Mount")
         dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
         self.assertEqual(False, dsrc.user_data_rhevm())
 
     def test_modprobe_fails(self):
         '''Test user_data_rhevm() where modprobe fails.'''
 
-        dsac.CMD_PROBE_FLOPPY = self.cmd_fail
+        self.m_modprobe_floppy.side_effect = util.ProcessExecutionError(
+            "Failed modprobe")
         dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
         self.assertEqual(False, dsrc.user_data_rhevm())
 
     def test_no_modprobe_cmd(self):
         '''Test user_data_rhevm() with no modprobe command.'''
 
-        dsac.CMD_PROBE_FLOPPY = self.cmd_not_found
+        self.m_modprobe_floppy.side_effect = util.ProcessExecutionError(
+            "No such file or dir")
         dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
         self.assertEqual(False, dsrc.user_data_rhevm())
 
     def test_udevadm_fails(self):
         '''Test user_data_rhevm() where udevadm fails.'''
 
-        dsac.CMD_UDEVADM_SETTLE = self.cmd_fail
+        self.m_udevadm_settle.side_effect = util.ProcessExecutionError(
+            "Failed settle.")
         dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
         self.assertEqual(False, dsrc.user_data_rhevm())
 
     def test_no_udevadm_cmd(self):
         '''Test user_data_rhevm() with no udevadm command.'''
 
-        dsac.CMD_UDEVADM_SETTLE = self.cmd_not_found
+        self.m_udevadm_settle.side_effect = OSError("No such file or dir")
         dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
         self.assertEqual(False, dsrc.user_data_rhevm())
 
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index e82716e..4e428b7 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -1,15 +1,21 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+from cloudinit import distros
 from cloudinit import helpers
-from cloudinit.sources import DataSourceAzure as dsaz
+from cloudinit import url_helper
+from cloudinit.sources import (
+    UNSET, DataSourceAzure as dsaz, InvalidMetaDataException)
 from cloudinit.util import (b64e, decode_binary, load_file, write_file,
                             find_freebsd_part, get_path_dev_freebsd,
                             MountFailedError)
 from cloudinit.version import version_string as vs
-from cloudinit.tests.helpers import (CiTestCase, TestCase, populate_dir, mock,
-                                     ExitStack, PY26, SkipTest)
+from cloudinit.tests.helpers import (
+    HttprettyTestCase, CiTestCase, populate_dir, mock, wrap_and_call,
+    ExitStack, PY26, SkipTest)
 
 import crypt
+import httpretty
+import json
 import os
 import stat
 import xml.etree.ElementTree as ET
@@ -77,6 +83,106 @@ def construct_valid_ovf_env(data=None, pubkeys=None,
     return content
 
 
+NETWORK_METADATA = {
+    "network": {
+        "interface": [
+            {
+                "macAddress": "000D3A047598",
+                "ipv6": {
+                    "ipAddress": []
+                },
+                "ipv4": {
+                    "subnet": [
+                        {
+                           "prefix": "24",
+                           "address": "10.0.0.0"
+                        }
+                    ],
+                    "ipAddress": [
+                        {
+                           "privateIpAddress": "10.0.0.4",
+                           "publicIpAddress": "104.46.124.81"
+                        }
+                    ]
+                }
+            }
+        ]
+    }
+}
+
+
+class TestGetMetadataFromIMDS(HttprettyTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestGetMetadataFromIMDS, self).setUp()
+        self.network_md_url = dsaz.IMDS_URL + "instance?api-version=2017-12-01"
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.readurl')
+    @mock.patch('cloudinit.sources.DataSourceAzure.EphemeralDHCPv4')
+    @mock.patch('cloudinit.sources.DataSourceAzure.net.is_up')
+    def test_get_metadata_does_not_dhcp_if_network_is_up(
+            self, m_net_is_up, m_dhcp, m_readurl):
+        """Do not perform DHCP setup when nic is already up."""
+        m_net_is_up.return_value = True
+        m_readurl.return_value = url_helper.StringResponse(
+            json.dumps(NETWORK_METADATA).encode('utf-8'))
+        self.assertEqual(
+            NETWORK_METADATA,
+            dsaz.get_metadata_from_imds('eth9', retries=3))
+
+        m_net_is_up.assert_called_with('eth9')
+        m_dhcp.assert_not_called()
+        self.assertIn(
+            "Crawl of Azure Instance Metadata Service (IMDS) took",  # log_time
+            self.logs.getvalue())
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.readurl')
+    @mock.patch('cloudinit.sources.DataSourceAzure.EphemeralDHCPv4')
+    @mock.patch('cloudinit.sources.DataSourceAzure.net.is_up')
+    def test_get_metadata_performs_dhcp_when_network_is_down(
+            self, m_net_is_up, m_dhcp, m_readurl):
+        """Perform DHCP setup when nic is not up."""
+        m_net_is_up.return_value = False
+        m_readurl.return_value = url_helper.StringResponse(
+            json.dumps(NETWORK_METADATA).encode('utf-8'))
+
+        self.assertEqual(
+            NETWORK_METADATA,
+            dsaz.get_metadata_from_imds('eth9', retries=2))
+
+        m_net_is_up.assert_called_with('eth9')
+        m_dhcp.assert_called_with('eth9')
+        self.assertIn(
+            "Crawl of Azure Instance Metadata Service (IMDS) took",  # log_time
+            self.logs.getvalue())
+
+        m_readurl.assert_called_with(
+            self.network_md_url, exception_cb=mock.ANY,
+            headers={'Metadata': 'true'}, retries=2, timeout=1)
+
+    @mock.patch('cloudinit.url_helper.time.sleep')
+    @mock.patch('cloudinit.sources.DataSourceAzure.net.is_up')
+    def test_get_metadata_from_imds_empty_when_no_imds_present(
+            self, m_net_is_up, m_sleep):
+        """Return empty dict when IMDS network metadata is absent."""
+        httpretty.register_uri(
+            httpretty.GET,
+            dsaz.IMDS_URL + 'instance?api-version=2017-12-01',
+            body={}, status=404)
+
+        m_net_is_up.return_value = True  # skips dhcp
+
+        self.assertEqual({}, dsaz.get_metadata_from_imds('eth9', retries=2))
+
+        m_net_is_up.assert_called_with('eth9')
+        self.assertEqual([mock.call(1), mock.call(1)], m_sleep.call_args_list)
+        self.assertIn(
+            "Crawl of Azure Instance Metadata Service (IMDS) took",  # log_time
+            self.logs.getvalue())
+
+
 class TestAzureDataSource(CiTestCase):
 
     with_logs = True
@@ -95,8 +201,19 @@ class TestAzureDataSource(CiTestCase):
         self.patches = ExitStack()
         self.addCleanup(self.patches.close)
 
-        self.patches.enter_context(mock.patch.object(dsaz, '_get_random_seed'))
-
+        self.patches.enter_context(mock.patch.object(
+            dsaz, '_get_random_seed', return_value='wild'))
+        self.m_get_metadata_from_imds = self.patches.enter_context(
+            mock.patch.object(
+                dsaz, 'get_metadata_from_imds',
+                mock.MagicMock(return_value=NETWORK_METADATA)))
+        self.m_fallback_nic = self.patches.enter_context(
+            mock.patch('cloudinit.sources.net.find_fallback_nic',
+                       return_value='eth9'))
+        self.m_remove_ubuntu_network_scripts = self.patches.enter_context(
+            mock.patch.object(
+                dsaz, 'maybe_remove_ubuntu_network_config_scripts',
+                mock.MagicMock()))
         super(TestAzureDataSource, self).setUp()
 
     def apply_patches(self, patches):
@@ -137,7 +254,7 @@ scbus-1 on xpt0 bus 0
         ])
         return dsaz
 
-    def _get_ds(self, data, agent_command=None):
+    def _get_ds(self, data, agent_command=None, distro=None):
 
         def dsdevs():
             return data.get('dsdevs', [])
@@ -186,8 +303,11 @@ scbus-1 on xpt0 bus 0
                 side_effect=_wait_for_files)),
         ])
 
+        if distro is not None:
+            distro_cls = distros.fetch(distro)
+            distro = distro_cls(distro, data.get('sys_cfg', {}), self.paths)
         dsrc = dsaz.DataSourceAzure(
-            data.get('sys_cfg', {}), distro=None, paths=self.paths)
+            data.get('sys_cfg', {}), distro=distro, paths=self.paths)
         if agent_command is not None:
             dsrc.ds_cfg['agent_command'] = agent_command
 
@@ -260,29 +380,20 @@ fdescfs            /dev/fd          fdescfs rw              0 0
             res = get_path_dev_freebsd('/etc', mnt_list)
             self.assertIsNotNone(res)
 
-    @mock.patch('cloudinit.sources.DataSourceAzure.util.read_dmi_data')
-    def test_non_azure_dmi_chassis_asset_tag(self, m_read_dmi_data):
-        """Report non-azure when DMI's chassis asset tag doesn't match.
-
-        Return False when the asset tag doesn't match Azure's static
-        AZURE_CHASSIS_ASSET_TAG.
-        """
+    @mock.patch('cloudinit.sources.DataSourceAzure._is_platform_viable')
+    def test_call_is_platform_viable_seed(self, m_is_platform_viable):
+        """Check seed_dir using _is_platform_viable and return False."""
         # Return a non-matching asset tag value
-        nonazure_tag = dsaz.AZURE_CHASSIS_ASSET_TAG + 'X'
-        m_read_dmi_data.return_value = nonazure_tag
+        m_is_platform_viable.return_value = False
         dsrc = dsaz.DataSourceAzure(
             {}, distro=None, paths=self.paths)
         self.assertFalse(dsrc.get_data())
-        self.assertEqual(
-            "DEBUG: Non-Azure DMI asset tag '{0}' discovered.\n".format(
-                nonazure_tag),
-            self.logs.getvalue())
+        m_is_platform_viable.assert_called_with(dsrc.seed_dir)
 
     def test_basic_seed_dir(self):
         odata = {'HostName': "myhost", 'UserName': "myuser"}
         data = {'ovfcontent': construct_valid_ovf_env(data=odata),
                 'sys_cfg': {}}
-
         dsrc = self._get_ds(data)
         ret = dsrc.get_data()
         self.assertTrue(ret)
@@ -291,6 +402,82 @@ fdescfs            /dev/fd          fdescfs rw              0 0
         self.assertTrue(os.path.isfile(
             os.path.join(self.waagent_d, 'ovf-env.xml')))
 
+    def test_get_data_non_ubuntu_will_not_remove_network_scripts(self):
+        """get_data on non-Ubuntu will not remove ubuntu net scripts."""
+        odata = {'HostName': "myhost", 'UserName': "myuser"}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': {}}
+
+        dsrc = self._get_ds(data, distro='debian')
+        dsrc.get_data()
+        self.m_remove_ubuntu_network_scripts.assert_not_called()
+
+    def test_get_data_on_ubuntu_will_remove_network_scripts(self):
+        """get_data will remove ubuntu net scripts on Ubuntu distro."""
+        odata = {'HostName': "myhost", 'UserName': "myuser"}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': {}}
+
+        dsrc = self._get_ds(data, distro='ubuntu')
+        dsrc.get_data()
+        self.m_remove_ubuntu_network_scripts.assert_called_once_with()
+
+    def test_crawl_metadata_returns_structured_data_and_caches_nothing(self):
+        """Return all structured metadata and cache no class attributes."""
+        yaml_cfg = "{agent_command: my_command}\n"
+        odata = {'HostName': "myhost", 'UserName': "myuser",
+                 'UserData': {'text': 'FOOBAR', 'encoding': 'plain'},
+                 'dscfg': {'text': yaml_cfg, 'encoding': 'plain'}}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': {}}
+        dsrc = self._get_ds(data)
+        expected_cfg = {
+            'PreprovisionedVm': False,
+            'datasource': {'Azure': {'agent_command': 'my_command'}},
+            'system_info': {'default_user': {'name': u'myuser'}}}
+        expected_metadata = {
+            'azure_data': {
+                'configurationsettype': 'LinuxProvisioningConfiguration'},
+            'imds': {'network': {'interface': [{
+                'ipv4': {'ipAddress': [
+                     {'privateIpAddress': '10.0.0.4',
+                      'publicIpAddress': '104.46.124.81'}],
+                      'subnet': [{'address': '10.0.0.0', 'prefix': '24'}]},
+                'ipv6': {'ipAddress': []},
+                'macAddress': '000D3A047598'}]}},
+            'instance-id': 'test-instance-id',
+            'local-hostname': u'myhost',
+            'random_seed': 'wild'}
+
+        crawled_metadata = dsrc.crawl_metadata()
+
+        self.assertItemsEqual(
+            crawled_metadata.keys(),
+            ['cfg', 'files', 'metadata', 'userdata_raw'])
+        self.assertEqual(crawled_metadata['cfg'], expected_cfg)
+        self.assertEqual(
+            list(crawled_metadata['files'].keys()), ['ovf-env.xml'])
+        self.assertIn(
+            b'<HostName>myhost</HostName>',
+            crawled_metadata['files']['ovf-env.xml'])
+        self.assertEqual(crawled_metadata['metadata'], expected_metadata)
+        self.assertEqual(crawled_metadata['userdata_raw'], 'FOOBAR')
+        self.assertEqual(dsrc.userdata_raw, None)
+        self.assertEqual(dsrc.metadata, {})
+        self.assertEqual(dsrc._metadata_imds, UNSET)
+        self.assertFalse(os.path.isfile(
+            os.path.join(self.waagent_d, 'ovf-env.xml')))
+
+    def test_crawl_metadata_raises_invalid_metadata_on_error(self):
+        """crawl_metadata raises an exception on invalid ovf-env.xml."""
+        data = {'ovfcontent': "BOGUS", 'sys_cfg': {}}
+        dsrc = self._get_ds(data)
+        error_msg = ('BrokenAzureDataSource: Invalid ovf-env.xml:'
+                     ' syntax error: line 1, column 0')
+        with self.assertRaises(InvalidMetaDataException) as cm:
+            dsrc.crawl_metadata()
+        self.assertEqual(str(cm.exception), error_msg)
+
     def test_waagent_d_has_0700_perms(self):
         # we expect /var/lib/waagent to be created 0700
         dsrc = self._get_ds({'ovfcontent': construct_valid_ovf_env()})
@@ -314,6 +501,20 @@ fdescfs            /dev/fd          fdescfs rw              0 0
         self.assertTrue(ret)
         self.assertEqual(data['agent_invoked'], cfg['agent_command'])
 
+    def test_network_config_set_from_imds(self):
+        """Datasource.network_config returns IMDS network data."""
+        odata = {}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata)}
+        expected_network_config = {
+            'ethernets': {
+                'eth0': {'set-name': 'eth0',
+                         'match': {'macaddress': '00:0d:3a:04:75:98'},
+                         'dhcp4': True}},
+            'version': 2}
+        dsrc = self._get_ds(data)
+        dsrc.get_data()
+        self.assertEqual(expected_network_config, dsrc.network_config)
+
     def test_user_cfg_set_agent_command(self):
         # set dscfg in via base64 encoded yaml
         cfg = {'agent_command': "my_command"}
@@ -579,12 +780,34 @@ fdescfs            /dev/fd          fdescfs rw              0 0
         self.assertEqual(
             [mock.call("/dev/cd0")], m_check_fbsd_cdrom.call_args_list)
 
+    @mock.patch('cloudinit.net.generate_fallback_config')
+    def test_imds_network_config(self, mock_fallback):
+        """Network config is generated from IMDS network data when present."""
+        odata = {'HostName': "myhost", 'UserName': "myuser"}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': {}}
+
+        dsrc = self._get_ds(data)
+        ret = dsrc.get_data()
+        self.assertTrue(ret)
+
+        expected_cfg = {
+            'ethernets': {
+                'eth0': {'dhcp4': True,
+                         'match': {'macaddress': '00:0d:3a:04:75:98'},
+                         'set-name': 'eth0'}},
+            'version': 2}
+
+        self.assertEqual(expected_cfg, dsrc.network_config)
+        mock_fallback.assert_not_called()
+
     @mock.patch('cloudinit.net.get_interface_mac')
     @mock.patch('cloudinit.net.get_devicelist')
     @mock.patch('cloudinit.net.device_driver')
     @mock.patch('cloudinit.net.generate_fallback_config')
-    def test_network_config(self, mock_fallback, mock_dd,
-                            mock_devlist, mock_get_mac):
+    def test_fallback_network_config(self, mock_fallback, mock_dd,
+                                     mock_devlist, mock_get_mac):
+        """On absent IMDS network data, generate network fallback config."""
         odata = {'HostName': "myhost", 'UserName': "myuser"}
         data = {'ovfcontent': construct_valid_ovf_env(data=odata),
                 'sys_cfg': {}}
@@ -605,6 +828,8 @@ fdescfs            /dev/fd          fdescfs rw              0 0
         mock_get_mac.return_value = '00:11:22:33:44:55'
 
         dsrc = self._get_ds(data)
+        # Represent empty response from network imds
+        self.m_get_metadata_from_imds.return_value = {}
         ret = dsrc.get_data()
         self.assertTrue(ret)
 
@@ -617,8 +842,9 @@ fdescfs            /dev/fd          fdescfs rw              0 0
     @mock.patch('cloudinit.net.get_devicelist')
     @mock.patch('cloudinit.net.device_driver')
     @mock.patch('cloudinit.net.generate_fallback_config')
-    def test_network_config_blacklist(self, mock_fallback, mock_dd,
-                                      mock_devlist, mock_get_mac):
+    def test_fallback_network_config_blacklist(self, mock_fallback, mock_dd,
+                                               mock_devlist, mock_get_mac):
+        """On absent network metadata, blacklist mlx from fallback config."""
         odata = {'HostName': "myhost", 'UserName': "myuser"}
         data = {'ovfcontent': construct_valid_ovf_env(data=odata),
                 'sys_cfg': {}}
@@ -649,6 +875,8 @@ fdescfs            /dev/fd          fdescfs rw              0 0
         mock_get_mac.return_value = '00:11:22:33:44:55'
 
         dsrc = self._get_ds(data)
+        # Represent empty response from network imds
+        self.m_get_metadata_from_imds.return_value = {}
         ret = dsrc.get_data()
         self.assertTrue(ret)
 
@@ -689,9 +917,12 @@ class TestAzureBounce(CiTestCase):
             mock.patch.object(dsaz, 'get_metadata_from_fabric',
                               mock.MagicMock(return_value={})))
         self.patches.enter_context(
-            mock.patch.object(dsaz.util, 'which', lambda x: True))
+            mock.patch.object(dsaz, 'get_metadata_from_imds',
+                              mock.MagicMock(return_value={})))
         self.patches.enter_context(
-            mock.patch.object(dsaz, '_get_random_seed'))
+            mock.patch.object(dsaz.util, 'which', lambda x: True))
+        self.patches.enter_context(mock.patch.object(
+            dsaz, '_get_random_seed', return_value='wild'))
 
         def _dmi_mocks(key):
             if key == 'system-uuid':
@@ -719,9 +950,12 @@ class TestAzureBounce(CiTestCase):
             mock.patch.object(dsaz, 'set_hostname'))
         self.subp = self.patches.enter_context(
             mock.patch('cloudinit.sources.DataSourceAzure.util.subp'))
+        self.find_fallback_nic = self.patches.enter_context(
+            mock.patch('cloudinit.net.find_fallback_nic', return_value='eth9'))
 
     def tearDown(self):
         self.patches.close()
+        super(TestAzureBounce, self).tearDown()
 
     def _get_ds(self, ovfcontent=None, agent_command=None):
         if ovfcontent is not None:
@@ -927,7 +1161,7 @@ class TestLoadAzureDsDir(CiTestCase):
             str(context_manager.exception))
 
 
-class TestReadAzureOvf(TestCase):
+class TestReadAzureOvf(CiTestCase):
 
     def test_invalid_xml_raises_non_azure_ds(self):
         invalid_xml = "<foo>" + construct_valid_ovf_env(data={})
@@ -1188,6 +1422,25 @@ class TestCanDevBeReformatted(CiTestCase):
                       "(datasource.Azure.never_destroy_ntfs)", msg)
 
 
+class TestClearCachedData(CiTestCase):
+
+    def test_clear_cached_attrs_clears_imds(self):
+        """All class attributes are reset to defaults, including imds data."""
+        tmp = self.tmp_dir()
+        paths = helpers.Paths(
+            {'cloud_dir': tmp, 'run_dir': tmp})
+        dsrc = dsaz.DataSourceAzure({}, distro=None, paths=paths)
+        clean_values = [dsrc.metadata, dsrc.userdata, dsrc._metadata_imds]
+        dsrc.metadata = 'md'
+        dsrc.userdata = 'ud'
+        dsrc._metadata_imds = 'imds'
+        dsrc._dirty_cache = True
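+        # clear_cached_attrs only resets attributes when the cache is marked
+        # dirty, hence _dirty_cache is set above.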
+        dsrc.clear_cached_attrs()
+        self.assertEqual(
+            [dsrc.metadata, dsrc.userdata, dsrc._metadata_imds],
+            clean_values)
+
+
 class TestAzureNetExists(CiTestCase):
 
     def test_azure_net_must_exist_for_legacy_objpkl(self):
@@ -1398,4 +1651,94 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         self.assertEqual(m_net.call_count, 1)
 
 
+class TestRemoveUbuntuNetworkConfigScripts(CiTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestRemoveUbuntuNetworkConfigScripts, self).setUp()
+        self.tmp = self.tmp_dir()
+
+    def test_remove_network_scripts_removes_both_files_and_directories(self):
+        """Any files or directories in paths are removed when present."""
+        file1 = self.tmp_path('file1', dir=self.tmp)
+        subdir = self.tmp_path('sub1', dir=self.tmp)
+        subfile = self.tmp_path('leaf1', dir=subdir)
+        write_file(file1, 'file1content')
+        write_file(subfile, 'leafcontent')
+        dsaz.maybe_remove_ubuntu_network_config_scripts(paths=[subdir, file1])
+
+        for path in (file1, subdir, subfile):
+            self.assertFalse(os.path.exists(path),
+                             'Found unremoved: %s' % path)
+
+        expected_logs = [
+            'INFO: Removing Ubuntu extended network scripts because cloud-init'
+            ' updates Azure network configuration on the following event:'
+            ' System boot.',
+            'Recursively deleting %s' % subdir,
+            'Attempting to remove %s' % file1]
+        for log in expected_logs:
+            self.assertIn(log, self.logs.getvalue())
+
+    def test_remove_network_scripts_only_attempts_removal_if_path_exists(self):
+        """Any files or directories absent are skipped without error."""
+        dsaz.maybe_remove_ubuntu_network_config_scripts(paths=[
+            self.tmp_path('nodirhere/', dir=self.tmp),
+            self.tmp_path('notfilehere', dir=self.tmp)])
+        # Neither path exists, so no removal logs should be emitted.
+        self.assertNotIn('Recursively deleting', self.logs.getvalue())
+        self.assertNotIn('Attempting to remove', self.logs.getvalue())
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.os.path.exists')
+    def test_remove_network_scripts_default_removes_stock_scripts(self,
+                                                                  m_exists):
+        """Azure's stock ubuntu image scripts and artifacts are removed."""
+        # Report path absent on all to avoid delete operation
+        m_exists.return_value = False
+        dsaz.maybe_remove_ubuntu_network_config_scripts()
+        calls = m_exists.call_args_list
+        for path in dsaz.UBUNTU_EXTENDED_NETWORK_SCRIPTS:
+            self.assertIn(mock.call(path), calls)
+
+
+class TestWBIsPlatformViable(CiTestCase):
+    """White box tests for _is_platform_viable."""
+    with_logs = True
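+    # _is_platform_viable gates Azure discovery: the DMI chassis asset tag
+    # is checked first, then known seed dirs for an ovf-env.xml file.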
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.util.read_dmi_data')
+    def test_true_on_azure_chassis_asset_tag(self, m_read_dmi_data):
+        """Return True if DMI chassis-asset-tag is AZURE_CHASSIS_ASSET_TAG."""
+        m_read_dmi_data.return_value = dsaz.AZURE_CHASSIS_ASSET_TAG
+        self.assertTrue(dsaz._is_platform_viable('doesnotmatter'))
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.os.path.exists')
+    @mock.patch('cloudinit.sources.DataSourceAzure.util.read_dmi_data')
+    def test_true_on_azure_ovf_env_in_seed_dir(self, m_read_dmi_data, m_exist):
+        """Return True if ovf-env.xml exists in known seed dirs."""
+        # Non-matching Azure chassis-asset-tag
+        m_read_dmi_data.return_value = dsaz.AZURE_CHASSIS_ASSET_TAG + 'X'
+
+        m_exist.return_value = True
+        self.assertTrue(dsaz._is_platform_viable('/some/seed/dir'))
+        # Assumption: _is_platform_viable probes os.path.exists on the
+        # ovf-env.xml path under the supplied seed dir.
+        m_exist.assert_called_once_with('/some/seed/dir/ovf-env.xml')
+
+    def test_false_on_no_matching_azure_criteria(self):
+        """Report non-azure on unmatched asset tag, ovf-env absent and no dev.
+
+        Return False when the asset tag doesn't match Azure's static
+        AZURE_CHASSIS_ASSET_TAG, no ovf-env.xml files exist in known seed dirs
+        and no devices have a label starting with prefix 'rd_rdfe_'.
+        """
+        self.assertFalse(wrap_and_call(
+            'cloudinit.sources.DataSourceAzure',
+            {'os.path.exists': False,
+             # Non-matching Azure chassis-asset-tag
+             'util.read_dmi_data': dsaz.AZURE_CHASSIS_ASSET_TAG + 'X',
+             'util.which': None},
+            dsaz._is_platform_viable, 'doesnotmatter'))
+        self.assertIn(
+            "DEBUG: Non-Azure DMI asset tag '{0}' discovered.\n".format(
+                dsaz.AZURE_CHASSIS_ASSET_TAG + 'X'),
+            self.logs.getvalue())
+
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_datasource/test_cloudsigma.py b/tests/unittests/test_datasource/test_cloudsigma.py
index f6a59b6..380ad1b 100644
--- a/tests/unittests/test_datasource/test_cloudsigma.py
+++ b/tests/unittests/test_datasource/test_cloudsigma.py
@@ -42,6 +42,9 @@ class CepkoMock(Cepko):
 class DataSourceCloudSigmaTest(test_helpers.CiTestCase):
     def setUp(self):
         super(DataSourceCloudSigmaTest, self).setUp()
+        self.add_patch(
+            "cloudinit.sources.DataSourceCloudSigma.util.is_container",
+            "m_is_container", return_value=False)
         self.paths = helpers.Paths({'run_dir': self.tmp_dir()})
         self.datasource = DataSourceCloudSigma.DataSourceCloudSigma(
             "", "", paths=self.paths)
diff --git a/tests/unittests/test_datasource/test_common.py b/tests/unittests/test_datasource/test_common.py
index 0d35dc2..6b01a4e 100644
--- a/tests/unittests/test_datasource/test_common.py
+++ b/tests/unittests/test_datasource/test_common.py
@@ -20,6 +20,7 @@ from cloudinit.sources import (
     DataSourceNoCloud as NoCloud,
     DataSourceOpenNebula as OpenNebula,
     DataSourceOpenStack as OpenStack,
+    DataSourceOracle as Oracle,
     DataSourceOVF as OVF,
     DataSourceScaleway as Scaleway,
     DataSourceSmartOS as SmartOS,
@@ -37,10 +38,12 @@ DEFAULT_LOCAL = [
     IBMCloud.DataSourceIBMCloud,
     NoCloud.DataSourceNoCloud,
     OpenNebula.DataSourceOpenNebula,
+    Oracle.DataSourceOracle,
     OVF.DataSourceOVF,
     SmartOS.DataSourceSmartOS,
     Ec2.DataSourceEc2Local,
     OpenStack.DataSourceOpenStackLocal,
+    Scaleway.DataSourceScaleway,
 ]
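+# Scaleway moved from DEFAULT_NETWORK to DEFAULT_LOCAL above, presumably
+# because it now brings up its own ephemeral networking (see the
+# EphemeralDHCPv4 changes to test_scaleway.py later in this diff).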
 
 DEFAULT_NETWORK = [
@@ -55,7 +58,6 @@ DEFAULT_NETWORK = [
     NoCloud.DataSourceNoCloudNet,
     OpenStack.DataSourceOpenStack,
     OVF.DataSourceOVFNet,
-    Scaleway.DataSourceScaleway,
 ]
 
 
diff --git a/tests/unittests/test_datasource/test_configdrive.py b/tests/unittests/test_datasource/test_configdrive.py
index 68400f2..231619c 100644
--- a/tests/unittests/test_datasource/test_configdrive.py
+++ b/tests/unittests/test_datasource/test_configdrive.py
@@ -136,6 +136,7 @@ NETWORK_DATA_3 = {
     ]
 }
 
+BOND_MAC = "fa:16:3e:b3:72:36"
 NETWORK_DATA_BOND = {
     "services": [
         {"type": "dns", "address": "1.1.1.191"},
@@ -163,7 +164,7 @@ NETWORK_DATA_BOND = {
         {"bond_links": ["eth0", "eth1"],
          "bond_miimon": 100, "bond_mode": "4",
          "bond_xmit_hash_policy": "layer3+4",
-         "ethernet_mac_address": "0c:c4:7a:34:6e:3c",
+         "ethernet_mac_address": BOND_MAC,
          "id": "bond0", "type": "bond"},
         {"ethernet_mac_address": "fa:16:3e:b3:72:30",
          "id": "vlan2", "type": "vlan", "vlan_id": 602,
@@ -224,6 +225,9 @@ class TestConfigDriveDataSource(CiTestCase):
 
     def setUp(self):
         super(TestConfigDriveDataSource, self).setUp()
+        self.add_patch(
+            "cloudinit.sources.DataSourceConfigDrive.util.find_devs_with",
+            "m_find_devs_with", return_value=[])
         self.tmp = self.tmp_dir()
 
     def test_ec2_metadata(self):
@@ -642,7 +646,7 @@ class TestConvertNetworkData(CiTestCase):
             routes)
         eni_renderer = eni.Renderer()
         eni_renderer.render_network_state(
-            network_state.parse_net_config_data(ncfg), self.tmp)
+            network_state.parse_net_config_data(ncfg), target=self.tmp)
         with open(os.path.join(self.tmp, "etc",
                                "network", "interfaces"), 'r') as f:
             eni_rendering = f.read()
@@ -664,7 +668,7 @@ class TestConvertNetworkData(CiTestCase):
         eni_renderer = eni.Renderer()
 
         eni_renderer.render_network_state(
-            network_state.parse_net_config_data(ncfg), self.tmp)
+            network_state.parse_net_config_data(ncfg), target=self.tmp)
         with open(os.path.join(self.tmp, "etc",
                                "network", "interfaces"), 'r') as f:
             eni_rendering = f.read()
@@ -688,6 +692,9 @@ class TestConvertNetworkData(CiTestCase):
         self.assertIn("auto oeth0", eni_rendering)
         self.assertIn("auto oeth1", eni_rendering)
         self.assertIn("auto bond0", eni_rendering)
+        # The bond should have the given mac address
+        pos = eni_rendering.find("auto bond0")
+        self.assertIn(BOND_MAC, eni_rendering[pos:])
 
     def test_vlan(self):
         # light testing of vlan config conversion and eni rendering
@@ -695,7 +702,7 @@ class TestConvertNetworkData(CiTestCase):
                                           known_macs=KNOWN_MACS)
         eni_renderer = eni.Renderer()
         eni_renderer.render_network_state(
-            network_state.parse_net_config_data(ncfg), self.tmp)
+            network_state.parse_net_config_data(ncfg), target=self.tmp)
         with open(os.path.join(self.tmp, "etc",
                                "network", "interfaces"), 'r') as f:
             eni_rendering = f.read()
diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py
index cdbd1e1..21931eb 100644
--- a/tests/unittests/test_datasource/test_nocloud.py
+++ b/tests/unittests/test_datasource/test_nocloud.py
@@ -25,6 +25,8 @@ class TestNoCloudDataSource(CiTestCase):
 
         self.mocks.enter_context(
             mock.patch.object(util, 'get_cmdline', return_value=self.cmdline))
+        self.mocks.enter_context(
+            mock.patch.object(util, 'read_dmi_data', return_value=None))
 
     def test_nocloud_seed_dir(self):
         md = {'instance-id': 'IID', 'dsmode': 'local'}
diff --git a/tests/unittests/test_datasource/test_opennebula.py b/tests/unittests/test_datasource/test_opennebula.py
index ab42f34..6159101 100644
--- a/tests/unittests/test_datasource/test_opennebula.py
+++ b/tests/unittests/test_datasource/test_opennebula.py
@@ -43,6 +43,7 @@ DS_PATH = "cloudinit.sources.DataSourceOpenNebula"
 
 class TestOpenNebulaDataSource(CiTestCase):
     parsed_user = None
+    allowed_subp = ['bash']
 
     def setUp(self):
         super(TestOpenNebulaDataSource, self).setUp()
@@ -354,6 +355,412 @@ class TestOpenNebulaNetwork(unittest.TestCase):
 
     system_nics = ('eth0', 'ens3')
 
+    def test_context_devname(self):
+        """Verify context_devname correctly returns mac and name."""
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'ETH1_MAC': '02:00:0a:12:0f:0f', }
+        expected = {
+            '02:00:0a:12:01:01': 'ETH0',
+            '02:00:0a:12:0f:0f': 'ETH1', }
+        net = ds.OpenNebulaNetwork(context)
+        self.assertEqual(expected, net.context_devname)
+
+    def test_get_nameservers(self):
+        """
+        Verify get_nameservers('device') correctly returns DNS server addresses
+        and search domains.
+        """
+        context = {
+            'DNS': '1.2.3.8',
+            'ETH0_DNS': '1.2.3.6 1.2.3.7',
+            'ETH0_SEARCH_DOMAIN': 'example.com example.org', }
+        expected = {
+            'addresses': ['1.2.3.6', '1.2.3.7', '1.2.3.8'],
+            'search': ['example.com', 'example.org']}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_nameservers('eth0')
+        self.assertEqual(expected, val)
+
+    def test_get_mtu(self):
+        """Verify get_mtu('device') correctly returns MTU size."""
+        context = {'ETH0_MTU': '1280'}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_mtu('eth0')
+        self.assertEqual('1280', val)
+
+    def test_get_ip(self):
+        """Verify get_ip('device') correctly returns IPv4 address."""
+        context = {'ETH0_IP': PUBLIC_IP}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_ip('eth0', MACADDR)
+        self.assertEqual(PUBLIC_IP, val)
+
+    def test_get_ip_emptystring(self):
+        """
+        Verify get_ip('device') correctly returns IPv4 address.
+        It returns the IP address derived from the MAC address if ETH0_IP is
+        an empty string.
+        """
+        context = {'ETH0_IP': ''}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_ip('eth0', MACADDR)
+        self.assertEqual(IP_BY_MACADDR, val)
+
+    def test_get_ip6(self):
+        """
+        Verify get_ip6('device') correctly returns IPv6 address.
+        In this case, the IPv6 address is given by ETH0_IP6.
+        """
+        context = {
+            'ETH0_IP6': IP6_GLOBAL,
+            'ETH0_IP6_ULA': '', }
+        expected = [IP6_GLOBAL]
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_ip6('eth0')
+        self.assertEqual(expected, val)
+
+    def test_get_ip6_ula(self):
+        """
+        Verify get_ip6('device') correctly returns IPv6 address.
+        In this case, the IPv6 address is given by ETH0_IP6_ULA.
+        """
+        context = {
+            'ETH0_IP6': '',
+            'ETH0_IP6_ULA': IP6_ULA, }
+        expected = [IP6_ULA]
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_ip6('eth0')
+        self.assertEqual(expected, val)
+
+    def test_get_ip6_dual(self):
+        """
+        Verify get_ip6('device') correctly returns IPv6 address.
+        In this case, IPv6 addresses are given by ETH0_IP6 and ETH0_IP6_ULA.
+        """
+        context = {
+            'ETH0_IP6': IP6_GLOBAL,
+            'ETH0_IP6_ULA': IP6_ULA, }
+        expected = [IP6_GLOBAL, IP6_ULA]
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_ip6('eth0')
+        self.assertEqual(expected, val)
+
+    def test_get_ip6_prefix(self):
+        """
+        Verify get_ip6_prefix('device') correctly returns IPv6 prefix.
+        """
+        context = {'ETH0_IP6_PREFIX_LENGTH': IP6_PREFIX}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_ip6_prefix('eth0')
+        self.assertEqual(IP6_PREFIX, val)
+
+    def test_get_ip6_prefix_emptystring(self):
+        """
+        Verify get_ip6_prefix('device') correctly returns IPv6 prefix.
+        It returns the default value '64' if ETH0_IP6_PREFIX_LENGTH is an
+        empty string.
+        """
+        context = {'ETH0_IP6_PREFIX_LENGTH': ''}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_ip6_prefix('eth0')
+        self.assertEqual('64', val)
+
+    def test_get_gateway(self):
+        """
+        Verify get_gateway('device') correctly returns IPv4 default gateway
+        address.
+        """
+        context = {'ETH0_GATEWAY': '1.2.3.5'}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_gateway('eth0')
+        self.assertEqual('1.2.3.5', val)
+
+    def test_get_gateway6(self):
+        """
+        Verify get_gateway6('device') correctly returns IPv6 default gateway
+        address.
+        """
+        context = {'ETH0_GATEWAY6': IP6_GW}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_gateway6('eth0')
+        self.assertEqual(IP6_GW, val)
+
+    def test_get_mask(self):
+        """
+        Verify get_mask('device') correctly returns IPv4 subnet mask.
+        """
+        context = {'ETH0_MASK': '255.255.0.0'}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_mask('eth0')
+        self.assertEqual('255.255.0.0', val)
+
+    def test_get_mask_emptystring(self):
+        """
+        Verify get_mask('device') correctly returns IPv4 subnet mask.
+        It returns the default value '255.255.255.0' if ETH0_MASK is an
+        empty string.
+        """
+        context = {'ETH0_MASK': ''}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_mask('eth0')
+        self.assertEqual('255.255.255.0', val)
+
+    def test_get_network(self):
+        """
+        Verify get_network('device') correctly returns IPv4 network address.
+        """
+        context = {'ETH0_NETWORK': '1.2.3.0'}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_network('eth0', MACADDR)
+        self.assertEqual('1.2.3.0', val)
+
+    def test_get_network_emptystring(self):
+        """
+        Verify get_network('device') correctly returns IPv4 network address.
+        It returns the network address derived from the MAC address if
+        ETH0_NETWORK is an empty string.
+        """
+        context = {'ETH0_NETWORK': ''}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_network('eth0', MACADDR)
+        self.assertEqual('10.18.1.0', val)
+
+    def test_get_field(self):
+        """
+        Verify get_field('device', 'name') returns *context* value.
+        """
+        context = {'ETH9_DUMMY': 'DUMMY_VALUE'}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_field('eth9', 'dummy')
+        self.assertEqual('DUMMY_VALUE', val)
+
+    def test_get_field_withdefaultvalue(self):
+        """
+        Verify get_field('device', 'name', 'default value') returns *context*
+        value.
+        """
+        context = {'ETH9_DUMMY': 'DUMMY_VALUE'}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_field('eth9', 'dummy', 'DEFAULT_VALUE')
+        self.assertEqual('DUMMY_VALUE', val)
+
+    def test_get_field_withdefaultvalue_emptycontext(self):
+        """
+        Verify get_field('device', 'name', 'default value') returns *default*
+        value if context value is empty string.
+        """
+        context = {'ETH9_DUMMY': ''}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_field('eth9', 'dummy', 'DEFAULT_VALUE')
+        self.assertEqual('DEFAULT_VALUE', val)
+
+    def test_get_field_emptycontext(self):
+        """
+        Verify get_field('device', 'name') returns None if context value is
+        empty string.
+        """
+        context = {'ETH9_DUMMY': ''}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_field('eth9', 'dummy')
+        self.assertEqual(None, val)
+
+    def test_get_field_nonecontext(self):
+        """
+        Verify get_field('device', 'name') returns None if context value is
+        None.
+        """
+        context = {'ETH9_DUMMY': None}
+        net = ds.OpenNebulaNetwork(context)
+        val = net.get_field('eth9', 'dummy')
+        self.assertEqual(None, val)
+
+    @mock.patch(DS_PATH + ".get_physical_nics_by_mac")
+    def test_gen_conf_gateway(self, m_get_phys_by_mac):
+        """Test rendering with/without IPv4 gateway"""
+        self.maxDiff = None
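+        # gen_conf renders a netplan-style (version 2) dict keyed by the
+        # detected system nic name, matched via its mac address.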
+        # empty ETH0_GATEWAY
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'ETH0_GATEWAY': '', }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [IP_BY_MACADDR + '/' + IP4_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
+        # set ETH0_GATEWAY
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'ETH0_GATEWAY': '1.2.3.5', }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'gateway4': '1.2.3.5',
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [IP_BY_MACADDR + '/' + IP4_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
+    @mock.patch(DS_PATH + ".get_physical_nics_by_mac")
+    def test_gen_conf_gateway6(self, m_get_phys_by_mac):
+        """Test rendering with/without IPv6 gateway"""
+        self.maxDiff = None
+        # empty ETH0_GATEWAY6
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'ETH0_GATEWAY6': '', }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [IP_BY_MACADDR + '/' + IP4_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
+        # set ETH0_GATEWAY6
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'ETH0_GATEWAY6': IP6_GW, }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'gateway6': IP6_GW,
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [IP_BY_MACADDR + '/' + IP4_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
+    @mock.patch(DS_PATH + ".get_physical_nics_by_mac")
+    def test_gen_conf_ipv6address(self, m_get_phys_by_mac):
+        """Test rendering with/without IPv6 address"""
+        self.maxDiff = None
+        # empty ETH0_IP6, ETH0_IP6_ULA, ETH0_IP6_PREFIX_LENGTH
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'ETH0_IP6': '',
+            'ETH0_IP6_ULA': '',
+            'ETH0_IP6_PREFIX_LENGTH': '', }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [IP_BY_MACADDR + '/' + IP4_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
+        # set ETH0_IP6, ETH0_IP6_ULA, ETH0_IP6_PREFIX_LENGTH
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'ETH0_IP6': IP6_GLOBAL,
+            'ETH0_IP6_PREFIX_LENGTH': IP6_PREFIX,
+            'ETH0_IP6_ULA': IP6_ULA, }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [
+                            IP_BY_MACADDR + '/' + IP4_PREFIX,
+                            IP6_GLOBAL + '/' + IP6_PREFIX,
+                            IP6_ULA + '/' + IP6_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
+    @mock.patch(DS_PATH + ".get_physical_nics_by_mac")
+    def test_gen_conf_dns(self, m_get_phys_by_mac):
+        """Test rendering with/without DNS server, search domain"""
+        self.maxDiff = None
+        # empty DNS, ETH0_DNS, ETH0_SEARCH_DOMAIN
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'DNS': '',
+            'ETH0_DNS': '',
+            'ETH0_SEARCH_DOMAIN': '', }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [IP_BY_MACADDR + '/' + IP4_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
+        # set DNS, ETH0_DNS, ETH0_SEARCH_DOMAIN
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'DNS': '1.2.3.8',
+            'ETH0_DNS': '1.2.3.6 1.2.3.7',
+            'ETH0_SEARCH_DOMAIN': 'example.com example.org', }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'nameservers': {
+                            'addresses': ['1.2.3.6', '1.2.3.7', '1.2.3.8'],
+                            'search': ['example.com', 'example.org']},
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [IP_BY_MACADDR + '/' + IP4_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
+    @mock.patch(DS_PATH + ".get_physical_nics_by_mac")
+    def test_gen_conf_mtu(self, m_get_phys_by_mac):
+        """Test rendering with/without MTU"""
+        self.maxDiff = None
+        # empty ETH0_MTU
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'ETH0_MTU': '', }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [IP_BY_MACADDR + '/' + IP4_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
+        # set ETH0_MTU
+        context = {
+            'ETH0_MAC': '02:00:0a:12:01:01',
+            'ETH0_MTU': '1280', }
+        for nic in self.system_nics:
+            expected = {
+                'version': 2,
+                'ethernets': {
+                    nic: {
+                        'mtu': '1280',
+                        'match': {'macaddress': MACADDR},
+                        'addresses': [IP_BY_MACADDR + '/' + IP4_PREFIX]}}}
+            m_get_phys_by_mac.return_value = {MACADDR: nic}
+            net = ds.OpenNebulaNetwork(context)
+            self.assertEqual(net.gen_conf(), expected)
+
     @mock.patch(DS_PATH + ".get_physical_nics_by_mac")
     def test_eth0(self, m_get_phys_by_mac):
         for nic in self.system_nics:
@@ -395,7 +802,6 @@ class TestOpenNebulaNetwork(unittest.TestCase):
                         'match': {'macaddress': MACADDR},
                         'addresses': [IP_BY_MACADDR + '/16'],
                         'gateway4': '1.2.3.5',
-                        'gateway6': None,
                         'nameservers': {
                             'addresses': ['1.2.3.6', '1.2.3.7', '1.2.3.8']}}}}
 
@@ -494,7 +900,6 @@ class TestOpenNebulaNetwork(unittest.TestCase):
                     'match': {'macaddress': MAC_1},
                     'addresses': ['10.3.1.3/16'],
                     'gateway4': '10.3.0.1',
-                    'gateway6': None,
                     'nameservers': {
                         'addresses': ['10.3.1.2', '1.2.3.8'],
                         'search': [
diff --git a/tests/unittests/test_datasource/test_openstack.py b/tests/unittests/test_datasource/test_openstack.py
index 585acc3..a731f1e 100644
--- a/tests/unittests/test_datasource/test_openstack.py
+++ b/tests/unittests/test_datasource/test_openstack.py
@@ -12,11 +12,11 @@ import re
 from cloudinit.tests import helpers as test_helpers
 
 from six.moves.urllib.parse import urlparse
-from six import StringIO
+from six import StringIO, text_type
 
 from cloudinit import helpers
 from cloudinit import settings
-from cloudinit.sources import convert_vendordata, UNSET
+from cloudinit.sources import BrokenMetadata, convert_vendordata, UNSET
 from cloudinit.sources import DataSourceOpenStack as ds
 from cloudinit.sources.helpers import openstack
 from cloudinit import util
@@ -186,7 +186,7 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
             if k.endswith('meta_data.json'):
                 os_files[k] = json.dumps(os_meta)
         _register_uris(self.VERSION, {}, {}, os_files)
-        self.assertRaises(openstack.BrokenMetadata, _read_metadata_service)
+        self.assertRaises(BrokenMetadata, _read_metadata_service)
 
     def test_userdata_empty(self):
         os_files = copy.deepcopy(OS_FILES)
@@ -217,7 +217,7 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
             if k.endswith('vendor_data.json'):
                 os_files[k] = '{'  # some invalid json
         _register_uris(self.VERSION, {}, {}, os_files)
-        self.assertRaises(openstack.BrokenMetadata, _read_metadata_service)
+        self.assertRaises(BrokenMetadata, _read_metadata_service)
 
     def test_metadata_invalid(self):
         os_files = copy.deepcopy(OS_FILES)
@@ -225,7 +225,7 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase):
             if k.endswith('meta_data.json'):
                 os_files[k] = '{'  # some invalid json
         _register_uris(self.VERSION, {}, {}, os_files)
-        self.assertRaises(openstack.BrokenMetadata, _read_metadata_service)
+        self.assertRaises(BrokenMetadata, _read_metadata_service)
 
     @test_helpers.mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
     def test_datasource(self, m_dhcp):
@@ -510,6 +510,27 @@ class TestDetectOpenStack(test_helpers.CiTestCase):
             ds.detect_openstack(),
             'Expected detect_openstack == True on OpenTelekomCloud')
 
+    @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
+    def test_detect_openstack_oraclecloud_chassis_asset_tag(self, m_dmi,
+                                                            m_is_x86):
+        """Return True on OpenStack reporting Oracle cloud asset-tag."""
+        m_is_x86.return_value = True
+
+        def fake_dmi_read(dmi_key):
+            if dmi_key == 'system-product-name':
+                return 'Standard PC (i440FX + PIIX, 1996)'  # No match
+            if dmi_key == 'chassis-asset-tag':
+                return 'OracleCloud.com'
+            assert False, 'Unexpected dmi read of %s' % dmi_key
+
+        m_dmi.side_effect = fake_dmi_read
+        self.assertTrue(
+            ds.detect_openstack(accept_oracle=True),
+            'Expected detect_openstack == True on OracleCloud.com')
+        self.assertFalse(
+            ds.detect_openstack(accept_oracle=False),
+            'Expected detect_openstack == False.')
+
     @test_helpers.mock.patch(MOCK_PATH + 'util.get_proc_env')
     @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data')
     def test_detect_openstack_by_proc_1_environ(self, m_dmi, m_proc_env,
@@ -534,4 +555,94 @@ class TestDetectOpenStack(test_helpers.CiTestCase):
         m_proc_env.assert_called_with(1)
 
 
+class TestMetadataReader(test_helpers.HttprettyTestCase):
+    """Test the MetadataReader."""
+    burl = 'http://169.254.169.254/'
+    md_base = {
+        'availability_zone': 'myaz1',
+        'hostname': 'sm-foo-test.novalocal',
+        "keys": [{"data": PUBKEY, "name": "brickies", "type": "ssh"}],
+        'launch_index': 0,
+        'name': 'sm-foo-test',
+        'public_keys': {'mykey': PUBKEY},
+        'project_id': '6a103f813b774b9fb15a4fcd36e1c056',
+        'uuid': 'b0fa911b-69d4-4476-bbe2-1c92bff6535c'}
+
+    def register(self, path, body=None, status=200):
+        content = (body if not isinstance(body, text_type)
+                   else body.encode('utf-8'))
+        hp.register_uri(
+            hp.GET, self.burl + "openstack" + path, status=status,
+            body=content)
+
+    def register_versions(self, versions):
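+        # Register the version listing both with and without a trailing
+        # slash, since the reader may request either form.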
+        self.register("", '\n'.join(versions))
+        self.register("/", '\n'.join(versions))
+
+    def register_version(self, version, data):
+        content = '\n'.join(sorted(data.keys()))
+        self.register(version, content)
+        self.register(version + "/", content)
+        for path, content in data.items():
+            self.register("/%s/%s" % (version, path), content)
+            self.register("/%s/%s" % (version, path), content)
+        if 'user_data' not in data:
+            self.register("/%s/user_data" % version, "nodata", status=404)
+
+    def test__find_working_version(self):
+        """Test a working version ignores unsupported."""
+        unsup = "2016-11-09"
+        self.register_versions(
+            [openstack.OS_FOLSOM, openstack.OS_LIBERTY, unsup,
+             openstack.OS_LATEST])
+        self.assertEqual(
+            openstack.OS_LIBERTY,
+            openstack.MetadataReader(self.burl)._find_working_version())
+
+    def test__find_working_version_uses_latest(self):
+        """'latest' should be used if no supported versions."""
+        unsup1, unsup2 = ("2016-11-09", '2017-06-06')
+        self.register_versions([unsup1, unsup2, openstack.OS_LATEST])
+        self.assertEqual(
+            openstack.OS_LATEST,
+            openstack.MetadataReader(self.burl)._find_working_version())
+
+    def test_read_v2_os_ocata(self):
+        """Validate return value of read_v2 for os_ocata data."""
+        md = copy.deepcopy(self.md_base)
+        md['devices'] = []
+        network_data = {'links': [], 'networks': [], 'services': []}
+        vendor_data = {}
+        vendor_data2 = {"static": {}}
+
+        data = {
+            'meta_data.json': json.dumps(md),
+            'network_data.json': json.dumps(network_data),
+            'vendor_data.json': json.dumps(vendor_data),
+            'vendor_data2.json': json.dumps(vendor_data2),
+        }
+
+        self.register_versions([openstack.OS_OCATA, openstack.OS_LATEST])
+        self.register_version(openstack.OS_OCATA, data)
+
+        mock_read_ec2 = test_helpers.mock.MagicMock(
+            return_value={'instance-id': 'unused-ec2'})
+        expected_md = copy.deepcopy(md)
+        expected_md.update(
+            {'instance-id': md['uuid'], 'local-hostname': md['hostname']})
+        expected = {
+            'userdata': '',  # Annoying, no user-data results in empty string.
+            'version': 2,
+            'metadata': expected_md,
+            'vendordata': vendor_data,
+            'networkdata': network_data,
+            'ec2-metadata': mock_read_ec2.return_value,
+            'files': {},
+        }
+        reader = openstack.MetadataReader(self.burl)
+        reader._read_ec2_metadata = mock_read_ec2
+        self.assertEqual(expected, reader.read_v2())
+        self.assertEqual(1, mock_read_ec2.call_count)
+
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_datasource/test_ovf.py b/tests/unittests/test_datasource/test_ovf.py
index fc4eb36..9d52eb9 100644
--- a/tests/unittests/test_datasource/test_ovf.py
+++ b/tests/unittests/test_datasource/test_ovf.py
@@ -124,7 +124,9 @@ class TestDatasourceOVF(CiTestCase):
         ds = self.datasource(sys_cfg={}, distro={}, paths=paths)
         retcode = wrap_and_call(
             'cloudinit.sources.DataSourceOVF',
-            {'util.read_dmi_data': None},
+            {'util.read_dmi_data': None,
+             'transport_iso9660': (False, None, None),
+             'transport_vmware_guestd': (False, None, None)},
             ds.get_data)
         self.assertFalse(retcode, 'Expected False return from ds.get_data')
         self.assertIn(
@@ -138,7 +140,9 @@ class TestDatasourceOVF(CiTestCase):
             paths=paths)
         retcode = wrap_and_call(
             'cloudinit.sources.DataSourceOVF',
-            {'util.read_dmi_data': 'vmware'},
+            {'util.read_dmi_data': 'vmware',
+             'transport_iso9660': (False, None, None),
+             'transport_vmware_guestd': (False, None, None)},
             ds.get_data)
         self.assertFalse(retcode, 'Expected False return from ds.get_data')
         self.assertIn(
diff --git a/tests/unittests/test_datasource/test_scaleway.py b/tests/unittests/test_datasource/test_scaleway.py
index e4e9bb2..c2bc7a0 100644
--- a/tests/unittests/test_datasource/test_scaleway.py
+++ b/tests/unittests/test_datasource/test_scaleway.py
@@ -176,11 +176,18 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.vendordata_url = \
             DataSourceScaleway.BUILTIN_DS_CONFIG['vendordata_url']
 
+        self.add_patch('cloudinit.sources.DataSourceScaleway.on_scaleway',
+                       '_m_on_scaleway', return_value=True)
+        self.add_patch(
+            'cloudinit.sources.DataSourceScaleway.net.find_fallback_nic',
+            '_m_find_fallback_nic', return_value='scalewaynic0')
+
+    @mock.patch('cloudinit.sources.DataSourceScaleway.EphemeralDHCPv4')
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
     @mock.patch('cloudinit.util.get_cmdline')
     @mock.patch('time.sleep', return_value=None)
-    def test_metadata_ok(self, sleep, m_get_cmdline):
+    def test_metadata_ok(self, sleep, m_get_cmdline, dhcpv4):
         """
         get_data() returns metadata, user data and vendor data.
         """
@@ -211,11 +218,12 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.assertIsNone(self.datasource.region)
         self.assertEqual(sleep.call_count, 0)
 
+    @mock.patch('cloudinit.sources.DataSourceScaleway.EphemeralDHCPv4')
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
     @mock.patch('cloudinit.util.get_cmdline')
     @mock.patch('time.sleep', return_value=None)
-    def test_metadata_404(self, sleep, m_get_cmdline):
+    def test_metadata_404(self, sleep, m_get_cmdline, dhcpv4):
         """
         get_data() returns metadata, but no user data nor vendor data.
         """
@@ -234,11 +242,12 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.assertIsNone(self.datasource.get_vendordata_raw())
         self.assertEqual(sleep.call_count, 0)
 
+    @mock.patch('cloudinit.sources.DataSourceScaleway.EphemeralDHCPv4')
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
     @mock.patch('cloudinit.util.get_cmdline')
     @mock.patch('time.sleep', return_value=None)
-    def test_metadata_rate_limit(self, sleep, m_get_cmdline):
+    def test_metadata_rate_limit(self, sleep, m_get_cmdline, dhcpv4):
         """
         get_data() is rate limited two times by the metadata API when fetching
         user data.
@@ -262,3 +271,67 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.assertEqual(self.datasource.get_userdata_raw(),
                          DataResponses.FAKE_USER_DATA)
         self.assertEqual(sleep.call_count, 2)
+
+    @mock.patch('cloudinit.sources.DataSourceScaleway.net.find_fallback_nic')
+    @mock.patch('cloudinit.util.get_cmdline')
+    def test_network_config_ok(self, m_get_cmdline, fallback_nic):
+        """
+        network_config will only generate IPv4 config if no ipv6 data is
+        available in the metadata
+        """
+        m_get_cmdline.return_value = 'scaleway'
+        fallback_nic.return_value = 'ens2'
+        self.datasource.metadata['ipv6'] = None
+
+        netcfg = self.datasource.network_config
+        resp = {'version': 1,
+                'config': [{
+                     'type': 'physical',
+                     'name': 'ens2',
+                     'subnets': [{'type': 'dhcp4'}]}]
+                }
+        self.assertEqual(netcfg, resp)
+
+    @mock.patch('cloudinit.sources.DataSourceScaleway.net.find_fallback_nic')
+    @mock.patch('cloudinit.util.get_cmdline')
+    def test_network_config_ipv6_ok(self, m_get_cmdline, fallback_nic):
+        """
+        network_config generates both IPv4 and IPv6 configs when ipv6 data
+        is available in the metadata
+        """
+        m_get_cmdline.return_value = 'scaleway'
+        fallback_nic.return_value = 'ens2'
+        self.datasource.metadata['ipv6'] = {
+                'address': '2000:abc:4444:9876::42:999',
+                'gateway': '2000:abc:4444:9876::42:000',
+                'netmask': '127',
+                }
+
+        netcfg = self.datasource.network_config
+        resp = {'version': 1,
+                'config': [{
+                     'type': 'physical',
+                     'name': 'ens2',
+                     'subnets': [{'type': 'dhcp4'},
+                                 {'type': 'static',
+                                  'address': '2000:abc:4444:9876::42:999',
+                                  'gateway': '2000:abc:4444:9876::42:000',
+                                  'netmask': '127', }
+                                 ]
+
+                     }]
+                }
+        self.assertEqual(netcfg, resp)
+
+    @mock.patch('cloudinit.sources.DataSourceScaleway.net.find_fallback_nic')
+    @mock.patch('cloudinit.util.get_cmdline')
+    def test_network_config_existing(self, m_get_cmdline, fallback_nic):
+        """
+        network_config() should return the same data if a network config
+        already exists
+        """
+        m_get_cmdline.return_value = 'scaleway'
+        self.datasource._network_config = '0xdeadbeef'
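+        # Any sentinel value works here; network_config must short-circuit
+        # and return the cached config untouched.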
+
+        netcfg = self.datasource.network_config
+        self.assertEqual(netcfg, '0xdeadbeef')
diff --git a/tests/unittests/test_datasource/test_smartos.py b/tests/unittests/test_datasource/test_smartos.py
index dca0b3d..46d67b9 100644
--- a/tests/unittests/test_datasource/test_smartos.py
+++ b/tests/unittests/test_datasource/test_smartos.py
@@ -20,10 +20,8 @@ import multiprocessing
 import os
 import os.path
 import re
-import shutil
 import signal
 import stat
-import tempfile
 import unittest2
 import uuid
 
@@ -31,15 +29,27 @@ from cloudinit import serial
 from cloudinit.sources import DataSourceSmartOS
 from cloudinit.sources.DataSourceSmartOS import (
     convert_smartos_network_data as convert_net,
-    SMARTOS_ENV_KVM, SERIAL_DEVICE, get_smartos_environ)
+    SMARTOS_ENV_KVM, SERIAL_DEVICE, get_smartos_environ,
+    identify_file)
 
 import six
 
 from cloudinit import helpers as c_helpers
-from cloudinit.util import (b64e, subp)
+from cloudinit.util import (
+    b64e, subp, ProcessExecutionError, which, write_file)
 
-from cloudinit.tests.helpers import mock, FilesystemMockingTestCase, TestCase
+from cloudinit.tests.helpers import (
+    CiTestCase, mock, FilesystemMockingTestCase, skipIf)
 
+
+try:
+    import serial as _pyserial
+    assert _pyserial  # avoid pyflakes error F401: import unused
+    HAS_PYSERIAL = True
+except ImportError:
+    HAS_PYSERIAL = False
+
+DSMOS = 'cloudinit.sources.DataSourceSmartOS'
 SDC_NICS = json.loads("""
 [
     {
@@ -366,37 +376,33 @@ class PsuedoJoyentClient(object):
 
 
 class TestSmartOSDataSource(FilesystemMockingTestCase):
+    jmc_cfact = None
+    get_smartos_environ = None
+
     def setUp(self):
         super(TestSmartOSDataSource, self).setUp()
 
-        dsmos = 'cloudinit.sources.DataSourceSmartOS'
-        patcher = mock.patch(dsmos + ".jmc_client_factory")
-        self.jmc_cfact = patcher.start()
-        self.addCleanup(patcher.stop)
-        patcher = mock.patch(dsmos + ".get_smartos_environ")
-        self.get_smartos_environ = patcher.start()
-        self.addCleanup(patcher.stop)
-
-        self.tmp = tempfile.mkdtemp()
-        self.addCleanup(shutil.rmtree, self.tmp)
-        self.paths = c_helpers.Paths(
-            {'cloud_dir': self.tmp, 'run_dir': self.tmp})
-
-        self.legacy_user_d = os.path.join(self.tmp, 'legacy_user_tmp')
+        self.add_patch(DSMOS + ".get_smartos_environ", "get_smartos_environ")
+        self.add_patch(DSMOS + ".jmc_client_factory", "jmc_cfact")
+        self.legacy_user_d = self.tmp_path('legacy_user_tmp')
         os.mkdir(self.legacy_user_d)
-
-        self.orig_lud = DataSourceSmartOS.LEGACY_USER_D
-        DataSourceSmartOS.LEGACY_USER_D = self.legacy_user_d
-
-    def tearDown(self):
-        DataSourceSmartOS.LEGACY_USER_D = self.orig_lud
-        super(TestSmartOSDataSource, self).tearDown()
+        self.add_patch(DSMOS + ".LEGACY_USER_D", "m_legacy_user_d",
+                       autospec=False, new=self.legacy_user_d)
+        self.add_patch(DSMOS + ".identify_file", "m_identify_file",
+                       return_value="text/plain")
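+        # identify_file shells out to 'file'; stub it here so tests do not
+        # trip CiTestCase's subprocess restrictions.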
 
     def _get_ds(self, mockdata=None, mode=DataSourceSmartOS.SMARTOS_ENV_KVM,
                 sys_cfg=None, ds_cfg=None):
         self.jmc_cfact.return_value = PsuedoJoyentClient(mockdata)
         self.get_smartos_environ.return_value = mode
 
+        tmpd = self.tmp_dir()
+        dirs = {'cloud_dir': self.tmp_path('cloud_dir', tmpd),
+                'run_dir': self.tmp_path('run_dir')}
+        for d in dirs.values():
+            os.mkdir(d)
+        paths = c_helpers.Paths(dirs)
+
         if sys_cfg is None:
             sys_cfg = {}
 
@@ -405,7 +411,7 @@ class TestSmartOSDataSource(FilesystemMockingTestCase):
             sys_cfg['datasource']['SmartOS'] = ds_cfg
 
         return DataSourceSmartOS.DataSourceSmartOS(
-            sys_cfg, distro=None, paths=self.paths)
+            sys_cfg, distro=None, paths=paths)
 
     def test_no_base64(self):
         ds_cfg = {'no_base64_decode': ['test_var1'], 'all_base': True}
@@ -493,6 +499,7 @@ class TestSmartOSDataSource(FilesystemMockingTestCase):
                          dsrc.metadata['user-script'])
 
         legacy_script_f = "%s/user-script" % self.legacy_user_d
+        print("legacy_script_f=%s" % legacy_script_f)
         self.assertTrue(os.path.exists(legacy_script_f))
         self.assertTrue(os.path.islink(legacy_script_f))
         user_script_perm = oct(os.stat(legacy_script_f)[stat.ST_MODE])[-3:]
@@ -640,6 +647,28 @@ class TestSmartOSDataSource(FilesystemMockingTestCase):
                          mydscfg['disk_aliases']['FOO'])
 
 
+class TestIdentifyFile(CiTestCase):
+    """Test the 'identify_file' utility."""
+    @skipIf(not which("file"), "command 'file' not available.")
+    def test_file_happy_path(self):
+        """Test file is available and functional on plain text."""
+        fname = self.tmp_path("myfile")
+        write_file(fname, "plain text content here\n")
+        with self.allow_subp(["file"]):
+            self.assertEqual("text/plain", identify_file(fname))
+
+    @mock.patch(DSMOS + ".util.subp")
+    def test_returns_none_on_error(self, m_subp):
+        """On 'file' execution error, None should be returned."""
+        m_subp.side_effect = ProcessExecutionError("FILE_FAILED", exit_code=99)
+        fname = self.tmp_path("myfile")
+        write_file(fname, "plain text content here\n")
+        self.assertEqual(None, identify_file(fname))
+        self.assertEqual(
+            [mock.call(["file", "--brief", "--mime-type", fname])],
+            m_subp.call_args_list)
+
+
 class ShortReader(object):
     """Implements a 'read' interface for bytes provided.
     much like io.BytesIO but the 'endbyte' acts as if EOF.
@@ -893,7 +922,7 @@ class TestJoyentMetadataClient(FilesystemMockingTestCase):
         self.assertEqual(client.list(), [])
 
 
-class TestNetworkConversion(TestCase):
+class TestNetworkConversion(CiTestCase):
     def test_convert_simple(self):
         expected = {
             'version': 1,
@@ -1058,7 +1087,8 @@ class TestNetworkConversion(TestCase):
                       "Only supported on KVM and bhyve guests under SmartOS")
 @unittest2.skipUnless(os.access(SERIAL_DEVICE, os.W_OK),
                       "Requires write access to " + SERIAL_DEVICE)
-class TestSerialConcurrency(TestCase):
+@unittest2.skipUnless(HAS_PYSERIAL is True, "pyserial not available")
+class TestSerialConcurrency(CiTestCase):
     """
        This class tests locking on an actual serial port, and as such can only
        be run in a kvm or bhyve guest running on a SmartOS host.  A test run on
@@ -1066,7 +1096,11 @@ class TestSerialConcurrency(TestCase):
        there is only one session over a connection.  In contrast, in the
        absence of proper locking multiple processes opening the same serial
        port can corrupt each others' exchanges with the metadata server.
+
+       This takes on the order of 2 to 3 minutes to run.
     """
+    allowed_subp = ['mdata-get']
+
     def setUp(self):
         self.mdata_proc = multiprocessing.Process(target=self.start_mdata_loop)
         self.mdata_proc.start()
@@ -1097,7 +1131,7 @@ class TestSerialConcurrency(TestCase):
         keys = [tup[0] for tup in ds.SMARTOS_ATTRIB_MAP.values()]
         keys.extend(ds.SMARTOS_ATTRIB_JSON.values())
 
-        client = ds.jmc_client_factory()
+        client = ds.jmc_client_factory(smartos_type=SMARTOS_ENV_KVM)
         self.assertIsNotNone(client)
 
         # The behavior that we are testing for was observed mdata-get running
diff --git a/tests/unittests/test_distros/test_create_users.py b/tests/unittests/test_distros/test_create_users.py
index 07176ca..c3f258d 100644
--- a/tests/unittests/test_distros/test_create_users.py
+++ b/tests/unittests/test_distros/test_create_users.py
@@ -1,7 +1,10 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import re
+
 from cloudinit import distros
-from cloudinit.tests.helpers import (TestCase, mock)
+from cloudinit import ssh_util
+from cloudinit.tests.helpers import (CiTestCase, mock)
 
 
 class MyBaseDistro(distros.Distro):
@@ -44,8 +47,12 @@ class MyBaseDistro(distros.Distro):
 
 @mock.patch("cloudinit.distros.util.system_is_snappy", return_value=False)
 @mock.patch("cloudinit.distros.util.subp")
-class TestCreateUser(TestCase):
+class TestCreateUser(CiTestCase):
+
+    with_logs = True
+
     def setUp(self):
+        super(TestCreateUser, self).setUp()
         self.dist = MyBaseDistro()
 
     def _useradd2call(self, args):
@@ -153,4 +160,84 @@ class TestCreateUser(TestCase):
             [self._useradd2call([user, '-m']),
              mock.call(['passwd', '-l', user])])
 
+    @mock.patch('cloudinit.ssh_util.setup_user_keys')
+    def test_setup_ssh_authorized_keys_with_string(
+            self, m_setup_user_keys, m_subp, m_is_snappy):
+        """ssh_authorized_keys allows string and calls setup_user_keys."""
+        user = 'foouser'
+        self.dist.create_user(user, ssh_authorized_keys='mykey')
+        self.assertEqual(
+            m_subp.call_args_list,
+            [self._useradd2call([user, '-m']),
+             mock.call(['passwd', '-l', user])])
+        m_setup_user_keys.assert_called_once_with(set(['mykey']), user)
+
+    @mock.patch('cloudinit.ssh_util.setup_user_keys')
+    def test_setup_ssh_authorized_keys_with_list(
+            self, m_setup_user_keys, m_subp, m_is_snappy):
+        """ssh_authorized_keys allows lists and calls setup_user_keys."""
+        user = 'foouser'
+        self.dist.create_user(user, ssh_authorized_keys=['key1', 'key2'])
+        self.assertEqual(
+            m_subp.call_args_list,
+            [self._useradd2call([user, '-m']),
+             mock.call(['passwd', '-l', user])])
+        m_setup_user_keys.assert_called_once_with(set(['key1', 'key2']), user)
+
+    @mock.patch('cloudinit.ssh_util.setup_user_keys')
+    def test_setup_ssh_authorized_keys_with_integer(
+            self, m_setup_user_keys, m_subp, m_is_snappy):
+        """ssh_authorized_keys warns on non-iterable/string type."""
+        user = 'foouser'
+        self.dist.create_user(user, ssh_authorized_keys=-1)
+        m_setup_user_keys.assert_called_once_with(set([]), user)
+        match = re.match(
+            r'.*WARNING: Invalid type \'<(type|class) \'int\'>\' detected for'
+            ' \'ssh_authorized_keys\'.*',
+            self.logs.getvalue(),
+            re.DOTALL)
+        self.assertIsNotNone(
+            match, 'Missing ssh_authorized_keys invalid type warning')
+
+    @mock.patch('cloudinit.ssh_util.setup_user_keys')
+    def test_create_user_with_ssh_redirect_user_no_cloud_keys(
+            self, m_setup_user_keys, m_subp, m_is_snappy):
+        """Log a warning when trying to redirect a user no cloud ssh keys."""
+        user = 'foouser'
+        self.dist.create_user(user, ssh_redirect_user='someuser')
+        self.assertIn(
+            'WARNING: Unable to disable ssh logins for foouser given '
+            'ssh_redirect_user: someuser. No cloud public-keys present.\n',
+            self.logs.getvalue())
+        m_setup_user_keys.assert_not_called()
+
+    @mock.patch('cloudinit.ssh_util.setup_user_keys')
+    def test_create_user_with_ssh_redirect_user_with_cloud_keys(
+            self, m_setup_user_keys, m_subp, m_is_snappy):
+        """Disable ssh when ssh_redirect_user and cloud ssh keys are set."""
+        user = 'foouser'
+        self.dist.create_user(
+            user, ssh_redirect_user='someuser', cloud_public_ssh_keys=['key1'])
+        disable_prefix = ssh_util.DISABLE_USER_OPTS
+        disable_prefix = disable_prefix.replace('$USER', 'someuser')
+        disable_prefix = disable_prefix.replace('$DISABLE_USER', user)
+        m_setup_user_keys.assert_called_once_with(
+            set(['key1']), 'foouser', options=disable_prefix)
+
+    @mock.patch('cloudinit.ssh_util.setup_user_keys')
+    def test_create_user_with_ssh_redirect_user_does_not_disable_auth_keys(
+            self, m_setup_user_keys, m_subp, m_is_snappy):
+        """Do not disable ssh_authorized_keys when ssh_redirect_user is set."""
+        user = 'foouser'
+        self.dist.create_user(
+            user, ssh_authorized_keys='auth1', ssh_redirect_user='someuser',
+            cloud_public_ssh_keys=['key1'])
+        disable_prefix = ssh_util.DISABLE_USER_OPTS
+        disable_prefix = disable_prefix.replace('$USER', 'someuser')
+        disable_prefix = disable_prefix.replace('$DISABLE_USER', user)
+        self.assertEqual(
+            m_setup_user_keys.call_args_list,
+            [mock.call(set(['auth1']), user),  # not disabled
+             mock.call(set(['key1']), 'foouser', options=disable_prefix)])
+
 # vi: ts=4 expandtab
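
The two ssh_redirect_user tests above hinge on placeholder substitution in
ssh_util.DISABLE_USER_OPTS. A minimal sketch of that substitution with an
illustrative template (the real option string in cloudinit.ssh_util
differs):

    # Illustrative stand-in; see cloudinit.ssh_util.DISABLE_USER_OPTS for
    # the real value.
    TEMPLATE = 'command="echo Please login as $USER not $DISABLE_USER"'

    def disable_opts(redirect_user, disabled_user, template=TEMPLATE):
        # $USER: the account to log in as; $DISABLE_USER: the account
        # whose authorized key is being prefixed with restrictive options.
        opts = template.replace('$USER', redirect_user)
        return opts.replace('$DISABLE_USER', disabled_user)

    # disable_opts('someuser', 'foouser') ->
    # 'command="echo Please login as someuser not foouser"'
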
diff --git a/tests/unittests/test_distros/test_netconfig.py b/tests/unittests/test_distros/test_netconfig.py
index 7765e40..6e33935 100644
--- a/tests/unittests/test_distros/test_netconfig.py
+++ b/tests/unittests/test_distros/test_netconfig.py
@@ -2,24 +2,19 @@
 
 import os
 from six import StringIO
-import stat
 from textwrap import dedent
 
 try:
     from unittest import mock
 except ImportError:
     import mock
-try:
-    from contextlib import ExitStack
-except ImportError:
-    from contextlib2 import ExitStack
 
 from cloudinit import distros
 from cloudinit.distros.parsers.sys_conf import SysConf
 from cloudinit import helpers
-from cloudinit.net import eni
 from cloudinit import settings
-from cloudinit.tests.helpers import FilesystemMockingTestCase
+from cloudinit.tests.helpers import (
+    FilesystemMockingTestCase, dir2dict, populate_dir)
 from cloudinit import util
 
 
@@ -39,6 +34,19 @@ auto eth1
 iface eth1 inet dhcp
 '''
 
+BASE_NET_CFG_FROM_V2 = '''
+auto lo
+iface lo inet loopback
+
+auto eth0
+iface eth0 inet static
+    address 192.168.1.5/24
+    gateway 192.168.1.254
+
+auto eth1
+iface eth1 inet dhcp
+'''
+
 BASE_NET_CFG_IPV6 = '''
 auto lo
 iface lo inet loopback
@@ -82,7 +90,7 @@ V1_NET_CFG = {'config': [{'name': 'eth0',
                           'type': 'physical'}],
               'version': 1}
 
-V1_NET_CFG_OUTPUT = """
+V1_NET_CFG_OUTPUT = """\
 # This file is generated from information provided by
 # the datasource.  Changes to it will not persist across an instance.
 # To disable cloud-init's network configuration capabilities, write a file
@@ -116,7 +124,7 @@ V1_NET_CFG_IPV6 = {'config': [{'name': 'eth0',
                    'version': 1}
 
 
-V1_TO_V2_NET_CFG_OUTPUT = """
+V1_TO_V2_NET_CFG_OUTPUT = """\
 # This file is generated from information provided by
 # the datasource.  Changes to it will not persist across an instance.
 # To disable cloud-init's network configuration capabilities, write a file
@@ -145,7 +153,7 @@ V2_NET_CFG = {
 }
 
 
-V2_TO_V2_NET_CFG_OUTPUT = """
+V2_TO_V2_NET_CFG_OUTPUT = """\
 # This file is generated from information provided by
 # the datasource.  Changes to it will not persist across an instance.
 # To disable cloud-init's network configuration capabilities, write a file
@@ -176,21 +184,10 @@ class WriteBuffer(object):
         return self.buffer.getvalue()
 
 
-class TestNetCfgDistro(FilesystemMockingTestCase):
-
-    frbsd_ifout = """\
-hn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
-        options=51b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,TSO4,LRO>
-        ether 00:15:5d:4c:73:00
-        inet6 fe80::215:5dff:fe4c:7300%hn0 prefixlen 64 scopeid 0x2
-        inet 10.156.76.127 netmask 0xfffffc00 broadcast 10.156.79.255
-        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
-        media: Ethernet autoselect (10Gbase-T <full-duplex>)
-        status: active
-"""
+class TestNetCfgDistroBase(FilesystemMockingTestCase):
 
     def setUp(self):
-        super(TestNetCfgDistro, self).setUp()
+        super(TestNetCfgDistroBase, self).setUp()
         self.add_patch('cloudinit.util.system_is_snappy', 'm_snappy')
         self.add_patch('cloudinit.util.system_info', 'm_sysinfo')
         self.m_sysinfo.return_value = {'dist': ('Distro', '99.1', 'Codename')}
@@ -204,144 +201,6 @@ hn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
         paths = helpers.Paths({})
         return cls(dname, cfg.get('system_info'), paths)
 
-    def test_simple_write_ub(self):
-        ub_distro = self._get_distro('ubuntu')
-        with ExitStack() as mocks:
-            write_bufs = {}
-
-            def replace_write(filename, content, mode=0o644, omode="wb"):
-                buf = WriteBuffer()
-                buf.mode = mode
-                buf.omode = omode
-                buf.write(content)
-                write_bufs[filename] = buf
-
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'isfile', return_value=False))
-
-            ub_distro.apply_network(BASE_NET_CFG, False)
-
-            self.assertEqual(len(write_bufs), 1)
-            eni_name = '/etc/network/interfaces.d/50-cloud-init.cfg'
-            self.assertIn(eni_name, write_bufs)
-            write_buf = write_bufs[eni_name]
-            self.assertEqual(str(write_buf).strip(), BASE_NET_CFG.strip())
-            self.assertEqual(write_buf.mode, 0o644)
-
-    def test_apply_network_config_eni_ub(self):
-        ub_distro = self._get_distro('ubuntu')
-        with ExitStack() as mocks:
-            write_bufs = {}
-
-            def replace_write(filename, content, mode=0o644, omode="wb"):
-                buf = WriteBuffer()
-                buf.mode = mode
-                buf.omode = omode
-                buf.write(content)
-                write_bufs[filename] = buf
-
-            # eni availability checks
-            mocks.enter_context(
-                mock.patch.object(util, 'which', return_value=True))
-            mocks.enter_context(
-                mock.patch.object(eni, 'available', return_value=True))
-            mocks.enter_context(
-                mock.patch.object(util, 'ensure_dir'))
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'isfile', return_value=False))
-            mocks.enter_context(
-                mock.patch("cloudinit.net.eni.glob.glob",
-                           return_value=[]))
-
-            ub_distro.apply_network_config(V1_NET_CFG, False)
-
-            self.assertEqual(len(write_bufs), 2)
-            eni_name = '/etc/network/interfaces.d/50-cloud-init.cfg'
-            self.assertIn(eni_name, write_bufs)
-            write_buf = write_bufs[eni_name]
-            self.assertEqual(str(write_buf).strip(), V1_NET_CFG_OUTPUT.strip())
-            self.assertEqual(write_buf.mode, 0o644)
-
-    def test_apply_network_config_v1_to_netplan_ub(self):
-        renderers = ['netplan']
-        devlist = ['eth0', 'lo']
-        ub_distro = self._get_distro('ubuntu', renderers=renderers)
-        with ExitStack() as mocks:
-            write_bufs = {}
-
-            def replace_write(filename, content, mode=0o644, omode="wb"):
-                buf = WriteBuffer()
-                buf.mode = mode
-                buf.omode = omode
-                buf.write(content)
-                write_bufs[filename] = buf
-
-            mocks.enter_context(
-                mock.patch.object(util, 'which', return_value=True))
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(util, 'ensure_dir'))
-            mocks.enter_context(
-                mock.patch.object(util, 'subp', return_value=(0, 0)))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'isfile', return_value=False))
-            mocks.enter_context(
-                mock.patch("cloudinit.net.netplan.get_devicelist",
-                           return_value=devlist))
-
-            ub_distro.apply_network_config(V1_NET_CFG, False)
-
-            self.assertEqual(len(write_bufs), 1)
-            netplan_name = '/etc/netplan/50-cloud-init.yaml'
-            self.assertIn(netplan_name, write_bufs)
-            write_buf = write_bufs[netplan_name]
-            self.assertEqual(str(write_buf).strip(),
-                             V1_TO_V2_NET_CFG_OUTPUT.strip())
-            self.assertEqual(write_buf.mode, 0o644)
-
-    def test_apply_network_config_v2_passthrough_ub(self):
-        renderers = ['netplan']
-        devlist = ['eth0', 'lo']
-        ub_distro = self._get_distro('ubuntu', renderers=renderers)
-        with ExitStack() as mocks:
-            write_bufs = {}
-
-            def replace_write(filename, content, mode=0o644, omode="wb"):
-                buf = WriteBuffer()
-                buf.mode = mode
-                buf.omode = omode
-                buf.write(content)
-                write_bufs[filename] = buf
-
-            mocks.enter_context(
-                mock.patch.object(util, 'which', return_value=True))
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(util, 'ensure_dir'))
-            mocks.enter_context(
-                mock.patch.object(util, 'subp', return_value=(0, 0)))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'isfile', return_value=False))
-            # FreeBSD does not have '/sys/class/net' file,
-            # so we need mock here.
-            mocks.enter_context(
-                mock.patch.object(os, 'listdir', return_value=devlist))
-            ub_distro.apply_network_config(V2_NET_CFG, False)
-
-            self.assertEqual(len(write_bufs), 1)
-            netplan_name = '/etc/netplan/50-cloud-init.yaml'
-            self.assertIn(netplan_name, write_bufs)
-            write_buf = write_bufs[netplan_name]
-            self.assertEqual(str(write_buf).strip(),
-                             V2_TO_V2_NET_CFG_OUTPUT.strip())
-            self.assertEqual(write_buf.mode, 0o644)
-
     def assertCfgEquals(self, blob1, blob2):
         b1 = dict(SysConf(blob1.strip().splitlines()))
         b2 = dict(SysConf(blob2.strip().splitlines()))
@@ -353,6 +212,20 @@ hn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
         for (k, v) in b1.items():
             self.assertEqual(v, b2[k])
 
+
+class TestNetCfgDistroFreebsd(TestNetCfgDistroBase):
+
+    frbsd_ifout = """\
+hn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
+        options=51b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,TSO4,LRO>
+        ether 00:15:5d:4c:73:00
+        inet6 fe80::215:5dff:fe4c:7300%hn0 prefixlen 64 scopeid 0x2
+        inet 10.156.76.127 netmask 0xfffffc00 broadcast 10.156.79.255
+        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
+        media: Ethernet autoselect (10Gbase-T <full-duplex>)
+        status: active
+"""
+
     @mock.patch('cloudinit.distros.freebsd.Distro.get_ifconfig_list')
     @mock.patch('cloudinit.distros.freebsd.Distro.get_ifconfig_ifname_out')
     def test_get_ip_nic_freebsd(self, ifname_out, iflist):
@@ -376,349 +249,59 @@ hn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
         res = frbsd_distro.generate_fallback_config()
         self.assertIsNotNone(res)
 
-    def test_simple_write_rh(self):
-        rh_distro = self._get_distro('rhel')
-
-        write_bufs = {}
-
-        def replace_write(filename, content, mode=0o644, omode="wb"):
-            buf = WriteBuffer()
-            buf.mode = mode
-            buf.omode = omode
-            buf.write(content)
-            write_bufs[filename] = buf
-
-        with ExitStack() as mocks:
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(util, 'load_file', return_value=''))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'isfile', return_value=False))
-
-            rh_distro.apply_network(BASE_NET_CFG, False)
-
-            self.assertEqual(len(write_bufs), 4)
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-lo',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-lo']
-            expected_buf = '''
-DEVICE="lo"
-ONBOOT=yes
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-eth0',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-eth0']
-            expected_buf = '''
-DEVICE="eth0"
-BOOTPROTO="static"
-NETMASK="255.255.255.0"
-IPADDR="192.168.1.5"
-ONBOOT=yes
-GATEWAY="192.168.1.254"
-BROADCAST="192.168.1.0"
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-eth1',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-eth1']
-            expected_buf = '''
-DEVICE="eth1"
-BOOTPROTO="dhcp"
-ONBOOT=yes
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-            self.assertIn('/etc/sysconfig/network', write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network']
-            expected_buf = '''
-# Created by cloud-init v. 0.7
-NETWORKING=yes
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-    def test_apply_network_config_rh(self):
-        renderers = ['sysconfig']
-        rh_distro = self._get_distro('rhel', renderers=renderers)
-
-        write_bufs = {}
-
-        def replace_write(filename, content, mode=0o644, omode="wb"):
-            buf = WriteBuffer()
-            buf.mode = mode
-            buf.omode = omode
-            buf.write(content)
-            write_bufs[filename] = buf
-
-        with ExitStack() as mocks:
-            # sysconfig availability checks
-            mocks.enter_context(
-                mock.patch.object(util, 'which', return_value=True))
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(util, 'load_file', return_value=''))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'isfile', return_value=True))
-
-            rh_distro.apply_network_config(V1_NET_CFG, False)
-
-            self.assertEqual(len(write_bufs), 5)
-
-            # eth0
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-eth0',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-eth0']
-            expected_buf = '''
-# Created by cloud-init on instance boot automatically, do not edit.
-#
-BOOTPROTO=none
-DEFROUTE=yes
-DEVICE=eth0
-GATEWAY=192.168.1.254
-IPADDR=192.168.1.5
-NETMASK=255.255.255.0
-NM_CONTROLLED=no
-ONBOOT=yes
-TYPE=Ethernet
-USERCTL=no
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-            # eth1
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-eth1',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-eth1']
-            expected_buf = '''
-# Created by cloud-init on instance boot automatically, do not edit.
-#
-BOOTPROTO=dhcp
-DEVICE=eth1
-NM_CONTROLLED=no
-ONBOOT=yes
-TYPE=Ethernet
-USERCTL=no
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-            self.assertIn('/etc/sysconfig/network', write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network']
-            expected_buf = '''
-# Created by cloud-init v. 0.7
-NETWORKING=yes
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-    def test_write_ipv6_rhel(self):
-        rh_distro = self._get_distro('rhel')
-
-        write_bufs = {}
-
-        def replace_write(filename, content, mode=0o644, omode="wb"):
-            buf = WriteBuffer()
-            buf.mode = mode
-            buf.omode = omode
-            buf.write(content)
-            write_bufs[filename] = buf
-
-        with ExitStack() as mocks:
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(util, 'load_file', return_value=''))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'isfile', return_value=False))
-            rh_distro.apply_network(BASE_NET_CFG_IPV6, False)
-
-            self.assertEqual(len(write_bufs), 4)
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-lo',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-lo']
-            expected_buf = '''
-DEVICE="lo"
-ONBOOT=yes
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-eth0',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-eth0']
-            expected_buf = '''
-DEVICE="eth0"
-BOOTPROTO="static"
-NETMASK="255.255.255.0"
-IPADDR="192.168.1.5"
-ONBOOT=yes
-GATEWAY="192.168.1.254"
-BROADCAST="192.168.1.0"
-IPV6INIT=yes
-IPV6ADDR="2607:f0d0:1002:0011::2"
-IPV6_DEFAULTGW="2607:f0d0:1002:0011::1"
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-eth1',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-eth1']
-            expected_buf = '''
-DEVICE="eth1"
-BOOTPROTO="static"
-NETMASK="255.255.255.0"
-IPADDR="192.168.1.6"
-ONBOOT=no
-GATEWAY="192.168.1.254"
-BROADCAST="192.168.1.0"
-IPV6INIT=yes
-IPV6ADDR="2607:f0d0:1002:0011::3"
-IPV6_DEFAULTGW="2607:f0d0:1002:0011::1"
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-            self.assertIn('/etc/sysconfig/network', write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network']
-            expected_buf = '''
-# Created by cloud-init v. 0.7
-NETWORKING=yes
-NETWORKING_IPV6=yes
-IPV6_AUTOCONF=no
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-    def test_apply_network_config_ipv6_rh(self):
-        renderers = ['sysconfig']
-        rh_distro = self._get_distro('rhel', renderers=renderers)
-
-        write_bufs = {}
-
-        def replace_write(filename, content, mode=0o644, omode="wb"):
-            buf = WriteBuffer()
-            buf.mode = mode
-            buf.omode = omode
-            buf.write(content)
-            write_bufs[filename] = buf
-
-        with ExitStack() as mocks:
-            mocks.enter_context(
-                mock.patch.object(util, 'which', return_value=True))
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(util, 'load_file', return_value=''))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'isfile', return_value=True))
-
-            rh_distro.apply_network_config(V1_NET_CFG_IPV6, False)
-
-            self.assertEqual(len(write_bufs), 5)
-
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-eth0',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-eth0']
-            expected_buf = '''
-# Created by cloud-init on instance boot automatically, do not edit.
-#
-BOOTPROTO=none
-DEFROUTE=yes
-DEVICE=eth0
-IPV6ADDR=2607:f0d0:1002:0011::2/64
-IPV6INIT=yes
-IPV6_DEFAULTGW=2607:f0d0:1002:0011::1
-NM_CONTROLLED=no
-ONBOOT=yes
-TYPE=Ethernet
-USERCTL=no
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-            self.assertIn('/etc/sysconfig/network-scripts/ifcfg-eth1',
-                          write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network-scripts/ifcfg-eth1']
-            expected_buf = '''
-# Created by cloud-init on instance boot automatically, do not edit.
-#
-BOOTPROTO=dhcp
-DEVICE=eth1
-NM_CONTROLLED=no
-ONBOOT=yes
-TYPE=Ethernet
-USERCTL=no
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
-            self.assertIn('/etc/sysconfig/network', write_bufs)
-            write_buf = write_bufs['/etc/sysconfig/network']
-            expected_buf = '''
-# Created by cloud-init v. 0.7
-NETWORKING=yes
-NETWORKING_IPV6=yes
-IPV6_AUTOCONF=no
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
-
     def test_simple_write_freebsd(self):
         fbsd_distro = self._get_distro('freebsd')
 
-        write_bufs = {}
+        rc_conf = '/etc/rc.conf'
         read_bufs = {
-            '/etc/rc.conf': '',
-            '/etc/resolv.conf': '',
+            rc_conf: 'initial-rc-conf-not-validated',
+            '/etc/resolv.conf': 'initial-resolv-conf-not-validated',
         }
 
-        def replace_write(filename, content, mode=0o644, omode="wb"):
-            buf = WriteBuffer()
-            buf.mode = mode
-            buf.omode = omode
-            buf.write(content)
-            write_bufs[filename] = buf
-
-        def replace_read(fname, read_cb=None, quiet=False):
-            if fname not in read_bufs:
-                if fname in write_bufs:
-                    return str(write_bufs[fname])
-                raise IOError("%s not found" % fname)
-            else:
-                if fname in write_bufs:
-                    return str(write_bufs[fname])
-                return read_bufs[fname]
-
-        with ExitStack() as mocks:
-            mocks.enter_context(
-                mock.patch.object(util, 'subp', return_value=('vtnet0', '')))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'exists', return_value=False))
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(util, 'load_file', replace_read))
-
-            fbsd_distro.apply_network(BASE_NET_CFG, False)
-
-            self.assertIn('/etc/rc.conf', write_bufs)
-            write_buf = write_bufs['/etc/rc.conf']
-            expected_buf = '''
-ifconfig_vtnet0="192.168.1.5 netmask 255.255.255.0"
-ifconfig_vtnet1="DHCP"
-defaultrouter="192.168.1.254"
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
+        tmpd = self.tmp_dir()
+        populate_dir(tmpd, read_bufs)
+        with self.reRooted(tmpd):
+            with mock.patch("cloudinit.distros.freebsd.util.subp",
+                            return_value=('vtnet0', '')):
+                fbsd_distro.apply_network(BASE_NET_CFG, False)
+                results = dir2dict(tmpd)
+
+        self.assertIn(rc_conf, results)
+        self.assertCfgEquals(
+            dedent('''\
+                ifconfig_vtnet0="192.168.1.5 netmask 255.255.255.0"
+                ifconfig_vtnet1="DHCP"
+                defaultrouter="192.168.1.254"
+                '''), results[rc_conf])
+        self.assertEqual(0o644, get_mode(rc_conf, tmpd))
+
+    def test_simple_write_freebsd_from_v2eni(self):
+        fbsd_distro = self._get_distro('freebsd')
+
+        rc_conf = '/etc/rc.conf'
+        read_bufs = {
+            rc_conf: 'initial-rc-conf-not-validated',
+            '/etc/resolv.conf': 'initial-resolv-conf-not-validated',
+        }
 
-    def test_apply_network_config_fallback(self):
+        tmpd = self.tmp_dir()
+        populate_dir(tmpd, read_bufs)
+        with self.reRooted(tmpd):
+            with mock.patch("cloudinit.distros.freebsd.util.subp",
+                            return_value=('vtnet0', '')):
+                fbsd_distro.apply_network(BASE_NET_CFG_FROM_V2, False)
+                results = dir2dict(tmpd)
+
+        self.assertIn(rc_conf, results)
+        self.assertCfgEquals(
+            dedent('''\
+                ifconfig_vtnet0="192.168.1.5 netmask 255.255.255.0"
+                ifconfig_vtnet1="DHCP"
+                defaultrouter="192.168.1.254"
+                '''), results[rc_conf])
+        self.assertEqual(0o644, get_mode(rc_conf, tmpd))
+
+    def test_apply_network_config_fallback_freebsd(self):
         fbsd_distro = self._get_distro('freebsd')
 
         # a weak attempt to verify that we don't have an implementation
@@ -735,89 +318,293 @@ defaultrouter="192.168.1.254"
                         "subnets": [{"type": "dhcp"}]}],
             'version': 1}
 
-        write_bufs = {}
+        rc_conf = '/etc/rc.conf'
         read_bufs = {
-            '/etc/rc.conf': '',
-            '/etc/resolv.conf': '',
+            rc_conf: 'initial-rc-conf-not-validated',
+            '/etc/resolv.conf': 'initial-resolv-conf-not-validated',
         }
 
-        def replace_write(filename, content, mode=0o644, omode="wb"):
-            buf = WriteBuffer()
-            buf.mode = mode
-            buf.omode = omode
-            buf.write(content)
-            write_bufs[filename] = buf
-
-        def replace_read(fname, read_cb=None, quiet=False):
-            if fname not in read_bufs:
-                if fname in write_bufs:
-                    return str(write_bufs[fname])
-                raise IOError("%s not found" % fname)
-            else:
-                if fname in write_bufs:
-                    return str(write_bufs[fname])
-                return read_bufs[fname]
-
-        with ExitStack() as mocks:
-            mocks.enter_context(
-                mock.patch.object(util, 'subp', return_value=('vtnet0', '')))
-            mocks.enter_context(
-                mock.patch.object(os.path, 'exists', return_value=False))
-            mocks.enter_context(
-                mock.patch.object(util, 'write_file', replace_write))
-            mocks.enter_context(
-                mock.patch.object(util, 'load_file', replace_read))
-
-            fbsd_distro.apply_network_config(mynetcfg, bring_up=False)
-
-            self.assertIn('/etc/rc.conf', write_bufs)
-            write_buf = write_bufs['/etc/rc.conf']
-            expected_buf = '''
-ifconfig_vtnet0="DHCP"
-'''
-            self.assertCfgEquals(expected_buf, str(write_buf))
-            self.assertEqual(write_buf.mode, 0o644)
+        tmpd = self.tmp_dir()
+        populate_dir(tmpd, read_bufs)
+        with self.reRooted(tmpd):
+            with mock.patch("cloudinit.distros.freebsd.util.subp",
+                            return_value=('vtnet0', '')):
+                fbsd_distro.apply_network_config(mynetcfg, bring_up=False)
+                results = dir2dict(tmpd)
+
+        self.assertIn(rc_conf, results)
+        self.assertCfgEquals('ifconfig_vtnet0="DHCP"', results[rc_conf])
+        self.assertEqual(0o644, get_mode(rc_conf, tmpd))
+
+
+class TestNetCfgDistroUbuntuEni(TestNetCfgDistroBase):
+
+    def setUp(self):
+        super(TestNetCfgDistroUbuntuEni, self).setUp()
+        self.distro = self._get_distro('ubuntu', renderers=['eni'])
+
+    def eni_path(self):
+        return '/etc/network/interfaces.d/50-cloud-init.cfg'
+
+    def _apply_and_verify_eni(self, apply_fn, config, expected_cfgs=None,
+                              bringup=False):
+        if not expected_cfgs:
+            raise ValueError('expected_cfgs must not be None')
+
+        tmpd = None
+        with mock.patch('cloudinit.net.eni.available') as m_avail:
+            m_avail.return_value = True
+            with self.reRooted(tmpd) as tmpd:
+                apply_fn(config, bringup)
+
+        results = dir2dict(tmpd)
+        for cfgpath, expected in expected_cfgs.items():
+            print("----------")
+            print(expected)
+            print("^^^^ expected | rendered VVVVVVV")
+            print(results[cfgpath])
+            print("----------")
+            self.assertEqual(expected, results[cfgpath])
+            self.assertEqual(0o644, get_mode(cfgpath, tmpd))
+
+    def test_apply_network_config_eni_ub(self):
+        expected_cfgs = {
+            self.eni_path(): V1_NET_CFG_OUTPUT,
+        }
+        # ub_distro.apply_network_config(V1_NET_CFG, False)
+        self._apply_and_verify_eni(self.distro.apply_network_config,
+                                   V1_NET_CFG,
+                                   expected_cfgs=expected_cfgs.copy())
 
-    def test_simple_write_opensuse(self):
-        """Opensuse network rendering writes appropriate sysconfg files."""
-        tmpdir = self.tmp_dir()
-        self.patchOS(tmpdir)
-        self.patchUtils(tmpdir)
-        distro = self._get_distro('opensuse')
 
-        distro.apply_network(BASE_NET_CFG, False)
+class TestNetCfgDistroUbuntuNetplan(TestNetCfgDistroBase):
+    def setUp(self):
+        super(TestNetCfgDistroUbuntuNetplan, self).setUp()
+        self.distro = self._get_distro('ubuntu', renderers=['netplan'])
+        self.devlist = ['eth0', 'lo']
+
+    def _apply_and_verify_netplan(self, apply_fn, config, expected_cfgs=None,
+                                  bringup=False):
+        if not expected_cfgs:
+            raise ValueError('expected_cfgs must not be None')
+
+        tmpd = None
+        with mock.patch('cloudinit.net.netplan.available',
+                        return_value=True):
+            with mock.patch("cloudinit.net.netplan.get_devicelist",
+                            return_value=self.devlist):
+                with self.reRooted(tmpd) as tmpd:
+                    apply_fn(config, bringup)
+
+        results = dir2dict(tmpd)
+        for cfgpath, expected in expected_cfgs.items():
+            print("----------")
+            print(expected)
+            print("^^^^ expected | rendered VVVVVVV")
+            print(results[cfgpath])
+            print("----------")
+            self.assertEqual(expected, results[cfgpath])
+            self.assertEqual(0o644, get_mode(cfgpath, tmpd))
+
+    def netplan_path(self):
+        return '/etc/netplan/50-cloud-init.yaml'
 
-        lo_path = os.path.join(tmpdir, 'etc/sysconfig/network/ifcfg-lo')
-        eth0_path = os.path.join(tmpdir, 'etc/sysconfig/network/ifcfg-eth0')
-        eth1_path = os.path.join(tmpdir, 'etc/sysconfig/network/ifcfg-eth1')
+    def test_apply_network_config_v1_to_netplan_ub(self):
         expected_cfgs = {
-            lo_path: dedent('''
-                STARTMODE="auto"
-                USERCONTROL="no"
-                FIREWALL="no"
-                '''),
-            eth0_path: dedent('''
-                BOOTPROTO="static"
-                BROADCAST="192.168.1.0"
-                GATEWAY="192.168.1.254"
-                IPADDR="192.168.1.5"
-                NETMASK="255.255.255.0"
-                STARTMODE="auto"
-                USERCONTROL="no"
-                ETHTOOL_OPTIONS=""
-                '''),
-            eth1_path: dedent('''
-                BOOTPROTO="dhcp"
-                STARTMODE="auto"
-                USERCONTROL="no"
-                ETHTOOL_OPTIONS=""
-                ''')
+            self.netplan_path(): V1_TO_V2_NET_CFG_OUTPUT,
         }
-        for cfgpath in (lo_path, eth0_path, eth1_path):
-            self.assertCfgEquals(
-                expected_cfgs[cfgpath],
-                util.load_file(cfgpath))
-            file_stat = os.stat(cfgpath)
-            self.assertEqual(0o644, stat.S_IMODE(file_stat.st_mode))
+
+        # ub_distro.apply_network_config(V1_NET_CFG, False)
+        self._apply_and_verify_netplan(self.distro.apply_network_config,
+                                       V1_NET_CFG,
+                                       expected_cfgs=expected_cfgs.copy())
+
+    def test_apply_network_config_v2_passthrough_ub(self):
+        expected_cfgs = {
+            self.netplan_path(): V2_TO_V2_NET_CFG_OUTPUT,
+        }
+        # ub_distro.apply_network_config(V2_NET_CFG, False)
+        self._apply_and_verify_netplan(self.distro.apply_network_config,
+                                       V2_NET_CFG,
+                                       expected_cfgs=expected_cfgs.copy())
+
+
+class TestNetCfgDistroRedhat(TestNetCfgDistroBase):
+
+    def setUp(self):
+        super(TestNetCfgDistroRedhat, self).setUp()
+        self.distro = self._get_distro('rhel', renderers=['sysconfig'])
+
+    def ifcfg_path(self, ifname):
+        return '/etc/sysconfig/network-scripts/ifcfg-%s' % ifname
+
+    def control_path(self):
+        return '/etc/sysconfig/network'
+
+    def _apply_and_verify(self, apply_fn, config, expected_cfgs=None,
+                          bringup=False):
+        if not expected_cfgs:
+            raise ValueError('expected_cfgs must not be None')
+
+        tmpd = None
+        with mock.patch('cloudinit.net.sysconfig.available') as m_avail:
+            m_avail.return_value = True
+            with self.reRooted(tmpd) as tmpd:
+                apply_fn(config, bringup)
+
+        results = dir2dict(tmpd)
+        for cfgpath, expected in expected_cfgs.items():
+            self.assertCfgEquals(expected, results[cfgpath])
+            self.assertEqual(0o644, get_mode(cfgpath, tmpd))
+
+    def test_apply_network_config_rh(self):
+        expected_cfgs = {
+            self.ifcfg_path('eth0'): dedent("""\
+                BOOTPROTO=none
+                DEFROUTE=yes
+                DEVICE=eth0
+                GATEWAY=192.168.1.254
+                IPADDR=192.168.1.5
+                NETMASK=255.255.255.0
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                TYPE=Ethernet
+                USERCTL=no
+                """),
+            self.ifcfg_path('eth1'): dedent("""\
+                BOOTPROTO=dhcp
+                DEVICE=eth1
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                TYPE=Ethernet
+                USERCTL=no
+                """),
+            self.control_path(): dedent("""\
+                NETWORKING=yes
+                """),
+        }
+        # rh_distro.apply_network_config(V1_NET_CFG, False)
+        self._apply_and_verify(self.distro.apply_network_config,
+                               V1_NET_CFG,
+                               expected_cfgs=expected_cfgs.copy())
+
+    def test_apply_network_config_ipv6_rh(self):
+        expected_cfgs = {
+            self.ifcfg_path('eth0'): dedent("""\
+                BOOTPROTO=none
+                DEFROUTE=yes
+                DEVICE=eth0
+                IPV6ADDR=2607:f0d0:1002:0011::2/64
+                IPV6INIT=yes
+                IPV6_DEFAULTGW=2607:f0d0:1002:0011::1
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                TYPE=Ethernet
+                USERCTL=no
+                """),
+            self.ifcfg_path('eth1'): dedent("""\
+                BOOTPROTO=dhcp
+                DEVICE=eth1
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                TYPE=Ethernet
+                USERCTL=no
+                """),
+            self.control_path(): dedent("""\
+                NETWORKING=yes
+                NETWORKING_IPV6=yes
+                IPV6_AUTOCONF=no
+                """),
+        }
+        # rh_distro.apply_network_config(V1_NET_CFG_IPV6, False)
+        self._apply_and_verify(self.distro.apply_network_config,
+                               V1_NET_CFG_IPV6,
+                               expected_cfgs=expected_cfgs.copy())
+
+
+class TestNetCfgDistroOpensuse(TestNetCfgDistroBase):
+
+    def setUp(self):
+        super(TestNetCfgDistroOpensuse, self).setUp()
+        self.distro = self._get_distro('opensuse', renderers=['sysconfig'])
+
+    def ifcfg_path(self, ifname):
+        return '/etc/sysconfig/network/ifcfg-%s' % ifname
+
+    def _apply_and_verify(self, apply_fn, config, expected_cfgs=None,
+                          bringup=False):
+        if not expected_cfgs:
+            raise ValueError('expected_cfgs must not be None')
+
+        tmpd = None
+        with mock.patch('cloudinit.net.sysconfig.available') as m_avail:
+            m_avail.return_value = True
+            with self.reRooted(tmpd) as tmpd:
+                apply_fn(config, bringup)
+
+        results = dir2dict(tmpd)
+        for cfgpath, expected in expected_cfgs.items():
+            self.assertCfgEquals(expected, results[cfgpath])
+            self.assertEqual(0o644, get_mode(cfgpath, tmpd))
+
+    def test_apply_network_config_opensuse(self):
+        """Opensuse uses apply_network_config and renders sysconfig"""
+        expected_cfgs = {
+            self.ifcfg_path('eth0'): dedent("""\
+                BOOTPROTO=none
+                DEFROUTE=yes
+                DEVICE=eth0
+                GATEWAY=192.168.1.254
+                IPADDR=192.168.1.5
+                NETMASK=255.255.255.0
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                TYPE=Ethernet
+                USERCTL=no
+                """),
+            self.ifcfg_path('eth1'): dedent("""\
+                BOOTPROTO=dhcp
+                DEVICE=eth1
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                TYPE=Ethernet
+                USERCTL=no
+                """),
+        }
+        self._apply_and_verify(self.distro.apply_network_config,
+                               V1_NET_CFG,
+                               expected_cfgs=expected_cfgs.copy())
+
+    def test_apply_network_config_ipv6_opensuse(self):
+        """Opensuse uses apply_network_config and renders sysconfig w/ipv6"""
+        expected_cfgs = {
+            self.ifcfg_path('eth0'): dedent("""\
+                BOOTPROTO=none
+                DEFROUTE=yes
+                DEVICE=eth0
+                IPV6ADDR=2607:f0d0:1002:0011::2/64
+                IPV6INIT=yes
+                IPV6_DEFAULTGW=2607:f0d0:1002:0011::1
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                TYPE=Ethernet
+                USERCTL=no
+            """),
+            self.ifcfg_path('eth1'): dedent("""\
+                BOOTPROTO=dhcp
+                DEVICE=eth1
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                TYPE=Ethernet
+                USERCTL=no
+            """),
+        }
+        self._apply_and_verify(self.distro.apply_network_config,
+                               V1_NET_CFG_IPV6,
+                               expected_cfgs=expected_cfgs.copy())
+
+
+def get_mode(path, target=None):
+    return os.stat(util.target_path(target, path)).st_mode & 0o777
 
 # vi: ts=4 expandtab
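
The refactor above swaps per-test write_file interception for a
render-then-inspect pattern: re-root cloud-init's file helpers into a temp
dir, apply the network config, then snapshot the tree with dir2dict and
assert on the resulting {path: content} map. A rough sketch of dir2dict's
assumed semantics (the real helper lives in cloudinit.tests.helpers):

    import os

    def dir2dict_sketch(startdir):
        flist = {}
        for root, _dirs, files in os.walk(startdir):
            for fname in files:
                fpath = os.path.join(root, fname)
                # Key on the path below startdir, e.g. '/etc/rc.conf'
                with open(fpath, 'r') as fh:
                    flist[fpath[len(startdir):]] = fh.read()
        return flist
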
diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py
index 64d9f9f..46778e9 100644
--- a/tests/unittests/test_ds_identify.py
+++ b/tests/unittests/test_ds_identify.py
@@ -12,6 +12,7 @@ from cloudinit.tests.helpers import (
 
 from cloudinit.sources import DataSourceIBMCloud as ds_ibm
 from cloudinit.sources import DataSourceSmartOS as ds_smartos
+from cloudinit.sources import DataSourceOracle as ds_oracle
 
 UNAME_MYSYS = ("Linux bart 4.4.0-62-generic #83-Ubuntu "
                "SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 GNU/Linux")
@@ -88,6 +89,7 @@ CallReturn = namedtuple('CallReturn',
 
 class DsIdentifyBase(CiTestCase):
     dsid_path = os.path.realpath('tools/ds-identify')
+    allowed_subp = ['sh']
 
     def call(self, rootd=None, mocks=None, func="main", args=None, files=None,
              policy_dmi=DI_DEFAULT_POLICY,
@@ -598,6 +600,18 @@ class TestIsIBMProvisioning(DsIdentifyBase):
         self.assertIn("from current boot", ret.stderr)
 
 
+class TestOracle(DsIdentifyBase):
+    def test_found_by_chassis(self):
+        """Simple positive test of Oracle by chassis id."""
+        self._test_ds_found('Oracle')
+
+    def test_not_found(self):
+        """Simple negative test of Oracle."""
+        mycfg = copy.deepcopy(VALID_CFG['Oracle'])
+        mycfg['files'][P_CHASSIS_ASSET_TAG] = "Not Oracle"
+        self._check_via_dict(mycfg, rc=RC_NOT_FOUND)
+
+
 def blkid_out(disks=None):
     """Convert a list of disk dictionaries into blkid content."""
     if disks is None:
@@ -838,6 +852,12 @@ VALID_CFG = {
              },
         ],
     },
+    'Oracle': {
+        'ds': 'Oracle',
+        'files': {
+            P_CHASSIS_ASSET_TAG: ds_oracle.CHASSIS_ASSET_TAG + '\n',
+        }
+    },
     'SmartOS-bhyve': {
         'ds': 'SmartOS',
         'mocks': [
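
The new Oracle cases drive ds-identify's chassis-asset-tag check. A minimal
Python sketch of that probe, assuming the conventional sysfs DMI path
(ds-identify itself is shell; the tag value mirrors
ds_oracle.CHASSIS_ASSET_TAG as used in VALID_CFG):

    def is_oracle_chassis(path='/sys/class/dmi/id/chassis_asset_tag',
                          tag='OracleCloud.com'):
        try:
            with open(path) as fh:
                return fh.read().strip() == tag
        except (IOError, OSError):
            return False
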
diff --git a/tests/unittests/test_handler/test_handler_apt_source_v3.py b/tests/unittests/test_handler/test_handler_apt_source_v3.py
index 7a64c23..90fe6ee 100644
--- a/tests/unittests/test_handler/test_handler_apt_source_v3.py
+++ b/tests/unittests/test_handler/test_handler_apt_source_v3.py
@@ -48,6 +48,10 @@ ADD_APT_REPO_MATCH = r"^[\w-]+:\w"
 
 TARGET = None
 
+MOCK_LSB_RELEASE_DATA = {
+    'id': 'Ubuntu', 'description': 'Ubuntu 18.04.1 LTS',
+    'release': '18.04', 'codename': 'bionic'}
+
 
 class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
     """TestAptSourceConfig
@@ -64,6 +68,9 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
         self.aptlistfile3 = os.path.join(self.tmp, "single-deb3.list")
         self.join = os.path.join
         self.matcher = re.compile(ADD_APT_REPO_MATCH).search
+        self.add_patch(
+            'cloudinit.config.cc_apt_configure.util.lsb_release',
+            'm_lsb_release', return_value=MOCK_LSB_RELEASE_DATA.copy())
 
     @staticmethod
     def _add_apt_sources(*args, **kwargs):
@@ -76,7 +83,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
         Get the most basic default mirror and release info to be used in tests
         """
         params = {}
-        params['RELEASE'] = util.lsb_release()['codename']
+        params['RELEASE'] = MOCK_LSB_RELEASE_DATA['release']
         arch = 'amd64'
         params['MIRROR'] = cc_apt_configure.\
             get_default_mirrors(arch)["PRIMARY"]
@@ -464,7 +471,7 @@ class TestAptSourceConfig(t_help.FilesystemMockingTestCase):
                              'uri':
                              'http://testsec.ubuntu.com/%s/' % component}]}
         post = ("%s_dists_%s-updates_InRelease" %
-                (component, util.lsb_release()['codename']))
+                (component, MOCK_LSB_RELEASE_DATA['codename']))
         fromfn = ("%s/%s_%s" % (pre, archive, post))
         tofn = ("%s/test.ubuntu.com_%s" % (pre, post))
 
@@ -942,7 +949,8 @@ deb http://ubuntu.com/ubuntu/ xenial-proposed main""")
         self.assertEqual(
             orig, cc_apt_configure.disable_suites(["proposed"], orig, rel))
 
-    def test_apt_v3_mirror_search_dns(self):
+    @mock.patch("cloudinit.util.get_hostname", return_value='abc.localdomain')
+    def test_apt_v3_mirror_search_dns(self, m_get_hostname):
         """test_apt_v3_mirror_search_dns - Test searching dns patterns"""
         pmir = "phit"
         smir = "shit"
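
The setUp patch above pins util.lsb_release so the assertions no longer
depend on the host's release; the .copy() keeps one test's mutation of the
dict from leaking into the next. The same pattern in isolation:

    from unittest import mock

    PINNED = {'id': 'Ubuntu', 'description': 'Ubuntu 18.04.1 LTS',
              'release': '18.04', 'codename': 'bionic'}

    with mock.patch('cloudinit.config.cc_apt_configure.util.lsb_release',
                    return_value=PINNED.copy()):
        # Code under test now always sees bionic, regardless of the host.
        pass
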
diff --git a/tests/unittests/test_handler/test_handler_bootcmd.py b/tests/unittests/test_handler/test_handler_bootcmd.py
index b137526..a76760f 100644
--- a/tests/unittests/test_handler/test_handler_bootcmd.py
+++ b/tests/unittests/test_handler/test_handler_bootcmd.py
@@ -118,7 +118,8 @@ class TestBootcmd(CiTestCase):
             'echo {0} $INSTANCE_ID > {1}'.format(my_id, out_file)]}
 
         with mock.patch(self._etmpfile_path, FakeExtendedTempFile):
-            handle('cc_bootcmd', valid_config, cc, LOG, [])
+            with self.allow_subp(['/bin/sh']):
+                handle('cc_bootcmd', valid_config, cc, LOG, [])
         self.assertEqual(my_id + ' iid-datasource-none\n',
                          util.load_file(out_file))
 
@@ -128,12 +129,13 @@ class TestBootcmd(CiTestCase):
         valid_config = {'bootcmd': ['exit 1']}  # Script with error
 
         with mock.patch(self._etmpfile_path, FakeExtendedTempFile):
-            with self.assertRaises(util.ProcessExecutionError) as ctxt_manager:
-                handle('does-not-matter', valid_config, cc, LOG, [])
+            with self.allow_subp(['/bin/sh']):
+                with self.assertRaises(util.ProcessExecutionError) as ctxt:
+                    handle('does-not-matter', valid_config, cc, LOG, [])
         self.assertIn(
             'Unexpected error while running command.\n'
             "Command: ['/bin/sh',",
-            str(ctxt_manager.exception))
+            str(ctxt.exception))
         self.assertIn(
             'Failed to run bootcmd module does-not-matter',
             self.logs.getvalue())
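
These changes opt in to CiTestCase's subprocess guard: util.subp is
disallowed by default and a test whitelists commands via the allow_subp
context manager (or the allowed_subp class attribute seen elsewhere in this
diff). A rough sketch of how such a guard can work (illustrative, not the
helpers' actual code):

    import contextlib

    class SubpGuard(object):
        def __init__(self, real_subp):
            self.allowed = []
            self.real_subp = real_subp

        def subp(self, args, **kwargs):
            if args[0] not in self.allowed:
                raise AssertionError('unexpected subp call: %s' % (args,))
            return self.real_subp(args, **kwargs)

        @contextlib.contextmanager
        def allow_subp(self, progs):
            saved = self.allowed
            self.allowed = saved + list(progs)
            try:
                yield
            finally:
                self.allowed = saved
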
diff --git a/tests/unittests/test_handler/test_handler_chef.py b/tests/unittests/test_handler/test_handler_chef.py
index f4bbd66..b16532e 100644
--- a/tests/unittests/test_handler/test_handler_chef.py
+++ b/tests/unittests/test_handler/test_handler_chef.py
@@ -36,13 +36,21 @@ class TestInstallChefOmnibus(HttprettyTestCase):
 
     @mock.patch("cloudinit.config.cc_chef.OMNIBUS_URL", OMNIBUS_URL_HTTP)
     def test_install_chef_from_omnibus_runs_chef_url_content(self):
-        """install_chef_from_omnibus runs downloaded OMNIBUS_URL as script."""
-        chef_outfile = self.tmp_path('chef.out', self.new_root)
-        response = '#!/bin/bash\necho "Hi Mom" > {0}'.format(chef_outfile)
+        """install_chef_from_omnibus calls subp_blob_in_tempfile."""
+        response = b'#!/bin/bash\necho "Hi Mom"'
         httpretty.register_uri(
             httpretty.GET, cc_chef.OMNIBUS_URL, body=response, status=200)
-        cc_chef.install_chef_from_omnibus()
-        self.assertEqual('Hi Mom\n', util.load_file(chef_outfile))
+        ret = (None, None)  # (stdout, stderr); None since capture=False
+
+        with mock.patch("cloudinit.config.cc_chef.util.subp_blob_in_tempfile",
+                        return_value=ret) as m_subp_blob:
+            cc_chef.install_chef_from_omnibus()
+        # admittedly whitebox, but assuming subp_blob_in_tempfile works
+        # this should be fine.
+        self.assertEqual(
+            [mock.call(blob=response, args=[], basename='chef-omnibus-install',
+                       capture=False)],
+            m_subp_blob.call_args_list)
 
     @mock.patch('cloudinit.config.cc_chef.url_helper.readurl')
     @mock.patch('cloudinit.config.cc_chef.util.subp_blob_in_tempfile')
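
The rewritten omnibus test asserts that the downloaded installer is handed
to util.subp_blob_in_tempfile instead of being executed against the live
filesystem; the expected call shape (blob, args, basename, capture) is
visible in the assertion. What such a helper plausibly does, sketched here
(the real cloudinit.util implementation differs):

    import os
    import subprocess
    import tempfile

    def subp_blob_in_tempfile_sketch(blob, args=None, basename='blob',
                                     capture=True):
        # Write the blob to a named temp file, mark it executable, run it.
        tmpd = tempfile.mkdtemp()
        path = os.path.join(tmpd, basename)
        with open(path, 'wb') as fh:
            fh.write(blob)
        os.chmod(path, 0o700)
        proc = subprocess.run([path] + (args or []), capture_output=capture)
        return (proc.stdout, proc.stderr) if capture else (None, None)
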
diff --git a/tests/unittests/test_handler/test_handler_etc_hosts.py b/tests/unittests/test_handler/test_handler_etc_hosts.py
index ced05a8..d854afc 100644
--- a/tests/unittests/test_handler/test_handler_etc_hosts.py
+++ b/tests/unittests/test_handler/test_handler_etc_hosts.py
@@ -49,6 +49,7 @@ class TestHostsFile(t_help.FilesystemMockingTestCase):
         if '192.168.1.1\tblah.blah.us\tblah' not in contents:
             self.assertIsNone('Default etc/hosts content modified')
 
+    @t_help.skipUnlessJinja()
     def test_write_etc_hosts_suse_template(self):
         cfg = {
             'manage_etc_hosts': 'template',
diff --git a/tests/unittests/test_handler/test_handler_lxd.py b/tests/unittests/test_handler/test_handler_lxd.py
index 4dd7e09..2478ebc 100644
--- a/tests/unittests/test_handler/test_handler_lxd.py
+++ b/tests/unittests/test_handler/test_handler_lxd.py
@@ -43,12 +43,12 @@ class TestLxd(t_help.CiTestCase):
         self.assertTrue(mock_util.which.called)
         # no bridge config, so maybe_cleanup should not be called.
         self.assertFalse(m_maybe_clean.called)
-        init_call = mock_util.subp.call_args_list[0][0][0]
-        self.assertEqual(init_call,
-                         ['lxd', 'init', '--auto',
-                          '--network-address=0.0.0.0',
-                          '--storage-backend=zfs',
-                          '--storage-pool=poolname'])
+        self.assertEqual(
+            [mock.call(['lxd', 'waitready', '--timeout=300']),
+             mock.call(
+                 ['lxd', 'init', '--auto', '--network-address=0.0.0.0',
+                  '--storage-backend=zfs', '--storage-pool=poolname'])],
+            mock_util.subp.call_args_list)
 
     @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
diff --git a/tests/unittests/test_handler/test_handler_ntp.py b/tests/unittests/test_handler/test_handler_ntp.py
index 6fe3659..0f22e57 100644
--- a/tests/unittests/test_handler/test_handler_ntp.py
+++ b/tests/unittests/test_handler/test_handler_ntp.py
@@ -3,6 +3,7 @@
 from cloudinit.config import cc_ntp
 from cloudinit.sources import DataSourceNone
 from cloudinit import (distros, helpers, cloud, util)
+
 from cloudinit.tests.helpers import (
     CiTestCase, FilesystemMockingTestCase, mock, skipUnlessJsonSchema)
 
diff --git a/tests/unittests/test_handler/test_handler_resizefs.py b/tests/unittests/test_handler/test_handler_resizefs.py
index f92175f..feca56c 100644
--- a/tests/unittests/test_handler/test_handler_resizefs.py
+++ b/tests/unittests/test_handler/test_handler_resizefs.py
@@ -150,10 +150,12 @@ class TestResizefs(CiTestCase):
         self.assertEqual(('growfs', '-y', devpth),
                          _resize_ufs(mount_point, devpth))
 
+    @mock.patch('cloudinit.util.is_container', return_value=False)
     @mock.patch('cloudinit.util.get_mount_info')
     @mock.patch('cloudinit.util.get_device_info_from_zpool')
     @mock.patch('cloudinit.util.parse_mount')
-    def test_handle_zfs_root(self, mount_info, zpool_info, parse_mount):
+    def test_handle_zfs_root(self, mount_info, zpool_info, parse_mount,
+                             is_container):
         devpth = 'vmzroot/ROOT/freebsd'
         disk = 'gpt/system'
         fs_type = 'zfs'
@@ -354,8 +356,10 @@ class TestMaybeGetDevicePathAsWritableBlock(CiTestCase):
             ('btrfs', 'filesystem', 'resize', 'max', '/'),
             _resize_btrfs("/", "/dev/sda1"))
 
+    @mock.patch('cloudinit.util.is_container', return_value=True)
     @mock.patch('cloudinit.util.is_FreeBSD')
-    def test_maybe_get_writable_device_path_zfs_freebsd(self, freebsd):
+    def test_maybe_get_writable_device_path_zfs_freebsd(self, freebsd,
+                                                        m_is_container):
         freebsd.return_value = True
         info = 'dev=gpt/system mnt_point=/ path=/'
         devpth = maybe_get_writable_device_path('gpt/system', info, LOG)
diff --git a/tests/unittests/test_handler/test_schema.py b/tests/unittests/test_handler/test_schema.py
index fb266fa..1bad07f 100644
--- a/tests/unittests/test_handler/test_schema.py
+++ b/tests/unittests/test_handler/test_schema.py
@@ -4,7 +4,7 @@ from cloudinit.config.schema import (
     CLOUD_CONFIG_HEADER, SchemaValidationError, annotated_cloudconfig_file,
     get_schema_doc, get_schema, validate_cloudconfig_file,
     validate_cloudconfig_schema, main)
-from cloudinit.util import subp, write_file
+from cloudinit.util import write_file
 
 from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJsonSchema
 
@@ -406,8 +406,14 @@ class CloudTestsIntegrationTest(CiTestCase):
         integration_testdir = os.path.sep.join(
             [testsdir, 'cloud_tests', 'testcases'])
         errors = []
-        out, _ = subp(['find', integration_testdir, '-name', '*yaml'])
-        for filename in out.splitlines():
+
+        yaml_files = []
+        for root, _dirnames, filenames in os.walk(integration_testdir):
+            yaml_files.extend([os.path.join(root, f)
+                               for f in filenames if f.endswith(".yaml")])
+        self.assertTrue(len(yaml_files) > 0)
+
+        for filename in yaml_files:
             test_cfg = safe_load(open(filename))
             cloud_config = test_cfg.get('cloud_config')
             if cloud_config:
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 5ab61cf..5d9c7d9 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -1,6 +1,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit import net
+from cloudinit import distros
 from cloudinit.net import cmdline
 from cloudinit.net import (
     eni, interface_has_own_mac, natural_sort_key, netplan, network_state,
@@ -129,7 +130,40 @@ OS_SAMPLES = [
         'in_macs': {
             'fa:16:3e:ed:9a:59': 'eth0',
         },
-        'out_sysconfig': [
+        'out_sysconfig_opensuse': [
+            ('etc/sysconfig/network/ifcfg-eth0',
+             """
+# Created by cloud-init on instance boot automatically, do not edit.
+#
+BOOTPROTO=none
+DEFROUTE=yes
+DEVICE=eth0
+GATEWAY=172.19.3.254
+HWADDR=fa:16:3e:ed:9a:59
+IPADDR=172.19.1.34
+NETMASK=255.255.252.0
+NM_CONTROLLED=no
+ONBOOT=yes
+TYPE=Ethernet
+USERCTL=no
+""".lstrip()),
+            ('etc/resolv.conf',
+             """
+; Created by cloud-init on instance boot automatically, do not edit.
+;
+nameserver 172.19.0.12
+""".lstrip()),
+            ('etc/NetworkManager/conf.d/99-cloud-init.conf',
+             """
+# Created by cloud-init on instance boot automatically, do not edit.
+#
+[main]
+dns = none
+""".lstrip()),
+            ('etc/udev/rules.d/70-persistent-net.rules',
+             "".join(['SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ',
+                      'ATTR{address}=="fa:16:3e:ed:9a:59", NAME="eth0"\n']))],
+        'out_sysconfig_rhel': [
             ('etc/sysconfig/network-scripts/ifcfg-eth0',
              """
 # Created by cloud-init on instance boot automatically, do not edit.
@@ -162,6 +196,7 @@ dns = none
             ('etc/udev/rules.d/70-persistent-net.rules',
              "".join(['SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ',
                       'ATTR{address}=="fa:16:3e:ed:9a:59", NAME="eth0"\n']))]
+
     },
     {
         'in_data': {
@@ -195,7 +230,42 @@ dns = none
         'in_macs': {
             'fa:16:3e:ed:9a:59': 'eth0',
         },
-        'out_sysconfig': [
+        'out_sysconfig_opensuse': [
+            ('etc/sysconfig/network/ifcfg-eth0',
+             """
+# Created by cloud-init on instance boot automatically, do not edit.
+#
+BOOTPROTO=none
+DEFROUTE=yes
+DEVICE=eth0
+GATEWAY=172.19.3.254
+HWADDR=fa:16:3e:ed:9a:59
+IPADDR=172.19.1.34
+IPADDR1=10.0.0.10
+NETMASK=255.255.252.0
+NETMASK1=255.255.255.0
+NM_CONTROLLED=no
+ONBOOT=yes
+TYPE=Ethernet
+USERCTL=no
+""".lstrip()),
+            ('etc/resolv.conf',
+             """
+; Created by cloud-init on instance boot automatically, do not edit.
+;
+nameserver 172.19.0.12
+""".lstrip()),
+            ('etc/NetworkManager/conf.d/99-cloud-init.conf',
+             """
+# Created by cloud-init on instance boot automatically, do not edit.
+#
+[main]
+dns = none
+""".lstrip()),
+            ('etc/udev/rules.d/70-persistent-net.rules',
+             "".join(['SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ',
+                      'ATTR{address}=="fa:16:3e:ed:9a:59", NAME="eth0"\n']))],
+        'out_sysconfig_rhel': [
             ('etc/sysconfig/network-scripts/ifcfg-eth0',
              """
 # Created by cloud-init on instance boot automatically, do not edit.
@@ -230,6 +300,7 @@ dns = none
             ('etc/udev/rules.d/70-persistent-net.rules',
              "".join(['SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ',
                       'ATTR{address}=="fa:16:3e:ed:9a:59", NAME="eth0"\n']))]
+
     },
     {
         'in_data': {
@@ -283,7 +354,44 @@ dns = none
         'in_macs': {
             'fa:16:3e:ed:9a:59': 'eth0',
         },
-        'out_sysconfig': [
+        'out_sysconfig_opensuse': [
+            ('etc/sysconfig/network/ifcfg-eth0',
+             """
+# Created by cloud-init on instance boot automatically, do not edit.
+#
+BOOTPROTO=none
+DEFROUTE=yes
+DEVICE=eth0
+GATEWAY=172.19.3.254
+HWADDR=fa:16:3e:ed:9a:59
+IPADDR=172.19.1.34
+IPV6ADDR=2001:DB8::10/64
+IPV6ADDR_SECONDARIES="2001:DB9::10/64 2001:DB10::10/64"
+IPV6INIT=yes
+IPV6_DEFAULTGW=2001:DB8::1
+NETMASK=255.255.252.0
+NM_CONTROLLED=no
+ONBOOT=yes
+TYPE=Ethernet
+USERCTL=no
+""".lstrip()),
+            ('etc/resolv.conf',
+             """
+; Created by cloud-init on instance boot automatically, do not edit.
+;
+nameserver 172.19.0.12
+""".lstrip()),
+            ('etc/NetworkManager/conf.d/99-cloud-init.conf',
+             """
+# Created by cloud-init on instance boot automatically, do not edit.
+#
+[main]
+dns = none
+""".lstrip()),
+            ('etc/udev/rules.d/70-persistent-net.rules',
+             "".join(['SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ',
+                      'ATTR{address}=="fa:16:3e:ed:9a:59", NAME="eth0"\n']))],
+        'out_sysconfig_rhel': [
             ('etc/sysconfig/network-scripts/ifcfg-eth0',
              """
 # Created by cloud-init on instance boot automatically, do not edit.
@@ -643,6 +751,7 @@ iface br0 inet static
     bridge_stp off
     bridge_waitport 1 eth3
     bridge_waitport 2 eth4
+    hwaddress bb:bb:bb:bb:bb:aa
 
 # control-alias br0
 iface br0 inet6 static
@@ -708,6 +817,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                         interfaces:
                         - eth1
                         - eth2
+                        macaddress: aa:bb:cc:dd:ee:ff
                         parameters:
                             mii-monitor-interval: 100
                             mode: active-backup
@@ -720,6 +830,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                         interfaces:
                         - eth3
                         - eth4
+                        macaddress: bb:bb:bb:bb:bb:aa
                         nameservers:
                             addresses:
                             - 8.8.8.8
@@ -803,6 +914,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                 IPV6ADDR=2001:1::1/64
                 IPV6INIT=yes
                 IPV6_DEFAULTGW=2001:4800:78ff:1b::1
+                MACADDR=bb:bb:bb:bb:bb:aa
                 NETMASK=255.255.255.0
                 NM_CONTROLLED=no
                 ONBOOT=yes
@@ -973,6 +1085,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                       use_tempaddr: 1
                       forwarding: 1
                       # basically anything in /proc/sys/net/ipv6/conf/.../
+                  mac_address: bb:bb:bb:bb:bb:aa
                   params:
                       bridge_ageing: 250
                       bridge_bridgeprio: 22
@@ -1075,6 +1188,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                      interfaces:
                      - bond0s0
                      - bond0s1
+                     macaddress: aa:bb:cc:dd:e8:ff
                      mtu: 9000
                      parameters:
                          mii-monitor-interval: 100
@@ -1148,7 +1262,59 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
              version: 2
         """),
 
-        'expected_sysconfig': {
+        'expected_sysconfig_opensuse': {
+            'ifcfg-bond0': textwrap.dedent("""\
+        BONDING_MASTER=yes
+        BONDING_OPTS="mode=active-backup xmit_hash_policy=layer3+4 miimon=100"
+        BONDING_SLAVE0=bond0s0
+        BONDING_SLAVE1=bond0s1
+        BOOTPROTO=none
+        DEFROUTE=yes
+        DEVICE=bond0
+        GATEWAY=192.168.0.1
+        MACADDR=aa:bb:cc:dd:e8:ff
+        IPADDR=192.168.0.2
+        IPADDR1=192.168.1.2
+        IPV6ADDR=2001:1::1/92
+        IPV6INIT=yes
+        MTU=9000
+        NETMASK=255.255.255.0
+        NETMASK1=255.255.255.0
+        NM_CONTROLLED=no
+        ONBOOT=yes
+        TYPE=Bond
+        USERCTL=no
+        """),
+            'ifcfg-bond0s0': textwrap.dedent("""\
+        BOOTPROTO=none
+        DEVICE=bond0s0
+        HWADDR=aa:bb:cc:dd:e8:00
+        MASTER=bond0
+        NM_CONTROLLED=no
+        ONBOOT=yes
+        SLAVE=yes
+        TYPE=Ethernet
+        USERCTL=no
+        """),
+            'ifroute-bond0': textwrap.dedent("""\
+        ADDRESS0=10.1.3.0
+        GATEWAY0=192.168.0.3
+        NETMASK0=255.255.255.0
+        """),
+            'ifcfg-bond0s1': textwrap.dedent("""\
+        BOOTPROTO=none
+        DEVICE=bond0s1
+        HWADDR=aa:bb:cc:dd:e8:01
+        MASTER=bond0
+        NM_CONTROLLED=no
+        ONBOOT=yes
+        SLAVE=yes
+        TYPE=Ethernet
+        USERCTL=no
+        """),
+        },
+
+        'expected_sysconfig_rhel': {
             'ifcfg-bond0': textwrap.dedent("""\
         BONDING_MASTER=yes
         BONDING_OPTS="mode=active-backup xmit_hash_policy=layer3+4 miimon=100"
@@ -1487,6 +1653,12 @@ def _setup_test(tmp_dir, mock_get_devicelist, mock_read_sys_net,
 
 class TestGenerateFallbackConfig(CiTestCase):
 
+    def setUp(self):
+        super(TestGenerateFallbackConfig, self).setUp()
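+        # Pin the kernel command line; cloudinit.net consults it and the
+        # tests should not depend on the host's /proc/cmdline.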
+        self.add_patch(
+            "cloudinit.util.get_cmdline", "m_get_cmdline",
+            return_value="root=/dev/sda1")
+
     @mock.patch("cloudinit.net.sys_dev_path")
     @mock.patch("cloudinit.net.read_sys_net")
     @mock.patch("cloudinit.net.get_devicelist")
@@ -1521,7 +1693,7 @@ class TestGenerateFallbackConfig(CiTestCase):
         # don't set rulepath so eni writes them
         renderer = eni.Renderer(
             {'eni_path': 'interfaces', 'netrules_path': 'netrules'})
-        renderer.render_network_state(ns, render_dir)
+        renderer.render_network_state(ns, target=render_dir)
 
         self.assertTrue(os.path.exists(os.path.join(render_dir,
                                                     'interfaces')))
@@ -1585,7 +1757,7 @@ iface eth0 inet dhcp
         # don't set rulepath so eni writes them
         renderer = eni.Renderer(
             {'eni_path': 'interfaces', 'netrules_path': 'netrules'})
-        renderer.render_network_state(ns, render_dir)
+        renderer.render_network_state(ns, target=render_dir)
 
         self.assertTrue(os.path.exists(os.path.join(render_dir,
                                                     'interfaces')))
@@ -1676,7 +1848,7 @@ iface eth1 inet dhcp
         self.assertEqual(0, mock_settle.call_count)
 
 
-class TestSysConfigRendering(CiTestCase):
+class TestRhelSysConfigRendering(CiTestCase):
 
     with_logs = True
 
@@ -1684,6 +1856,13 @@ class TestSysConfigRendering(CiTestCase):
     header = ('# Created by cloud-init on instance boot automatically, '
               'do not edit.\n#\n')
 
+    expected_name = 'expected_sysconfig'
+
+    def _get_renderer(self):
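+        # Build a sysconfig renderer from the rhel distro's renderer config
+        # (templates under /etc/sysconfig/network-scripts).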
+        distro_cls = distros.fetch('rhel')
+        return sysconfig.Renderer(
+            config=distro_cls.renderer_configs.get('sysconfig'))
+
     def _render_and_read(self, network_config=None, state=None, dir=None):
         if dir is None:
             dir = self.tmp_dir()
@@ -1695,8 +1874,8 @@ class TestSysConfigRendering(CiTestCase):
         else:
             raise ValueError("Expected data or state, got neither")
 
-        renderer = sysconfig.Renderer()
-        renderer.render_network_state(ns, dir)
+        renderer = self._get_renderer()
+        renderer.render_network_state(ns, target=dir)
         return dir2dict(dir)
 
     def _compare_files_to_expected(self, expected, found):
@@ -1722,12 +1901,13 @@ class TestSysConfigRendering(CiTestCase):
         if missing:
             raise AssertionError("Missing headers in: %s" % missing)
 
+    @mock.patch("cloudinit.net.util.get_cmdline", return_value="root=myroot")
     @mock.patch("cloudinit.net.sys_dev_path")
     @mock.patch("cloudinit.net.read_sys_net")
     @mock.patch("cloudinit.net.get_devicelist")
     def test_default_generation(self, mock_get_devicelist,
                                 mock_read_sys_net,
-                                mock_sys_dev_path):
+                                mock_sys_dev_path, m_get_cmdline):
         tmp_dir = self.tmp_dir()
         _setup_test(tmp_dir, mock_get_devicelist,
                     mock_read_sys_net, mock_sys_dev_path)
@@ -1739,8 +1919,8 @@ class TestSysConfigRendering(CiTestCase):
         render_dir = os.path.join(tmp_dir, "render")
         os.makedirs(render_dir)
 
-        renderer = sysconfig.Renderer()
-        renderer.render_network_state(ns, render_dir)
+        renderer = self._get_renderer()
+        renderer.render_network_state(ns, target=render_dir)
 
         render_file = 'etc/sysconfig/network-scripts/ifcfg-eth1000'
         with open(os.path.join(render_dir, render_file)) as fh:
@@ -1791,9 +1971,9 @@ USERCTL=no
         network_cfg = openstack.convert_net_json(net_json, known_macs=macs)
         ns = network_state.parse_net_config_data(network_cfg,
                                                  skip_broken=False)
-        renderer = sysconfig.Renderer()
+        renderer = self._get_renderer()
         with self.assertRaises(ValueError):
-            renderer.render_network_state(ns, render_dir)
+            renderer.render_network_state(ns, target=render_dir)
         self.assertEqual([], os.listdir(render_dir))
 
     def test_multiple_ipv6_default_gateways(self):
@@ -1829,9 +2009,9 @@ USERCTL=no
         network_cfg = openstack.convert_net_json(net_json, known_macs=macs)
         ns = network_state.parse_net_config_data(network_cfg,
                                                  skip_broken=False)
-        renderer = sysconfig.Renderer()
+        renderer = self._get_renderer()
         with self.assertRaises(ValueError):
-            renderer.render_network_state(ns, render_dir)
+            renderer.render_network_state(ns, target=render_dir)
         self.assertEqual([], os.listdir(render_dir))
 
     def test_openstack_rendering_samples(self):
@@ -1843,12 +2023,13 @@ USERCTL=no
                 ex_input, known_macs=ex_mac_addrs)
             ns = network_state.parse_net_config_data(network_cfg,
                                                      skip_broken=False)
-            renderer = sysconfig.Renderer()
+            renderer = self._get_renderer()
             # render multiple times to simulate reboots
-            renderer.render_network_state(ns, render_dir)
-            renderer.render_network_state(ns, render_dir)
-            renderer.render_network_state(ns, render_dir)
-            for fn, expected_content in os_sample.get('out_sysconfig', []):
+            renderer.render_network_state(ns, target=render_dir)
+            renderer.render_network_state(ns, target=render_dir)
+            renderer.render_network_state(ns, target=render_dir)
+            for fn, expected_content in os_sample.get('out_sysconfig_rhel',
+                                                      []):
                 with open(os.path.join(render_dir, fn)) as fh:
                     self.assertEqual(expected_content, fh.read())
 
@@ -1856,8 +2037,8 @@ USERCTL=no
         ns = network_state.parse_net_config_data(CONFIG_V1_SIMPLE_SUBNET)
         render_dir = self.tmp_path("render")
         os.makedirs(render_dir)
-        renderer = sysconfig.Renderer()
-        renderer.render_network_state(ns, render_dir)
+        renderer = self._get_renderer()
+        renderer.render_network_state(ns, target=render_dir)
         found = dir2dict(render_dir)
         nspath = '/etc/sysconfig/network-scripts/'
         self.assertNotIn(nspath + 'ifcfg-lo', found.keys())
@@ -1882,8 +2063,8 @@ USERCTL=no
         ns = network_state.parse_net_config_data(CONFIG_V1_EXPLICIT_LOOPBACK)
         render_dir = self.tmp_path("render")
         os.makedirs(render_dir)
-        renderer = sysconfig.Renderer()
-        renderer.render_network_state(ns, render_dir)
+        renderer = self._get_renderer()
+        renderer.render_network_state(ns, target=render_dir)
         found = dir2dict(render_dir)
         nspath = '/etc/sysconfig/network-scripts/'
         self.assertNotIn(nspath + 'ifcfg-lo', found.keys())
@@ -1900,33 +2081,332 @@ USERCTL=no
         self.assertEqual(expected, found[nspath + 'ifcfg-eth0'])
 
     def test_bond_config(self):
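+        # The bond entry carries distro-specific expectations; use the rhel
+        # variant here.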
+        expected_name = 'expected_sysconfig_rhel'
+        entry = NETWORK_CONFIGS['bond']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self._compare_files_to_expected(entry[expected_name], found)
+        self._assert_headers(found)
+
+    def test_vlan_config(self):
+        entry = NETWORK_CONFIGS['vlan']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+
+    def test_bridge_config(self):
+        entry = NETWORK_CONFIGS['bridge']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+
+    def test_manual_config(self):
+        entry = NETWORK_CONFIGS['manual']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+
+    def test_all_config(self):
+        entry = NETWORK_CONFIGS['all']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+        self.assertNotIn(
+            'WARNING: Network config: ignoring eth0.101 device-level mtu',
+            self.logs.getvalue())
+
+    def test_small_config(self):
+        entry = NETWORK_CONFIGS['small']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+
+    def test_v4_and_v6_static_config(self):
+        entry = NETWORK_CONFIGS['v4_and_v6_static']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+        expected_msg = (
+            'WARNING: Network config: ignoring iface0 device-level mtu:8999'
+            ' because ipv4 subnet-level mtu:9000 provided.')
+        self.assertIn(expected_msg, self.logs.getvalue())
+
+    def test_dhcpv6_only_config(self):
+        entry = NETWORK_CONFIGS['dhcpv6_only']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']))
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+
+
+class TestOpenSuseSysConfigRendering(CiTestCase):
+
+    with_logs = True
+
+    scripts_dir = '/etc/sysconfig/network'
+    header = ('# Created by cloud-init on instance boot automatically, '
+              'do not edit.\n#\n')
+
+    expected_name = 'expected_sysconfig'
+
+    def _get_renderer(self):
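+        # Build a sysconfig renderer from the opensuse distro's renderer
+        # config (templates under /etc/sysconfig/network).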
+        distro_cls = distros.fetch('opensuse')
+        return sysconfig.Renderer(
+            config=distro_cls.renderer_configs.get('sysconfig'))
+
+    def _render_and_read(self, network_config=None, state=None, dir=None):
+        if dir is None:
+            dir = self.tmp_dir()
+
+        if network_config:
+            ns = network_state.parse_net_config_data(network_config)
+        elif state:
+            ns = state
+        else:
+            raise ValueError("Expected data or state, got neither")
+
+        renderer = self._get_renderer()
+        renderer.render_network_state(ns, target=dir)
+        return dir2dict(dir)
+
+    def _compare_files_to_expected(self, expected, found):
+        orig_maxdiff = self.maxDiff
+        expected_d = dict(
+            (os.path.join(self.scripts_dir, k), util.load_shell_content(v))
+            for k, v in expected.items())
+
+        # only compare the files in scripts_dir
+        scripts_found = dict(
+            (k, util.load_shell_content(v)) for k, v in found.items()
+            if k.startswith(self.scripts_dir))
+        try:
+            self.maxDiff = None
+            self.assertEqual(expected_d, scripts_found)
+        finally:
+            self.maxDiff = orig_maxdiff
+
+    def _assert_headers(self, found):
+        missing = [f for f in found
+                   if (f.startswith(self.scripts_dir) and
+                       not found[f].startswith(self.header))]
+        if missing:
+            raise AssertionError("Missing headers in: %s" % missing)
+
+    @mock.patch("cloudinit.net.util.get_cmdline", return_value="root=myroot")
+    @mock.patch("cloudinit.net.sys_dev_path")
+    @mock.patch("cloudinit.net.read_sys_net")
+    @mock.patch("cloudinit.net.get_devicelist")
+    def test_default_generation(self, mock_get_devicelist,
+                                mock_read_sys_net,
+                                mock_sys_dev_path, m_get_cmdline):
+        tmp_dir = self.tmp_dir()
+        _setup_test(tmp_dir, mock_get_devicelist,
+                    mock_read_sys_net, mock_sys_dev_path)
+
+        network_cfg = net.generate_fallback_config()
+        ns = network_state.parse_net_config_data(network_cfg,
+                                                 skip_broken=False)
+
+        render_dir = os.path.join(tmp_dir, "render")
+        os.makedirs(render_dir)
+
+        renderer = self._get_renderer()
+        renderer.render_network_state(ns, target=render_dir)
+
+        render_file = 'etc/sysconfig/network/ifcfg-eth1000'
+        with open(os.path.join(render_dir, render_file)) as fh:
+            content = fh.read()
+            expected_content = """
+# Created by cloud-init on instance boot automatically, do not edit.
+#
+BOOTPROTO=dhcp
+DEVICE=eth1000
+HWADDR=07-1C-C6-75-A4-BE
+NM_CONTROLLED=no
+ONBOOT=yes
+TYPE=Ethernet
+USERCTL=no
+""".lstrip()
+            self.assertEqual(expected_content, content)
+
+    def test_multiple_ipv4_default_gateways(self):
+        """ValueError is raised when duplicate ipv4 gateways exist."""
+        net_json = {
+            "services": [{"type": "dns", "address": "172.19.0.12"}],
+            "networks": [{
+                "network_id": "dacd568d-5be6-4786-91fe-750c374b78b4",
+                "type": "ipv4", "netmask": "255.255.252.0",
+                "link": "tap1a81968a-79",
+                "routes": [{
+                    "netmask": "0.0.0.0",
+                    "network": "0.0.0.0",
+                    "gateway": "172.19.3.254",
+                }, {
+                    "netmask": "0.0.0.0",  # A second default gateway
+                    "network": "0.0.0.0",
+                    "gateway": "172.20.3.254",
+                }],
+                "ip_address": "172.19.1.34", "id": "network0"
+            }],
+            "links": [
+                {
+                    "ethernet_mac_address": "fa:16:3e:ed:9a:59",
+                    "mtu": None, "type": "bridge", "id":
+                    "tap1a81968a-79",
+                    "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f"
+                },
+            ],
+        }
+        macs = {'fa:16:3e:ed:9a:59': 'eth0'}
+        render_dir = self.tmp_dir()
+        network_cfg = openstack.convert_net_json(net_json, known_macs=macs)
+        ns = network_state.parse_net_config_data(network_cfg,
+                                                 skip_broken=False)
+        renderer = self._get_renderer()
+        with self.assertRaises(ValueError):
+            renderer.render_network_state(ns, target=render_dir)
+        self.assertEqual([], os.listdir(render_dir))
+
+    def test_multiple_ipv6_default_gateways(self):
+        """ValueError is raised when duplicate ipv6 gateways exist."""
+        net_json = {
+            "services": [{"type": "dns", "address": "172.19.0.12"}],
+            "networks": [{
+                "network_id": "public-ipv6",
+                "type": "ipv6", "netmask": "",
+                "link": "tap1a81968a-79",
+                "routes": [{
+                    "gateway": "2001:DB8::1",
+                    "netmask": "::",
+                    "network": "::"
+                }, {
+                    "gateway": "2001:DB9::1",
+                    "netmask": "::",
+                    "network": "::"
+                }],
+                "ip_address": "2001:DB8::10", "id": "network1"
+            }],
+            "links": [
+                {
+                    "ethernet_mac_address": "fa:16:3e:ed:9a:59",
+                    "mtu": None, "type": "bridge", "id":
+                    "tap1a81968a-79",
+                    "vif_id": "1a81968a-797a-400f-8a80-567f997eb93f"
+                },
+            ],
+        }
+        macs = {'fa:16:3e:ed:9a:59': 'eth0'}
+        render_dir = self.tmp_dir()
+        network_cfg = openstack.convert_net_json(net_json, known_macs=macs)
+        ns = network_state.parse_net_config_data(network_cfg,
+                                                 skip_broken=False)
+        renderer = self._get_renderer()
+        with self.assertRaises(ValueError):
+            renderer.render_network_state(ns, target=render_dir)
+        self.assertEqual([], os.listdir(render_dir))
+
+    def test_openstack_rendering_samples(self):
+        for os_sample in OS_SAMPLES:
+            render_dir = self.tmp_dir()
+            ex_input = os_sample['in_data']
+            ex_mac_addrs = os_sample['in_macs']
+            network_cfg = openstack.convert_net_json(
+                ex_input, known_macs=ex_mac_addrs)
+            ns = network_state.parse_net_config_data(network_cfg,
+                                                     skip_broken=False)
+            renderer = self._get_renderer()
+            # render multiple times to simulate reboots
+            renderer.render_network_state(ns, target=render_dir)
+            renderer.render_network_state(ns, target=render_dir)
+            renderer.render_network_state(ns, target=render_dir)
+            for fn, expected_content in os_sample.get('out_sysconfig_opensuse',
+                                                      []):
+                with open(os.path.join(render_dir, fn)) as fh:
+                    self.assertEqual(expected_content, fh.read())
+
+    def test_network_config_v1_samples(self):
+        ns = network_state.parse_net_config_data(CONFIG_V1_SIMPLE_SUBNET)
+        render_dir = self.tmp_path("render")
+        os.makedirs(render_dir)
+        renderer = self._get_renderer()
+        renderer.render_network_state(ns, target=render_dir)
+        found = dir2dict(render_dir)
+        nspath = '/etc/sysconfig/network/'
+        self.assertNotIn(nspath + 'ifcfg-lo', found.keys())
+        expected = """\
+# Created by cloud-init on instance boot automatically, do not edit.
+#
+BOOTPROTO=none
+DEFROUTE=yes
+DEVICE=interface0
+GATEWAY=10.0.2.2
+HWADDR=52:54:00:12:34:00
+IPADDR=10.0.2.15
+NETMASK=255.255.255.0
+NM_CONTROLLED=no
+ONBOOT=yes
+TYPE=Ethernet
+USERCTL=no
+"""
+        self.assertEqual(expected, found[nspath + 'ifcfg-interface0'])
+
+    def test_config_with_explicit_loopback(self):
+        ns = network_state.parse_net_config_data(CONFIG_V1_EXPLICIT_LOOPBACK)
+        render_dir = self.tmp_path("render")
+        os.makedirs(render_dir)
+        renderer = self._get_renderer()
+        renderer.render_network_state(ns, target=render_dir)
+        found = dir2dict(render_dir)
+        nspath = '/etc/sysconfig/network/'
+        self.assertNotIn(nspath + 'ifcfg-lo', found.keys())
+        expected = """\
+# Created by cloud-init on instance boot automatically, do not edit.
+#
+BOOTPROTO=dhcp
+DEVICE=eth0
+NM_CONTROLLED=no
+ONBOOT=yes
+TYPE=Ethernet
+USERCTL=no
+"""
+        self.assertEqual(expected, found[nspath + 'ifcfg-eth0'])
+
+    def test_bond_config(self):
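+        # The bond entry carries distro-specific expectations; use the
+        # opensuse variant here.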
+        expected_name = 'expected_sysconfig_opensuse'
         entry = NETWORK_CONFIGS['bond']
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
-        self._compare_files_to_expected(entry['expected_sysconfig'], found)
+        self._compare_files_to_expected(entry[expected_name], found)
         self._assert_headers(found)
 
     def test_vlan_config(self):
         entry = NETWORK_CONFIGS['vlan']
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
-        self._compare_files_to_expected(entry['expected_sysconfig'], found)
+        self._compare_files_to_expected(entry[self.expected_name], found)
         self._assert_headers(found)
 
     def test_bridge_config(self):
         entry = NETWORK_CONFIGS['bridge']
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
-        self._compare_files_to_expected(entry['expected_sysconfig'], found)
+        self._compare_files_to_expected(entry[self.expected_name], found)
         self._assert_headers(found)
 
     def test_manual_config(self):
         entry = NETWORK_CONFIGS['manual']
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
-        self._compare_files_to_expected(entry['expected_sysconfig'], found)
+        self._compare_files_to_expected(entry[self.expected_name], found)
         self._assert_headers(found)
 
     def test_all_config(self):
         entry = NETWORK_CONFIGS['all']
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
-        self._compare_files_to_expected(entry['expected_sysconfig'], found)
+        self._compare_files_to_expected(entry[self.expected_name], found)
         self._assert_headers(found)
         self.assertNotIn(
             'WARNING: Network config: ignoring eth0.101 device-level mtu',
@@ -1935,13 +2415,13 @@ USERCTL=no
     def test_small_config(self):
         entry = NETWORK_CONFIGS['small']
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
-        self._compare_files_to_expected(entry['expected_sysconfig'], found)
+        self._compare_files_to_expected(entry[self.expected_name], found)
         self._assert_headers(found)
 
     def test_v4_and_v6_static_config(self):
         entry = NETWORK_CONFIGS['v4_and_v6_static']
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
-        self._compare_files_to_expected(entry['expected_sysconfig'], found)
+        self._compare_files_to_expected(entry[self.expected_name], found)
         self._assert_headers(found)
         expected_msg = (
             'WARNING: Network config: ignoring iface0 device-level mtu:8999'
@@ -1951,18 +2431,19 @@ USERCTL=no
     def test_dhcpv6_only_config(self):
         entry = NETWORK_CONFIGS['dhcpv6_only']
         found = self._render_and_read(network_config=yaml.load(entry['yaml']))
-        self._compare_files_to_expected(entry['expected_sysconfig'], found)
+        self._compare_files_to_expected(entry[self.expected_name], found)
         self._assert_headers(found)
 
 
 class TestEniNetRendering(CiTestCase):
 
+    @mock.patch("cloudinit.net.util.get_cmdline", return_value="root=myroot")
     @mock.patch("cloudinit.net.sys_dev_path")
     @mock.patch("cloudinit.net.read_sys_net")
     @mock.patch("cloudinit.net.get_devicelist")
     def test_default_generation(self, mock_get_devicelist,
                                 mock_read_sys_net,
-                                mock_sys_dev_path):
+                                mock_sys_dev_path, m_get_cmdline):
         tmp_dir = self.tmp_dir()
         _setup_test(tmp_dir, mock_get_devicelist,
                     mock_read_sys_net, mock_sys_dev_path)
@@ -1976,7 +2457,7 @@ class TestEniNetRendering(CiTestCase):
 
         renderer = eni.Renderer(
             {'eni_path': 'interfaces', 'netrules_path': None})
-        renderer.render_network_state(ns, render_dir)
+        renderer.render_network_state(ns, target=render_dir)
 
         self.assertTrue(os.path.exists(os.path.join(render_dir,
                                                     'interfaces')))
@@ -1996,7 +2477,7 @@ iface eth1000 inet dhcp
         tmp_dir = self.tmp_dir()
         ns = network_state.parse_net_config_data(CONFIG_V1_EXPLICIT_LOOPBACK)
         renderer = eni.Renderer()
-        renderer.render_network_state(ns, tmp_dir)
+        renderer.render_network_state(ns, target=tmp_dir)
         expected = """\
 auto lo
 iface lo inet loopback
@@ -2010,6 +2491,7 @@ iface eth0 inet dhcp
 
 class TestNetplanNetRendering(CiTestCase):
 
+    @mock.patch("cloudinit.net.util.get_cmdline", return_value="root=myroot")
     @mock.patch("cloudinit.net.netplan._clean_default")
     @mock.patch("cloudinit.net.sys_dev_path")
     @mock.patch("cloudinit.net.read_sys_net")
@@ -2017,7 +2499,7 @@ class TestNetplanNetRendering(CiTestCase):
     def test_default_generation(self, mock_get_devicelist,
                                 mock_read_sys_net,
                                 mock_sys_dev_path,
-                                mock_clean_default):
+                                mock_clean_default, m_get_cmdline):
         tmp_dir = self.tmp_dir()
         _setup_test(tmp_dir, mock_get_devicelist,
                     mock_read_sys_net, mock_sys_dev_path)
@@ -2032,7 +2514,7 @@ class TestNetplanNetRendering(CiTestCase):
         render_target = 'netplan.yaml'
         renderer = netplan.Renderer(
             {'netplan_path': render_target, 'postcmds': False})
-        renderer.render_network_state(ns, render_dir)
+        renderer.render_network_state(ns, target=render_dir)
 
         self.assertTrue(os.path.exists(os.path.join(render_dir,
                                                     render_target)))
@@ -2137,7 +2619,7 @@ class TestNetplanPostcommands(CiTestCase):
         render_target = 'netplan.yaml'
         renderer = netplan.Renderer(
             {'netplan_path': render_target, 'postcmds': True})
-        renderer.render_network_state(ns, render_dir)
+        renderer.render_network_state(ns, target=render_dir)
 
         mock_netplan_generate.assert_called_with(run=True)
         mock_net_setup_link.assert_called_with(run=True)
@@ -2162,7 +2644,7 @@ class TestNetplanPostcommands(CiTestCase):
                        '/sys/class/net/lo'], capture=True),
         ]
         with mock.patch.object(os.path, 'islink', return_value=True):
-            renderer.render_network_state(ns, render_dir)
+            renderer.render_network_state(ns, target=render_dir)
             mock_subp.assert_has_calls(expected)
 
 
@@ -2357,7 +2839,7 @@ class TestNetplanRoundTrip(CiTestCase):
         renderer = netplan.Renderer(
             config={'netplan_path': netplan_path})
 
-        renderer.render_network_state(ns, target)
+        renderer.render_network_state(ns, target=target)
         return dir2dict(target)
 
     def testsimple_render_bond_netplan(self):
@@ -2447,7 +2929,7 @@ class TestEniRoundTrip(CiTestCase):
         renderer = eni.Renderer(
             config={'eni_path': eni_path, 'netrules_path': netrules_path})
 
-        renderer.render_network_state(ns, dir)
+        renderer.render_network_state(ns, target=dir)
         return dir2dict(dir)
 
     def testsimple_convert_and_render(self):
@@ -2778,11 +3260,15 @@ class TestGetInterfacesByMac(CiTestCase):
     def _se_interface_has_own_mac(self, name):
         return name in self.data['own_macs']
 
+    def _se_get_ib_interface_hwaddr(self, name, ethernet_format):
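+        # Side effect for net.get_ib_interface_hwaddr: ethernet_format
+        # selects the 6-octet ethernet-style form (True) or the full
+        # 20-octet Infiniband hardware address (False).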
+        ib_hwaddr = self.data.get('ib_hwaddr', {})
+        return ib_hwaddr.get(name, {}).get(ethernet_format)
+
     def _mock_setup(self):
         self.data = copy.deepcopy(self._data)
         self.data['devices'] = set(list(self.data['macs'].keys()))
         mocks = ('get_devicelist', 'get_interface_mac', 'is_bridge',
-                 'interface_has_own_mac', 'is_vlan')
+                 'interface_has_own_mac', 'is_vlan', 'get_ib_interface_hwaddr')
         self.mocks = {}
         for n in mocks:
             m = mock.patch('cloudinit.net.' + n,
@@ -2856,6 +3342,20 @@ class TestGetInterfacesByMac(CiTestCase):
         ret = net.get_interfaces_by_mac()
         self.assertEqual('lo', ret[empty_mac])
 
+    def test_ib(self):
+        ib_addr = '80:00:00:28:fe:80:00:00:00:00:00:00:00:11:22:03:00:33:44:56'
+        ib_addr_eth_format = '00:11:22:33:44:56'
+        self._mock_setup()
+        self.data['devices'] = ['enp0s1', 'ib0']
+        self.data['own_macs'].append('ib0')
+        self.data['macs']['ib0'] = ib_addr
+        self.data['ib_hwaddr'] = {'ib0': {True: ib_addr_eth_format,
+                                          False: ib_addr}}
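+        # An IB interface should be reachable by both its ethernet-style
+        # address and its full hardware address.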
+        result = net.get_interfaces_by_mac()
+        expected = {'aa:aa:aa:aa:aa:01': 'enp0s1',
+                    ib_addr_eth_format: 'ib0', ib_addr: 'ib0'}
+        self.assertEqual(expected, result)
+
 
 class TestInterfacesSorting(CiTestCase):
 
@@ -2870,6 +3370,67 @@ class TestInterfacesSorting(CiTestCase):
             ['enp0s3', 'enp0s8', 'enp0s13', 'enp1s2', 'enp2s0', 'enp2s3'])
 
 
+class TestGetIBHwaddrsByInterface(CiTestCase):
+
+    _ib_addr = '80:00:00:28:fe:80:00:00:00:00:00:00:00:11:22:03:00:33:44:56'
+    _ib_addr_eth_format = '00:11:22:33:44:56'
+    _data = {'devices': ['enp0s1', 'enp0s2', 'bond1', 'bridge1',
+                         'bridge1-nic', 'tun0', 'ib0'],
+             'bonds': ['bond1'],
+             'bridges': ['bridge1'],
+             'own_macs': ['enp0s1', 'enp0s2', 'bridge1-nic', 'bridge1', 'ib0'],
+             'macs': {'enp0s1': 'aa:aa:aa:aa:aa:01',
+                      'enp0s2': 'aa:aa:aa:aa:aa:02',
+                      'bond1': 'aa:aa:aa:aa:aa:01',
+                      'bridge1': 'aa:aa:aa:aa:aa:03',
+                      'bridge1-nic': 'aa:aa:aa:aa:aa:03',
+                      'tun0': None,
+                      'ib0': _ib_addr},
+             'ib_hwaddr': {'ib0': {True: _ib_addr_eth_format,
+                                   False: _ib_addr}}}
+    data = {}
+
+    def _mock_setup(self):
+        self.data = copy.deepcopy(self._data)
+        mocks = ('get_devicelist', 'get_interface_mac', 'is_bridge',
+                 'interface_has_own_mac', 'get_ib_interface_hwaddr')
+        self.mocks = {}
+        for n in mocks:
+            m = mock.patch('cloudinit.net.' + n,
+                           side_effect=getattr(self, '_se_' + n))
+            self.addCleanup(m.stop)
+            self.mocks[n] = m.start()
+
+    def _se_get_devicelist(self):
+        return self.data['devices']
+
+    def _se_get_interface_mac(self, name):
+        return self.data['macs'][name]
+
+    def _se_is_bridge(self, name):
+        return name in self.data['bridges']
+
+    def _se_interface_has_own_mac(self, name):
+        return name in self.data['own_macs']
+
+    def _se_get_ib_interface_hwaddr(self, name, ethernet_format):
+        ib_hwaddr = self.data.get('ib_hwaddr', {})
+        return ib_hwaddr.get(name, {}).get(ethernet_format)
+
+    def test_ethernet(self):
+        self._mock_setup()
+        self.data['devices'].remove('ib0')
+        result = net.get_ib_hwaddrs_by_interface()
+        expected = {}
+        self.assertEqual(expected, result)
+
+    def test_ib(self):
+        self._mock_setup()
+        result = net.get_ib_hwaddrs_by_interface()
+        expected = {'ib0': self._ib_addr}
+        self.assertEqual(expected, result)
+
+
 def _gzip_data(data):
     with io.BytesIO() as iobuf:
         gzfp = gzip.GzipFile(mode="wb", fileobj=iobuf)
diff --git a/tests/unittests/test_reporting_hyperv.py b/tests/unittests/test_reporting_hyperv.py
new file mode 100644
index 0000000..2e64c6c
--- /dev/null
+++ b/tests/unittests/test_reporting_hyperv.py
@@ -0,0 +1,134 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import json
+import os
+
+from cloudinit import util
+from cloudinit.reporting import events
+from cloudinit.reporting import handlers
+from cloudinit.tests.helpers import CiTestCase
+
+
+class TestKvpEncoding(CiTestCase):
+    def test_encode_decode(self):
+        kvp = {'key': 'key1', 'value': 'value1'}
+        kvp_reporting = handlers.HyperVKvpReportingHandler()
+        data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value'])
+        self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE)
+        decoded_kvp = kvp_reporting._decode_kvp_item(data)
+        self.assertEqual(kvp, decoded_kvp)
+
+
+class TestKvpReporter(CiTestCase):
+    def setUp(self):
+        super(TestKvpReporter, self).setUp()
+        self.tmp_file_path = self.tmp_path('kvp_pool_file')
+        util.ensure_file(self.tmp_file_path)
+
+    def test_event_type_can_be_filtered(self):
+        reporter = handlers.HyperVKvpReportingHandler(
+            kvp_file_path=self.tmp_file_path,
+            event_types=['foo', 'bar'])
+
+        reporter.publish_event(
+            events.ReportingEvent('foo', 'name', 'description'))
+        reporter.publish_event(
+            events.ReportingEvent('some_other', 'name', 'description3'))
+        reporter.q.join()
+
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(1, len(kvps))
+
+        reporter.publish_event(
+            events.ReportingEvent('bar', 'name', 'description2'))
+        reporter.q.join()
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(2, len(kvps))
+
+        self.assertIn('foo', kvps[0]['key'])
+        self.assertIn('bar', kvps[1]['key'])
+        self.assertNotIn('some_other', kvps[0]['key'])
+        self.assertNotIn('some_other', kvps[1]['key'])
+
+    def test_events_are_over_written(self):
+        reporter = handlers.HyperVKvpReportingHandler(
+            kvp_file_path=self.tmp_file_path)
+
+        self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
+
+        reporter.publish_event(
+            events.ReportingEvent('foo', 'name1', 'description'))
+        reporter.publish_event(
+            events.ReportingEvent('foo', 'name2', 'description'))
+        reporter.q.join()
+        self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
+
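+        # A handler with a newer incarnation number treats earlier records
+        # as stale and overwrites them in place.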
+        reporter2 = handlers.HyperVKvpReportingHandler(
+            kvp_file_path=self.tmp_file_path)
+        reporter2.incarnation_no = reporter.incarnation_no + 1
+        reporter2.publish_event(
+            events.ReportingEvent('foo', 'name3', 'description'))
+        reporter2.q.join()
+
+        self.assertEqual(2, len(list(reporter2._iterate_kvps(0))))
+
+    def test_events_with_higher_incarnation_not_over_written(self):
+        reporter = handlers.HyperVKvpReportingHandler(
+            kvp_file_path=self.tmp_file_path)
+
+        self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
+
+        reporter.publish_event(
+            events.ReportingEvent('foo', 'name1', 'description'))
+        reporter.publish_event(
+            events.ReportingEvent('foo', 'name2', 'description'))
+        reporter.q.join()
+        self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
+
+        reporter3 = handlers.HyperVKvpReportingHandler(
+            kvp_file_path=self.tmp_file_path)
+        reporter3.incarnation_no = reporter.incarnation_no - 1
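+        # An older (lower) incarnation must not overwrite newer records;
+        # the new event is appended instead.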
+        reporter3.publish_event(
+            events.ReportingEvent('foo', 'name3', 'description'))
+        reporter3.q.join()
+        self.assertEqual(3, len(list(reporter3._iterate_kvps(0))))
+
+    def test_finish_event_result_is_logged(self):
+        reporter = handlers.HyperVKvpReportingHandler(
+            kvp_file_path=self.tmp_file_path)
+        reporter.publish_event(
+            events.FinishReportingEvent('name2', 'description1',
+                                        result=events.status.FAIL))
+        reporter.q.join()
+        self.assertIn('FAIL', list(reporter._iterate_kvps(0))[0]['value'])
+
+    def test_file_operation_issue(self):
+        os.remove(self.tmp_file_path)
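+        # Publishing against a missing pool file must not raise.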
+        reporter = handlers.HyperVKvpReportingHandler(
+            kvp_file_path=self.tmp_file_path)
+        reporter.publish_event(
+            events.FinishReportingEvent('name2', 'description1',
+                                        result=events.status.FAIL))
+        reporter.q.join()
+
+    def test_event_very_long(self):
+        reporter = handlers.HyperVKvpReportingHandler(
+            kvp_file_path=self.tmp_file_path)
+        description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE
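+        # Twice HV_KVP_EXCHANGE_MAX_VALUE_SIZE of payload (plus JSON
+        # framing) forces the event to span three KVP records.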
+        long_event = events.FinishReportingEvent(
+            'event_name',
+            description,
+            result=events.status.FAIL)
+        reporter.publish_event(long_event)
+        reporter.q.join()
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(3, len(kvps))
+
+        # restore from the kvp to see the content are all there
+        full_description = ''
+        for i in range(len(kvps)):
+            msg_slice = json.loads(kvps[i]['value'])
+            self.assertEqual(msg_slice['msg_i'], i)
+            full_description += msg_slice['msg']
+        self.assertEqual(description, full_description)
diff --git a/tests/unittests/test_rh_subscription.py b/tests/unittests/test_rh_subscription.py
index 2271810..4cd27ee 100644
--- a/tests/unittests/test_rh_subscription.py
+++ b/tests/unittests/test_rh_subscription.py
@@ -8,10 +8,16 @@ import logging
 from cloudinit.config import cc_rh_subscription
 from cloudinit import util
 
-from cloudinit.tests.helpers import TestCase, mock
+from cloudinit.tests.helpers import CiTestCase, mock
 
+SUBMGR = cc_rh_subscription.SubscriptionManager
+SUB_MAN_CLI = 'cloudinit.config.cc_rh_subscription._sub_man_cli'
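+# mock.patch target: the module-level wrapper around the
+# subscription-manager CLI used by cc_rh_subscription.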
+
+
+@mock.patch(SUB_MAN_CLI)
+class GoodTests(CiTestCase):
+    with_logs = True
 
-class GoodTests(TestCase):
     def setUp(self):
         super(GoodTests, self).setUp()
         self.name = "cc_rh_subscription"
@@ -19,7 +25,6 @@ class GoodTests(TestCase):
         self.log = logging.getLogger("good_tests")
         self.args = []
         self.handle = cc_rh_subscription.handle
-        self.SM = cc_rh_subscription.SubscriptionManager
 
         self.config = {'rh_subscription':
                        {'username': 'scooby@xxxxxx',
@@ -35,55 +40,47 @@ class GoodTests(TestCase):
                              'disable-repo': ['repo4', 'repo5']
                              }}
 
-    def test_already_registered(self):
+    def test_already_registered(self, m_sman_cli):
         '''
         Emulates a system that is already registered. Ensure it gets
         a non-ProcessExecution error from is_registered()
         '''
-        with mock.patch.object(cc_rh_subscription.SubscriptionManager,
-                               '_sub_man_cli') as mockobj:
-            self.SM.log_success = mock.MagicMock()
-            self.handle(self.name, self.config, self.cloud_init,
-                        self.log, self.args)
-            self.assertEqual(self.SM.log_success.call_count, 1)
-            self.assertEqual(mockobj.call_count, 1)
-
-    def test_simple_registration(self):
+        self.handle(self.name, self.config, self.cloud_init,
+                    self.log, self.args)
+        self.assertEqual(m_sman_cli.call_count, 1)
+        self.assertIn('System is already registered', self.logs.getvalue())
+
+    def test_simple_registration(self, m_sman_cli):
         '''
         Simple registration with username and password
         '''
-        self.SM.log_success = mock.MagicMock()
         reg = "The system has been registered with ID:" \
               " 12345678-abde-abcde-1234-1234567890abc"
-        self.SM._sub_man_cli = mock.MagicMock(
-            side_effect=[util.ProcessExecutionError, (reg, 'bar')])
+        m_sman_cli.side_effect = [util.ProcessExecutionError, (reg, 'bar')]
         self.handle(self.name, self.config, self.cloud_init,
                     self.log, self.args)
-        self.assertIn(mock.call(['identity']),
-                      self.SM._sub_man_cli.call_args_list)
+        self.assertIn(mock.call(['identity']), m_sman_cli.call_args_list)
         self.assertIn(mock.call(['register', '--username=scooby@xxxxxx',
                                  '--password=scooby-snacks'],
                                 logstring_val=True),
-                      self.SM._sub_man_cli.call_args_list)
-
-        self.assertEqual(self.SM.log_success.call_count, 1)
-        self.assertEqual(self.SM._sub_man_cli.call_count, 2)
+                      m_sman_cli.call_args_list)
+        self.assertIn('rh_subscription plugin completed successfully',
+                      self.logs.getvalue())
+        self.assertEqual(m_sman_cli.call_count, 2)
 
     @mock.patch.object(cc_rh_subscription.SubscriptionManager, "_getRepos")
-    @mock.patch.object(cc_rh_subscription.SubscriptionManager, "_sub_man_cli")
-    def test_update_repos_disable_with_none(self, m_sub_man_cli, m_get_repos):
+    def test_update_repos_disable_with_none(self, m_get_repos, m_sman_cli):
         cfg = copy.deepcopy(self.config)
         m_get_repos.return_value = ([], ['repo1'])
-        m_sub_man_cli.return_value = (b'', b'')
         cfg['rh_subscription'].update(
             {'enable-repo': ['repo1'], 'disable-repo': None})
         mysm = cc_rh_subscription.SubscriptionManager(cfg)
         self.assertEqual(True, mysm.update_repos())
         m_get_repos.assert_called_with()
-        self.assertEqual(m_sub_man_cli.call_args_list,
+        self.assertEqual(m_sman_cli.call_args_list,
                          [mock.call(['repos', '--enable=repo1'])])
 
-    def test_full_registration(self):
+    def test_full_registration(self, m_sman_cli):
         '''
         Registration with auto-attach, service-level, adding pools,
         and enabling and disabling yum repos
@@ -93,26 +90,28 @@ class GoodTests(TestCase):
         call_lists.append(['repos', '--disable=repo5', '--enable=repo2',
                            '--enable=repo3'])
         call_lists.append(['attach', '--auto', '--servicelevel=self-support'])
-        self.SM.log_success = mock.MagicMock()
         reg = "The system has been registered with ID:" \
               " 12345678-abde-abcde-1234-1234567890abc"
-        self.SM._sub_man_cli = mock.MagicMock(
-            side_effect=[util.ProcessExecutionError, (reg, 'bar'),
-                         ('Service level set to: self-support', ''),
-                         ('pool1\npool3\n', ''), ('pool2\n', ''), ('', ''),
-                         ('Repo ID: repo1\nRepo ID: repo5\n', ''),
-                         ('Repo ID: repo2\nRepo ID: repo3\nRepo ID: '
-                          'repo4', ''),
-                         ('', '')])
+        m_sman_cli.side_effect = [
+            util.ProcessExecutionError,
+            (reg, 'bar'),
+            ('Service level set to: self-support', ''),
+            ('pool1\npool3\n', ''), ('pool2\n', ''), ('', ''),
+            ('Repo ID: repo1\nRepo ID: repo5\n', ''),
+            ('Repo ID: repo2\nRepo ID: repo3\nRepo ID: repo4', ''),
+            ('', '')]
         self.handle(self.name, self.config_full, self.cloud_init,
                     self.log, self.args)
+        self.assertEqual(m_sman_cli.call_count, 9)
         for call in call_lists:
-            self.assertIn(mock.call(call), self.SM._sub_man_cli.call_args_list)
-        self.assertEqual(self.SM.log_success.call_count, 1)
-        self.assertEqual(self.SM._sub_man_cli.call_count, 9)
+            self.assertIn(mock.call(call), m_sman_cli.call_args_list)
+        self.assertIn("rh_subscription plugin completed successfully",
+                      self.logs.getvalue())
 
 
-class TestBadInput(TestCase):
+@mock.patch(SUB_MAN_CLI)
+class TestBadInput(CiTestCase):
+    with_logs = True
     name = "cc_rh_subscription"
     cloud_init = None
     log = logging.getLogger("bad_tests")
@@ -155,81 +154,81 @@ class TestBadInput(TestCase):
         super(TestBadInput, self).setUp()
         self.handle = cc_rh_subscription.handle
 
-    def test_no_password(self):
-        '''
-        Attempt to register without the password key/value
-        '''
-        self.SM._sub_man_cli = mock.MagicMock(
-            side_effect=[util.ProcessExecutionError, (self.reg, 'bar')])
+    def assert_logged_warnings(self, warnings):
+        logs = self.logs.getvalue()
+        missing = [w for w in warnings if "WARNING: " + w not in logs]
+        self.assertEqual([], missing, "Missing expected warnings.")
+
+    def test_no_password(self, m_sman_cli):
+        '''Attempt to register without the password key/value.'''
+        m_sman_cli.side_effect = [util.ProcessExecutionError,
+                                  (self.reg, 'bar')]
         self.handle(self.name, self.config_no_password, self.cloud_init,
                     self.log, self.args)
-        self.assertEqual(self.SM._sub_man_cli.call_count, 0)
+        self.assertEqual(m_sman_cli.call_count, 0)
 
-    def test_no_org(self):
-        '''
-        Attempt to register without the org key/value
-        '''
-        self.input_is_missing_data(self.config_no_key)
-
-    def test_service_level_without_auto(self):
-        '''
-        Attempt to register using service-level without the auto-attach key
-        '''
-        self.SM.log_warn = mock.MagicMock()
-        self.SM._sub_man_cli = mock.MagicMock(
-            side_effect=[util.ProcessExecutionError, (self.reg, 'bar')])
+    def test_no_org(self, m_sman_cli):
+        '''Attempt to register without the org key/value.'''
+        m_sman_cli.side_effect = [util.ProcessExecutionError]
+        self.handle(self.name, self.config_no_key, self.cloud_init,
+                    self.log, self.args)
+        m_sman_cli.assert_called_with(['identity'])
+        self.assertEqual(m_sman_cli.call_count, 1)
+        self.assert_logged_warnings((
+            'Unable to register system due to incomplete information.',
+            'Use either activationkey and org *or* userid and password',
+            'Registration failed or did not run completely',
+            'rh_subscription plugin did not complete successfully'))
+
+    def test_service_level_without_auto(self, m_sman_cli):
+        '''Attempt to register using service-level without auto-attach key.'''
+        m_sman_cli.side_effect = [util.ProcessExecutionError,
+                                  (self.reg, 'bar')]
         self.handle(self.name, self.config_service, self.cloud_init,
                     self.log, self.args)
-        self.assertEqual(self.SM._sub_man_cli.call_count, 1)
-        self.assertEqual(self.SM.log_warn.call_count, 2)
+        self.assertEqual(m_sman_cli.call_count, 1)
+        self.assert_logged_warnings((
+            'The service-level key must be used in conjunction with ',
+            'rh_subscription plugin did not complete successfully'))
 
-    def test_pool_not_a_list(self):
+    def test_pool_not_a_list(self, m_sman_cli):
         '''
         Register with pools that are not in the format of a list
         '''
-        self.SM.log_warn = mock.MagicMock()
-        self.SM._sub_man_cli = mock.MagicMock(
-            side_effect=[util.ProcessExecutionError, (self.reg, 'bar')])
+        m_sman_cli.side_effect = [util.ProcessExecutionError,
+                                  (self.reg, 'bar')]
         self.handle(self.name, self.config_badpool, self.cloud_init,
                     self.log, self.args)
-        self.assertEqual(self.SM._sub_man_cli.call_count, 2)
-        self.assertEqual(self.SM.log_warn.call_count, 2)
+        self.assertEqual(m_sman_cli.call_count, 2)
+        self.assert_logged_warnings((
+            'Pools must in the format of a list',
+            'rh_subscription plugin did not complete successfully'))
 
-    def test_repo_not_a_list(self):
+    def test_repo_not_a_list(self, m_sman_cli):
         '''
         Register with repos that are not in the format of a list
         '''
-        self.SM.log_warn = mock.MagicMock()
-        self.SM._sub_man_cli = mock.MagicMock(
-            side_effect=[util.ProcessExecutionError, (self.reg, 'bar')])
+        m_sman_cli.side_effect = [util.ProcessExecutionError,
+                                  (self.reg, 'bar')]
         self.handle(self.name, self.config_badrepo, self.cloud_init,
                     self.log, self.args)
-        self.assertEqual(self.SM.log_warn.call_count, 3)
-        self.assertEqual(self.SM._sub_man_cli.call_count, 2)
+        self.assertEqual(m_sman_cli.call_count, 2)
+        self.assert_logged_warnings((
+            'Repo IDs must in the format of a list.',
+            'Unable to add or remove repos',
+            'rh_subscription plugin did not complete successfully'))
 
-    def test_bad_key_value(self):
+    def test_bad_key_value(self, m_sman_cli):
         '''
         Attempt to register with a key that we don't know
         '''
-        self.SM.log_warn = mock.MagicMock()
-        self.SM._sub_man_cli = mock.MagicMock(
-            side_effect=[util.ProcessExecutionError, (self.reg, 'bar')])
+        m_sman_cli.side_effect = [util.ProcessExecutionError,
+                                  (self.reg, 'bar')]
         self.handle(self.name, self.config_badkey, self.cloud_init,
                     self.log, self.args)
-        self.assertEqual(self.SM.log_warn.call_count, 2)
-        self.assertEqual(self.SM._sub_man_cli.call_count, 1)
-
-    def input_is_missing_data(self, config):
-        '''
-        Helper def for tests that having missing information
-        '''
-        self.SM.log_warn = mock.MagicMock()
-        self.SM._sub_man_cli = mock.MagicMock(
-            side_effect=[util.ProcessExecutionError])
-        self.handle(self.name, config, self.cloud_init,
-                    self.log, self.args)
-        self.SM._sub_man_cli.assert_called_with(['identity'])
-        self.assertEqual(self.SM.log_warn.call_count, 4)
-        self.assertEqual(self.SM._sub_man_cli.call_count, 1)
+        self.assertEqual(m_sman_cli.call_count, 1)
+        self.assert_logged_warnings((
+            'fookey is not a valid key for rh_subscription. Valid keys are:',
+            'rh_subscription plugin did not complete successfully'))
 
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_templating.py b/tests/unittests/test_templating.py
index 20c87ef..c36e6eb 100644
--- a/tests/unittests/test_templating.py
+++ b/tests/unittests/test_templating.py
@@ -21,6 +21,9 @@ except ImportError:
 
 
 class TestTemplates(test_helpers.CiTestCase):
+
+    with_logs = True
+
     jinja_utf8 = b'It\xe2\x80\x99s not ascii, {{name}}\n'
     jinja_utf8_rbob = b'It\xe2\x80\x99s not ascii, bob\n'.decode('utf-8')
 
@@ -124,6 +127,13 @@ $a,$b'''
                 self.add_header("jinja", self.jinja_utf8), {"name": "bob"}),
             self.jinja_utf8_rbob)
 
+    def test_jinja_nonascii_render_undefined_variables_to_default_py3(self):
+        """Test py3 jinja render_to_string with undefined variable default."""
+        self.assertEqual(
+            templater.render_string(
+                self.add_header("jinja", self.jinja_utf8), {}),
+            self.jinja_utf8_rbob.replace('bob', 'CI_MISSING_JINJA_VAR/name'))
+
     def test_jinja_nonascii_render_to_file(self):
         """Test jinja render_to_file of a filename with non-ascii content."""
         tmpl_fn = self.tmp_path("j-render-to-file.template")
@@ -144,5 +154,18 @@ $a,$b'''
         result = templater.render_from_file(tmpl_fn, {"name": "bob"})
         self.assertEqual(result, self.jinja_utf8_rbob)
 
+    @test_helpers.skipIfJinja()
+    def test_jinja_warns_on_missing_dep_and_uses_basic_renderer(self):
+        """Test jinja render_from_file will fallback to basic renderer."""
+        tmpl_fn = self.tmp_path("j-render-from-file.template")
+        write_file(tmpl_fn, omode="wb",
+                   content=self.add_header(
+                       "jinja", self.jinja_utf8).encode('utf-8'))
+        result = templater.render_from_file(tmpl_fn, {"name": "bob"})
+        self.assertEqual(result, self.jinja_utf8.decode())
+        self.assertIn(
+            'WARNING: Jinja not available as the selected renderer for desired'
+            ' template, reverting to the basic renderer.',
+            self.logs.getvalue())
 
 # vi: ts=4 expandtab
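
The new test above asserts that undefined jinja variables render to a visible
CI_MISSING_JINJA_VAR/<name> marker instead of raising. A rough sketch of how
that behavior can be built with a jinja2 Undefined subclass, assuming jinja2
is installed; this is illustrative, not necessarily cloud-init's exact
implementation.

    from jinja2 import Template, Undefined


    class MissingVarUndefined(Undefined):
        """Render any undefined variable as a visible marker string."""

        def __str__(self):
            return u'CI_MISSING_JINJA_VAR/%s' % self._undefined_name

    tmpl = Template(u'It is not ascii, {{name}}',
                    undefined=MissingVarUndefined)
    print(tmpl.render({}))  # -> It is not ascii, CI_MISSING_JINJA_VAR/name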
diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
index 7a203ce..5a14479 100644
--- a/tests/unittests/test_util.py
+++ b/tests/unittests/test_util.py
@@ -24,6 +24,7 @@ except ImportError:
 
 
 BASH = util.which('bash')
+BOGUS_COMMAND = 'this-is-not-expected-to-be-a-program-name'
 
 
 class FakeSelinux(object):
@@ -742,6 +743,8 @@ class TestReadSeeded(helpers.TestCase):
 
 class TestSubp(helpers.CiTestCase):
     with_logs = True
+    allowed_subp = [BASH, 'cat', helpers.CiTestCase.SUBP_SHELL_TRUE,
+                    BOGUS_COMMAND, sys.executable]
 
     stdin2err = [BASH, '-c', 'cat >&2']
     stdin2out = ['cat']
@@ -749,7 +752,6 @@ class TestSubp(helpers.CiTestCase):
     utf8_valid = b'start \xc3\xa9 end'
     utf8_valid_2 = b'd\xc3\xa9j\xc8\xa7'
     printenv = [BASH, '-c', 'for n in "$@"; do echo "$n=${!n}"; done', '--']
-    bogus_command = 'this-is-not-expected-to-be-a-program-name'
 
     def printf_cmd(self, *args):
         # bash's printf supports \xaa.  So does /usr/bin/printf
@@ -848,9 +850,10 @@ class TestSubp(helpers.CiTestCase):
         util.write_file(noshebang, 'true\n')
 
         os.chmod(noshebang, os.stat(noshebang).st_mode | stat.S_IEXEC)
-        self.assertRaisesRegex(util.ProcessExecutionError,
-                               r'Missing #! in script\?',
-                               util.subp, (noshebang,))
+        with self.allow_subp([noshebang]):
+            self.assertRaisesRegex(util.ProcessExecutionError,
+                                   r'Missing #! in script\?',
+                                   util.subp, (noshebang,))
 
     def test_subp_combined_stderr_stdout(self):
         """Providing combine_capture as True redirects stderr to stdout."""
@@ -868,14 +871,14 @@ class TestSubp(helpers.CiTestCase):
     def test_exception_has_out_err_are_bytes_if_decode_false(self):
         """Raised exc should have stderr, stdout as bytes if no decode."""
         with self.assertRaises(util.ProcessExecutionError) as cm:
-            util.subp([self.bogus_command], decode=False)
+            util.subp([BOGUS_COMMAND], decode=False)
         self.assertTrue(isinstance(cm.exception.stdout, bytes))
         self.assertTrue(isinstance(cm.exception.stderr, bytes))
 
     def test_exception_has_out_err_are_bytes_if_decode_true(self):
         """Raised exc should have stderr, stdout as string if no decode."""
         with self.assertRaises(util.ProcessExecutionError) as cm:
-            util.subp([self.bogus_command], decode=True)
+            util.subp([BOGUS_COMMAND], decode=True)
         self.assertTrue(isinstance(cm.exception.stdout, six.string_types))
         self.assertTrue(isinstance(cm.exception.stderr, six.string_types))
 
@@ -925,10 +928,10 @@ class TestSubp(helpers.CiTestCase):
             logs.append(log)
 
         with self.assertRaises(util.ProcessExecutionError):
-            util.subp([self.bogus_command], status_cb=status_cb)
+            util.subp([BOGUS_COMMAND], status_cb=status_cb)
 
         expected = [
-            'Begin run command: {cmd}\n'.format(cmd=self.bogus_command),
+            'Begin run command: {cmd}\n'.format(cmd=BOGUS_COMMAND),
             'ERROR: End run command: invalid command provided\n']
         self.assertEqual(expected, logs)
 
@@ -940,13 +943,13 @@ class TestSubp(helpers.CiTestCase):
             logs.append(log)
 
         with self.assertRaises(util.ProcessExecutionError):
-            util.subp(['ls', '/I/dont/exist'], status_cb=status_cb)
-        util.subp(['ls'], status_cb=status_cb)
+            util.subp([BASH, '-c', 'exit 2'], status_cb=status_cb)
+        util.subp([BASH, '-c', 'exit 0'], status_cb=status_cb)
 
         expected = [
-            'Begin run command: ls /I/dont/exist\n',
+            'Begin run command: %s -c exit 2\n' % BASH,
             'ERROR: End run command: exit(2)\n',
-            'Begin run command: ls\n',
+            'Begin run command: %s -c exit 0\n' % BASH,
             'End run command: exit(0)\n']
         self.assertEqual(expected, logs)
 
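
The allowed_subp and allow_subp changes above come from a CiTestCase guard
that refuses to execute real commands a test did not explicitly permit. A
rough sketch of that allow-list idea; SubpGuard and its names are invented
for illustration, not the helpers' real implementation.

    import contextlib
    import subprocess


    class SubpGuard(object):
        """Refuse to run any command not explicitly allowed by a test."""

        def __init__(self, allowed):
            self.allowed = set(allowed)

        def subp(self, args):
            if args[0] not in self.allowed:
                raise AssertionError('unexpected subp call: %s' % args)
            return subprocess.check_output(args)

        @contextlib.contextmanager
        def allow_subp(self, extra):
            saved = set(self.allowed)
            self.allowed.update(extra)
            try:
                yield
            finally:
                self.allowed = saved

    guard = SubpGuard(allowed=['cat'])
    with guard.allow_subp(['echo']):
        print(guard.subp(['echo', 'hi']))  # permitted only inside the with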
diff --git a/tests/unittests/test_vmware_config_file.py b/tests/unittests/test_vmware_config_file.py
index 036f687..602dedb 100644
--- a/tests/unittests/test_vmware_config_file.py
+++ b/tests/unittests/test_vmware_config_file.py
@@ -2,11 +2,15 @@
 # Copyright (C) 2016 VMware INC.
 #
 # Author: Sankar Tanguturi <stanguturi@xxxxxxxxxx>
+#         Pengpeng Sun <pengpengs@xxxxxxxxxx>
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import logging
+import os
 import sys
+import tempfile
+import textwrap
 
 from cloudinit.sources.DataSourceOVF import get_network_config_from_conf
 from cloudinit.sources.DataSourceOVF import read_vmware_imc
@@ -343,4 +347,115 @@ class TestVmwareConfigFile(CiTestCase):
         conf = Config(cf)
         self.assertEqual("test-script", conf.custom_script_name)
 
+
+class TestVmwareNetConfig(CiTestCase):
+    """Test conversion of vmware config to cloud-init config."""
+
+    def _get_NicConfigurator(self, text):
+        fp = None
+        try:
+            with tempfile.NamedTemporaryFile(mode="w", dir=self.tmp_dir(),
+                                             delete=False) as fp:
+                fp.write(text)
+                fp.close()
+            cfg = Config(ConfigFile(fp.name))
+            return NicConfigurator(cfg.nics, use_system_devices=False)
+        finally:
+            if fp:
+                os.unlink(fp.name)
+
+    def test_non_primary_nic_without_gateway(self):
+        """A non primary nic set is not required to have a gateway."""
+        config = textwrap.dedent("""\
+            [NETWORK]
+            NETWORKING = yes
+            BOOTPROTO = dhcp
+            HOSTNAME = myhost1
+            DOMAINNAME = eng.vmware.com
+
+            [NIC-CONFIG]
+            NICS = NIC1
+
+            [NIC1]
+            MACADDR = 00:50:56:a6:8c:08
+            ONBOOT = yes
+            IPv4_MODE = BACKWARDS_COMPATIBLE
+            BOOTPROTO = static
+            IPADDR = 10.20.87.154
+            NETMASK = 255.255.252.0
+            """)
+        nc = self._get_NicConfigurator(config)
+        self.assertEqual(
+            [{'type': 'physical', 'name': 'NIC1',
+              'mac_address': '00:50:56:a6:8c:08',
+              'subnets': [
+                  {'control': 'auto', 'type': 'static',
+                   'address': '10.20.87.154', 'netmask': '255.255.252.0'}]}],
+            nc.generate())
+
+    def test_non_primary_nic_with_gateway(self):
+        """A non primary nic set can have a gateway."""
+        config = textwrap.dedent("""\
+            [NETWORK]
+            NETWORKING = yes
+            BOOTPROTO = dhcp
+            HOSTNAME = myhost1
+            DOMAINNAME = eng.vmware.com
+
+            [NIC-CONFIG]
+            NICS = NIC1
+
+            [NIC1]
+            MACADDR = 00:50:56:a6:8c:08
+            ONBOOT = yes
+            IPv4_MODE = BACKWARDS_COMPATIBLE
+            BOOTPROTO = static
+            IPADDR = 10.20.87.154
+            NETMASK = 255.255.252.0
+            GATEWAY = 10.20.87.253
+            """)
+        nc = self._get_NicConfigurator(config)
+        self.assertEqual(
+            [{'type': 'physical', 'name': 'NIC1',
+              'mac_address': '00:50:56:a6:8c:08',
+              'subnets': [
+                  {'control': 'auto', 'type': 'static',
+                   'address': '10.20.87.154', 'netmask': '255.255.252.0'}]},
+             {'type': 'route', 'destination': '10.20.84.0/22',
+              'gateway': '10.20.87.253', 'metric': 10000}],
+            nc.generate())
+
+    def test_a_primary_nic_with_gateway(self):
+        """A primary nic set can have a gateway."""
+        config = textwrap.dedent("""\
+            [NETWORK]
+            NETWORKING = yes
+            BOOTPROTO = dhcp
+            HOSTNAME = myhost1
+            DOMAINNAME = eng.vmware.com
+
+            [NIC-CONFIG]
+            NICS = NIC1
+
+            [NIC1]
+            MACADDR = 00:50:56:a6:8c:08
+            ONBOOT = yes
+            IPv4_MODE = BACKWARDS_COMPATIBLE
+            BOOTPROTO = static
+            IPADDR = 10.20.87.154
+            NETMASK = 255.255.252.0
+            PRIMARY = true
+            GATEWAY = 10.20.87.253
+            """)
+        nc = self._get_NicConfigurator(config)
+        self.assertEqual(
+            [{'type': 'physical', 'name': 'NIC1',
+              'mac_address': '00:50:56:a6:8c:08',
+              'subnets': [
+                  {'control': 'auto', 'type': 'static',
+                   'address': '10.20.87.154', 'netmask': '255.255.252.0',
+                   'gateway': '10.20.87.253'}]}],
+            nc.generate())
+
+
 # vi: ts=4 expandtab
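
The VMware customization file these tests feed to ConfigFile is INI-style. A
rough sketch, using stdlib configparser on Python 3, of how a [NIC1] stanza
maps onto the 'physical' entries asserted above; the real parsing lives in
cloud-init's vmware imc helpers, and this mapping is illustrative only.

    import configparser
    import textwrap

    raw = textwrap.dedent("""\
        [NIC1]
        MACADDR = 00:50:56:a6:8c:08
        BOOTPROTO = static
        IPADDR = 10.20.87.154
        NETMASK = 255.255.252.0
        """)
    parser = configparser.ConfigParser()
    parser.read_string(raw)
    nic = dict(parser.items('NIC1'))  # option names are lower-cased
    print({'type': 'physical', 'name': 'NIC1',
           'mac_address': nic['macaddr'],
           'subnets': [{'control': 'auto', 'type': nic['bootproto'],
                        'address': nic['ipaddr'],
                        'netmask': nic['netmask']}]})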
diff --git a/tools/Z99-cloud-locale-test.sh b/tools/Z99-cloud-locale-test.sh
index 4978d87..9ee44bd 100644
--- a/tools/Z99-cloud-locale-test.sh
+++ b/tools/Z99-cloud-locale-test.sh
@@ -11,8 +11,11 @@
 #  of how to fix them.
 
 locale_warn() {
-    local bad_names="" bad_lcs="" key="" val="" var="" vars="" bad_kv=""
-    local w1 w2 w3 w4 remain
+    command -v local >/dev/null && local _local="local" ||
+        typeset _local="typeset"
+
+    $_local bad_names="" bad_lcs="" key="" val="" var="" vars="" bad_kv=""
+    $_local w1 w2 w3 w4 remain
 
     # if shell is zsh, act like sh only for this function (-L).
     # The behavior change will not permanently affect the user's shell.
@@ -53,8 +56,8 @@ locale_warn() {
     printf " This can affect your user experience significantly, including the\n"
     printf " ability to manage packages. You may install the locales by running:\n\n"
 
-    local bad invalid="" to_gen="" sfile="/usr/share/i18n/SUPPORTED"
-    local pkgs=""
+    $_local bad invalid="" to_gen="" sfile="/usr/share/i18n/SUPPORTED"
+    $_local pkgs=""
     if [ -e "$sfile" ]; then
         for bad in ${bad_lcs}; do
             grep -q -i "${bad}" "$sfile" &&
@@ -67,7 +70,7 @@ locale_warn() {
     fi
     to_gen=${to_gen# }
 
-    local pkgs=""
+    $_local pkgs=""
     for bad in ${to_gen}; do
         pkgs="${pkgs} language-pack-${bad%%_*}"
     done
diff --git a/tools/Z99-cloudinit-warnings.sh b/tools/Z99-cloudinit-warnings.sh
index 1d41337..cb8b463 100644
--- a/tools/Z99-cloudinit-warnings.sh
+++ b/tools/Z99-cloudinit-warnings.sh
@@ -4,9 +4,11 @@
 # Purpose: show user warnings on login.
 
 cloud_init_warnings() {
-    local warning="" idir="/var/lib/cloud/instance" n=0
-    local warndir="$idir/warnings"
-    local ufile="$HOME/.cloud-warnings.skip" sfile="$warndir/.skip"
+    command -v local >/dev/null && local _local="local" ||
+        typeset _local="typeset"
+    $_local warning="" idir="/var/lib/cloud/instance" n=0
+    $_local warndir="$idir/warnings"
+    $_local ufile="$HOME/.cloud-warnings.skip" sfile="$warndir/.skip"
     [ -d "$warndir" ] || return 0
     [ ! -f "$ufile" ] || return 0
     [ ! -f "$sfile" ] ||
