[Merge] ~chad.smith/cloud-init:ubuntu/xenial into cloud-init:ubuntu/xenial

Chad Smith has proposed merging ~chad.smith/cloud-init:ubuntu/xenial into cloud-init:ubuntu/xenial.

Commit message:
Upstream snapshot for SRU into Xenial

Requested reviews:
  cloud-init commiters (cloud-init-dev)
Related bugs:
  Bug #1840080 in cloud-init (Ubuntu): "cloud-init cc_ubuntu_drivers does not set up /etc/default/linux-modules-nvidia"
  https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1840080

For more details, see:
https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/371685
-- 
Your team cloud-init commiters is requested to review the proposed merge of ~chad.smith/cloud-init:ubuntu/xenial into cloud-init:ubuntu/xenial.
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
new file mode 100644
index 0000000..170a71e
--- /dev/null
+++ b/.github/pull_request_template.md
@@ -0,0 +1,9 @@
+***This GitHub repo is only a mirror.  Do not submit pull requests
+here!***
+
+Thank you for taking the time to write and submit a change to
+cloud-init!   Please follow [our hacking
+guide](https://cloudinit.readthedocs.io/en/latest/topics/hacking.html)
+to submit your change to cloud-init's [Launchpad git
+repository](https://code.launchpad.net/cloud-init/), where cloud-init
+development happens.
diff --git a/ChangeLog b/ChangeLog
index bf48fd4..a98f8c2 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,39 @@
+19.2:
+ - net: add rfc3442 (classless static routes) to EphemeralDHCP
+   (LP: #1821102)
+ - templates/ntp.conf.debian.tmpl: fix missing newline for pools
+   (LP: #1836598)
+ - Support netplan renderer in Arch Linux [Conrad Hoffmann]
+ - Fix typo in publicly viewable documentation. [David Medberry]
+ - Add a cdrom size checker for OVF ds to ds-identify
+   [Pengpeng Sun] (LP: #1806701)
+ - VMWare: Trigger the post customization script via cc_scripts module.
+   [Xiaofeng Wang] (LP: #1833192)
+ - Cloud-init analyze module: Added ability to analyze boot events.
+   [Sam Gilson]
+ - Update debian eni network configuration location, retain Ubuntu setting
+   [Janos Lenart]
+ - net: skip bond interfaces in get_interfaces
+   [Stanislav Makar] (LP: #1812857)
+ - Fix a couple of issues raised by a coverity scan
+ - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
+ - doc: indicate that netplan is default in Ubuntu now
+ - azure: add region and AZ properties from imds compute location metadata
+ - sysconfig: support more bonding options [Penghui Liao]
+ - cloud-init-generator: use libexec path to ds-identify on redhat systems
+   (LP: #1833264)
+ - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
+ - Allow identification of OpenStack by Asset Tag
+   [Mark T. Voelker] (LP: #1669875)
+ - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
+ - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
+ - netplan: update netplan key mappings for gratuitous-arp (LP: #1827238)
+ - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
+ - freebsd: ability to grow root file system [Gonéri Le Bouder]
+ - freebsd: NoCloud data source support [Gonéri Le Bouder] (LP: #1645824)
+ - Azure: Return static fallback address as if failed to find endpoint
+   [Jason Zions (MSFT)]
+
 19.1:
   - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
   - tests: add Eoan release [Paride Legovini]
diff --git a/cloudinit/analyze/__main__.py b/cloudinit/analyze/__main__.py
index f861365..99e5c20 100644
--- a/cloudinit/analyze/__main__.py
+++ b/cloudinit/analyze/__main__.py
@@ -7,7 +7,7 @@ import re
 import sys
 
 from cloudinit.util import json_dumps
-
+from datetime import datetime
 from . import dump
 from . import show
 
@@ -52,9 +52,93 @@ def get_parser(parser=None):
                              dest='outfile', default='-',
                              help='specify where to write output. ')
     parser_dump.set_defaults(action=('dump', analyze_dump))
+    parser_boot = subparsers.add_parser(
+        'boot', help='Print list of boot times for kernel and cloud-init')
+    parser_boot.add_argument('-i', '--infile', action='store',
+                             dest='infile', default='/var/log/cloud-init.log',
+                             help='specify where to read input. ')
+    parser_boot.add_argument('-o', '--outfile', action='store',
+                             dest='outfile', default='-',
+                             help='specify where to write output.')
+    parser_boot.set_defaults(action=('boot', analyze_boot))
     return parser
 
 
+def analyze_boot(name, args):
+    """Report a list of how long different boot operations took.
+
+    For example:
+    -- Most Recent Boot Record --
+        Kernel Started at: <time>
+        Kernel ended boot at: <time>
+        Kernel time to boot (seconds): <time>
+        Cloud-init activated by systemd at: <time>
+        Time between Kernel end boot and Cloud-init activation (seconds): <time>
+        Cloud-init start: <time>
+    """
+    infh, outfh = configure_io(args)
+    kernel_info = show.dist_check_timestamp()
+    status_code, kernel_start, kernel_end, ci_sysd_start = \
+        kernel_info
+    kernel_start_timestamp = datetime.utcfromtimestamp(kernel_start)
+    kernel_end_timestamp = datetime.utcfromtimestamp(kernel_end)
+    ci_sysd_start_timestamp = datetime.utcfromtimestamp(ci_sysd_start)
+    try:
+        last_init_local = \
+            [e for e in _get_events(infh) if e['name'] == 'init-local' and
+                'starting search' in e['description']][-1]
+        ci_start = datetime.utcfromtimestamp(last_init_local['timestamp'])
+    except IndexError:
+        ci_start = 'Could not find init-local log-line in cloud-init.log'
+        status_code = show.FAIL_CODE
+
+    FAILURE_MSG = 'Your Linux distro or container does not support this ' \
+                  'functionality.\n' \
+                  'You must be running a Kernel Telemetry supported ' \
+                  'distro.\nPlease check ' \
+                  'https://cloudinit.readthedocs.io/en/latest' \
+                  '/topics/analyze.html for more ' \
+                  'information on supported distros.\n'
+
+    SUCCESS_MSG = '-- Most Recent Boot Record --\n' \
+                  '    Kernel Started at: {k_s_t}\n' \
+                  '    Kernel ended boot at: {k_e_t}\n' \
+                  '    Kernel time to boot (seconds): {k_r}\n' \
+                  '    Cloud-init activated by systemd at: {ci_sysd_t}\n' \
+                  '    Time between Kernel end boot and Cloud-init ' \
+                  'activation (seconds): {bt_r}\n' \
+                  '    Cloud-init start: {ci_start}\n'
+
+    CONTAINER_MSG = '-- Most Recent Container Boot Record --\n' \
+                    '    Container started at: {k_s_t}\n' \
+                    '    Cloud-init activated by systemd at: {ci_sysd_t}\n' \
+                    '    Cloud-init start: {ci_start}\n'
+
+    status_map = {
+        show.FAIL_CODE: FAILURE_MSG,
+        show.CONTAINER_CODE: CONTAINER_MSG,
+        show.SUCCESS_CODE: SUCCESS_MSG
+    }
+
+    kernel_runtime = kernel_end - kernel_start
+    between_process_runtime = ci_sysd_start - kernel_end
+
+    kwargs = {
+        'k_s_t': kernel_start_timestamp,
+        'k_e_t': kernel_end_timestamp,
+        'k_r': kernel_runtime,
+        'bt_r': between_process_runtime,
+        'k_e': kernel_end,
+        'k_s': kernel_start,
+        'ci_sysd': ci_sysd_start,
+        'ci_sysd_t': ci_sysd_start_timestamp,
+        'ci_start': ci_start
+    }
+
+    outfh.write(status_map[status_code].format(**kwargs))
+    return status_code
+
+
 def analyze_blame(name, args):
     """Report a list of records sorted by largest time delta.
 
@@ -119,7 +203,7 @@ def analyze_dump(name, args):
 
 def _get_events(infile):
     rawdata = None
-    events, rawdata = show.load_events(infile, None)
+    events, rawdata = show.load_events_infile(infile)
     if not events:
         events, _ = dump.dump_events(rawdata=rawdata)
     return events
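For readers skimming the diff: analyze_boot formats the three timestamps returned by show.dist_check_timestamp() into one of the message templates above. A minimal sketch of the arithmetic involved (the epoch values here are made up for illustration):

    from datetime import datetime

    # hypothetical epoch seconds for kernel start/end and systemd activation
    kernel_start, kernel_end, ci_sysd_start = 1565000000.0, 1565000002.5, 1565000003.1
    kernel_runtime = kernel_end - kernel_start            # 2.5 seconds
    between_process_runtime = ci_sysd_start - kernel_end  # 0.6 seconds
    print('Kernel Started at: %s' % datetime.utcfromtimestamp(kernel_start))
    print('Kernel time to boot (seconds): %s' % kernel_runtime)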
diff --git a/cloudinit/analyze/show.py b/cloudinit/analyze/show.py
index 3e778b8..511b808 100644
--- a/cloudinit/analyze/show.py
+++ b/cloudinit/analyze/show.py
@@ -8,8 +8,11 @@ import base64
 import datetime
 import json
 import os
+import time
+import sys
 
 from cloudinit import util
+from cloudinit.distros import uses_systemd
 
 #  An event:
 '''
@@ -49,6 +52,10 @@ format_key = {
 
 formatting_help = " ".join(["{0}: {1}".format(k.replace('%', '%%'), v)
                            for k, v in format_key.items()])
+SUCCESS_CODE = 'successful'
+FAIL_CODE = 'failure'
+CONTAINER_CODE = 'container'
+TIMESTAMP_UNKNOWN = (FAIL_CODE, -1, -1, -1)
 
 
 def format_record(msg, event):
@@ -125,9 +132,175 @@ def total_time_record(total_time):
     return 'Total Time: %3.5f seconds\n' % total_time
 
 
+class SystemctlReader(object):
+    '''
+    Class for dealing with all systemctl subp calls in a consistent manner.
+    '''
+    def __init__(self, property, parameter=None):
+        self.epoch = None
+        self.args = ['/bin/systemctl', 'show']
+        if parameter:
+            self.args.append(parameter)
+        self.args.extend(['-p', property])
+        # Don't want the init of our object to break. Instead of throwing
+        # an exception, set an error code that gets checked when data is
+        # requested from the object
+        self.failure = self.subp()
+
+    def subp(self):
+        '''
+        Make a subp call based on set args and handle errors by setting
+        failure code
+
+        :return: whether the subp call failed or not
+        '''
+        try:
+            value, err = util.subp(self.args, capture=True)
+            if err:
+                return err
+            self.epoch = value
+            return None
+        except Exception as systemctl_fail:
+            return systemctl_fail
+
+    def parse_epoch_as_float(self):
+        '''
+        If subp call succeeded, return the timestamp from subp as a float.
+
+        :return: timestamp as a float
+        '''
+        # subp has 2 ways to fail: it either fails and throws an exception,
+        # or returns an error code. Raise an exception here in order to make
+        # sure both scenarios throw exceptions
+        if self.failure:
+            raise RuntimeError('Subprocess call to systemctl has failed, '
+                               'returning error code ({})'
+                               .format(self.failure))
+        # Output from systemctl show has the format Property=Value.
+        # For example, UserspaceTimestampMonotonic=1929304
+        timestamp = self.epoch.split('=')[1]
+        # Timestamps reported by systemctl are in microseconds, converting
+        return float(timestamp) / 1000000
+
+
+def dist_check_timestamp():
+    '''
+    Determine which init system a particular linux distro is using.
+    Each init system (systemd, upstart, etc) has a different way of
+    providing timestamps.
+
+    :return: timestamps of kernel boot, kernel end boot, and cloud-init start,
+    or TIMESTAMP_UNKNOWN if the timestamps cannot be retrieved.
+    '''
+
+    if uses_systemd():
+        return gather_timestamps_using_systemd()
+
+    # Use dmesg to get timestamps if the distro does not have systemd
+    if util.is_FreeBSD() or 'gentoo' in \
+            util.system_info()['system'].lower():
+        return gather_timestamps_using_dmesg()
+
+    # This distro doesn't fit anything that is supported by cloud-init; just
+    # return the error codes.
+    return TIMESTAMP_UNKNOWN
+
+
+def gather_timestamps_using_dmesg():
+    '''
+    Gather timestamps that correspond to kernel begin initialization and
+    kernel finish initialization, using dmesg as opposed to systemctl.
+
+    :return: the two timestamps plus a dummy timestamp to keep consistency
+    with gather_timestamps_using_systemd
+    '''
+    try:
+        data, _ = util.subp(['dmesg'], capture=True)
+        split_entries = data[0].splitlines()
+        for i in split_entries:
+            if i.decode('UTF-8').find('user') != -1:
+                splitup = i.decode('UTF-8').split()
+                stripped = splitup[1].strip(']')
+
+                # kernel timestamp from dmesg is equal to 0,
+                # with the userspace timestamp relative to it.
+                user_space_timestamp = float(stripped)
+                kernel_start = float(time.time()) - float(util.uptime())
+                kernel_end = kernel_start + user_space_timestamp
+
+                # systemd won't start cloud-init in this case,
+                # so we cannot get that timestamp
+                return SUCCESS_CODE, kernel_start, kernel_end, \
+                    kernel_end
+
+    except Exception:
+        pass
+    return TIMESTAMP_UNKNOWN
+
+
+def gather_timestamps_using_systemd():
+    '''
+    Gather timestamps that correspond to kernel begin initialization,
+    kernel finish initialization, and cloud-init systemd unit activation.
+
+    :return: the three timestamps
+    '''
+    kernel_start = float(time.time()) - float(util.uptime())
+    try:
+        delta_k_end = SystemctlReader('UserspaceTimestampMonotonic')\
+            .parse_epoch_as_float()
+        delta_ci_s = SystemctlReader('InactiveExitTimestampMonotonic',
+                                     'cloud-init-local').parse_epoch_as_float()
+        base_time = kernel_start
+        status = SUCCESS_CODE
+        # lxc-based containers do not set their monotonic zero point to be
+        # when the container starts; instead they keep using host boot as the
+        # zero point.
+        # time.CLOCK_MONOTONIC_RAW is only available in python 3.3
+        if util.is_container():
+            # clock.monotonic also uses host boot as zero point
+            if sys.version_info >= (3, 3):
+                base_time = float(time.time()) - float(time.monotonic())
+                # TODO: lxcfs automatically truncates /proc/uptime to seconds
+                # in containers; when https://github.com/lxc/lxcfs/issues/292
+                # is fixed, util.uptime() should be used instead of a stat of
+                # /proc/1/cmdline
+                try:
+                    file_stat = os.stat('/proc/1/cmdline')
+                    kernel_start = file_stat.st_atime
+                except OSError as err:
+                    raise RuntimeError('Could not determine container boot '
+                                       'time from /proc/1/cmdline. ({})'
+                                       .format(err))
+                status = CONTAINER_CODE
+            else:
+                status = FAIL_CODE
+        kernel_end = base_time + delta_k_end
+        cloudinit_sysd = base_time + delta_ci_s
+
+    except Exception as e:
+        # Catch ALL exceptions, as SystemctlReader can throw many different
+        # errors; any failure in systemctl means that timestamps cannot be
+        # obtained
+        print(e)
+        return TIMESTAMP_UNKNOWN
+    return status, kernel_start, kernel_end, cloudinit_sysd
+
+
 def generate_records(events, blame_sort=False,
                      print_format="(%n) %d seconds in %I%D",
                      dump_files=False, log_datafiles=False):
+    '''
+    Take in raw events and create parent-child dependencies between events
+    so that they can be presented in chronological order.
+
+    :param events: JSONs from dump that represent events taken from logs
+    :param blame_sort: whether to sort by timestamp or by time taken.
+    :param print_format: formatting to represent event, time stamp,
+    and time taken by the event in one line
+    :param dump_files: whether to dump files into JSONs
+    :param log_datafiles: whether or not to log events generated
+
+    :return: boot records ordered chronologically
+    '''
 
     sorted_events = sorted(events, key=lambda x: x['timestamp'])
     records = []
@@ -189,19 +362,28 @@ def generate_records(events, blame_sort=False,
 
 
 def show_events(events, print_format):
+    '''
+    A passthrough method that makes it easier to call generate_records()
+
+    :param events: JSONs from dump that represent events taken from logs
+    :param print_format: formatting to represent event, time stamp,
+    and time taken by the event in one line
+
+    :return: boot records ordered chronologically
+    '''
     return generate_records(events, print_format=print_format)
 
 
-def load_events(infile, rawdata=None):
-    if rawdata:
-        data = rawdata.read()
-    else:
-        data = infile.read()
+def load_events_infile(infile):
+    '''
+    Take in a log file, read it, and convert it to JSON.
+
+    :param infile: The log file to be read
 
-    j = None
+    :return: json version of logfile, raw file
+    '''
+    data = infile.read()
     try:
-        j = json.loads(data)
+        return json.loads(data), data
     except ValueError:
-        pass
-
-    return j, data
+        return None, data
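As context for SystemctlReader: it shells out to systemctl show [unit] -p <property> and parses the Property=Value output, where monotonic timestamps are reported in microseconds. A sketch of the conversion done in parse_epoch_as_float (the sample value is hypothetical):

    line = 'UserspaceTimestampMonotonic=1929304'
    timestamp = line.split('=')[1]     # microseconds since the monotonic zero point
    print(float(timestamp) / 1000000)  # -> 1.929304 (seconds)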
diff --git a/cloudinit/analyze/tests/test_boot.py b/cloudinit/analyze/tests/test_boot.py
new file mode 100644
index 0000000..706e2cc
--- /dev/null
+++ b/cloudinit/analyze/tests/test_boot.py
@@ -0,0 +1,170 @@
+import os
+from cloudinit.analyze.__main__ import (analyze_boot, get_parser)
+from cloudinit.tests.helpers import CiTestCase, mock
+from cloudinit.analyze.show import dist_check_timestamp, SystemctlReader, \
+    FAIL_CODE, CONTAINER_CODE
+
+err_code = (FAIL_CODE, -1, -1, -1)
+
+
+class TestDistroChecker(CiTestCase):
+
+    @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
+                                                                     ''),
+                                                            'system': ''})
+    @mock.patch('platform.linux_distribution', return_value=('', '', ''))
+    @mock.patch('cloudinit.util.is_FreeBSD', return_value=False)
+    def test_blank_distro(self, m_sys_info, m_linux_distribution, m_free_bsd):
+        self.assertEqual(err_code, dist_check_timestamp())
+
+    @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
+                                                                     '')})
+    @mock.patch('platform.linux_distribution', return_value=('', '', ''))
+    @mock.patch('cloudinit.util.is_FreeBSD', return_value=True)
+    def test_freebsd_gentoo_cant_find(self, m_sys_info,
+                                      m_linux_distribution, m_is_FreeBSD):
+        self.assertEqual(err_code, dist_check_timestamp())
+
+    @mock.patch('cloudinit.util.subp', return_value=(0, 1))
+    def test_subp_fails(self, m_subp):
+        self.assertEqual(err_code, dist_check_timestamp())
+
+
+class TestSystemCtlReader(CiTestCase):
+
+    def test_systemctl_invalid_property(self):
+        reader = SystemctlReader('dummyProperty')
+        with self.assertRaises(RuntimeError):
+            reader.parse_epoch_as_float()
+
+    def test_systemctl_invalid_parameter(self):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        with self.assertRaises(RuntimeError):
+            reader.parse_epoch_as_float()
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
+    def test_systemctl_works_correctly_threshold(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        self.assertEqual(1.0, reader.parse_epoch_as_float())
+        thresh = 1.0 - reader.parse_epoch_as_float()
+        self.assertTrue(thresh < 1e-6)
+        self.assertTrue(thresh > (-1 * 1e-6))
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=0', None))
+    def test_systemctl_succeed_zero(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        self.assertEqual(0.0, reader.parse_epoch_as_float())
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=1', None))
+    def test_systemctl_succeed_distinct(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        val1 = reader.parse_epoch_as_float()
+        m_subp.return_value = ('U=2', None)
+        reader2 = SystemctlReader('dummyProperty', 'dummyParameter')
+        val2 = reader2.parse_epoch_as_float()
+        self.assertNotEqual(val1, val2)
+
+    @mock.patch('cloudinit.util.subp', return_value=('100', None))
+    def test_systemctl_epoch_not_splittable(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        with self.assertRaises(IndexError):
+            reader.parse_epoch_as_float()
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=foobar', None))
+    def test_systemctl_cannot_convert_epoch_to_float(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        with self.assertRaises(ValueError):
+            reader.parse_epoch_as_float()
+
+
+class TestAnalyzeBoot(CiTestCase):
+
+    def set_up_dummy_file_ci(self, path, log_path):
+        infh = open(path, 'w+')
+        infh.write('2019-07-08 17:40:49,601 - util.py[DEBUG]: Cloud-init v. '
+                   '19.1-1-gbaa47854-0ubuntu1~18.04.1 running \'init-local\' '
+                   'at Mon, 08 Jul 2019 17:40:49 +0000. Up 18.84 seconds.')
+        infh.close()
+        outfh = open(log_path, 'w+')
+        outfh.close()
+
+    def set_up_dummy_file(self, path, log_path):
+        infh = open(path, 'w+')
+        infh.write('dummy data')
+        infh.close()
+        outfh = open(log_path, 'w+')
+        outfh.close()
+
+    def remove_dummy_file(self, path, log_path):
+        if os.path.isfile(path):
+            os.remove(path)
+        if os.path.isfile(log_path):
+            os.remove(log_path)
+
+    @mock.patch('cloudinit.analyze.show.dist_check_timestamp',
+                return_value=err_code)
+    def test_boot_invalid_distro(self, m_dist_check_timestamp):
+
+        path = os.path.dirname(os.path.abspath(__file__))
+        log_path = path + '/boot-test.log'
+        path += '/dummy.log'
+        self.set_up_dummy_file(path, log_path)
+
+        parser = get_parser()
+        args = parser.parse_args(args=['boot', '-i', path, '-o',
+                                       log_path])
+        name_default = ''
+        analyze_boot(name_default, args)
+        # now args have been tested, go into outfile and make sure error
+        # message is in the outfile
+        outfh = open(args.outfile, 'r')
+        data = outfh.read()
+        err_string = 'Your Linux distro or container does not support this ' \
+                     'functionality.\nYou must be running a Kernel ' \
+                     'Telemetry supported distro.\nPlease check ' \
+                     'https://cloudinit.readthedocs.io/en/latest/topics' \
+                     '/analyze.html for more information on supported ' \
+                     'distros.\n'
+
+        self.remove_dummy_file(path, log_path)
+        self.assertEqual(err_string, data)
+
+    @mock.patch("cloudinit.util.is_container", return_value=True)
+    @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
+    def test_container_no_ci_log_line(self, m_is_container, m_subp):
+        path = os.path.dirname(os.path.abspath(__file__))
+        log_path = path + '/boot-test.log'
+        path += '/dummy.log'
+        self.set_up_dummy_file(path, log_path)
+
+        parser = get_parser()
+        args = parser.parse_args(args=['boot', '-i', path, '-o',
+                                       log_path])
+        name_default = ''
+
+        finish_code = analyze_boot(name_default, args)
+
+        self.remove_dummy_file(path, log_path)
+        self.assertEqual(FAIL_CODE, finish_code)
+
+    @mock.patch("cloudinit.util.is_container", return_value=True)
+    @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
+    @mock.patch('cloudinit.analyze.__main__._get_events', return_value=[{
+        'name': 'init-local', 'description': 'starting search', 'timestamp':
+        100000}])
+    @mock.patch('cloudinit.analyze.show.dist_check_timestamp',
+                return_value=(CONTAINER_CODE, 1, 1, 1))
+    def test_container_ci_log_line(self, m_is_container, m_subp, m_get, m_g):
+        path = os.path.dirname(os.path.abspath(__file__))
+        log_path = path + '/boot-test.log'
+        path += '/dummy.log'
+        self.set_up_dummy_file_ci(path, log_path)
+
+        parser = get_parser()
+        args = parser.parse_args(args=['boot', '-i', path, '-o',
+                                       log_path])
+        name_default = ''
+        finish_code = analyze_boot(name_default, args)
+
+        self.remove_dummy_file(path, log_path)
+        self.assertEqual(CONTAINER_CODE, finish_code)
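Note that the local err_code tuple used throughout these tests mirrors show.TIMESTAMP_UNKNOWN, the sentinel dist_check_timestamp() returns when no timestamp source is usable. A one-line check, assuming this patched tree is importable:

    from cloudinit.analyze.show import FAIL_CODE, TIMESTAMP_UNKNOWN
    assert TIMESTAMP_UNKNOWN == (FAIL_CODE, -1, -1, -1)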
diff --git a/cloudinit/apport.py b/cloudinit/apport.py
index 22cb7fd..003ff1f 100644
--- a/cloudinit/apport.py
+++ b/cloudinit/apport.py
@@ -23,6 +23,7 @@ KNOWN_CLOUD_NAMES = [
     'CloudStack',
     'DigitalOcean',
     'GCE - Google Compute Engine',
+    'Exoscale',
     'Hetzner Cloud',
     'IBM - (aka SoftLayer or BlueMix)',
     'LXD',
diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
index 919d199..f01e2aa 100644
--- a/cloudinit/config/cc_apt_configure.py
+++ b/cloudinit/config/cc_apt_configure.py
@@ -332,6 +332,8 @@ def apply_apt(cfg, cloud, target):
 
 
 def debconf_set_selections(selections, target=None):
+    if not selections.endswith(b'\n'):
+        selections += b'\n'
     util.subp(['debconf-set-selections'], data=selections, target=target,
               capture=True)
 
@@ -374,7 +376,7 @@ def apply_debconf_selections(cfg, target=None):
 
     selections = '\n'.join(
         [selsets[key] for key in sorted(selsets.keys())])
-    debconf_set_selections(selections.encode() + b"\n", target=target)
+    debconf_set_selections(selections.encode(), target=target)
 
     # get a complete list of packages listed in input
     pkgs_cfgd = set()
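The cc_apt_configure change moves newline termination into debconf_set_selections itself, so the data handed to debconf-set-selections is always newline-terminated even when a caller forgets to append one. A sketch of the guard on bytes input (the selection line is an arbitrary example):

    selections = b'cloud-init cloud-init/datasources multiselect NoCloud'
    if not selections.endswith(b'\n'):
        selections += b'\n'
    assert selections.endswith(b'\n')  # appended exactly once, never doubled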
diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
index 71d13ed..d983077 100644
--- a/cloudinit/config/cc_lxd.py
+++ b/cloudinit/config/cc_lxd.py
@@ -152,7 +152,7 @@ def handle(name, cfg, cloud, log, args):
 
             if cmd_attach:
                 log.debug("Setting up default lxd bridge: %s" %
-                          " ".join(cmd_create))
+                          " ".join(cmd_attach))
                 _lxc(cmd_attach)
 
     elif bridge_cfg:
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index 4585e4d..cf9b5ab 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -9,27 +9,40 @@
 """
 Set Passwords
 -------------
-**Summary:** Set user passwords
-
-Set system passwords and enable or disable ssh password authentication.
-The ``chpasswd`` config key accepts a dictionary containing a single one of two
-keys, either ``expire`` or ``list``. If ``expire`` is specified and is set to
-``false``, then the ``password`` global config key is used as the password for
-all user accounts. If the ``expire`` key is specified and is set to ``true``
-then user passwords will be expired, preventing the default system passwords
-from being used.
-
-If the ``list`` key is provided, a list of
-``username:password`` pairs can be specified. The usernames specified
-must already exist on the system, or have been created using the
-``cc_users_groups`` module. A password can be randomly generated using
-``username:RANDOM`` or ``username:R``. A hashed password can be specified
-using ``username:$6$salt$hash``. Password ssh authentication can be
-enabled, disabled, or left to system defaults using ``ssh_pwauth``.
+**Summary:** Set user passwords and enable/disable SSH password authentication
+
+This module consumes three top-level config keys: ``ssh_pwauth``, ``chpasswd``
+and ``password``.
+
+The ``ssh_pwauth`` config key determines whether or not sshd will be configured
+to accept password authentication.  True values will enable password auth,
+false values will disable password auth, and the literal string ``unchanged``
+will leave it unchanged.  Setting no value will also leave the current setting
+on-disk unchanged.
+
+The ``chpasswd`` config key accepts a dictionary containing either or both of
+``expire`` and ``list``.
+
+If the ``list`` key is provided, it should contain a list of
+``username:password`` pairs.  This can be either a YAML list (of strings), or a
+multi-line string with one pair per line.  Each user will have the
+corresponding password set.  A password can be randomly generated by specifying
+``RANDOM`` or ``R`` as a user's password.  A hashed password, created by a tool
+like ``mkpasswd``, can be specified; a regex
+(``r'\\$(1|2a|2y|5|6)(\\$.+){2}'``) is used to determine if a password value
+should be treated as a hash.
 
 .. note::
-    if using ``expire: true`` then a ssh authkey should be specified or it may
-    not be possible to login to the system
+    The users specified must already exist on the system.  Users will have been
+    created by the ``cc_users_groups`` module at this point.
+
+By default, all users on the system will have their passwords expired (meaning
+that they will have to be reset the next time the user logs in).  To disable
+this behaviour, set ``expire`` under ``chpasswd`` to a false value.
+
+If a ``list`` of user/password pairs is not specified under ``chpasswd``, then
+the value of the ``password`` config key will be used to set the default user's
+password.
 
 **Internal name:** ``cc_set_passwords``
 
@@ -160,6 +173,8 @@ def handle(_name, cfg, cloud, log, args):
         hashed_users = []
         randlist = []
         users = []
+        # N.B. This regex is included in the documentation (i.e. the module
+        # docstring), so any changes to it should be reflected there.
         prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
         for line in plist:
             u, p = line.split(':', 1)
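The hash-detection regex referenced in the new docstring accepts crypt scheme ids 1, 2a, 2y, 5 and 6; values that match are treated as pre-hashed passwords rather than plain text. A quick illustration (the hash below is fabricated):

    import re

    prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
    print(bool(prog.match('$6$somesalt$fakehashedvalue')))  # True: treated as a hash
    print(bool(prog.match('hunter2')))                      # False: plain text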
diff --git a/cloudinit/config/cc_ssh.py b/cloudinit/config/cc_ssh.py
index f8f7cb3..fdd8f4d 100755
--- a/cloudinit/config/cc_ssh.py
+++ b/cloudinit/config/cc_ssh.py
@@ -91,6 +91,9 @@ public keys.
     ssh_authorized_keys:
         - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUU ...
         - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZ ...
+    ssh_publish_hostkeys:
+        enabled: <true/false> (Defaults to true)
+        blacklist: <list of key types> (Defaults to [dsa])
 """
 
 import glob
@@ -104,6 +107,10 @@ from cloudinit import util
 
 GENERATE_KEY_NAMES = ['rsa', 'dsa', 'ecdsa', 'ed25519']
 KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key'
+PUBLISH_HOST_KEYS = True
+# Don't publish the dsa hostkey by default since OpenSSH recommends not using
+# it.
+HOST_KEY_PUBLISH_BLACKLIST = ['dsa']
 
 CONFIG_KEY_TO_FILE = {}
 PRIV_TO_PUB = {}
@@ -176,6 +183,23 @@ def handle(_name, cfg, cloud, log, _args):
                         util.logexc(log, "Failed generating key type %s to "
                                     "file %s", keytype, keyfile)
 
+    if "ssh_publish_hostkeys" in cfg:
+        host_key_blacklist = util.get_cfg_option_list(
+            cfg["ssh_publish_hostkeys"], "blacklist",
+            HOST_KEY_PUBLISH_BLACKLIST)
+        publish_hostkeys = util.get_cfg_option_bool(
+            cfg["ssh_publish_hostkeys"], "enabled", PUBLISH_HOST_KEYS)
+    else:
+        host_key_blacklist = HOST_KEY_PUBLISH_BLACKLIST
+        publish_hostkeys = PUBLISH_HOST_KEYS
+
+    if publish_hostkeys:
+        hostkeys = get_public_host_keys(blacklist=host_key_blacklist)
+        try:
+            cloud.datasource.publish_host_keys(hostkeys)
+        except Exception:
+            util.logexc(log, "Publishing host keys failed!")
+
     try:
         (users, _groups) = ug_util.normalize_users_groups(cfg, cloud.distro)
         (user, _user_config) = ug_util.extract_default(users)
@@ -209,4 +233,35 @@ def apply_credentials(keys, user, disable_root, disable_root_opts):
 
     ssh_util.setup_user_keys(keys, 'root', options=key_prefix)
 
+
+def get_public_host_keys(blacklist=None):
+    """Read host keys from /etc/ssh/*.pub files and return them as a list.
+
+    @param blacklist: List of key types to ignore. e.g. ['dsa', 'rsa']
+    @returns: List of keys, each formatted as a two-element tuple.
+        e.g. [('ssh-rsa', 'AAAAB3Nz...'), ('ssh-ed25519', 'AAAAC3Nx...')]
+    """
+    public_key_file_tmpl = '%s.pub' % (KEY_FILE_TPL,)
+    key_list = []
+    blacklist_files = []
+    if blacklist:
+        # Convert blacklist to filenames:
+        # 'dsa' -> '/etc/ssh/ssh_host_dsa_key.pub'
+        blacklist_files = [public_key_file_tmpl % (key_type,)
+                           for key_type in blacklist]
+    # Get list of public key files and filter out blacklisted files.
+    file_list = [hostfile for hostfile
+                 in glob.glob(public_key_file_tmpl % ('*',))
+                 if hostfile not in blacklist_files]
+
+    # Read host key files, retrieve first two fields as a tuple and
+    # append that tuple to key_list.
+    for file_name in file_list:
+        file_contents = util.load_file(file_name)
+        key_data = file_contents.split()
+        if key_data and len(key_data) > 1:
+            key_list.append(tuple(key_data[:2]))
+    return key_list
+
+
 # vi: ts=4 expandtab
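get_public_host_keys() builds its blacklist by expanding key types through KEY_FILE_TPL and then filtering the glob results against the expanded paths. The expansion step in isolation:

    KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key'
    public_key_file_tmpl = '%s.pub' % (KEY_FILE_TPL,)
    blacklist_files = [public_key_file_tmpl % (key_type,)
                       for key_type in ['dsa']]
    print(blacklist_files)  # ['/etc/ssh/ssh_host_dsa_key.pub']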
diff --git a/cloudinit/config/cc_ubuntu_drivers.py b/cloudinit/config/cc_ubuntu_drivers.py
index 91feb60..297451d 100644
--- a/cloudinit/config/cc_ubuntu_drivers.py
+++ b/cloudinit/config/cc_ubuntu_drivers.py
@@ -2,12 +2,14 @@
 
 """Ubuntu Drivers: Interact with third party drivers in Ubuntu."""
 
+import os
 from textwrap import dedent
 
 from cloudinit.config.schema import (
     get_schema_doc, validate_cloudconfig_schema)
 from cloudinit import log as logging
 from cloudinit.settings import PER_INSTANCE
+from cloudinit import temp_utils
 from cloudinit import type_utils
 from cloudinit import util
 
@@ -64,6 +66,33 @@ OLD_UBUNTU_DRIVERS_STDERR_NEEDLE = (
 __doc__ = get_schema_doc(schema)  # Supplement python help()
 
 
+# Use a debconf template to configure a global debconf variable
+# (linux/nvidia/latelink) setting this to "true" allows the
+# 'linux-restricted-modules' deb to accept the NVIDIA EULA and the package
+# will automatically link the drivers to the running kernel.
+
+# EOL_XENIAL: can then drop this script and use python3-debconf which is only
+# available in Bionic and later. Can't use python3-debconf currently as it
+# isn't in Xenial and doesn't yet support X_LOADTEMPLATEFILE debconf command.
+
+NVIDIA_DEBCONF_CONTENT = """\
+Template: linux/nvidia/latelink
+Type: boolean
+Default: true
+Description: Late-link NVIDIA kernel modules?
+ Enable this to link the NVIDIA kernel modules in cloud-init and
+ make them available for use.
+"""
+
+NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT = """\
+#!/bin/sh
+# Allow cloud-init to trigger EULA acceptance via registering a debconf
+# template to set linux/nvidia/latelink true
+. /usr/share/debconf/confmodule
+db_x_loadtemplatefile "$1" cloud-init
+"""
+
+
 def install_drivers(cfg, pkg_install_func):
     if not isinstance(cfg, dict):
         raise TypeError(
@@ -89,9 +118,28 @@ def install_drivers(cfg, pkg_install_func):
     if version_cfg:
         driver_arg += ':{}'.format(version_cfg)
 
-    LOG.debug("Installing NVIDIA drivers (%s=%s, version=%s)",
+    LOG.debug("Installing and activating NVIDIA drivers (%s=%s, version=%s)",
               cfgpath, nv_acc, version_cfg if version_cfg else 'latest')
 
+    # Register and set debconf selection linux/nvidia/latelink = true
+    tdir = temp_utils.mkdtemp(needs_exe=True)
+    debconf_file = os.path.join(tdir, 'nvidia.template')
+    debconf_script = os.path.join(tdir, 'nvidia-debconf.sh')
+    try:
+        util.write_file(debconf_file, NVIDIA_DEBCONF_CONTENT)
+        util.write_file(
+            debconf_script,
+            util.encode_text(NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT),
+            mode=0o755)
+        util.subp([debconf_script, debconf_file])
+    except Exception as e:
+        util.logexc(
+            LOG, "Failed to register NVIDIA debconf template: %s", str(e))
+        raise
+    finally:
+        if os.path.isdir(tdir):
+            util.del_dir(tdir)
+
     try:
         util.subp(['ubuntu-drivers', 'install', '--gpgpu', driver_arg])
     except util.ProcessExecutionError as exc:
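The registration flow above writes the debconf template and the confmodule wrapper to a temp dir, then runs the wrapper with the template path as "$1" so db_x_loadtemplatefile registers it under the cloud-init owner. A rough stdlib-only equivalent, assuming the patched module is importable and debconf is installed on the host:

    import os
    import subprocess
    import tempfile

    from cloudinit.config.cc_ubuntu_drivers import (
        NVIDIA_DEBCONF_CONTENT, NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT)

    tdir = tempfile.mkdtemp()
    template = os.path.join(tdir, 'nvidia.template')
    script = os.path.join(tdir, 'nvidia-debconf.sh')
    with open(template, 'w') as stream:
        stream.write(NVIDIA_DEBCONF_CONTENT)
    with open(script, 'w') as stream:
        stream.write(NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT)
    os.chmod(script, 0o755)
    subprocess.check_call([script, template])  # needs /usr/share/debconf/confmodule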
diff --git a/cloudinit/config/tests/test_ssh.py b/cloudinit/config/tests/test_ssh.py
index c8a4271..e778984 100644
--- a/cloudinit/config/tests/test_ssh.py
+++ b/cloudinit/config/tests/test_ssh.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import os.path
 
 from cloudinit.config import cc_ssh
 from cloudinit import ssh_util
@@ -12,6 +13,25 @@ MODPATH = "cloudinit.config.cc_ssh."
 class TestHandleSsh(CiTestCase):
     """Test cc_ssh handling of ssh config."""
 
+    def _publish_hostkey_test_setup(self):
+        self.test_hostkeys = {
+            'dsa': ('ssh-dss', 'AAAAB3NzaC1kc3MAAACB'),
+            'ecdsa': ('ecdsa-sha2-nistp256', 'AAAAE2VjZ'),
+            'ed25519': ('ssh-ed25519', 'AAAAC3NzaC1lZDI'),
+            'rsa': ('ssh-rsa', 'AAAAB3NzaC1yc2EAAA'),
+        }
+        self.test_hostkey_files = []
+        hostkey_tmpdir = self.tmp_dir()
+        for key_type in ['dsa', 'ecdsa', 'ed25519', 'rsa']:
+            key_data = self.test_hostkeys[key_type]
+            filename = 'ssh_host_%s_key.pub' % key_type
+            filepath = os.path.join(hostkey_tmpdir, filename)
+            self.test_hostkey_files.append(filepath)
+            with open(filepath, 'w') as f:
+                f.write(' '.join(key_data))
+
+        cc_ssh.KEY_FILE_TPL = os.path.join(hostkey_tmpdir, 'ssh_host_%s_key')
+
     def test_apply_credentials_with_user(self, m_setup_keys):
         """Apply keys for the given user and root."""
         keys = ["key1"]
@@ -64,6 +84,7 @@ class TestHandleSsh(CiTestCase):
         # Mock os.path.exists to True to short-circuit the key writing logic
         m_path_exists.return_value = True
         m_nug.return_value = ([], {})
+        cc_ssh.PUBLISH_HOST_KEYS = False
         cloud = self.tmp_cloud(
             distro='ubuntu', metadata={'public-keys': keys})
         cc_ssh.handle("name", cfg, cloud, None, None)
@@ -149,3 +170,148 @@ class TestHandleSsh(CiTestCase):
         self.assertEqual([mock.call(set(keys), user),
                           mock.call(set(keys), "root", options="")],
                          m_setup_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_default(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+                                  [],
+                                  self.test_hostkey_files,
+                                  ])
+        # Mock os.path.exists to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['ecdsa', 'ed25519', 'rsa']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_config_enable(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = False
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+                                  [],
+                                  self.test_hostkey_files,
+                                  ])
+        # Mock os.path.exists to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': True}}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['ecdsa', 'ed25519', 'rsa']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_config_disable(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+                                  [],
+                                  self.test_hostkey_files,
+                                  ])
+        # Mock os.path.exists to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': False}}
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertFalse(cloud.datasource.publish_host_keys.call_args_list)
+        cloud.datasource.publish_host_keys.assert_not_called()
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_config_blacklist(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+                                  [],
+                                  self.test_hostkey_files,
+                                  ])
+        # Mock os.path.exists to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': True,
+                                        'blacklist': ['dsa', 'rsa']}}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['ecdsa', 'ed25519']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_empty_blacklist(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+                                  [],
+                                  self.test_hostkey_files,
+                                  ])
+        # Mock os.path.exists to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': True,
+                                        'blacklist': []}}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['dsa', 'ecdsa', 'ed25519', 'rsa']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
diff --git a/cloudinit/config/tests/test_ubuntu_drivers.py b/cloudinit/config/tests/test_ubuntu_drivers.py
index efba4ce..4695269 100644
--- a/cloudinit/config/tests/test_ubuntu_drivers.py
+++ b/cloudinit/config/tests/test_ubuntu_drivers.py
@@ -1,6 +1,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import copy
+import os
 
 from cloudinit.tests.helpers import CiTestCase, skipUnlessJsonSchema, mock
 from cloudinit.config.schema import (
@@ -9,11 +10,27 @@ from cloudinit.config import cc_ubuntu_drivers as drivers
 from cloudinit.util import ProcessExecutionError
 
 MPATH = "cloudinit.config.cc_ubuntu_drivers."
+M_TMP_PATH = MPATH + "temp_utils.mkdtemp"
 OLD_UBUNTU_DRIVERS_ERROR_STDERR = (
     "ubuntu-drivers: error: argument <command>: invalid choice: 'install' "
     "(choose from 'list', 'autoinstall', 'devices', 'debug')\n")
 
 
+class AnyTempScriptAndDebconfFile(object):
+
+    def __init__(self, tmp_dir, debconf_file):
+        self.tmp_dir = tmp_dir
+        self.debconf_file = debconf_file
+
+    def __eq__(self, cmd):
+        if not len(cmd) == 2:
+            return False
+        script, debconf_file = cmd
+        if bool(script.startswith(self.tmp_dir) and script.endswith('.sh')):
+            return debconf_file == self.debconf_file
+        return False
+
+
 class TestUbuntuDrivers(CiTestCase):
     cfg_accepted = {'drivers': {'nvidia': {'license-accepted': True}}}
     install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia']
@@ -28,16 +45,23 @@ class TestUbuntuDrivers(CiTestCase):
                 {'drivers': {'nvidia': {'license-accepted': "TRUE"}}},
                 schema=drivers.schema, strict=True)
 
+    @mock.patch(M_TMP_PATH)
     @mock.patch(MPATH + "util.subp", return_value=('', ''))
     @mock.patch(MPATH + "util.which", return_value=False)
-    def _assert_happy_path_taken(self, config, m_which, m_subp):
+    def _assert_happy_path_taken(
+            self, config, m_which, m_subp, m_tmp):
         """Positive path test through handle. Package should be installed."""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
         drivers.handle('ubuntu_drivers', config, myCloud, None, None)
         self.assertEqual([mock.call(['ubuntu-drivers-common'])],
                          myCloud.distro.install_packages.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
 
     def test_handle_does_package_install(self):
         self._assert_happy_path_taken(self.cfg_accepted)
@@ -48,19 +72,33 @@ class TestUbuntuDrivers(CiTestCase):
             new_config['drivers']['nvidia']['license-accepted'] = true_value
             self._assert_happy_path_taken(new_config)
 
-    @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
-        stdout='No drivers found for installation.\n', exit_code=1))
+    @mock.patch(M_TMP_PATH)
+    @mock.patch(MPATH + "util.subp")
     @mock.patch(MPATH + "util.which", return_value=False)
-    def test_handle_raises_error_if_no_drivers_found(self, m_which, m_subp):
+    def test_handle_raises_error_if_no_drivers_found(
+            self, m_which, m_subp, m_tmp):
         """If ubuntu-drivers doesn't install any drivers, raise an error."""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
+
+        def fake_subp(cmd):
+            if cmd[0].startswith(tdir):
+                return
+            raise ProcessExecutionError(
+                stdout='No drivers found for installation.\n', exit_code=1)
+        m_subp.side_effect = fake_subp
+
         with self.assertRaises(Exception):
             drivers.handle(
                 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
         self.assertEqual([mock.call(['ubuntu-drivers-common'])],
                          myCloud.distro.install_packages.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
         self.assertIn('ubuntu-drivers found no drivers for installation',
                       self.logs.getvalue())
 
@@ -108,18 +146,25 @@ class TestUbuntuDrivers(CiTestCase):
                       myLog.debug.call_args_list[0][0][0])
         self.assertEqual(0, m_install_drivers.call_count)
 
+    @mock.patch(M_TMP_PATH)
     @mock.patch(MPATH + "util.subp", return_value=('', ''))
     @mock.patch(MPATH + "util.which", return_value=True)
-    def test_install_drivers_no_install_if_present(self, m_which, m_subp):
+    def test_install_drivers_no_install_if_present(
+            self, m_which, m_subp, m_tmp):
         """If 'ubuntu-drivers' is present, no package install should occur."""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         pkg_install = mock.MagicMock()
         drivers.install_drivers(self.cfg_accepted['drivers'],
                                 pkg_install_func=pkg_install)
         self.assertEqual(0, pkg_install.call_count)
         self.assertEqual([mock.call('ubuntu-drivers')],
                          m_which.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
 
     def test_install_drivers_rejects_invalid_config(self):
         """install_drivers should raise TypeError if not given a config dict"""
@@ -128,20 +173,33 @@ class TestUbuntuDrivers(CiTestCase):
             drivers.install_drivers("mystring", pkg_install_func=pkg_install)
         self.assertEqual(0, pkg_install.call_count)
 
-    @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
-        stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2))
+    @mock.patch(M_TMP_PATH)
+    @mock.patch(MPATH + "util.subp")
     @mock.patch(MPATH + "util.which", return_value=False)
     def test_install_drivers_handles_old_ubuntu_drivers_gracefully(
-            self, m_which, m_subp):
+            self, m_which, m_subp, m_tmp):
         """Older ubuntu-drivers versions should emit message and raise error"""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
+
+        def fake_subp(cmd):
+            if cmd[0].startswith(tdir):
+                return
+            raise ProcessExecutionError(
+                stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2)
+        m_subp.side_effect = fake_subp
+
         with self.assertRaises(Exception):
             drivers.handle(
                 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
         self.assertEqual([mock.call(['ubuntu-drivers-common'])],
                          myCloud.distro.install_packages.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
         self.assertIn('WARNING: the available version of ubuntu-drivers is'
                       ' too old to perform requested driver installation',
                       self.logs.getvalue())
@@ -153,16 +211,21 @@ class TestUbuntuDriversWithVersion(TestUbuntuDrivers):
         'drivers': {'nvidia': {'license-accepted': True, 'version': '123'}}}
     install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia:123']
 
+    @mock.patch(M_TMP_PATH)
     @mock.patch(MPATH + "util.subp", return_value=('', ''))
     @mock.patch(MPATH + "util.which", return_value=False)
-    def test_version_none_uses_latest(self, m_which, m_subp):
+    def test_version_none_uses_latest(self, m_which, m_subp, m_tmp):
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
         version_none_cfg = {
             'drivers': {'nvidia': {'license-accepted': True, 'version': None}}}
         drivers.handle(
             'ubuntu_drivers', version_none_cfg, myCloud, None, None)
         self.assertEqual(
-            [mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
             m_subp.call_args_list)
 
     def test_specifying_a_version_doesnt_override_license_acceptance(self):
diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
index 20c994d..00bdee3 100644
--- a/cloudinit/distros/__init__.py
+++ b/cloudinit/distros/__init__.py
@@ -396,16 +396,16 @@ class Distro(object):
         else:
             create_groups = True
 
-        adduser_cmd = ['useradd', name]
-        log_adduser_cmd = ['useradd', name]
+        useradd_cmd = ['useradd', name]
+        log_useradd_cmd = ['useradd', name]
         if util.system_is_snappy():
-            adduser_cmd.append('--extrausers')
-            log_adduser_cmd.append('--extrausers')
+            useradd_cmd.append('--extrausers')
+            log_useradd_cmd.append('--extrausers')
 
         # Since we are creating users, we want to carefully validate the
         # inputs. If something goes wrong, we can end up with a system
         # that nobody can login to.
-        adduser_opts = {
+        useradd_opts = {
             "gecos": '--comment',
             "homedir": '--home',
             "primary_group": '--gid',
@@ -418,7 +418,7 @@ class Distro(object):
             "selinux_user": '--selinux-user',
         }
 
-        adduser_flags = {
+        useradd_flags = {
             "no_user_group": '--no-user-group',
             "system": '--system',
             "no_log_init": '--no-log-init',
@@ -453,32 +453,32 @@ class Distro(object):
         # Check the values and create the command
         for key, val in sorted(kwargs.items()):
 
-            if key in adduser_opts and val and isinstance(val, str):
-                adduser_cmd.extend([adduser_opts[key], val])
+            if key in useradd_opts and val and isinstance(val, str):
+                useradd_cmd.extend([useradd_opts[key], val])
 
                 # Redact certain fields from the logs
                 if key in redact_opts:
-                    log_adduser_cmd.extend([adduser_opts[key], 'REDACTED'])
+                    log_useradd_cmd.extend([useradd_opts[key], 'REDACTED'])
                 else:
-                    log_adduser_cmd.extend([adduser_opts[key], val])
+                    log_useradd_cmd.extend([useradd_opts[key], val])
 
-            elif key in adduser_flags and val:
-                adduser_cmd.append(adduser_flags[key])
-                log_adduser_cmd.append(adduser_flags[key])
+            elif key in useradd_flags and val:
+                useradd_cmd.append(useradd_flags[key])
+                log_useradd_cmd.append(useradd_flags[key])
 
         # Don't create the home directory if directed so or if the user is a
         # system user
         if kwargs.get('no_create_home') or kwargs.get('system'):
-            adduser_cmd.append('-M')
-            log_adduser_cmd.append('-M')
+            useradd_cmd.append('-M')
+            log_useradd_cmd.append('-M')
         else:
-            adduser_cmd.append('-m')
-            log_adduser_cmd.append('-m')
+            useradd_cmd.append('-m')
+            log_useradd_cmd.append('-m')
 
         # Run the command
         LOG.debug("Adding user %s", name)
         try:
-            util.subp(adduser_cmd, logstring=log_adduser_cmd)
+            util.subp(useradd_cmd, logstring=log_useradd_cmd)
         except Exception as e:
             util.logexc(LOG, "Failed to create user %s", name)
             raise e
@@ -490,15 +490,15 @@ class Distro(object):
 
         snapuser = kwargs.get('snapuser')
         known = kwargs.get('known', False)
-        adduser_cmd = ["snap", "create-user", "--sudoer", "--json"]
+        create_user_cmd = ["snap", "create-user", "--sudoer", "--json"]
         if known:
-            adduser_cmd.append("--known")
-        adduser_cmd.append(snapuser)
+            create_user_cmd.append("--known")
+        create_user_cmd.append(snapuser)
 
         # Run the command
         LOG.debug("Adding snap user %s", name)
         try:
-            (out, err) = util.subp(adduser_cmd, logstring=adduser_cmd,
+            (out, err) = util.subp(create_user_cmd, logstring=create_user_cmd,
                                    capture=True)
             LOG.debug("snap create-user returned: %s:%s", out, err)
             jobj = util.load_json(out)
diff --git a/cloudinit/distros/arch.py b/cloudinit/distros/arch.py
index b814c8b..9f89c5f 100644
--- a/cloudinit/distros/arch.py
+++ b/cloudinit/distros/arch.py
@@ -12,6 +12,8 @@ from cloudinit import util
 from cloudinit.distros import net_util
 from cloudinit.distros.parsers.hostname import HostnameConf
 
+from cloudinit.net.renderers import RendererNotFoundError
+
 from cloudinit.settings import PER_INSTANCE
 
 import os
@@ -24,6 +26,11 @@ class Distro(distros.Distro):
     network_conf_dir = "/etc/netctl"
     resolve_conf_fn = "/etc/resolv.conf"
     init_cmd = ['systemctl']  # init scripts
+    renderer_configs = {
+        "netplan": {"netplan_path": "/etc/netplan/50-cloud-init.yaml",
+                    "netplan_header": "# generated by cloud-init\n",
+                    "postcmds": True}
+    }
 
     def __init__(self, name, cfg, paths):
         distros.Distro.__init__(self, name, cfg, paths)
@@ -50,6 +57,13 @@ class Distro(distros.Distro):
         self.update_package_sources()
         self.package_command('', pkgs=pkglist)
 
+    def _write_network_config(self, netconfig):
+        try:
+            return self._supported_write_network_config(netconfig)
+        except RendererNotFoundError:
+            # Fall back to old _write_network
+            raise NotImplementedError
+
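A note on the contract assumed by this fallback: raising NotImplementedError from _write_network_config signals the generic distro plumbing to use the legacy writer instead, as the inline comment says. A minimal sketch of that assumption (caller names hypothetical):

    # Sketch only: how a caller is assumed to treat the NotImplementedError
    # raised above when no netplan renderer is available.
    try:
        distro._write_network_config(netconfig)
    except NotImplementedError:
        # fall back to the legacy ENI-style writer
        distro._write_network(settings)
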
     def _write_network(self, settings):
         entries = net_util.translate_network(settings)
         LOG.debug("Translated ubuntu style network settings %s into %s",
diff --git a/cloudinit/distros/debian.py b/cloudinit/distros/debian.py
index d517fb8..0ad93ff 100644
--- a/cloudinit/distros/debian.py
+++ b/cloudinit/distros/debian.py
@@ -36,14 +36,14 @@ ENI_HEADER = """# This file is generated from information provided by
 # network: {config: disabled}
 """
 
-NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init.cfg"
+NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init"
 LOCALE_CONF_FN = "/etc/default/locale"
 
 
 class Distro(distros.Distro):
     hostname_conf_fn = "/etc/hostname"
     network_conf_fn = {
-        "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
+        "eni": "/etc/network/interfaces.d/50-cloud-init",
         "netplan": "/etc/netplan/50-cloud-init.yaml"
     }
     renderer_configs = {
diff --git a/cloudinit/distros/freebsd.py b/cloudinit/distros/freebsd.py
index ff22d56..f7825fd 100644
--- a/cloudinit/distros/freebsd.py
+++ b/cloudinit/distros/freebsd.py
@@ -185,10 +185,10 @@ class Distro(distros.Distro):
             LOG.info("User %s already exists, skipping.", name)
             return False
 
-        adduser_cmd = ['pw', 'useradd', '-n', name]
-        log_adduser_cmd = ['pw', 'useradd', '-n', name]
+        pw_useradd_cmd = ['pw', 'useradd', '-n', name]
+        log_pw_useradd_cmd = ['pw', 'useradd', '-n', name]
 
-        adduser_opts = {
+        pw_useradd_opts = {
             "homedir": '-d',
             "gecos": '-c',
             "primary_group": '-g',
@@ -196,34 +196,34 @@ class Distro(distros.Distro):
             "shell": '-s',
             "inactive": '-E',
         }
-        adduser_flags = {
+        pw_useradd_flags = {
             "no_user_group": '--no-user-group',
             "system": '--system',
             "no_log_init": '--no-log-init',
         }
 
         for key, val in kwargs.items():
-            if (key in adduser_opts and val and
+            if (key in pw_useradd_opts and val and
                isinstance(val, six.string_types)):
-                adduser_cmd.extend([adduser_opts[key], val])
+                pw_useradd_cmd.extend([pw_useradd_opts[key], val])
+                log_pw_useradd_cmd.extend([pw_useradd_opts[key], val])
 
-            elif key in adduser_flags and val:
-                adduser_cmd.append(adduser_flags[key])
-                log_adduser_cmd.append(adduser_flags[key])
+            elif key in pw_useradd_flags and val:
+                pw_useradd_cmd.append(pw_useradd_flags[key])
+                log_pw_useradd_cmd.append(pw_useradd_flags[key])
 
         if 'no_create_home' in kwargs or 'system' in kwargs:
-            adduser_cmd.append('-d/nonexistent')
-            log_adduser_cmd.append('-d/nonexistent')
+            pw_useradd_cmd.append('-d/nonexistent')
+            log_pw_useradd_cmd.append('-d/nonexistent')
         else:
-            adduser_cmd.append('-d/usr/home/%s' % name)
-            adduser_cmd.append('-m')
-            log_adduser_cmd.append('-d/usr/home/%s' % name)
-            log_adduser_cmd.append('-m')
+            pw_useradd_cmd.append('-d/usr/home/%s' % name)
+            pw_useradd_cmd.append('-m')
+            log_pw_useradd_cmd.append('-d/usr/home/%s' % name)
+            log_pw_useradd_cmd.append('-m')
 
         # Run the command
         LOG.info("Adding user %s", name)
         try:
-            util.subp(adduser_cmd, logstring=log_adduser_cmd)
+            util.subp(pw_useradd_cmd, logstring=log_pw_useradd_cmd)
         except Exception as e:
             util.logexc(LOG, "Failed to create user %s", name)
             raise e
diff --git a/cloudinit/distros/opensuse.py b/cloudinit/distros/opensuse.py
index 1bfe047..e41e2f7 100644
--- a/cloudinit/distros/opensuse.py
+++ b/cloudinit/distros/opensuse.py
@@ -38,6 +38,8 @@ class Distro(distros.Distro):
         'sysconfig': {
             'control': 'etc/sysconfig/network/config',
             'iface_templates': '%(base)s/network/ifcfg-%(name)s',
+            'netrules_path': (
+                'etc/udev/rules.d/85-persistent-net-cloud-init.rules'),
             'route_templates': {
                 'ipv4': '%(base)s/network/ifroute-%(name)s',
                 'ipv6': '%(base)s/network/ifroute-%(name)s',
diff --git a/cloudinit/distros/parsers/sys_conf.py b/cloudinit/distros/parsers/sys_conf.py
index c27b5d5..44df17d 100644
--- a/cloudinit/distros/parsers/sys_conf.py
+++ b/cloudinit/distros/parsers/sys_conf.py
@@ -43,6 +43,13 @@ def _contains_shell_variable(text):
 
 
 class SysConf(configobj.ConfigObj):
+    """A configobj.ConfigObj subclass specialised for sysconfig files.
+
+    :param contents:
+        The sysconfig file to parse, in a format accepted by
+        ``configobj.ConfigObj.__init__`` (i.e. "a filename, file like object,
+        or list of lines").
+    """
     def __init__(self, contents):
         configobj.ConfigObj.__init__(self, contents,
                                      interpolation=False,
diff --git a/cloudinit/distros/ubuntu.py b/cloudinit/distros/ubuntu.py
index 6815410..e5fcbc5 100644
--- a/cloudinit/distros/ubuntu.py
+++ b/cloudinit/distros/ubuntu.py
@@ -21,6 +21,21 @@ LOG = logging.getLogger(__name__)
 
 class Distro(debian.Distro):
 
+    def __init__(self, name, cfg, paths):
+        super(Distro, self).__init__(name, cfg, paths)
+        # Ubuntu specific network cfg locations
+        self.network_conf_fn = {
+            "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
+            "netplan": "/etc/netplan/50-cloud-init.yaml"
+        }
+        self.renderer_configs = {
+            "eni": {"eni_path": self.network_conf_fn["eni"],
+                    "eni_header": debian.ENI_HEADER},
+            "netplan": {"netplan_path": self.network_conf_fn["netplan"],
+                        "netplan_header": debian.ENI_HEADER,
+                        "postcmds": True}
+        }
+
     @property
     def preferred_ntp_clients(self):
         """The preferred ntp client is dependent on the version."""
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index 3642fb1..ea707c0 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -9,6 +9,7 @@ import errno
 import logging
 import os
 import re
+from functools import partial
 
 from cloudinit.net.network_state import mask_to_net_prefix
 from cloudinit import util
@@ -264,46 +265,29 @@ def find_fallback_nic(blacklist_drivers=None):
 
 
 def generate_fallback_config(blacklist_drivers=None, config_driver=None):
-    """Determine which attached net dev is most likely to have a connection and
-       generate network state to run dhcp on that interface"""
-
+    """Generate network cfg v2 for dhcp on the NIC most likely connected."""
     if not config_driver:
         config_driver = False
 
     target_name = find_fallback_nic(blacklist_drivers=blacklist_drivers)
-    if target_name:
-        target_mac = read_sys_net_safe(target_name, 'address')
-        nconf = {'config': [], 'version': 1}
-        cfg = {'type': 'physical', 'name': target_name,
-               'mac_address': target_mac, 'subnets': [{'type': 'dhcp'}]}
-        # inject the device driver name, dev_id into config if enabled and
-        # device has a valid device driver value
-        if config_driver:
-            driver = device_driver(target_name)
-            if driver:
-                cfg['params'] = {
-                    'driver': driver,
-                    'device_id': device_devid(target_name),
-                }
-        nconf['config'].append(cfg)
-        return nconf
-    else:
+    if not target_name:
         # can't read any interfaces addresses (or there are none); give up
         return None
+    target_mac = read_sys_net_safe(target_name, 'address')
+    cfg = {'dhcp4': True, 'set-name': target_name,
+           'match': {'macaddress': target_mac.lower()}}
+    if config_driver:
+        driver = device_driver(target_name)
+        if driver:
+            cfg['match']['driver'] = driver
+    nconf = {'ethernets': {target_name: cfg}, 'version': 2}
+    return nconf
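For reference, the v2 fallback config now returned has this shape (interface name and MAC illustrative; compare the updated expectations in cloudinit/net/tests/test_init.py below):

    # Illustrative return value of generate_fallback_config() when eth1
    # with MAC aa:bb:cc:aa:bb:cc is selected as the fallback NIC.
    {'version': 2,
     'ethernets': {
         'eth1': {'dhcp4': True,
                  'set-name': 'eth1',
                  'match': {'macaddress': 'aa:bb:cc:aa:bb:cc'}}}}
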
 
 
-def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
-    """read the network config and rename devices accordingly.
-    if strict_present is false, then do not raise exception if no devices
-    match.  if strict_busy is false, then do not raise exception if the
-    device cannot be renamed because it is currently configured.
-
-    renames are only attempted for interfaces of type 'physical'.  It is
-    expected that the network system will create other devices with the
-    correct name in place."""
+def extract_physdevs(netcfg):
 
     def _version_1(netcfg):
-        renames = []
+        physdevs = []
         for ent in netcfg.get('config', {}):
             if ent.get('type') != 'physical':
                 continue
@@ -317,11 +301,11 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
                 driver = device_driver(name)
             if not device_id:
                 device_id = device_devid(name)
-            renames.append([mac, name, driver, device_id])
-        return renames
+            physdevs.append([mac, name, driver, device_id])
+        return physdevs
 
     def _version_2(netcfg):
-        renames = []
+        physdevs = []
         for ent in netcfg.get('ethernets', {}).values():
             # only rename if configured to do so
             name = ent.get('set-name')
@@ -337,16 +321,69 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
                 driver = device_driver(name)
             if not device_id:
                 device_id = device_devid(name)
-            renames.append([mac, name, driver, device_id])
-        return renames
+            physdevs.append([mac, name, driver, device_id])
+        return physdevs
+
+    version = netcfg.get('version')
+    if version == 1:
+        return _version_1(netcfg)
+    elif version == 2:
+        return _version_2(netcfg)
+
+    raise RuntimeError('Unknown network config version: %s' % version)
+
+
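A quick usage sketch for extract_physdevs (input values hypothetical): both config versions normalise to [mac, name, driver, device_id] entries, with driver and device_id looked up from sysfs when absent from the config.

    netcfg = {
        'version': 1,
        'config': [{'type': 'physical', 'name': 'eth0',
                    'mac_address': 'aa:bb:cc:dd:ee:ff',
                    'params': {'driver': 'virtio',
                               'device_id': '0x1000'}}],
    }
    extract_physdevs(netcfg)
    # -> [['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000']]
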
+def wait_for_physdevs(netcfg, strict=True):
+    physdevs = extract_physdevs(netcfg)
+
+    # map of expected mac addr -> iface name
+    expected_ifaces = dict([(iface[0], iface[1]) for iface in physdevs])
+    expected_macs = set(expected_ifaces.keys())
+
+    # set of current macs
+    present_macs = get_interfaces_by_mac().keys()
+
+    # compare the set of expected mac address values to
+    # the current macs present; we only check MAC as cloud-init
+    # has not yet renamed interfaces and the netcfg may include
+    # such renames.
+    for _ in range(0, 5):
+        if expected_macs.issubset(present_macs):
+            LOG.debug('net: all expected physical devices present')
+            return
 
-    if netcfg.get('version') == 1:
-        return _rename_interfaces(_version_1(netcfg))
-    elif netcfg.get('version') == 2:
-        return _rename_interfaces(_version_2(netcfg))
+        missing = expected_macs.difference(present_macs)
+        LOG.debug('net: waiting for expected net devices: %s', missing)
+        for mac in missing:
+            # trigger a settle, unless this interface exists
+            syspath = sys_dev_path(expected_ifaces[mac])
+            settle = partial(util.udevadm_settle, exists=syspath)
+            msg = 'Waiting for udev events to settle or %s exists' % syspath
+            util.log_time(LOG.debug, msg, func=settle)
 
-    raise RuntimeError('Failed to apply network config names. Found bad'
-                       ' network config version: %s' % netcfg.get('version'))
+        # update present_macs after settles
+        present_macs = get_interfaces_by_mac().keys()
+
+    msg = 'Not all expected physical devices present: %s' % missing
+    LOG.warning(msg)
+    if strict:
+        raise RuntimeError(msg)
+
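The retry shape used above, reduced to a standalone sketch (helper names hypothetical): up to five passes, with a bounded udev settle per still-missing device between reads of the present MACs.

    def wait_for(expected_macs, get_present, settle, retries=5):
        """Sketch of the settle-and-retry loop in wait_for_physdevs."""
        for _ in range(retries):
            missing = expected_macs - set(get_present())
            if not missing:
                return True
            for mac in missing:
                settle(mac)  # udevadm settle, bounded by sysfs path existing
        return False
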
+
+def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
+    """read the network config and rename devices accordingly.
+    if strict_present is false, then do not raise exception if no devices
+    match.  if strict_busy is false, then do not raise exception if the
+    device cannot be renamed because it is currently configured.
+
+    renames are only attempted for interfaces of type 'physical'.  It is
+    expected that the network system will create other devices with the
+    correct name in place."""
+
+    try:
+        _rename_interfaces(extract_physdevs(netcfg))
+    except RuntimeError as e:
+        raise RuntimeError('Failed to apply network config names: %s' % e)
 
 
 def interface_has_own_mac(ifname, strict=False):
@@ -622,6 +659,8 @@ def get_interfaces():
             continue
         if is_vlan(name):
             continue
+        if is_bond(name):
+            continue
         mac = get_interface_mac(name)
         # some devices may not have a mac (tun0)
         if not mac:
@@ -677,7 +716,7 @@ class EphemeralIPv4Network(object):
     """
 
     def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None,
-                 connectivity_url=None):
+                 connectivity_url=None, static_routes=None):
         """Setup context manager and validate call signature.
 
         @param interface: Name of the network interface to bring up.
@@ -688,6 +727,7 @@ class EphemeralIPv4Network(object):
         @param router: Optionally the default gateway IP.
         @param connectivity_url: Optionally, a URL to verify if a usable
            connection already exists.
+        @param static_routes: Optionally a list of static routes from DHCP
         """
         if not all([interface, ip, prefix_or_mask, broadcast]):
             raise ValueError(
@@ -704,6 +744,7 @@ class EphemeralIPv4Network(object):
         self.ip = ip
         self.broadcast = broadcast
         self.router = router
+        self.static_routes = static_routes
         self.cleanup_cmds = []  # List of commands to run to cleanup state.
 
     def __enter__(self):
@@ -716,7 +757,21 @@ class EphemeralIPv4Network(object):
                 return
 
         self._bringup_device()
-        if self.router:
+
+        # rfc3442 requires us to ignore the router config *if* classless static
+        # routes are provided.
+        #
+        # https://tools.ietf.org/html/rfc3442
+        #
+        # If the DHCP server returns both a Classless Static Routes option and
+        # a Router option, the DHCP client MUST ignore the Router option.
+        #
+        # Similarly, if the DHCP server returns both a Classless Static Routes
+        # option and a Static Routes option, the DHCP client MUST ignore the
+        # Static Routes option.
+        if self.static_routes:
+            self._bringup_static_routes()
+        elif self.router:
             self._bringup_router()
 
     def __exit__(self, excp_type, excp_value, excp_traceback):
@@ -760,6 +815,20 @@ class EphemeralIPv4Network(object):
                 ['ip', '-family', 'inet', 'addr', 'del', cidr, 'dev',
                  self.interface])
 
+    def _bringup_static_routes(self):
+        # static_routes = [("169.254.169.254/32", "130.56.248.255"),
+        #                  ("0.0.0.0/0", "130.56.240.1")]
+        for net_address, gateway in self.static_routes:
+            via_arg = []
+            if gateway != "0.0.0.0":
+                via_arg = ['via', gateway]
+            util.subp(
+                ['ip', '-4', 'route', 'add', net_address] + via_arg +
+                ['dev', self.interface], capture=True)
+            self.cleanup_cmds.insert(
+                0, ['ip', '-4', 'route', 'del', net_address] + via_arg +
+                   ['dev', self.interface])
+
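With the example static_routes in the comment above, the resulting invocations would be (device name illustrative):

    setup_cmds = [
        ['ip', '-4', 'route', 'add', '169.254.169.254/32',
         'via', '130.56.248.255', 'dev', 'eth0'],
        ['ip', '-4', 'route', 'add', '0.0.0.0/0',
         'via', '130.56.240.1', 'dev', 'eth0'],
    ]
    # Matching 'ip -4 route del ...' commands are inserted at the front of
    # self.cleanup_cmds, so teardown removes routes in reverse order.
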
     def _bringup_router(self):
         """Perform the ip commands to fully setup the router if needed."""
         # Check if a default route exists and exit if it does
diff --git a/cloudinit/net/cmdline.py b/cloudinit/net/cmdline.py
index f89a0f7..556a10f 100755
--- a/cloudinit/net/cmdline.py
+++ b/cloudinit/net/cmdline.py
@@ -177,21 +177,13 @@ def _is_initramfs_netconfig(files, cmdline):
     return False
 
 
-def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
+def read_initramfs_config(files=None, mac_addrs=None, cmdline=None):
     if cmdline is None:
         cmdline = util.get_cmdline()
 
     if files is None:
         files = _get_klibc_net_cfg_files()
 
-    if 'network-config=' in cmdline:
-        data64 = None
-        for tok in cmdline.split():
-            if tok.startswith("network-config="):
-                data64 = tok.split("=", 1)[1]
-        if data64:
-            return util.load_yaml(_b64dgz(data64))
-
     if not _is_initramfs_netconfig(files, cmdline):
         return None
 
@@ -204,4 +196,19 @@ def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
 
     return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)
 
+
+def read_kernel_cmdline_config(cmdline=None):
+    if cmdline is None:
+        cmdline = util.get_cmdline()
+
+    if 'network-config=' in cmdline:
+        data64 = None
+        for tok in cmdline.split():
+            if tok.startswith("network-config="):
+                data64 = tok.split("=", 1)[1]
+        if data64:
+            return util.load_yaml(_b64dgz(data64))
+
+    return None
+
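The split leaves read_kernel_cmdline_config handling only the network-config= parameter, whose value _b64dgz is expected to reverse: base64-encoded, optionally gzip-compressed YAML. A hedged sketch of producing such a value (helper name hypothetical):

    import base64
    import gzip
    import io

    def encode_cmdline_network_config(yaml_text):
        """Sketch: gzip then base64 a YAML string for network-config=."""
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode='wb') as fp:
            fp.write(yaml_text.encode('utf-8'))
        return base64.b64encode(buf.getvalue()).decode('ascii')

    # e.g. kernel cmdline: network-config=<result>
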
 # vi: ts=4 expandtab
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
index c98a97c..1737991 100644
--- a/cloudinit/net/dhcp.py
+++ b/cloudinit/net/dhcp.py
@@ -92,10 +92,14 @@ class EphemeralDHCPv4(object):
         nmap = {'interface': 'interface', 'ip': 'fixed-address',
                 'prefix_or_mask': 'subnet-mask',
                 'broadcast': 'broadcast-address',
+                'static_routes': 'rfc3442-classless-static-routes',
                 'router': 'routers'}
         kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()])
         if not kwargs['broadcast']:
             kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip'])
+        if kwargs['static_routes']:
+            kwargs['static_routes'] = (
+                parse_static_routes(kwargs['static_routes']))
         if self.connectivity_url:
             kwargs['connectivity_url'] = self.connectivity_url
         ephipv4 = EphemeralIPv4Network(**kwargs)
@@ -272,4 +276,90 @@ def networkd_get_option_from_leases(keyname, leases_d=None):
             return data[keyname]
     return None
 
+
+def parse_static_routes(rfc3442):
+    """ parse rfc3442 format and return a list containing tuple of strings.
+
+    The tuple is composed of the network_address (including net length) and
+    gateway for a parsed static route.
+
+    @param rfc3442: string in rfc3442 format
+    @returns: list of tuple(str, str) for all valid parsed routes until the
+              first parsing error.
+
+    E.g.
+    sr = parse_static_routes("32,169,254,169,254,130,56,248,255,0,130,56,240,1")
+    sr = [
+        ("169.254.169.254/32", "130.56.248.255"), ("0.0.0.0/0", "130.56.240.1")
+    ]
+
+    Python version of isc-dhclient's hooks:
+       /etc/dhcp/dhclient-exit-hooks.d/rfc3442-classless-routes
+    """
+    # raw strings from dhcp lease may end in semi-colon
+    rfc3442 = rfc3442.rstrip(";")
+    tokens = rfc3442.split(',')
+    static_routes = []
+
+    def _trunc_error(cidr, required, remain):
+        msg = ("RFC3442 string malformed.  Current route has CIDR of %s "
+               "and requires %s significant octets, but only %s remain. "
+               "Verify DHCP rfc3442-classless-static-routes value: %s"
+               % (cidr, required, remain, rfc3442))
+        LOG.error(msg)
+
+    current_idx = 0
+    for idx, tok in enumerate(tokens):
+        if idx < current_idx:
+            continue
+        net_length = int(tok)
+        if net_length in range(25, 33):
+            req_toks = 9
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+5])
+            gateway = ".".join(tokens[idx+5:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length in range(17, 25):
+            req_toks = 8
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+4] + ["0"])
+            gateway = ".".join(tokens[idx+4:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length in range(9, 17):
+            req_toks = 7
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+3] + ["0", "0"])
+            gateway = ".".join(tokens[idx+3:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length in range(1, 9):
+            req_toks = 6
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+2] + ["0", "0", "0"])
+            gateway = ".".join(tokens[idx+2:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length == 0:
+            req_toks = 5
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = "0.0.0.0"
+            gateway = ".".join(tokens[idx+1:idx+req_toks])
+            current_idx = idx + req_toks
+        else:
+            LOG.error('Parsed invalid net length "%s".  Verify DHCP '
+                      'rfc3442-classless-static-routes value.', net_length)
+            return static_routes
+
+        static_routes.append(("%s/%s" % (net_address, net_length), gateway))
+
+    return static_routes
+
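Octet accounting for the branches above: each route consumes one length octet, ceil(net_length/8) significant destination octets (zero-padded to a full address), and four gateway octets, which is where the req_toks values 9/8/7/6/5 come from. A worked example matching the tests below:

    parse_static_routes("24,192,168,74,192,168,0,4")
    # /24 -> 3 destination octets (192.168.74, padded to 192.168.74.0),
    # then 4 gateway octets:
    # -> [('192.168.74.0/24', '192.168.0.4')]
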
 # vi: ts=4 expandtab
diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
index 3702130..c0c415d 100644
--- a/cloudinit/net/network_state.py
+++ b/cloudinit/net/network_state.py
@@ -596,6 +596,7 @@ class NetworkStateInterpreter(object):
           eno1:
             match:
               macaddress: 00:11:22:33:44:55
+              driver: hv_netsvc
             wakeonlan: true
             dhcp4: true
             dhcp6: false
@@ -631,15 +632,18 @@ class NetworkStateInterpreter(object):
                 'type': 'physical',
                 'name': cfg.get('set-name', eth),
             }
-            mac_address = cfg.get('match', {}).get('macaddress', None)
+            match = cfg.get('match', {})
+            mac_address = match.get('macaddress', None)
             if not mac_address:
                 LOG.debug('NetworkState Version2: missing "macaddress" info '
                           'in config entry: %s: %s', eth, str(cfg))
-            phy_cmd.update({'mac_address': mac_address})
-
+            phy_cmd['mac_address'] = mac_address
+            driver = match.get('driver', None)
+            if driver:
+                phy_cmd['params'] = {'driver': driver}
             for key in ['mtu', 'match', 'wakeonlan']:
                 if key in cfg:
-                    phy_cmd.update({key: cfg.get(key)})
+                    phy_cmd[key] = cfg[key]
 
             subnets = self._v2_to_v1_ipcfg(cfg)
             if len(subnets) > 0:
@@ -673,6 +677,8 @@ class NetworkStateInterpreter(object):
                 'vlan_id': cfg.get('id'),
                 'vlan_link': cfg.get('link'),
             }
+            if 'mtu' in cfg:
+                vlan_cmd['mtu'] = cfg['mtu']
             subnets = self._v2_to_v1_ipcfg(cfg)
             if len(subnets) > 0:
                 vlan_cmd.update({'subnets': subnets})
@@ -722,6 +728,8 @@ class NetworkStateInterpreter(object):
                 'params': dict((v2key_to_v1[k], v) for k, v in
                                item_params.get('parameters', {}).items())
             }
+            if 'mtu' in item_cfg:
+                v1_cmd['mtu'] = item_cfg['mtu']
             subnets = self._v2_to_v1_ipcfg(item_cfg)
             if len(subnets) > 0:
                 v1_cmd.update({'subnets': subnets})
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index a47da0a..be5dede 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -284,6 +284,18 @@ class Renderer(renderer.Renderer):
         ('bond_mode', "mode=%s"),
         ('bond_xmit_hash_policy', "xmit_hash_policy=%s"),
         ('bond_miimon', "miimon=%s"),
+        ('bond_min_links', "min_links=%s"),
+        ('bond_arp_interval', "arp_interval=%s"),
+        ('bond_arp_ip_target', "arp_ip_target=%s"),
+        ('bond_arp_validate', "arp_validate=%s"),
+        ('bond_ad_select', "ad_select=%s"),
+        ('bond_num_grat_arp', "num_grat_arp=%s"),
+        ('bond_downdelay', "downdelay=%s"),
+        ('bond_updelay', "updelay=%s"),
+        ('bond_lacp_rate', "lacp_rate=%s"),
+        ('bond_fail_over_mac', "fail_over_mac=%s"),
+        ('bond_primary', "primary=%s"),
+        ('bond_primary_reselect', "primary_reselect=%s"),
     ])
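These keys extend the v1 bond parameters that get folded into the sysconfig BONDING_OPTS value. A hypothetical rendering for illustration:

    # Hypothetical ifcfg fragment for a bond configured with
    # bond_mode=active-backup, bond_miimon=100, bond_primary=eth0:
    #   BONDING_OPTS="mode=active-backup miimon=100 primary=eth0"
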
 
     bridge_opts_keys = tuple([
diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
index 5139024..91f503c 100644
--- a/cloudinit/net/tests/test_dhcp.py
+++ b/cloudinit/net/tests/test_dhcp.py
@@ -8,7 +8,8 @@ from textwrap import dedent
 import cloudinit.net as net
 from cloudinit.net.dhcp import (
     InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery,
-    parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases)
+    parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases,
+    parse_static_routes)
 from cloudinit.util import ensure_file, write_file
 from cloudinit.tests.helpers import (
     CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call)
@@ -64,6 +65,123 @@ class TestParseDHCPLeasesFile(CiTestCase):
         self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file))
 
 
+class TestDHCPRFC3442(CiTestCase):
+
+    def test_parse_lease_finds_rfc3442_classless_static_routes(self):
+        """parse_dhcp_lease_file returns rfc3442-classless-static-routes."""
+        lease_file = self.tmp_path('leases')
+        content = dedent("""
+            lease {
+              interface "wlp3s0";
+              fixed-address 192.168.2.74;
+              option subnet-mask 255.255.255.0;
+              option routers 192.168.2.1;
+              option rfc3442-classless-static-routes 0,130,56,240,1;
+              renew 4 2017/07/27 18:02:30;
+              expire 5 2017/07/28 07:08:15;
+            }
+        """)
+        expected = [
+            {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
+             'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1',
+             'rfc3442-classless-static-routes': '0,130,56,240,1',
+             'renew': '4 2017/07/27 18:02:30',
+             'expire': '5 2017/07/28 07:08:15'}]
+        write_file(lease_file, content)
+        self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file))
+
+    @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_obtain_lease_parses_static_routes(self, m_maybe, m_ipv4):
+        """EphemeralDHPCv4 parses rfc3442 routes for EphemeralIPv4Network"""
+        lease = [
+            {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
+             'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1',
+             'rfc3442-classless-static-routes': '0,130,56,240,1',
+             'renew': '4 2017/07/27 18:02:30',
+             'expire': '5 2017/07/28 07:08:15'}]
+        m_maybe.return_value = lease
+        eph = net.dhcp.EphemeralDHCPv4()
+        eph.obtain_lease()
+        expected_kwargs = {
+            'interface': 'wlp3s0',
+            'ip': '192.168.2.74',
+            'prefix_or_mask': '255.255.255.0',
+            'broadcast': '192.168.2.255',
+            'static_routes': [('0.0.0.0/0', '130.56.240.1')],
+            'router': '192.168.2.1'}
+        m_ipv4.assert_called_with(**expected_kwargs)
+
+
+class TestDHCPParseStaticRoutes(CiTestCase):
+
+    with_logs = True
+
+    def test_parse_static_routes_empty_string(self):
+        self.assertEqual([], parse_static_routes(""))
+
+    def test_parse_static_routes_invalid_input_returns_empty_list(self):
+        rfc3442 = "32,169,254,169,254,130,56,248"
+        self.assertEqual([], parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_bogus_width_returns_empty_list(self):
+        rfc3442 = "33,169,254,169,254,130,56,248"
+        self.assertEqual([], parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_single_ip(self):
+        rfc3442 = "32,169,254,169,254,130,56,248,255"
+        self.assertEqual([('169.254.169.254/32', '130.56.248.255')],
+                         parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_single_ip_handles_trailing_semicolon(self):
+        rfc3442 = "32,169,254,169,254,130,56,248,255;"
+        self.assertEqual([('169.254.169.254/32', '130.56.248.255')],
+                         parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_default_route(self):
+        rfc3442 = "0,130,56,240,1"
+        self.assertEqual([('0.0.0.0/0', '130.56.240.1')],
+                         parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_class_c_b_a(self):
+        class_c = "24,192,168,74,192,168,0,4"
+        class_b = "16,172,16,172,16,0,4"
+        class_a = "8,10,10,0,0,4"
+        rfc3442 = ",".join([class_c, class_b, class_a])
+        self.assertEqual(sorted([
+            ("192.168.74.0/24", "192.168.0.4"),
+            ("172.16.0.0/16", "172.16.0.4"),
+            ("10.0.0.0/8", "10.0.0.4")
+        ]), sorted(parse_static_routes(rfc3442)))
+
+    def test_parse_static_routes_logs_error_truncated(self):
+        bad_rfc3442 = {
+            "class_c": "24,169,254,169,10",
+            "class_b": "16,172,16,10",
+            "class_a": "8,10,10",
+            "gateway": "0,0",
+            "netlen":  "33,0",
+        }
+        for rfc3442 in bad_rfc3442.values():
+            self.assertEqual([], parse_static_routes(rfc3442))
+
+        logs = self.logs.getvalue()
+        self.assertEqual(len(bad_rfc3442.keys()), len(logs.splitlines()))
+
+    def test_parse_static_routes_returns_valid_routes_until_parse_err(self):
+        class_c = "24,192,168,74,192,168,0,4"
+        class_b = "16,172,16,172,16,0,4"
+        class_a_error = "8,10,10,0,0"
+        rfc3442 = ",".join([class_c, class_b, class_a_error])
+        self.assertEqual(sorted([
+            ("192.168.74.0/24", "192.168.0.4"),
+            ("172.16.0.0/16", "172.16.0.4"),
+        ]), sorted(parse_static_routes(rfc3442)))
+
+        logs = self.logs.getvalue()
+        self.assertIn(rfc3442, logs.splitlines()[0])
+
+
 class TestDHCPDiscoveryClean(CiTestCase):
     with_logs = True
 
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 6d2affe..d2e38f0 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -212,9 +212,9 @@ class TestGenerateFallbackConfig(CiTestCase):
         mac = 'aa:bb:cc:aa:bb:cc'
         write_file(os.path.join(self.sysdir, 'eth1', 'address'), mac)
         expected = {
-            'config': [{'type': 'physical', 'mac_address': mac,
-                        'name': 'eth1', 'subnets': [{'type': 'dhcp'}]}],
-            'version': 1}
+            'ethernets': {'eth1': {'match': {'macaddress': mac},
+                                   'dhcp4': True, 'set-name': 'eth1'}},
+            'version': 2}
         self.assertEqual(expected, net.generate_fallback_config())
 
     def test_generate_fallback_finds_dormant_eth_with_mac(self):
@@ -223,9 +223,9 @@ class TestGenerateFallbackConfig(CiTestCase):
         mac = 'aa:bb:cc:aa:bb:cc'
         write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac)
         expected = {
-            'config': [{'type': 'physical', 'mac_address': mac,
-                        'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}],
-            'version': 1}
+            'ethernets': {'eth0': {'match': {'macaddress': mac}, 'dhcp4': True,
+                                   'set-name': 'eth0'}},
+            'version': 2}
         self.assertEqual(expected, net.generate_fallback_config())
 
     def test_generate_fallback_finds_eth_by_operstate(self):
@@ -233,9 +233,10 @@ class TestGenerateFallbackConfig(CiTestCase):
         mac = 'aa:bb:cc:aa:bb:cc'
         write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac)
         expected = {
-            'config': [{'type': 'physical', 'mac_address': mac,
-                        'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}],
-            'version': 1}
+            'ethernets': {
+                'eth0': {'dhcp4': True, 'match': {'macaddress': mac},
+                         'set-name': 'eth0'}},
+            'version': 2}
         valid_operstates = ['dormant', 'down', 'lowerlayerdown', 'unknown']
         for state in valid_operstates:
             write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), state)
@@ -549,6 +550,45 @@ class TestEphemeralIPV4Network(CiTestCase):
             self.assertEqual(expected_setup_calls, m_subp.call_args_list)
         m_subp.assert_has_calls(expected_teardown_calls)
 
+    def test_ephemeral_ipv4_network_with_rfc3442_static_routes(self, m_subp):
+        params = {
+            'interface': 'eth0', 'ip': '192.168.2.2',
+            'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255',
+            'static_routes': [('169.254.169.254/32', '192.168.2.1'),
+                              ('0.0.0.0/0', '192.168.2.1')],
+            'router': '192.168.2.1'}
+        expected_setup_calls = [
+            mock.call(
+                ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/24',
+                 'broadcast', '192.168.2.255', 'dev', 'eth0'],
+                capture=True, update_env={'LANG': 'C'}),
+            mock.call(
+                ['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', 'up'],
+                capture=True),
+            mock.call(
+                ['ip', '-4', 'route', 'add', '169.254.169.254/32',
+                 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
+            mock.call(
+                ['ip', '-4', 'route', 'add', '0.0.0.0/0',
+                 'via', '192.168.2.1', 'dev', 'eth0'], capture=True)]
+        expected_teardown_calls = [
+            mock.call(
+                ['ip', '-4', 'route', 'del', '0.0.0.0/0',
+                 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
+            mock.call(
+                ['ip', '-4', 'route', 'del', '169.254.169.254/32',
+                 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
+            mock.call(
+                ['ip', '-family', 'inet', 'link', 'set', 'dev',
+                 'eth0', 'down'], capture=True),
+            mock.call(
+                ['ip', '-family', 'inet', 'addr', 'del',
+                 '192.168.2.2/24', 'dev', 'eth0'], capture=True)
+        ]
+        with net.EphemeralIPv4Network(**params):
+            self.assertEqual(expected_setup_calls, m_subp.call_args_list)
+        m_subp.assert_has_calls(expected_setup_calls + expected_teardown_calls)
+
 
 class TestApplyNetworkCfgNames(CiTestCase):
     V1_CONFIG = textwrap.dedent("""\
@@ -669,3 +709,216 @@ class TestHasURLConnectivity(HttprettyTestCase):
         httpretty.register_uri(httpretty.GET, self.url, body={}, status=404)
         self.assertFalse(
             net.has_url_connectivity(self.url), 'Expected False on url fail')
+
+
+def _mk_v1_phys(mac, name, driver, device_id):
+    v1_cfg = {'type': 'physical', 'name': name, 'mac_address': mac}
+    params = {}
+    if driver:
+        params.update({'driver': driver})
+    if device_id:
+        params.update({'device_id': device_id})
+
+    if params:
+        v1_cfg.update({'params': params})
+
+    return v1_cfg
+
+
+def _mk_v2_phys(mac, name, driver=None, device_id=None):
+    v2_cfg = {'set-name': name, 'match': {'macaddress': mac}}
+    if driver:
+        v2_cfg['match'].update({'driver': driver})
+    if device_id:
+        v2_cfg['match'].update({'device_id': device_id})
+
+    return v2_cfg
+
+
+class TestExtractPhysdevs(CiTestCase):
+
+    def setUp(self):
+        super(TestExtractPhysdevs, self).setUp()
+        self.add_patch('cloudinit.net.device_driver', 'm_driver')
+        self.add_patch('cloudinit.net.device_devid', 'm_devid')
+
+    def test_extract_physdevs_looks_up_driver_v1(self):
+        driver = 'virtio'
+        self.m_driver.return_value = driver
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'],
+        ]
+        netcfg = {
+            'version': 1,
+            'config': [_mk_v1_phys(*args) for args in physdevs],
+        }
+        # insert the driver value for verification
+        physdevs[0][2] = driver
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+        self.m_driver.assert_called_with('eth0')
+
+    def test_extract_physdevs_looks_up_driver_v2(self):
+        driver = 'virtio'
+        self.m_driver.return_value = driver
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
+        }
+        # insert the driver value for verification
+        physdevs[0][2] = driver
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+        self.m_driver.assert_called_with('eth0')
+
+    def test_extract_physdevs_looks_up_devid_v1(self):
+        devid = '0x1000'
+        self.m_devid.return_value = devid
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None],
+        ]
+        netcfg = {
+            'version': 1,
+            'config': [_mk_v1_phys(*args) for args in physdevs],
+        }
+        # insert the device_id value for verification
+        physdevs[0][3] = devid
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+        self.m_devid.assert_called_with('eth0')
+
+    def test_extract_physdevs_looks_up_devid_v2(self):
+        devid = '0x1000'
+        self.m_devid.return_value = devid
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
+        }
+        # insert the device_id value for verification
+        physdevs[0][3] = devid
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+        self.m_devid.assert_called_with('eth0')
+
+    def test_get_v1_type_physical(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+            ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'],
+        ]
+        netcfg = {
+            'version': 1,
+            'config': [_mk_v1_phys(*args) for args in physdevs],
+        }
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+
+    def test_get_v2_type_physical(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+            ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
+        }
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+
+    def test_get_v2_type_physical_skips_if_no_set_name(self):
+        netcfg = {
+            'version': 2,
+            'ethernets': {
+                'ens3': {
+                    'match': {'macaddress': '00:11:22:33:44:55'},
+                }
+            }
+        }
+        self.assertEqual([], net.extract_physdevs(netcfg))
+
+    def test_runtime_error_on_unknown_netcfg_version(self):
+        with self.assertRaises(RuntimeError):
+            net.extract_physdevs({'version': 3, 'awesome_config': []})
+
+
+class TestWaitForPhysdevs(CiTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestWaitForPhysdevs, self).setUp()
+        self.add_patch('cloudinit.net.get_interfaces_by_mac',
+                       'm_get_iface_mac')
+        self.add_patch('cloudinit.util.udevadm_settle', 'm_udev_settle')
+
+    def test_wait_for_physdevs_skips_settle_if_all_present(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args)
+                          for args in physdevs},
+        }
+        self.m_get_iface_mac.side_effect = iter([
+            {'aa:bb:cc:dd:ee:ff': 'eth0',
+             '00:11:22:33:44:55': 'ens3'},
+        ])
+        net.wait_for_physdevs(netcfg)
+        self.assertEqual(0, self.m_udev_settle.call_count)
+
+    def test_wait_for_physdevs_calls_udev_settle_on_missing(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args)
+                          for args in physdevs},
+        }
+        self.m_get_iface_mac.side_effect = iter([
+            {'aa:bb:cc:dd:ee:ff': 'eth0'},   # first call ens3 is missing
+            {'aa:bb:cc:dd:ee:ff': 'eth0',
+             '00:11:22:33:44:55': 'ens3'},   # second call has both
+        ])
+        net.wait_for_physdevs(netcfg)
+        self.m_udev_settle.assert_called_with(exists=net.sys_dev_path('ens3'))
+
+    def test_wait_for_physdevs_raise_runtime_error_if_missing_and_strict(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args)
+                          for args in physdevs},
+        }
+        self.m_get_iface_mac.return_value = {}
+        with self.assertRaises(RuntimeError):
+            net.wait_for_physdevs(netcfg)
+
+        self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count)
+
+    def test_wait_for_physdevs_no_raise_if_not_strict(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args)
+                          for args in physdevs},
+        }
+        self.m_get_iface_mac.return_value = {}
+        net.wait_for_physdevs(netcfg, strict=False)
+        self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count)
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index b1ebaad..2060d81 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -39,6 +39,7 @@ CFG_BUILTIN = {
         'Hetzner',
         'IBMCloud',
         'Oracle',
+        'Exoscale',
         # At the end to act as a 'catch' when none of the above work...
         'None',
     ],
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index b7440c1..4984fa8 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -26,9 +26,14 @@ from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
 from cloudinit import util
 from cloudinit.reporting import events
 
-from cloudinit.sources.helpers.azure import (azure_ds_reporter,
-                                             azure_ds_telemetry_reporter,
-                                             get_metadata_from_fabric)
+from cloudinit.sources.helpers.azure import (
+    azure_ds_reporter,
+    azure_ds_telemetry_reporter,
+    get_metadata_from_fabric,
+    get_boot_telemetry,
+    get_system_info,
+    report_diagnostic_event,
+    EphemeralDHCPv4WithReporting)
 
 LOG = logging.getLogger(__name__)
 
@@ -354,7 +359,7 @@ class DataSourceAzure(sources.DataSource):
                 bname = str(pk['fingerprint'] + ".crt")
                 fp_files += [os.path.join(ddir, bname)]
                 LOG.debug("ssh authentication: "
-                          "using fingerprint from fabirc")
+                          "using fingerprint from fabric")
 
         with events.ReportEventStack(
                 name="waiting-for-ssh-public-key",
@@ -419,12 +424,17 @@ class DataSourceAzure(sources.DataSource):
                     ret = load_azure_ds_dir(cdev)
 
             except NonAzureDataSource:
+                report_diagnostic_event(
+                    "Did not find Azure data source in %s" % cdev)
                 continue
             except BrokenAzureDataSource as exc:
                 msg = 'BrokenAzureDataSource: %s' % exc
+                report_diagnostic_event(msg)
                 raise sources.InvalidMetaDataException(msg)
             except util.MountFailedError:
-                LOG.warning("%s was not mountable", cdev)
+                msg = '%s was not mountable' % cdev
+                report_diagnostic_event(msg)
+                LOG.warning(msg)
                 continue
 
             perform_reprovision = reprovision or self._should_reprovision(ret)
@@ -432,6 +442,7 @@ class DataSourceAzure(sources.DataSource):
                 if util.is_FreeBSD():
                     msg = "Free BSD is not supported for PPS VMs"
                     LOG.error(msg)
+                    report_diagnostic_event(msg)
                     raise sources.InvalidMetaDataException(msg)
                 ret = self._reprovision()
             imds_md = get_metadata_from_imds(
@@ -450,7 +461,9 @@ class DataSourceAzure(sources.DataSource):
             break
 
         if not found:
-            raise sources.InvalidMetaDataException('No Azure metadata found')
+            msg = 'No Azure metadata found'
+            report_diagnostic_event(msg)
+            raise sources.InvalidMetaDataException(msg)
 
         if found == ddir:
             LOG.debug("using files cached in %s", ddir)
@@ -469,9 +482,14 @@ class DataSourceAzure(sources.DataSource):
                 self._report_ready(lease=self._ephemeral_dhcp_ctx.lease)
                 self._ephemeral_dhcp_ctx.clean_network()  # Teardown ephemeral
             else:
-                with EphemeralDHCPv4() as lease:
-                    self._report_ready(lease=lease)
-
+                try:
+                    with EphemeralDHCPv4WithReporting(
+                            azure_ds_reporter) as lease:
+                        self._report_ready(lease=lease)
+                except Exception as e:
+                    report_diagnostic_event(
+                        "exception while reporting ready: %s" % e)
+                    raise
         return crawled_data
 
     def _is_platform_viable(self):
@@ -493,6 +511,16 @@ class DataSourceAzure(sources.DataSource):
         if not self._is_platform_viable():
             return False
         try:
+            get_boot_telemetry()
+        except Exception as e:
+            LOG.warning("Failed to get boot telemetry: %s", e)
+
+        try:
+            get_system_info()
+        except Exception as e:
+            LOG.warning("Failed to get system information: %s", e)
+
+        try:
             crawled_data = util.log_time(
                         logfunc=LOG.debug, msg='Crawl of metadata service',
                         func=self.crawl_metadata)
@@ -551,27 +579,55 @@ class DataSourceAzure(sources.DataSource):
         headers = {"Metadata": "true"}
         nl_sock = None
         report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
+        self.imds_logging_threshold = 1
+        self.imds_poll_counter = 1
+        dhcp_attempts = 0
+        vnet_switched = False
+        return_val = None
 
         def exc_cb(msg, exception):
             if isinstance(exception, UrlError) and exception.code == 404:
+                if self.imds_poll_counter == self.imds_logging_threshold:
+                    # Reducing the logging frequency as we are polling IMDS
+                    self.imds_logging_threshold *= 2
+                    LOG.debug("Call to IMDS with arguments %s failed "
+                              "with status code %s after %s retries",
+                              msg, exception.code, self.imds_poll_counter)
+                    LOG.debug("Backing off logging threshold for the same "
+                              "exception to %d", self.imds_logging_threshold)
+                self.imds_poll_counter += 1
                 return True
+
             # If we get an exception while trying to call IMDS, we
             # call DHCP and setup the ephemeral network to acquire the new IP.
+            LOG.debug("Call to IMDS with arguments %s failed  with "
+                      "status code %s", msg, exception.code)
+            report_diagnostic_event("polling IMDS failed with exception %s"
+                                    % exception.code)
             return False
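The 404 branch above implements an exponential log throttle while polling IMDS: the failure is logged on poll counts 1, 2, 4, 8, ... and silenced otherwise. Reduced to a standalone sketch:

    # Sketch of the logging backoff: print only when the attempt count
    # hits the doubling threshold.
    threshold = 1
    for attempt in range(1, 33):
        if attempt == threshold:
            threshold *= 2
            print('still polling IMDS after %d attempts' % attempt)
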
 
         LOG.debug("Wait for vnetswitch to happen")
         while True:
             try:
-                # Save our EphemeralDHCPv4 context so we avoid repeated dhcp
-                self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
-                lease = self._ephemeral_dhcp_ctx.obtain_lease()
+                # Save our EphemeralDHCPv4 context to avoid repeated dhcp
+                with events.ReportEventStack(
+                        name="obtain-dhcp-lease",
+                        description="obtain dhcp lease",
+                        parent=azure_ds_reporter):
+                    self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
+                    lease = self._ephemeral_dhcp_ctx.obtain_lease()
+
+                if vnet_switched:
+                    dhcp_attempts += 1
                 if report_ready:
                     try:
                         nl_sock = netlink.create_bound_netlink_socket()
                     except netlink.NetlinkCreateSocketError as e:
+                        report_diagnostic_event(e)
                         LOG.warning(e)
                         self._ephemeral_dhcp_ctx.clean_network()
-                        return
+                        break
+
                     path = REPORTED_READY_MARKER_FILE
                     LOG.info(
                         "Creating a marker file to report ready: %s", path)
@@ -579,17 +635,33 @@ class DataSourceAzure(sources.DataSource):
                         pid=os.getpid(), time=time()))
                     self._report_ready(lease=lease)
                     report_ready = False
-                    try:
-                        netlink.wait_for_media_disconnect_connect(
-                            nl_sock, lease['interface'])
-                    except AssertionError as error:
-                        LOG.error(error)
-                        return
+
+                    with events.ReportEventStack(
+                            name="wait-for-media-disconnect-connect",
+                            description="wait for vnet switch",
+                            parent=azure_ds_reporter):
+                        try:
+                            netlink.wait_for_media_disconnect_connect(
+                                nl_sock, lease['interface'])
+                        except AssertionError as error:
+                            report_diagnostic_event(error)
+                            LOG.error(error)
+                            break
+
+                    vnet_switched = True
                     self._ephemeral_dhcp_ctx.clean_network()
                 else:
-                    return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS,
-                                   headers=headers, exception_cb=exc_cb,
-                                   infinite=True, log_req_resp=False).contents
+                    with events.ReportEventStack(
+                            name="get-reprovision-data-from-imds",
+                            description="get reprovision data from imds",
+                            parent=azure_ds_reporter):
+                        return_val = readurl(url,
+                                             timeout=IMDS_TIMEOUT_IN_SECONDS,
+                                             headers=headers,
+                                             exception_cb=exc_cb,
+                                             infinite=True,
+                                             log_req_resp=False).contents
+                    break
             except UrlError:
                 # Teardown our EphemeralDHCPv4 context on failure as we retry
                 self._ephemeral_dhcp_ctx.clean_network()
@@ -598,6 +670,14 @@ class DataSourceAzure(sources.DataSource):
                 if nl_sock:
                     nl_sock.close()
 
+        if vnet_switched:
+            report_diagnostic_event("attempted dhcp %d times after reuse" %
+                                    dhcp_attempts)
+            report_diagnostic_event("polled imds %d times after reuse" %
+                                    self.imds_poll_counter)
+
+        return return_val
+
     @azure_ds_telemetry_reporter
     def _report_ready(self, lease):
         """Tells the fabric provisioning has completed """
@@ -666,9 +746,12 @@ class DataSourceAzure(sources.DataSource):
                   self.ds_cfg['agent_command'])
         try:
             fabric_data = metadata_func()
-        except Exception:
+        except Exception as e:
+            report_diagnostic_event(
+                "Error communicating with Azure fabric; You may experience "
+                "connectivity issues: %s" % e)
             LOG.warning(
-                "Error communicating with Azure fabric; You may experience."
+                "Error communicating with Azure fabric; You may experience "
                 "connectivity issues.", exc_info=True)
             return False
 
@@ -684,6 +767,11 @@ class DataSourceAzure(sources.DataSource):
         return
 
     @property
+    def availability_zone(self):
+        return self.metadata.get(
+            'imds', {}).get('compute', {}).get('platformFaultDomain')
+
+    @property
     def network_config(self):
         """Generate a network config like net.generate_fallback_network() with
            the following exceptions.
@@ -701,6 +789,10 @@ class DataSourceAzure(sources.DataSource):
             self._network_config = parse_network_config(nc_src)
         return self._network_config
 
+    @property
+    def region(self):
+        return self.metadata.get('imds', {}).get('compute', {}).get('location')
+
 
 def _partitions_on_device(devpath, maxnum=16):
     # return a list of tuples (ptnum, path) for each part on devpath
@@ -1018,7 +1110,9 @@ def read_azure_ovf(contents):
     try:
         dom = minidom.parseString(contents)
     except Exception as e:
-        raise BrokenAzureDataSource("Invalid ovf-env.xml: %s" % e)
+        error_str = "Invalid ovf-env.xml: %s" % e
+        report_diagnostic_event(error_str)
+        raise BrokenAzureDataSource(error_str)
 
     results = find_child(dom.documentElement,
                          lambda n: n.localName == "ProvisioningSection")
@@ -1232,7 +1326,7 @@ def parse_network_config(imds_metadata):
                     privateIpv4 = addr4['privateIpAddress']
                     if privateIpv4:
                         if dev_config.get('dhcp4', False):
-                            # Append static address config for nic > 1
+                            # Append static address config for ip > 1
                             netPrefix = intf['ipv4']['subnet'][0].get(
                                 'prefix', '24')
                             if not dev_config.get('addresses'):
@@ -1242,6 +1336,11 @@ def parse_network_config(imds_metadata):
                                     ip=privateIpv4, prefix=netPrefix))
                         else:
                             dev_config['dhcp4'] = True
+                            # non-primary interfaces should have a higher
+                            # route-metric (cost) so default routes prefer
+                            # primary nic due to lower route-metric value
+                            dev_config['dhcp4-overrides'] = {
+                                'route-metric': (idx + 1) * 100}
                 for addr6 in intf['ipv6']['ipAddress']:
                     privateIpv6 = addr6['privateIpAddress']
                     if privateIpv6:
@@ -1285,8 +1384,13 @@ def get_metadata_from_imds(fallback_nic, retries):
     if net.is_up(fallback_nic):
         return util.log_time(**kwargs)
     else:
-        with EphemeralDHCPv4(fallback_nic):
-            return util.log_time(**kwargs)
+        try:
+            with EphemeralDHCPv4WithReporting(
+                    azure_ds_reporter, fallback_nic):
+                return util.log_time(**kwargs)
+        except Exception as e:
+            report_diagnostic_event("exception while getting metadata: %s" % e)
+            raise
 
 
 @azure_ds_telemetry_reporter
@@ -1299,11 +1403,14 @@ def _get_metadata_from_imds(retries):
             url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
             retries=retries, exception_cb=retry_on_url_exc)
     except Exception as e:
-        LOG.debug('Ignoring IMDS instance metadata: %s', e)
+        msg = 'Ignoring IMDS instance metadata: %s' % e
+        report_diagnostic_event(msg)
+        LOG.debug(msg)
         return {}
     try:
         return util.load_json(str(response))
-    except json.decoder.JSONDecodeError:
+    except json.decoder.JSONDecodeError as e:
+        report_diagnostic_event('non-json imds response: %s' % e)
         LOG.warning(
             'Ignoring non-json IMDS instance metadata: %s', str(response))
     return {}
@@ -1356,8 +1463,10 @@ def _is_platform_viable(seed_dir):
         asset_tag = util.read_dmi_data('chassis-asset-tag')
         if asset_tag == AZURE_CHASSIS_ASSET_TAG:
             return True
-        LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag)
-        evt.description = "Non-Azure DMI asset tag '%s' discovered.", asset_tag
+        msg = "Non-Azure DMI asset tag '%s' discovered." % asset_tag
+        LOG.debug(msg)
+        evt.description = msg
+        report_diagnostic_event(msg)
         if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')):
             return True
         return False
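
A minimal sketch of the route-metric rule added above for Azure secondary
interfaces: each additional DHCP NIC gets 'dhcp4-overrides' with a
route-metric of (idx + 1) * 100, so default routes prefer the primary NIC.
Illustrative only, mirroring the per-interface logic in parse_network_config
for a hypothetical interface at idx == 1:

    idx = 1                          # second interface in the IMDS list
    dev_config = {'dhcp4': True}
    dev_config['dhcp4-overrides'] = {'route-metric': (idx + 1) * 100}
    assert dev_config['dhcp4-overrides']['route-metric'] == 200  # > 100
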
diff --git a/cloudinit/sources/DataSourceCloudSigma.py b/cloudinit/sources/DataSourceCloudSigma.py
index 2955d3f..df88f67 100644
--- a/cloudinit/sources/DataSourceCloudSigma.py
+++ b/cloudinit/sources/DataSourceCloudSigma.py
@@ -42,12 +42,8 @@ class DataSourceCloudSigma(sources.DataSource):
         if not sys_product_name:
             LOG.debug("system-product-name not available in dmi data")
             return False
-        else:
-            LOG.debug("detected hypervisor as %s", sys_product_name)
-            return 'cloudsigma' in sys_product_name.lower()
-
-        LOG.warning("failed to query dmi data for system product name")
-        return False
+        LOG.debug("detected hypervisor as %s", sys_product_name)
+        return 'cloudsigma' in sys_product_name.lower()
 
     def _get_data(self):
         """
diff --git a/cloudinit/sources/DataSourceExoscale.py b/cloudinit/sources/DataSourceExoscale.py
new file mode 100644
index 0000000..52e7f6f
--- /dev/null
+++ b/cloudinit/sources/DataSourceExoscale.py
@@ -0,0 +1,258 @@
+# Author: Mathieu Corbin <mathieu.corbin@xxxxxxxxxxxx>
+# Author: Christopher Glass <christopher.glass@xxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit import ec2_utils as ec2
+from cloudinit import log as logging
+from cloudinit import sources
+from cloudinit import url_helper
+from cloudinit import util
+
+LOG = logging.getLogger(__name__)
+
+METADATA_URL = "http://169.254.169.254";
+API_VERSION = "1.0"
+PASSWORD_SERVER_PORT = 8080
+
+URL_TIMEOUT = 10
+URL_RETRIES = 6
+
+EXOSCALE_DMI_NAME = "Exoscale"
+
+BUILTIN_DS_CONFIG = {
+    # We run the set password config module on every boot in order to enable
+    # resetting the instance's password via the Exoscale console (and a
+    # subsequent instance reboot).
+    'cloud_config_modules': [["set-passwords", "always"]]
+}
+
+
+class DataSourceExoscale(sources.DataSource):
+
+    dsname = 'Exoscale'
+
+    def __init__(self, sys_cfg, distro, paths):
+        super(DataSourceExoscale, self).__init__(sys_cfg, distro, paths)
+        LOG.debug("Initializing the Exoscale datasource")
+
+        self.metadata_url = self.ds_cfg.get('metadata_url', METADATA_URL)
+        self.api_version = self.ds_cfg.get('api_version', API_VERSION)
+        self.password_server_port = int(
+            self.ds_cfg.get('password_server_port', PASSWORD_SERVER_PORT))
+        self.url_timeout = self.ds_cfg.get('timeout', URL_TIMEOUT)
+        self.url_retries = self.ds_cfg.get('retries', URL_RETRIES)
+
+        self.extra_config = BUILTIN_DS_CONFIG
+
+    def wait_for_metadata_service(self):
+        """Wait for the metadata service to be reachable."""
+
+        metadata_url = "{}/{}/meta-data/instance-id".format(
+            self.metadata_url, self.api_version)
+
+        url = url_helper.wait_for_url(
+            urls=[metadata_url],
+            max_wait=self.url_max_wait,
+            timeout=self.url_timeout,
+            status_cb=LOG.critical)
+
+        return bool(url)
+
+    def crawl_metadata(self):
+        """
+        Crawl the metadata service when available.
+
+        @returns: Dictionary of crawled metadata content.
+        """
+        metadata_ready = util.log_time(
+            logfunc=LOG.info,
+            msg='waiting for the metadata service',
+            func=self.wait_for_metadata_service)
+
+        if not metadata_ready:
+            return {}
+
+        return read_metadata(self.metadata_url, self.api_version,
+                             self.password_server_port, self.url_timeout,
+                             self.url_retries)
+
+    def _get_data(self):
+        """Fetch the user data, the metadata and the VM password
+        from the metadata service.
+
+        Please refer to the datasource documentation for details on how the
+        metadata server and password server are crawled.
+        """
+        if not self._is_platform_viable():
+            return False
+
+        data = util.log_time(
+            logfunc=LOG.debug,
+            msg='Crawl of metadata service',
+            func=self.crawl_metadata)
+
+        if not data:
+            return False
+
+        self.userdata_raw = data['user-data']
+        self.metadata = data['meta-data']
+        password = data.get('password')
+
+        password_config = {}
+        if password:
+            # Since we have a password, let's make sure we are allowed to use
+            # it by allowing ssh_pwauth.
+            # The password module's default behavior is to leave the
+            # configuration as-is in this regard, so that means it will either
+            # leave the password always disabled if no password is ever set, or
+            # leave the password login enabled if we set it once.
+            password_config = {
+                'ssh_pwauth': True,
+                'password': password,
+                'chpasswd': {
+                    'expire': False,
+                },
+            }
+
+        # builtin extra_config overrides password_config
+        self.extra_config = util.mergemanydict(
+            [self.extra_config, password_config])
+
+        return True
+
+    def get_config_obj(self):
+        return self.extra_config
+
+    def _is_platform_viable(self):
+        # read_dmi_data may return None; guard before calling startswith
+        product_name = util.read_dmi_data('system-product-name')
+        return bool(product_name) and product_name.startswith(
+            EXOSCALE_DMI_NAME)
+
+
+# Used to match classes to dependencies
+datasources = [
+    (DataSourceExoscale, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
+]
+
+
+# Return a list of data sources that match this set of dependencies
+def get_datasource_list(depends):
+    return sources.list_from_depends(depends, datasources)
+
+
+def get_password(metadata_url=METADATA_URL,
+                 api_version=API_VERSION,
+                 password_server_port=PASSWORD_SERVER_PORT,
+                 url_timeout=URL_TIMEOUT,
+                 url_retries=URL_RETRIES):
+    """Obtain the VM's password if set.
+
+    Once fetched, the password is marked saved. Future calls to this method
+    may return an empty string or 'saved_password'."""
+    password_url = "{}:{}/{}/".format(metadata_url, password_server_port,
+                                      api_version)
+    response = url_helper.read_file_or_url(
+        password_url,
+        ssl_details=None,
+        headers={"DomU_Request": "send_my_password"},
+        timeout=url_timeout,
+        retries=url_retries)
+    password = response.contents.decode('utf-8')
+    # the password is empty or already saved
+    # Note: the original metadata server would answer an additional
+    # 'bad_request' status, but the Exoscale implementation does not.
+    if password in ['', 'saved_password']:
+        return None
+    # save the password
+    url_helper.read_file_or_url(
+        password_url,
+        ssl_details=None,
+        headers={"DomU_Request": "saved_password"},
+        timeout=url_timeout,
+        retries=url_retries)
+    return password
+
+
+def read_metadata(metadata_url=METADATA_URL,
+                  api_version=API_VERSION,
+                  password_server_port=PASSWORD_SERVER_PORT,
+                  url_timeout=URL_TIMEOUT,
+                  url_retries=URL_RETRIES):
+    """Query the metadata server and return the retrieved data."""
+    crawled_metadata = {}
+    crawled_metadata['_metadata_api_version'] = api_version
+    try:
+        crawled_metadata['user-data'] = ec2.get_instance_userdata(
+            api_version,
+            metadata_url,
+            timeout=url_timeout,
+            retries=url_retries)
+        crawled_metadata['meta-data'] = ec2.get_instance_metadata(
+            api_version,
+            metadata_url,
+            timeout=url_timeout,
+            retries=url_retries)
+    except Exception as e:
+        util.logexc(LOG, "failed reading from metadata url %s (%s)",
+                    metadata_url, e)
+        return {}
+
+    try:
+        crawled_metadata['password'] = get_password(
+            api_version=api_version,
+            metadata_url=metadata_url,
+            password_server_port=password_server_port,
+            url_retries=url_retries,
+            url_timeout=url_timeout)
+    except Exception as e:
+        util.logexc(LOG, "failed to read from password server url %s:%s (%s)",
+                    metadata_url, password_server_port, e)
+
+    return crawled_metadata
+
+
+if __name__ == "__main__":
+    import argparse
+
+    parser = argparse.ArgumentParser(description='Query Exoscale Metadata')
+    parser.add_argument(
+        "--endpoint",
+        metavar="URL",
+        help="The url of the metadata service.",
+        default=METADATA_URL)
+    parser.add_argument(
+        "--version",
+        metavar="VERSION",
+        help="The version of the metadata endpoint to query.",
+        default=API_VERSION)
+    parser.add_argument(
+        "--retries",
+        metavar="NUM",
+        type=int,
+        help="The number of retries querying the endpoint.",
+        default=URL_RETRIES)
+    parser.add_argument(
+        "--timeout",
+        metavar="NUM",
+        type=int,
+        help="The time in seconds to wait before timing out.",
+        default=URL_TIMEOUT)
+    parser.add_argument(
+        "--password-port",
+        metavar="PORT",
+        type=int,
+        help="The port on which the password endpoint listens",
+        default=PASSWORD_SERVER_PORT)
+
+    args = parser.parse_args()
+
+    data = read_metadata(
+        metadata_url=args.endpoint,
+        api_version=args.version,
+        password_server_port=args.password_port,
+        url_timeout=args.timeout,
+        url_retries=args.retries)
+
+    print(util.json_dumps(data))
+
+# vi: ts=4 expandtab
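
The new Exoscale datasource helpers can be exercised standalone.
read_metadata() crawls user-data, meta-data and the VM password in one call
and returns {} on failure; a minimal sketch, assuming a reachable metadata
service at the default endpoint:

    from cloudinit.sources.DataSourceExoscale import read_metadata

    data = read_metadata()                      # module defaults
    print(data.get('_metadata_api_version'))    # '1.0'
    print(sorted(data.get('meta-data', {})))    # crawled metadata keys

The argparse block above also makes the module runnable as a script,
printing the crawled data as JSON.
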
diff --git a/cloudinit/sources/DataSourceGCE.py b/cloudinit/sources/DataSourceGCE.py
index d816262..6cbfbba 100644
--- a/cloudinit/sources/DataSourceGCE.py
+++ b/cloudinit/sources/DataSourceGCE.py
@@ -18,10 +18,13 @@ LOG = logging.getLogger(__name__)
 MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/'
 BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL}
 REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname')
+GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/'
+                        'v1/instance/guest-attributes')
+HOSTKEY_NAMESPACE = 'hostkeys'
+HEADERS = {'Metadata-Flavor': 'Google'}
 
 
 class GoogleMetadataFetcher(object):
-    headers = {'Metadata-Flavor': 'Google'}
 
     def __init__(self, metadata_address):
         self.metadata_address = metadata_address
@@ -32,7 +35,7 @@ class GoogleMetadataFetcher(object):
             url = self.metadata_address + path
             if is_recursive:
                 url += '/?recursive=True'
-            resp = url_helper.readurl(url=url, headers=self.headers)
+            resp = url_helper.readurl(url=url, headers=HEADERS)
         except url_helper.UrlError as exc:
             msg = "url %s raised exception %s"
             LOG.debug(msg, path, exc)
@@ -90,6 +93,10 @@ class DataSourceGCE(sources.DataSource):
         public_keys_data = self.metadata['public-keys-data']
         return _parse_public_keys(public_keys_data, self.default_user)
 
+    def publish_host_keys(self, hostkeys):
+        for key in hostkeys:
+            _write_host_key_to_guest_attributes(*key)
+
     def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
         # GCE has long FQDNs and has asked for short hostnames.
         return self.metadata['local-hostname'].split('.')[0]
@@ -103,6 +110,17 @@ class DataSourceGCE(sources.DataSource):
         return self.availability_zone.rsplit('-', 1)[0]
 
 
+def _write_host_key_to_guest_attributes(key_type, key_value):
+    url = '%s/%s/%s' % (GUEST_ATTRIBUTES_URL, HOSTKEY_NAMESPACE, key_type)
+    key_value = key_value.encode('utf-8')
+    resp = url_helper.readurl(url=url, data=key_value, headers=HEADERS,
+                              request_method='PUT', check_status=False)
+    if resp.ok():
+        LOG.debug('Wrote %s host key to guest attributes.', key_type)
+    else:
+        LOG.debug('Unable to write %s host key to guest attributes.', key_type)
+
+
 def _has_expired(public_key):
     # Check whether an SSH key is expired. Public key input is a single SSH
     # public key in the GCE specific key format documented here:
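
publish_host_keys() expects a list of (key_type, key_value) tuples, as
documented on the DataSource base class later in this diff. A minimal sketch
of a hypothetical caller building those tuples from the public host key
files (collect_host_keys is illustrative, not part of this change):

    import os

    def collect_host_keys(ssh_dir='/etc/ssh'):
        hostkeys = []
        for fname in os.listdir(ssh_dir):
            if not fname.endswith('.pub'):
                continue
            with open(os.path.join(ssh_dir, fname)) as f:
                parts = f.read().split()
            if len(parts) >= 2:
                # e.g. ('ssh-rsa', 'AAAAB3NzaC1y...')
                hostkeys.append((parts[0], parts[1]))
        return hostkeys

Each tuple is then PUT to the 'hostkeys' guest-attributes namespace by
_write_host_key_to_guest_attributes above.
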
diff --git a/cloudinit/sources/DataSourceHetzner.py b/cloudinit/sources/DataSourceHetzner.py
index 5c75b65..5029833 100644
--- a/cloudinit/sources/DataSourceHetzner.py
+++ b/cloudinit/sources/DataSourceHetzner.py
@@ -28,6 +28,9 @@ MD_WAIT_RETRY = 2
 
 
 class DataSourceHetzner(sources.DataSource):
+
+    dsname = 'Hetzner'
+
     def __init__(self, sys_cfg, distro, paths):
         sources.DataSource.__init__(self, sys_cfg, distro, paths)
         self.distro = distro
diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
index 70e7a5c..dd941d2 100644
--- a/cloudinit/sources/DataSourceOVF.py
+++ b/cloudinit/sources/DataSourceOVF.py
@@ -148,6 +148,9 @@ class DataSourceOVF(sources.DataSource):
                     product_marker, os.path.join(self.paths.cloud_dir, 'data'))
                 special_customization = product_marker and not hasmarkerfile
                 customscript = self._vmware_cust_conf.custom_script_name
+                ccScriptsDir = os.path.join(
+                    self.paths.get_cpath("scripts"),
+                    "per-instance")
             except Exception as e:
                 _raise_error_status(
                     "Error parsing the customization Config File",
@@ -201,7 +204,9 @@ class DataSourceOVF(sources.DataSource):
 
                 if customscript:
                     try:
-                        postcust = PostCustomScript(customscript, imcdirpath)
+                        postcust = PostCustomScript(customscript,
+                                                    imcdirpath,
+                                                    ccScriptsDir)
                         postcust.execute()
                     except Exception as e:
                         _raise_error_status(
diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py
index 70b9c58..6e73f56 100644
--- a/cloudinit/sources/DataSourceOracle.py
+++ b/cloudinit/sources/DataSourceOracle.py
@@ -16,7 +16,7 @@ Notes:
 """
 
 from cloudinit.url_helper import combine_url, readurl, UrlError
-from cloudinit.net import dhcp
+from cloudinit.net import dhcp, get_interfaces_by_mac
 from cloudinit import net
 from cloudinit import sources
 from cloudinit import util
@@ -28,8 +28,80 @@ import re
 
 LOG = logging.getLogger(__name__)
 
+BUILTIN_DS_CONFIG = {
+    # Don't use IMDS to configure secondary NICs by default
+    'configure_secondary_nics': False,
+}
 CHASSIS_ASSET_TAG = "OracleCloud.com"
 METADATA_ENDPOINT = "http://169.254.169.254/openstack/";
+VNIC_METADATA_URL = 'http://169.254.169.254/opc/v1/vnics/'
+# Per https://docs.cloud.oracle.com/iaas/Content/Network/Troubleshoot/connectionhang.htm#Overview,
+# an MTU of 9000 is used within OCI
+MTU = 9000
+
+
+def _add_network_config_from_opc_imds(network_config):
+    """
+    Fetch data from Oracle's IMDS, generate secondary NIC config, merge it.
+
+    The primary NIC configuration should not be modified based on the IMDS
+    values, as it should continue to be configured for DHCP.  As such, this
+    takes an existing network_config dict which is expected to have the primary
+    NIC configuration already present.  It will mutate the given dict to
+    include the secondary VNICs.
+
+    :param network_config:
+        A v1 network config dict with the primary NIC already configured.  This
+        dict will be mutated.
+
+    :raises:
+        Exceptions are not handled within this function.  Likely exceptions are
+        those raised by url_helper.readurl (if communicating with the IMDS
+        fails), ValueError/JSONDecodeError (if the IMDS returns invalid JSON),
+        and KeyError/IndexError (if the IMDS returns valid JSON with unexpected
+        contents).
+    """
+    resp = readurl(VNIC_METADATA_URL)
+    vnics = json.loads(str(resp))
+
+    if 'nicIndex' in vnics[0]:
+        # TODO: Once configure_secondary_nics defaults to True, lower the level
+        # of this log message.  (Currently, if we're running this code at all,
+        # someone has explicitly opted-in to secondary VNIC configuration, so
+        # we should warn them that it didn't happen.  Once it's default, this
+        # would be emitted on every Bare Metal Machine launch, which means INFO
+        # or DEBUG would be more appropriate.)
+        LOG.warning(
+            'VNIC metadata indicates this is a bare metal machine; skipping'
+            ' secondary VNIC configuration.'
+        )
+        return
+
+    interfaces_by_mac = get_interfaces_by_mac()
+
+    for vnic_dict in vnics[1:]:
+        # We skip the first entry in the response because the primary interface
+        # is already configured by iSCSI boot; applying configuration from the
+        # IMDS is not required.
+        mac_address = vnic_dict['macAddr'].lower()
+        if mac_address not in interfaces_by_mac:
+            LOG.debug('Interface with MAC %s not found; skipping', mac_address)
+            continue
+        name = interfaces_by_mac[mac_address]
+        subnet = {
+            'type': 'static',
+            'address': vnic_dict['privateIp'],
+            'netmask': vnic_dict['subnetCidrBlock'].split('/')[1],
+            'gateway': vnic_dict['virtualRouterIp'],
+            'control': 'manual',
+        }
+        network_config['config'].append({
+            'name': name,
+            'type': 'physical',
+            'mac_address': mac_address,
+            'mtu': MTU,
+            'subnets': [subnet],
+        })
 
 
 class DataSourceOracle(sources.DataSource):
@@ -37,8 +109,22 @@ class DataSourceOracle(sources.DataSource):
     dsname = 'Oracle'
     system_uuid = None
     vendordata_pure = None
+    network_config_sources = (
+        sources.NetworkConfigSource.cmdline,
+        sources.NetworkConfigSource.ds,
+        sources.NetworkConfigSource.initramfs,
+        sources.NetworkConfigSource.system_cfg,
+    )
+
     _network_config = sources.UNSET
 
+    def __init__(self, sys_cfg, *args, **kwargs):
+        super(DataSourceOracle, self).__init__(sys_cfg, *args, **kwargs)
+
+        self.ds_cfg = util.mergemanydict([
+            util.get_cfg_by_path(sys_cfg, ['datasource', self.dsname], {}),
+            BUILTIN_DS_CONFIG])
+
     def _is_platform_viable(self):
         """Check platform environment to report if this datasource may run."""
         return _is_platform_viable()
@@ -48,7 +134,7 @@ class DataSourceOracle(sources.DataSource):
             return False
 
         # network may be configured if iscsi root.  If that is the case
-        # then read_kernel_cmdline_config will return non-None.
+        # then read_initramfs_config will return non-None.
         if _is_iscsi_root():
             data = self.crawl_metadata()
         else:
@@ -118,11 +204,17 @@ class DataSourceOracle(sources.DataSource):
         We nonetheless return cmdline-provided config if present
         and otherwise fall back to generated fallback config."""
         if self._network_config == sources.UNSET:
-            cmdline_cfg = cmdline.read_kernel_cmdline_config()
-            if cmdline_cfg:
-                self._network_config = cmdline_cfg
-            else:
+            self._network_config = cmdline.read_initramfs_config()
+            if not self._network_config:
                 self._network_config = self.distro.generate_fallback_config()
+            if self.ds_cfg.get('configure_secondary_nics'):
+                try:
+                    # Mutate self._network_config to include secondary VNICs
+                    _add_network_config_from_opc_imds(self._network_config)
+                except Exception:
+                    util.logexc(
+                        LOG,
+                        "Failed to fetch secondary network configuration!")
         return self._network_config
 
 
@@ -137,7 +229,7 @@ def _is_platform_viable():
 
 
 def _is_iscsi_root():
-    return bool(cmdline.read_kernel_cmdline_config())
+    return bool(cmdline.read_initramfs_config())
 
 
 def _load_index(content):
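
For each secondary VNIC, _add_network_config_from_opc_imds appends one
'physical' entry to the v1 config. Using the values from the VM fixture in
the test changes further down, the appended entry would look like this
(illustrative; the name comes from get_interfaces_by_mac):

    secondary_nic_cfg = {
        'name': 'ens3',
        'type': 'physical',
        'mac_address': '00:00:17:02:2b:b1',
        'mtu': 9000,              # OCI-wide MTU, per the comment above
        'subnets': [{
            'type': 'static',
            'address': '10.0.0.231',
            'netmask': '24',      # prefix length from subnetCidrBlock
            'gateway': '10.0.0.1',
            'control': 'manual',
        }],
    }
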
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index e6966b3..a319322 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -66,6 +66,13 @@ CLOUD_ID_REGION_PREFIX_MAP = {
     'china': ('azure-china', lambda c: c == 'azure'),  # only change azure
 }
 
+# NetworkConfigSource represents the canonical list of network config sources
+# that cloud-init knows about.  (Python 2.7 lacks PEP 435, so use a singleton
+# namedtuple as an enum; see https://stackoverflow.com/a/6971002)
+_NETCFG_SOURCE_NAMES = ('cmdline', 'ds', 'system_cfg', 'fallback', 'initramfs')
+NetworkConfigSource = namedtuple('NetworkConfigSource',
+                                 _NETCFG_SOURCE_NAMES)(*_NETCFG_SOURCE_NAMES)
+
 
 class DataSourceNotFoundException(Exception):
     pass
@@ -153,6 +160,16 @@ class DataSource(object):
     # Track the discovered fallback nic for use in configuration generation.
     _fallback_interface = None
 
+    # The network configuration sources that should be considered for this data
+    # source.  (The first source in this list that provides network
+    # configuration will be used without considering any that follow.)  This
+    # should always be a subset of the members of NetworkConfigSource with no
+    # duplicate entries.
+    network_config_sources = (NetworkConfigSource.cmdline,
+                              NetworkConfigSource.initramfs,
+                              NetworkConfigSource.system_cfg,
+                              NetworkConfigSource.ds)
+
     # read_url_params
     url_max_wait = -1   # max_wait < 0 means do not wait
     url_timeout = 10    # timeout for each metadata url read attempt
@@ -474,6 +491,16 @@ class DataSource(object):
     def get_public_ssh_keys(self):
         return normalize_pubkey_data(self.metadata.get('public-keys'))
 
+    def publish_host_keys(self, hostkeys):
+        """Publish the public SSH host keys (found in /etc/ssh/*.pub).
+
+        @param hostkeys: List of host key tuples (key_type, key_value),
+            where key_type is the first field in the public key file
+            (e.g. 'ssh-rsa') and key_value is the key itself
+            (e.g. 'AAAAB3NzaC1y...').
+        """
+        pass
+
     def _remap_device(self, short_name):
         # LP: #611137
         # the metadata service may believe that devices are named 'sda'
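
NetworkConfigSource relies on the singleton-namedtuple trick so its members
behave like a Python 2.7-compatible string enum. A minimal sketch of the
pattern, using only the standard library:

    from collections import namedtuple

    _NAMES = ('cmdline', 'ds', 'system_cfg', 'fallback', 'initramfs')
    Source = namedtuple('NetworkConfigSource', _NAMES)(*_NAMES)

    assert Source.ds == 'ds'                      # members are strings
    assert Source.index(Source.initramfs) == 4    # tuple API still works

Because the members are plain strings in a tuple, datasources can declare an
ordered network_config_sources preference (as DataSourceOracle does above)
and tests can compare positions with index().
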
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index 82c4c8c..f1fba17 100755
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -16,7 +16,11 @@ from xml.etree import ElementTree
 
 from cloudinit import url_helper
 from cloudinit import util
+from cloudinit import version
+from cloudinit import distros
 from cloudinit.reporting import events
+from cloudinit.net.dhcp import EphemeralDHCPv4
+from datetime import datetime
 
 LOG = logging.getLogger(__name__)
 
@@ -24,6 +28,10 @@ LOG = logging.getLogger(__name__)
 # value is applied if the endpoint can't be found within a lease file
 DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"
 
+BOOT_EVENT_TYPE = 'boot-telemetry'
+SYSTEMINFO_EVENT_TYPE = 'system-info'
+DIAGNOSTIC_EVENT_TYPE = 'diagnostic'
+
 azure_ds_reporter = events.ReportEventStack(
     name="azure-ds",
     description="initialize reporter for azure ds",
@@ -40,6 +48,105 @@ def azure_ds_telemetry_reporter(func):
     return impl
 
 
+@azure_ds_telemetry_reporter
+def get_boot_telemetry():
+    """Report timestamps related to kernel initialization and systemd
+       activation of cloud-init"""
+    if not distros.uses_systemd():
+        raise RuntimeError(
+            "distro not using systemd, skipping boot telemetry")
+
+    LOG.debug("Collecting boot telemetry")
+    try:
+        kernel_start = float(time.time()) - float(util.uptime())
+    except ValueError:
+        raise RuntimeError("Failed to determine kernel start timestamp")
+
+    try:
+        out, _ = util.subp(['/bin/systemctl',
+                            'show', '-p',
+                            'UserspaceTimestampMonotonic'],
+                           capture=True)
+        tsm = None
+        if out and '=' in out:
+            tsm = out.split("=")[1]
+
+        if not tsm:
+            raise RuntimeError("Failed to parse "
+                               "UserspaceTimestampMonotonic from systemd")
+
+        user_start = kernel_start + (float(tsm) / 1000000)
+    except util.ProcessExecutionError as e:
+        raise RuntimeError("Failed to get UserspaceTimestampMonotonic: %s"
+                           % e)
+    except ValueError as e:
+        raise RuntimeError("Failed to parse "
+                           "UserspaceTimestampMonotonic from systemd: %s"
+                           % e)
+
+    try:
+        out, _ = util.subp(['/bin/systemctl', 'show',
+                            'cloud-init-local', '-p',
+                            'InactiveExitTimestampMonotonic'],
+                           capture=True)
+        tsm = None
+        if out and '=' in out:
+            tsm = out.split("=")[1]
+        if not tsm:
+            raise RuntimeError("Failed to parse "
+                               "InactiveExitTimestampMonotonic from systemd")
+
+        cloudinit_activation = kernel_start + (float(tsm) / 1000000)
+    except util.ProcessExecutionError as e:
+        raise RuntimeError("Failed to get InactiveExitTimestampMonotonic: %s"
+                           % e)
+    except ValueError as e:
+        raise RuntimeError("Failed to parse "
+                           "InactiveExitTimestampMonotonic from systemd: %s"
+                           % e)
+
+    evt = events.ReportingEvent(
+        BOOT_EVENT_TYPE, 'boot-telemetry',
+        "kernel_start=%s user_start=%s cloudinit_activation=%s" %
+        (datetime.utcfromtimestamp(kernel_start).isoformat() + 'Z',
+         datetime.utcfromtimestamp(user_start).isoformat() + 'Z',
+         datetime.utcfromtimestamp(cloudinit_activation).isoformat() + 'Z'),
+        events.DEFAULT_EVENT_ORIGIN)
+    events.report_event(evt)
+
+    # return the event for unit testing purposes
+    return evt
+
+
+@azure_ds_telemetry_reporter
+def get_system_info():
+    """Collect and report system information"""
+    info = util.system_info()
+    evt = events.ReportingEvent(
+        SYSTEMINFO_EVENT_TYPE, 'system information',
+        "cloudinit_version=%s, kernel_version=%s, variant=%s, "
+        "distro_name=%s, distro_version=%s, flavor=%s, "
+        "python_version=%s" %
+        (version.version_string(), info['release'], info['variant'],
+         info['dist'][0], info['dist'][1], info['dist'][2],
+         info['python']), events.DEFAULT_EVENT_ORIGIN)
+    events.report_event(evt)
+
+    # return the event for unit testing purposes
+    return evt
+
+
+def report_diagnostic_event(msg):
+    """Report a diagnostic event"""
+    evt = events.ReportingEvent(
+        DIAGNOSTIC_EVENT_TYPE, 'diagnostic message',
+        msg, events.DEFAULT_EVENT_ORIGIN)
+    events.report_event(evt)
+
+    # return the event for unit testing purposes
+    return evt
+
+
 @contextmanager
 def cd(newdir):
     prevdir = os.getcwd()
@@ -360,16 +467,19 @@ class WALinuxAgentShim(object):
             value = dhcp245
             LOG.debug("Using Azure Endpoint from dhcp options")
         if value is None:
+            report_diagnostic_event("No Azure endpoint from dhcp options")
             LOG.debug('Finding Azure endpoint from networkd...')
             value = WALinuxAgentShim._networkd_get_value_from_leases()
         if value is None:
             # Option-245 stored in /run/cloud-init/dhclient.hooks/<ifc>.json
             # a dhclient exit hook that calls cloud-init-dhclient-hook
+            report_diagnostic_event("No Azure endpoint from networkd")
             LOG.debug('Finding Azure endpoint from hook json...')
             dhcp_options = WALinuxAgentShim._load_dhclient_json()
             value = WALinuxAgentShim._get_value_from_dhcpoptions(dhcp_options)
         if value is None:
             # Fallback and check the leases file if unsuccessful
+            report_diagnostic_event("No Azure endpoint from dhclient logs")
             LOG.debug("Unable to find endpoint in dhclient logs. "
                       " Falling back to check lease files")
             if fallback_lease_file is None:
@@ -381,11 +491,15 @@ class WALinuxAgentShim(object):
                 value = WALinuxAgentShim._get_value_from_leases_file(
                     fallback_lease_file)
         if value is None:
-            LOG.warning("No lease found; using default endpoint")
+            msg = "No lease found; using default endpoint"
+            report_diagnostic_event(msg)
+            LOG.warning(msg)
             value = DEFAULT_WIRESERVER_ENDPOINT
 
         endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value)
-        LOG.debug('Azure endpoint found at %s', endpoint_ip_address)
+        msg = 'Azure endpoint found at %s' % endpoint_ip_address
+        report_diagnostic_event(msg)
+        LOG.debug(msg)
         return endpoint_ip_address
 
     @azure_ds_telemetry_reporter
@@ -399,16 +513,19 @@ class WALinuxAgentShim(object):
             try:
                 response = http_client.get(
                     'http://{0}/machine/?comp=goalstate'.format(self.endpoint))
-            except Exception:
+            except Exception as e:
                 if attempts < 10:
                     time.sleep(attempts + 1)
                 else:
+                    report_diagnostic_event(
+                        "failed to register with Azure: %s" % e)
                     raise
             else:
                 break
             attempts += 1
         LOG.debug('Successfully fetched GoalState XML.')
         goal_state = GoalState(response.contents, http_client)
+        report_diagnostic_event("container_id %s" % goal_state.container_id)
         ssh_keys = []
         if goal_state.certificates_xml is not None and pubkey_info is not None:
             LOG.debug('Certificate XML found; parsing out public keys.')
@@ -449,11 +566,20 @@ class WALinuxAgentShim(object):
             container_id=goal_state.container_id,
             instance_id=goal_state.instance_id,
         )
-        http_client.post(
-            "http://{0}/machine?comp=health".format(self.endpoint),
-            data=document,
-            extra_headers={'Content-Type': 'text/xml; charset=utf-8'},
-        )
+        # The host will collect kvps when cloud-init reports ready.
+        # Some kvps might still be in the queue; we yield to the scheduler
+        # to make sure all kvps queued up to this point are processed.
+        time.sleep(0)
+        try:
+            http_client.post(
+                "http://{0}/machine?comp=health".format(self.endpoint),
+                data=document,
+                extra_headers={'Content-Type': 'text/xml; charset=utf-8'},
+            )
+        except Exception as e:
+            report_diagnostic_event("exception while reporting ready: %s" % e)
+            raise
+
         LOG.info('Reported ready to Azure fabric.')
 
 
@@ -467,4 +593,22 @@ def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None,
     finally:
         shim.clean_up()
 
+
+class EphemeralDHCPv4WithReporting(object):
+    def __init__(self, reporter, nic=None):
+        self.reporter = reporter
+        self.ephemeralDHCPv4 = EphemeralDHCPv4(iface=nic)
+
+    def __enter__(self):
+        with events.ReportEventStack(
+                name="obtain-dhcp-lease",
+                description="obtain dhcp lease",
+                parent=self.reporter):
+            return self.ephemeralDHCPv4.__enter__()
+
+    def __exit__(self, excp_type, excp_value, excp_traceback):
+        self.ephemeralDHCPv4.__exit__(
+            excp_type, excp_value, excp_traceback)
+
+
 # vi: ts=4 expandtab
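
A minimal usage sketch of the new reporting wrapper, mirroring how
get_metadata_from_imds in DataSourceAzure.py uses it earlier in this diff
(handle_lease is a hypothetical stand-in for the caller's work):

    from cloudinit.sources.helpers.azure import (
        EphemeralDHCPv4WithReporting, azure_ds_reporter)

    def handle_lease(lease):
        print(lease)   # hypothetical stand-in for the caller's work

    with EphemeralDHCPv4WithReporting(azure_ds_reporter) as lease:
        # an 'obtain-dhcp-lease' event brackets the DHCP setup; the
        # ephemeral network is torn down again on exit
        handle_lease(lease)

Exceptions propagate out of the context manager unchanged, which is why the
Azure caller wraps the block in a try/except that emits a diagnostic event
and re-raises.
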
diff --git a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
index a7d4ad9..9f14770 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
@@ -1,5 +1,5 @@
 # Copyright (C) 2017 Canonical Ltd.
-# Copyright (C) 2017 VMware Inc.
+# Copyright (C) 2017-2019 VMware Inc.
 #
 # Author: Maitreyee Saikia <msaikia@xxxxxxxxxx>
 #
@@ -8,7 +8,6 @@
 import logging
 import os
 import stat
-from textwrap import dedent
 
 from cloudinit import util
 
@@ -20,12 +19,15 @@ class CustomScriptNotFound(Exception):
 
 
 class CustomScriptConstant(object):
-    RC_LOCAL = "/etc/rc.local"
-    POST_CUST_TMP_DIR = "/root/.customization"
-    POST_CUST_RUN_SCRIPT_NAME = "post-customize-guest.sh"
-    POST_CUST_RUN_SCRIPT = os.path.join(POST_CUST_TMP_DIR,
-                                        POST_CUST_RUN_SCRIPT_NAME)
-    POST_REBOOT_PENDING_MARKER = "/.guest-customization-post-reboot-pending"
+    CUSTOM_TMP_DIR = "/root/.customization"
+
+    # The user-defined custom script
+    CUSTOM_SCRIPT_NAME = "customize.sh"
+    CUSTOM_SCRIPT = os.path.join(CUSTOM_TMP_DIR,
+                                 CUSTOM_SCRIPT_NAME)
+    POST_CUSTOM_PENDING_MARKER = "/.guest-customization-post-reboot-pending"
+    # The cc_scripts_per_instance script that launches the custom script
+    POST_CUSTOM_SCRIPT_NAME = "post-customize-guest.sh"
 
 
 class RunCustomScript(object):
@@ -39,10 +41,19 @@ class RunCustomScript(object):
             raise CustomScriptNotFound("Script %s not found!! "
                                        "Cannot execute custom script!"
                                        % self.scriptpath)
+
+        util.ensure_dir(CustomScriptConstant.CUSTOM_TMP_DIR)
+
+        LOG.debug("Copying custom script to %s",
+                  CustomScriptConstant.CUSTOM_SCRIPT)
+        util.copy(self.scriptpath, CustomScriptConstant.CUSTOM_SCRIPT)
+
         # Strip any CR characters from the decoded script
-        util.load_file(self.scriptpath).replace("\r", "")
-        st = os.stat(self.scriptpath)
-        os.chmod(self.scriptpath, st.st_mode | stat.S_IEXEC)
+        content = util.load_file(
+            CustomScriptConstant.CUSTOM_SCRIPT).replace("\r", "")
+        util.write_file(CustomScriptConstant.CUSTOM_SCRIPT,
+                        content,
+                        mode=0o544)
 
 
 class PreCustomScript(RunCustomScript):
@@ -50,104 +61,34 @@ class PreCustomScript(RunCustomScript):
         """Executing custom script with precustomization argument."""
         LOG.debug("Executing pre-customization script")
         self.prepare_script()
-        util.subp(["/bin/sh", self.scriptpath, "precustomization"])
+        util.subp([CustomScriptConstant.CUSTOM_SCRIPT, "precustomization"])
 
 
 class PostCustomScript(RunCustomScript):
-    def __init__(self, scriptname, directory):
+    def __init__(self, scriptname, directory, ccScriptsDir):
         super(PostCustomScript, self).__init__(scriptname, directory)
-        # Determine when to run custom script. When postreboot is True,
-        # the user uploaded script will run as part of rc.local after
-        # the machine reboots. This is determined by presence of rclocal.
-        # When postreboot is False, script will run as part of cloud-init.
-        self.postreboot = False
-
-    def _install_post_reboot_agent(self, rclocal):
-        """
-        Install post-reboot agent for running custom script after reboot.
-        As part of this process, we are editing the rclocal file to run a
-        VMware script, which in turn is resposible for handling the user
-        script.
-        @param: path to rc local.
-        """
-        LOG.debug("Installing post-reboot customization from %s to %s",
-                  self.directory, rclocal)
-        if not self.has_previous_agent(rclocal):
-            LOG.info("Adding post-reboot customization agent to rc.local")
-            new_content = dedent("""
-                # Run post-reboot guest customization
-                /bin/sh %s
-                exit 0
-                """) % CustomScriptConstant.POST_CUST_RUN_SCRIPT
-            existing_rclocal = util.load_file(rclocal).replace('exit 0\n', '')
-            st = os.stat(rclocal)
-            # "x" flag should be set
-            mode = st.st_mode | stat.S_IEXEC
-            util.write_file(rclocal, existing_rclocal + new_content, mode)
-
-        else:
-            # We don't need to update rclocal file everytime a customization
-            # is requested. It just needs to be done for the first time.
-            LOG.info("Post-reboot guest customization agent is already "
-                     "registered in rc.local")
-        LOG.debug("Installing post-reboot customization agent finished: %s",
-                  self.postreboot)
-
-    def has_previous_agent(self, rclocal):
-        searchstring = "# Run post-reboot guest customization"
-        if searchstring in open(rclocal).read():
-            return True
-        return False
-
-    def find_rc_local(self):
-        """
-        Determine if rc local is present.
-        """
-        rclocal = ""
-        if os.path.exists(CustomScriptConstant.RC_LOCAL):
-            LOG.debug("rc.local detected.")
-            # resolving in case of symlink
-            rclocal = os.path.realpath(CustomScriptConstant.RC_LOCAL)
-            LOG.debug("rc.local resolved to %s", rclocal)
-        else:
-            LOG.warning("Can't find rc.local, post-customization "
-                        "will be run before reboot")
-        return rclocal
-
-    def install_agent(self):
-        rclocal = self.find_rc_local()
-        if rclocal:
-            self._install_post_reboot_agent(rclocal)
-            self.postreboot = True
+        self.ccScriptsDir = ccScriptsDir
+        self.ccScriptPath = os.path.join(
+            ccScriptsDir,
+            CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME)
 
     def execute(self):
         """
-        This method executes post-customization script before or after reboot
-        based on the presence of rc local.
+        This method copies the post-customization run script to the
+        cc_scripts_per_instance directory, which then executes the
+        user-provided custom script.
         """
         self.prepare_script()
-        self.install_agent()
-        if not self.postreboot:
-            LOG.warning("Executing post-customization script inline")
-            util.subp(["/bin/sh", self.scriptpath, "postcustomization"])
-        else:
-            LOG.debug("Scheduling custom script to run post reboot")
-            if not os.path.isdir(CustomScriptConstant.POST_CUST_TMP_DIR):
-                os.mkdir(CustomScriptConstant.POST_CUST_TMP_DIR)
-            # Script "post-customize-guest.sh" and user uploaded script are
-            # are present in the same directory and needs to copied to a temp
-            # directory to be executed post reboot. User uploaded script is
-            # saved as customize.sh in the temp directory.
-            # post-customize-guest.sh excutes customize.sh after reboot.
-            LOG.debug("Copying post-customization script")
-            util.copy(self.scriptpath,
-                      CustomScriptConstant.POST_CUST_TMP_DIR + "/customize.sh")
-            LOG.debug("Copying script to run post-customization script")
-            util.copy(
-                os.path.join(self.directory,
-                             CustomScriptConstant.POST_CUST_RUN_SCRIPT_NAME),
-                CustomScriptConstant.POST_CUST_RUN_SCRIPT)
-            LOG.info("Creating post-reboot pending marker")
-            util.ensure_file(CustomScriptConstant.POST_REBOOT_PENDING_MARKER)
+
+        LOG.debug("Copying post customize run script to %s",
+                  self.ccScriptPath)
+        util.copy(
+            os.path.join(self.directory,
+                         CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME),
+            self.ccScriptPath)
+        st = os.stat(self.ccScriptPath)
+        os.chmod(self.ccScriptPath, st.st_mode | stat.S_IEXEC)
+        LOG.info("Creating post customization pending marker")
+        util.ensure_file(CustomScriptConstant.POST_CUSTOM_PENDING_MARKER)
 
 # vi: ts=4 expandtab
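
Tying this to the DataSourceOVF.py change above: the datasource now computes
the per-instance scripts directory and hands it to PostCustomScript, whose
execute() stages the runner script and drops the pending marker. A minimal
sketch (paths and imcdirpath are hypothetical stand-ins for the datasource's
values):

    # paths and imcdirpath come from the datasource (hypothetical here)
    ccScriptsDir = os.path.join(paths.get_cpath("scripts"), "per-instance")
    postcust = PostCustomScript("customize.sh", imcdirpath, ccScriptsDir)
    postcust.execute()   # copies post-customize-guest.sh, sets the marker

The custom script is then launched by the cc_scripts_per_instance module
instead of via rc.local, as the removed code used to do.
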
diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py
index 97d6294..3ddf7df 100644
--- a/cloudinit/sources/tests/test_oracle.py
+++ b/cloudinit/sources/tests/test_oracle.py
@@ -1,7 +1,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit.sources import DataSourceOracle as oracle
-from cloudinit.sources import BrokenMetadata
+from cloudinit.sources import BrokenMetadata, NetworkConfigSource
 from cloudinit import helpers
 
 from cloudinit.tests import helpers as test_helpers
@@ -18,10 +18,52 @@ import uuid
 DS_PATH = "cloudinit.sources.DataSourceOracle"
 MD_VER = "2013-10-17"
 
+# `curl -L http://169.254.169.254/opc/v1/vnics/` on an Oracle Bare Metal Machine
+# with a secondary VNIC attached (vnicId truncated for Python line length)
+OPC_BM_SECONDARY_VNIC_RESPONSE = """\
+[ {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtyvcucqkhdqmgjszebxe4hrb!!TRUNCATED||",
+  "privateIp" : "10.0.0.8",
+  "vlanTag" : 0,
+  "macAddr" : "90:e2:ba:d4:f1:68",
+  "virtualRouterIp" : "10.0.0.1",
+  "subnetCidrBlock" : "10.0.0.0/24",
+  "nicIndex" : 0
+}, {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtfmkxjdy2sqidndiwrsg63zf!!TRUNCATED||",
+  "privateIp" : "10.0.4.5",
+  "vlanTag" : 1,
+  "macAddr" : "02:00:17:05:CF:51",
+  "virtualRouterIp" : "10.0.4.1",
+  "subnetCidrBlock" : "10.0.4.0/24",
+  "nicIndex" : 0
+} ]"""
+
+# `curl -L http://169.254.169.254/opc/v1/vnics/` on an Oracle Virtual Machine
+# with a secondary VNIC attached
+OPC_VM_SECONDARY_VNIC_RESPONSE = """\
+[ {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtch72z5pd76cc2636qeqh7z_truncated",
+  "privateIp" : "10.0.0.230",
+  "vlanTag" : 1039,
+  "macAddr" : "02:00:17:05:D1:DB",
+  "virtualRouterIp" : "10.0.0.1",
+  "subnetCidrBlock" : "10.0.0.0/24"
+}, {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljt4iew3gwmvrwrhhf3bp5drj_truncated",
+  "privateIp" : "10.0.0.231",
+  "vlanTag" : 1041,
+  "macAddr" : "00:00:17:02:2B:B1",
+  "virtualRouterIp" : "10.0.0.1",
+  "subnetCidrBlock" : "10.0.0.0/24"
+} ]"""
+
 
 class TestDataSourceOracle(test_helpers.CiTestCase):
     """Test datasource DataSourceOracle."""
 
+    with_logs = True
+
     ds_class = oracle.DataSourceOracle
 
     my_uuid = str(uuid.uuid4())
@@ -79,6 +121,16 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
         self.assertEqual(
             'metadata (http://169.254.169.254/openstack/)', ds.subplatform)
 
+    def test_sys_cfg_can_enable_configure_secondary_nics(self):
+        # Confirm that behaviour is toggled by sys_cfg
+        ds, _mocks = self._get_ds()
+        self.assertFalse(ds.ds_cfg['configure_secondary_nics'])
+
+        sys_cfg = {
+            'datasource': {'Oracle': {'configure_secondary_nics': True}}}
+        ds, _mocks = self._get_ds(sys_cfg=sys_cfg)
+        self.assertTrue(ds.ds_cfg['configure_secondary_nics'])
+
     @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
     def test_without_userdata(self, m_is_iscsi_root):
         """If no user-data is provided, it should not be in return dict."""
@@ -133,9 +185,12 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
         self.assertEqual(self.my_md['uuid'], ds.get_instance_id())
         self.assertEqual(my_userdata, ds.userdata_raw)
 
-    @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
+                side_effect=lambda network_config: network_config)
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
     @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
-    def test_network_cmdline(self, m_is_iscsi_root, m_cmdline_config):
+    def test_network_cmdline(self, m_is_iscsi_root, m_initramfs_config,
+                             _m_add_network_config_from_opc_imds):
         """network_config should read kernel cmdline."""
         distro = mock.MagicMock()
         ds, _ = self._get_ds(distro=distro, patches={
@@ -145,15 +200,18 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
                     MD_VER: {'system_uuid': self.my_uuid,
                              'meta_data': self.my_md}}}})
         ncfg = {'version': 1, 'config': [{'a': 'b'}]}
-        m_cmdline_config.return_value = ncfg
+        m_initramfs_config.return_value = ncfg
         self.assertTrue(ds._get_data())
         self.assertEqual(ncfg, ds.network_config)
-        m_cmdline_config.assert_called_once_with()
+        self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
         self.assertFalse(distro.generate_fallback_config.called)
 
-    @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
+                side_effect=lambda network_config: network_config)
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
     @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
-    def test_network_fallback(self, m_is_iscsi_root, m_cmdline_config):
+    def test_network_fallback(self, m_is_iscsi_root, m_initramfs_config,
+                              _m_add_network_config_from_opc_imds):
         """test that fallback network is generated if no kernel cmdline."""
         distro = mock.MagicMock()
         ds, _ = self._get_ds(distro=distro, patches={
@@ -163,18 +221,95 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
                     MD_VER: {'system_uuid': self.my_uuid,
                              'meta_data': self.my_md}}}})
         ncfg = {'version': 1, 'config': [{'a': 'b'}]}
-        m_cmdline_config.return_value = None
+        m_initramfs_config.return_value = None
         self.assertTrue(ds._get_data())
         ncfg = {'version': 1, 'config': [{'distro1': 'value'}]}
         distro.generate_fallback_config.return_value = ncfg
         self.assertEqual(ncfg, ds.network_config)
-        m_cmdline_config.assert_called_once_with()
+        self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
         distro.generate_fallback_config.assert_called_once_with()
-        self.assertEqual(1, m_cmdline_config.call_count)
 
         # test that the result got cached, and the methods not re-called.
         self.assertEqual(ncfg, ds.network_config)
-        self.assertEqual(1, m_cmdline_config.call_count)
+        self.assertEqual(1, m_initramfs_config.call_count)
+
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
+                return_value={'some': 'config'})
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_secondary_nics_added_to_network_config_if_enabled(
+            self, _m_is_iscsi_root, _m_initramfs_config,
+            m_add_network_config_from_opc_imds):
+
+        needle = object()
+
+        def network_config_side_effect(network_config):
+            network_config['secondary_added'] = needle
+
+        m_add_network_config_from_opc_imds.side_effect = (
+            network_config_side_effect)
+
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        ds.ds_cfg['configure_secondary_nics'] = True
+        self.assertEqual(needle, ds.network_config['secondary_added'])
+
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
+                return_value={'some': 'config'})
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_secondary_nics_not_added_to_network_config_by_default(
+            self, _m_is_iscsi_root, _m_initramfs_config,
+            m_add_network_config_from_opc_imds):
+
+        def network_config_side_effect(network_config):
+            network_config['secondary_added'] = True
+
+        m_add_network_config_from_opc_imds.side_effect = (
+            network_config_side_effect)
+
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        self.assertNotIn('secondary_added', ds.network_config)
+
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_secondary_nic_failure_isnt_blocking(
+            self, _m_is_iscsi_root, m_initramfs_config,
+            m_add_network_config_from_opc_imds):
+
+        m_add_network_config_from_opc_imds.side_effect = Exception()
+
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        ds.ds_cfg['configure_secondary_nics'] = True
+        self.assertEqual(ds.network_config, m_initramfs_config.return_value)
+        self.assertIn('Failed to fetch secondary network configuration',
+                      self.logs.getvalue())
+
+    def test_ds_network_cfg_preferred_over_initramfs(self):
+        """Ensure that DS net config is preferred over initramfs config"""
+        network_config_sources = oracle.DataSourceOracle.network_config_sources
+        self.assertLess(
+            network_config_sources.index(NetworkConfigSource.ds),
+            network_config_sources.index(NetworkConfigSource.initramfs)
+        )
 
 
 @mock.patch(DS_PATH + "._read_system_uuid", return_value=str(uuid.uuid4()))
@@ -336,4 +471,86 @@ class TestLoadIndex(test_helpers.CiTestCase):
             oracle._load_index("\n".join(["meta_data.json", "user_data"])))
 
 
+class TestNetworkConfigFromOpcImds(test_helpers.CiTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestNetworkConfigFromOpcImds, self).setUp()
+        self.add_patch(DS_PATH + '.readurl', 'm_readurl')
+        self.add_patch(DS_PATH + '.get_interfaces_by_mac',
+                       'm_get_interfaces_by_mac')
+
+    def test_failure_to_readurl(self):
+        # readurl failures should just bubble out to the caller
+        self.m_readurl.side_effect = Exception('oh no')
+        with self.assertRaises(Exception) as excinfo:
+            oracle._add_network_config_from_opc_imds({})
+        self.assertEqual(str(excinfo.exception), 'oh no')
+
+    def test_empty_response(self):
+        # empty response error should just bubble out to the caller
+        self.m_readurl.return_value = ''
+        with self.assertRaises(Exception):
+            oracle._add_network_config_from_opc_imds([])
+
+    def test_invalid_json(self):
+        # invalid JSON error should just bubble out to the caller
+        self.m_readurl.return_value = '{'
+        with self.assertRaises(Exception):
+            oracle._add_network_config_from_opc_imds([])
+
+    def test_no_secondary_nics_does_not_mutate_input(self):
+        self.m_readurl.return_value = json.dumps([{}])
+        # We test this by passing in a non-dict to ensure that no dict
+        # operations are used; failure would be seen as exceptions
+        oracle._add_network_config_from_opc_imds(object())
+
+    def test_bare_metal_machine_skipped(self):
+        # nicIndex in the first entry indicates a bare metal machine
+        self.m_readurl.return_value = OPC_BM_SECONDARY_VNIC_RESPONSE
+        # We test this by passing in a non-dict to ensure that no dict
+        # operations are used
+        self.assertFalse(oracle._add_network_config_from_opc_imds(object()))
+        self.assertIn('bare metal machine', self.logs.getvalue())
+
+    def test_missing_mac_skipped(self):
+        self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
+        self.m_get_interfaces_by_mac.return_value = {}
+
+        network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
+        oracle._add_network_config_from_opc_imds(network_config)
+
+        self.assertEqual(1, len(network_config['config']))
+        self.assertIn(
+            'Interface with MAC 00:00:17:02:2b:b1 not found; skipping',
+            self.logs.getvalue())
+
+    def test_secondary_nic(self):
+        self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
+        mac_addr, nic_name = '00:00:17:02:2b:b1', 'ens3'
+        self.m_get_interfaces_by_mac.return_value = {
+            mac_addr: nic_name,
+        }
+
+        network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
+        oracle._add_network_config_from_opc_imds(network_config)
+
+        # The input is mutated
+        self.assertEqual(2, len(network_config['config']))
+
+        secondary_nic_cfg = network_config['config'][1]
+        self.assertEqual(nic_name, secondary_nic_cfg['name'])
+        self.assertEqual('physical', secondary_nic_cfg['type'])
+        self.assertEqual(mac_addr, secondary_nic_cfg['mac_address'])
+        self.assertEqual(9000, secondary_nic_cfg['mtu'])
+
+        self.assertEqual(1, len(secondary_nic_cfg['subnets']))
+        subnet_cfg = secondary_nic_cfg['subnets'][0]
+        # These values are hard-coded in OPC_VM_SECONDARY_VNIC_RESPONSE
+        self.assertEqual('10.0.0.231', subnet_cfg['address'])
+        self.assertEqual('24', subnet_cfg['netmask'])
+        self.assertEqual('10.0.0.1', subnet_cfg['gateway'])
+        self.assertEqual('manual', subnet_cfg['control'])
+
 # vi: ts=4 expandtab
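
As a companion to the tests above, here is a minimal sketch, assuming
hypothetical IMDS field names (macAddr, privateIp, subnetCidrBlock,
virtualRouterIp), of how a secondary vNIC record could be translated into
the network config entry these tests assert on; this is not cloud-init's
actual implementation:

    def vnic_to_nic_entry(vnic, interfaces_by_mac):
        """Build a v1 'physical' NIC entry from one IMDS vNIC record."""
        mac = vnic['macAddr'].lower()
        name = interfaces_by_mac.get(mac)
        if name is None:
            return None  # interface absent; the real code logs and skips it
        # subnetCidrBlock looks like '10.0.0.0/24'; keep the prefix length
        netmask = vnic['subnetCidrBlock'].split('/')[1]
        return {
            'name': name,
            'type': 'physical',
            'mac_address': mac,
            'mtu': 9000,  # the MTU the tests above expect
            'subnets': [{
                'type': 'static',
                'address': vnic['privateIp'],
                'netmask': netmask,
                'gateway': vnic['virtualRouterIp'],
                'control': 'manual',  # do not auto-configure at boot
            }],
        }
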
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index da7d349..5012988 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -24,6 +24,7 @@ from cloudinit.handlers.shell_script import ShellScriptPartHandler
 from cloudinit.handlers.upstart_job import UpstartJobPartHandler
 
 from cloudinit.event import EventType
+from cloudinit.sources import NetworkConfigSource
 
 from cloudinit import cloud
 from cloudinit import config
@@ -630,32 +631,54 @@ class Init(object):
         if os.path.exists(disable_file):
             return (None, disable_file)
 
-        cmdline_cfg = ('cmdline', cmdline.read_kernel_cmdline_config())
-        dscfg = ('ds', None)
+        available_cfgs = {
+            NetworkConfigSource.cmdline: cmdline.read_kernel_cmdline_config(),
+            NetworkConfigSource.initramfs: cmdline.read_initramfs_config(),
+            NetworkConfigSource.ds: None,
+            NetworkConfigSource.system_cfg: self.cfg.get('network'),
+        }
+
         if self.datasource and hasattr(self.datasource, 'network_config'):
-            dscfg = ('ds', self.datasource.network_config)
-        sys_cfg = ('system_cfg', self.cfg.get('network'))
+            available_cfgs[NetworkConfigSource.ds] = (
+                self.datasource.network_config)
 
-        for loc, ncfg in (cmdline_cfg, sys_cfg, dscfg):
+        if self.datasource:
+            order = self.datasource.network_config_sources
+        else:
+            order = sources.DataSource.network_config_sources
+        for cfg_source in order:
+            if not hasattr(NetworkConfigSource, cfg_source):
+                LOG.warning('data source specifies an invalid network'
+                            ' cfg_source: %s', cfg_source)
+                continue
+            if cfg_source not in available_cfgs:
+                LOG.warning('data source specifies an unavailable network'
+                            ' cfg_source: %s', cfg_source)
+                continue
+            ncfg = available_cfgs[cfg_source]
             if net.is_disabled_cfg(ncfg):
-                LOG.debug("network config disabled by %s", loc)
-                return (None, loc)
+                LOG.debug("network config disabled by %s", cfg_source)
+                return (None, cfg_source)
             if ncfg:
-                return (ncfg, loc)
-        return (self.distro.generate_fallback_config(), "fallback")
-
-    def apply_network_config(self, bring_up):
-        netcfg, src = self._find_networking_config()
-        if netcfg is None:
-            LOG.info("network config is disabled by %s", src)
-            return
+                return (ncfg, cfg_source)
+        return (self.distro.generate_fallback_config(),
+                NetworkConfigSource.fallback)
 
+    def _apply_netcfg_names(self, netcfg):
         try:
             LOG.debug("applying net config names for %s", netcfg)
             self.distro.apply_network_config_names(netcfg)
         except Exception as e:
             LOG.warning("Failed to rename devices: %s", e)
 
+    def apply_network_config(self, bring_up):
+        # get a network config
+        netcfg, src = self._find_networking_config()
+        if netcfg is None:
+            LOG.info("network config is disabled by %s", src)
+            return
+
+        # request an update if needed/available
         if self.datasource is not NULL_DATA_SOURCE:
             if not self.is_new_instance():
                 if not self.datasource.update_metadata([EventType.BOOT]):
@@ -663,8 +686,20 @@ class Init(object):
                         "No network config applied. Neither a new instance"
                         " nor datasource network update on '%s' event",
                         EventType.BOOT)
+                    # nothing new, but ensure proper names
+                    self._apply_netcfg_names(netcfg)
                     return
+                else:
+                    # refresh netcfg after update
+                    netcfg, src = self._find_networking_config()
+
+        # ensure all physical devices in config are present
+        net.wait_for_physdevs(netcfg)
+
+        # apply renames from config
+        self._apply_netcfg_names(netcfg)
 
+        # rendering config
         LOG.info("Applying network configuration from %s bringup=%s: %s",
                  src, bring_up, netcfg)
         try:
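
The reordering logic added above can be illustrated with a toy sketch
(simplified: the real code uses NetworkConfigSource constants and
net.is_disabled_cfg rather than literal strings):

    def first_config(order, available_cfgs):
        """Return (config, source) for the first usable source in order."""
        for src in order:
            ncfg = available_cfgs.get(src)
            if ncfg == {'config': 'disabled'}:  # stand-in for is_disabled_cfg
                return None, src
            if ncfg:
                return ncfg, src
        return {'config': ['fallback']}, 'fallback'

    # A datasource that prefers its own config over cmdline/initramfs:
    order = ['ds', 'system_cfg', 'cmdline', 'initramfs']
    available = {
        'cmdline': {'config': 'disabled'},
        'initramfs': {},
        'ds': {'config': [{'type': 'physical', 'name': 'eth0'}]},
        'system_cfg': None,
    }
    assert first_config(order, available)[1] == 'ds'
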
diff --git a/cloudinit/tests/helpers.py b/cloudinit/tests/helpers.py
index f41180f..23fddd0 100644
--- a/cloudinit/tests/helpers.py
+++ b/cloudinit/tests/helpers.py
@@ -198,7 +198,8 @@ class CiTestCase(TestCase):
                 prefix="ci-%s." % self.__class__.__name__)
         else:
             tmpd = tempfile.mkdtemp(dir=dir)
-        self.addCleanup(functools.partial(shutil.rmtree, tmpd))
+        self.addCleanup(
+            functools.partial(shutil.rmtree, tmpd, ignore_errors=True))
         return tmpd
 
     def tmp_path(self, path, dir=None):
diff --git a/cloudinit/tests/test_stages.py b/cloudinit/tests/test_stages.py
index 94b6b25..d5c9c0e 100644
--- a/cloudinit/tests/test_stages.py
+++ b/cloudinit/tests/test_stages.py
@@ -6,6 +6,7 @@ import os
 
 from cloudinit import stages
 from cloudinit import sources
+from cloudinit.sources import NetworkConfigSource
 
 from cloudinit.event import EventType
 from cloudinit.util import write_file
@@ -37,6 +38,7 @@ class FakeDataSource(sources.DataSource):
 
 class TestInit(CiTestCase):
     with_logs = True
+    allowed_subp = False
 
     def setUp(self):
         super(TestInit, self).setUp()
@@ -57,84 +59,189 @@ class TestInit(CiTestCase):
             (None, disable_file),
             self.init._find_networking_config())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_disabled_by_kernel(self, m_cmdline):
+    def test_wb__find_networking_config_disabled_by_kernel(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns when disabled by kernel cmdline."""
         m_cmdline.return_value = {'config': 'disabled'}
+        m_initramfs.return_value = {'config': ['fake_initrd']}
         self.assertEqual(
-            (None, 'cmdline'),
+            (None, NetworkConfigSource.cmdline),
             self.init._find_networking_config())
         self.assertEqual('DEBUG: network config disabled by cmdline\n',
                          self.logs.getvalue())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_disabled_by_datasrc(self, m_cmdline):
+    def test_wb__find_networking_config_disabled_by_initrd(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config returns when disabled by kernel cmdline."""
+        m_cmdline.return_value = {}
+        m_initramfs.return_value = {'config': 'disabled'}
+        self.assertEqual(
+            (None, NetworkConfigSource.initramfs),
+            self.init._find_networking_config())
+        self.assertEqual('DEBUG: network config disabled by initramfs\n',
+                         self.logs.getvalue())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test_wb__find_networking_config_disabled_by_datasrc(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns when disabled by datasource cfg."""
         m_cmdline.return_value = {}  # Kernel doesn't disable networking
+        m_initramfs.return_value = {}  # initramfs doesn't disable networking
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': {}}  # system config doesn't disable
 
         self.init.datasource = FakeDataSource(
             network_config={'config': 'disabled'})
         self.assertEqual(
-            (None, 'ds'),
+            (None, NetworkConfigSource.ds),
             self.init._find_networking_config())
         self.assertEqual('DEBUG: network config disabled by ds\n',
                          self.logs.getvalue())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_disabled_by_sysconfig(self, m_cmdline):
+    def test_wb__find_networking_config_disabled_by_sysconfig(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns when disabled by system config."""
         m_cmdline.return_value = {}  # Kernel doesn't disable networking
+        m_initramfs.return_value = {}  # initramfs doesn't disable networking
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': {'config': 'disabled'}}
         self.assertEqual(
-            (None, 'system_cfg'),
+            (None, NetworkConfigSource.system_cfg),
             self.init._find_networking_config())
         self.assertEqual('DEBUG: network config disabled by system_cfg\n',
                          self.logs.getvalue())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test__find_networking_config_uses_datasrc_order(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config should check sources in DS defined order"""
+        # cmdline and initramfs, which would normally be preferred over other
+        # sources, disable networking; in this case, though, the DS moves them
+        # later so its own config is preferred
+        m_cmdline.return_value = {'config': 'disabled'}
+        m_initramfs.return_value = {'config': 'disabled'}
+
+        ds_net_cfg = {'config': {'needle': True}}
+        self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
+        self.init.datasource.network_config_sources = [
+            NetworkConfigSource.ds, NetworkConfigSource.system_cfg,
+            NetworkConfigSource.cmdline, NetworkConfigSource.initramfs]
+
+        self.assertEqual(
+            (ds_net_cfg, NetworkConfigSource.ds),
+            self.init._find_networking_config())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test__find_networking_config_warns_if_datasrc_uses_invalid_src(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config should check sources in DS defined order"""
+        ds_net_cfg = {'config': {'needle': True}}
+        self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
+        self.init.datasource.network_config_sources = [
+            'invalid_src', NetworkConfigSource.ds]
+
+        self.assertEqual(
+            (ds_net_cfg, NetworkConfigSource.ds),
+            self.init._find_networking_config())
+        self.assertIn('WARNING: data source specifies an invalid network'
+                      ' cfg_source: invalid_src',
+                      self.logs.getvalue())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_returns_kernel(self, m_cmdline):
+    def test__find_networking_config_warns_if_datasrc_uses_unavailable_src(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config should check sources in DS defined order"""
+        ds_net_cfg = {'config': {'needle': True}}
+        self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
+        self.init.datasource.network_config_sources = [
+            NetworkConfigSource.fallback, NetworkConfigSource.ds]
+
+        self.assertEqual(
+            (ds_net_cfg, NetworkConfigSource.ds),
+            self.init._find_networking_config())
+        self.assertIn('WARNING: data source specifies an unavailable network'
+                      ' cfg_source: fallback',
+                      self.logs.getvalue())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test_wb__find_networking_config_returns_kernel(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns kernel cmdline config if present."""
         expected_cfg = {'config': ['fakekernel']}
         m_cmdline.return_value = expected_cfg
+        m_initramfs.return_value = {'config': ['fake_initrd']}
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': {'config': ['fakesys_config']}}
         self.init.datasource = FakeDataSource(
             network_config={'config': ['fakedatasource']})
         self.assertEqual(
-            (expected_cfg, 'cmdline'),
+            (expected_cfg, NetworkConfigSource.cmdline),
             self.init._find_networking_config())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_returns_system_cfg(self, m_cmdline):
+    def test_wb__find_networking_config_returns_initramfs(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config returns kernel cmdline config if present."""
+        expected_cfg = {'config': ['fake_initrd']}
+        m_cmdline.return_value = {}
+        m_initramfs.return_value = expected_cfg
+        self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
+                          'network': {'config': ['fakesys_config']}}
+        self.init.datasource = FakeDataSource(
+            network_config={'config': ['fakedatasource']})
+        self.assertEqual(
+            (expected_cfg, NetworkConfigSource.initramfs),
+            self.init._find_networking_config())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test_wb__find_networking_config_returns_system_cfg(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns system config when present."""
         m_cmdline.return_value = {}  # No kernel network config
+        m_initramfs.return_value = {}  # no initramfs network config
         expected_cfg = {'config': ['fakesys_config']}
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': expected_cfg}
         self.init.datasource = FakeDataSource(
             network_config={'config': ['fakedatasource']})
         self.assertEqual(
-            (expected_cfg, 'system_cfg'),
+            (expected_cfg, NetworkConfigSource.system_cfg),
             self.init._find_networking_config())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_returns_datasrc_cfg(self, m_cmdline):
+    def test_wb__find_networking_config_returns_datasrc_cfg(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns datasource net config if present."""
         m_cmdline.return_value = {}  # No kernel network config
+        m_initramfs.return_value = {}  # no initramfs network config
         # No system config for network in setUp
         expected_cfg = {'config': ['fakedatasource']}
         self.init.datasource = FakeDataSource(network_config=expected_cfg)
         self.assertEqual(
-            (expected_cfg, 'ds'),
+            (expected_cfg, NetworkConfigSource.ds),
             self.init._find_networking_config())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_returns_fallback(self, m_cmdline):
+    def test_wb__find_networking_config_returns_fallback(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns fallback config if not defined."""
         m_cmdline.return_value = {}  # Kernel doesn't disable networking
+        m_initramfs.return_value = {}  # no initramfs network config
         # Neither datasource nor system_info disable or provide network
 
         fake_cfg = {'config': [{'type': 'physical', 'name': 'eth9'}],
@@ -147,7 +254,7 @@ class TestInit(CiTestCase):
         distro = self.init.distro
         distro.generate_fallback_config = fake_generate_fallback
         self.assertEqual(
-            (fake_cfg, 'fallback'),
+            (fake_cfg, NetworkConfigSource.fallback),
             self.init._find_networking_config())
         self.assertNotIn('network config disabled', self.logs.getvalue())
 
@@ -166,8 +273,9 @@ class TestInit(CiTestCase):
             'INFO: network config is disabled by %s' % disable_file,
             self.logs.getvalue())
 
+    @mock.patch('cloudinit.net.get_interfaces_by_mac')
     @mock.patch('cloudinit.distros.ubuntu.Distro')
-    def test_apply_network_on_new_instance(self, m_ubuntu):
+    def test_apply_network_on_new_instance(self, m_ubuntu, m_macs):
         """Call distro apply_network_config methods on is_new_instance."""
         net_cfg = {
             'version': 1, 'config': [
@@ -175,7 +283,9 @@ class TestInit(CiTestCase):
                  'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
 
         def fake_network_config():
-            return net_cfg, 'fallback'
+            return net_cfg, NetworkConfigSource.fallback
+
+        m_macs.return_value = {'42:42:42:42:42:42': 'eth9'}
 
         self.init._find_networking_config = fake_network_config
         self.init.apply_network_config(True)
@@ -195,7 +305,7 @@ class TestInit(CiTestCase):
                  'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
 
         def fake_network_config():
-            return net_cfg, 'fallback'
+            return net_cfg, NetworkConfigSource.fallback
 
         self.init._find_networking_config = fake_network_config
         self.init.apply_network_config(True)
@@ -206,8 +316,9 @@ class TestInit(CiTestCase):
             " nor datasource network update on '%s' event" % EventType.BOOT,
             self.logs.getvalue())
 
+    @mock.patch('cloudinit.net.get_interfaces_by_mac')
     @mock.patch('cloudinit.distros.ubuntu.Distro')
-    def test_apply_network_on_datasource_allowed_event(self, m_ubuntu):
+    def test_apply_network_on_datasource_allowed_event(self, m_ubuntu, m_macs):
         """Apply network if datasource.update_metadata permits BOOT event."""
         old_instance_id = os.path.join(
             self.init.paths.get_cpath('data'), 'instance-id')
@@ -218,7 +329,9 @@ class TestInit(CiTestCase):
                  'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
 
         def fake_network_config():
-            return net_cfg, 'fallback'
+            return net_cfg, NetworkConfigSource.fallback
+
+        m_macs.return_value = {'42:42:42:42:42:42': 'eth9'}
 
         self.init._find_networking_config = fake_network_config
         self.init.datasource = FakeDataSource(paths=self.init.paths)
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 0af0d9e..44ee61d 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -199,18 +199,19 @@ def _get_ssl_args(url, ssl_details):
 def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
             headers=None, headers_cb=None, ssl_details=None,
             check_status=True, allow_redirects=True, exception_cb=None,
-            session=None, infinite=False, log_req_resp=True):
+            session=None, infinite=False, log_req_resp=True,
+            request_method=None):
     url = _cleanurl(url)
     req_args = {
         'url': url,
     }
     req_args.update(_get_ssl_args(url, ssl_details))
     req_args['allow_redirects'] = allow_redirects
-    req_args['method'] = 'GET'
+    if not request_method:
+        request_method = 'POST' if data else 'GET'
+    req_args['method'] = request_method
     if timeout is not None:
         req_args['timeout'] = max(float(timeout), 0)
-    if data:
-        req_args['method'] = 'POST'
     # It doesn't seem like config
     # was added in older library versions (or newer ones either), thus we
     # need to manually do the retries if it wasn't...
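
With the change above, callers can force an HTTP method explicitly; absent
request_method, readurl keeps its old behavior of GET, or POST when a body
is supplied. A hypothetical usage sketch (the URL is illustrative):

    from cloudinit.url_helper import readurl

    url = 'http://169.254.169.254/metadata'
    resp = readurl(url, retries=2)                   # GET (default)
    resp = readurl(url, data=b'payload')             # POST, implied by data
    resp = readurl(url, data=b'payload',
                   request_method='PUT')             # explicit override
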
diff --git a/cloudinit/version.py b/cloudinit/version.py
index ddcd436..b04b11f 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "19.1"
+__VERSION__ = "19.2"
 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
 
 FEATURES = [
diff --git a/debian/changelog b/debian/changelog
index e14ee1c..418863a 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,9 +1,67 @@
-cloud-init (19.1-1-gbaa47854-0ubuntu1~16.04.2) UNRELEASED; urgency=medium
+cloud-init (19.2-21-ge6383719-0ubuntu1~16.04.1) xenial; urgency=medium
 
   * refresh patches:
    + debian/patches/ubuntu-advantage-revert-tip.patch
-
- -- Chad Smith <chad.smith@xxxxxxxxxxxxx>  Tue, 04 Jun 2019 14:59:14 -0600
+  * refresh patches:
+   + debian/patches/azure-apply-network-config-false.patch
+   + debian/patches/azure-use-walinux-agent.patch
+   + debian/patches/ubuntu-advantage-revert-tip.patch
+  * New upstream snapshot. (LP: #1841099)
+    - ubuntu-drivers: call db_x_loadtemplatefile to accept NVIDIA EULA
+    - Add missing #cloud-config comment on first example in documentation.
+      [Florian Müller]
+    - ubuntu-drivers: emit latelink=true debconf to accept nvidia eula
+    - DataSourceOracle: prefer DS network config over initramfs
+    - format.rst: add text/jinja2 to list of content types (+ cleanups)
+    - Add GitHub pull request template to point people at hacking doc
+    - cloudinit/distros/parsers/sys_conf: add docstring to SysConf
+    - pyflakes: remove unused variable [Joshua Powers]
+    - Azure: Record boot timestamps, system information, and diagnostic events
+      [Anh Vo]
+    - DataSourceOracle: configure secondary NICs on Virtual Machines
+    - distros: fix confusing variable names
+    - azure/net: generate_fallback_nic emits network v2 config instead of v1
+    - Add support for publishing host keys to GCE guest attributes
+      [Rick Wright]
+    - New data source for the Exoscale.com cloud platform [Chris Glass]
+    - doc: remove intersphinx extension
+    - cc_set_passwords: rewrite documentation
+    - net/cmdline: split interfaces_by_mac and init network config
+      determination
+    - stages: allow data sources to override network config source order
+    - cloud_tests: updates and fixes
+    - Fix bug rendering MTU on bond or vlan when input was netplan.
+      [Scott Moser]
+    - net: update net sequence, include wait on netdevs, opensuse netrules path
+    - Release 19.2
+    - net: add rfc3442 (classless static routes) to EphemeralDHCP
+    - templates/ntp.conf.debian.tmpl: fix missing newline for pools
+    - Support netplan renderer in Arch Linux [Conrad Hoffmann]
+    - Fix typo in publicly viewable documentation. [David Medberry]
+    - Add a cdrom size checker for OVF ds to ds-identify [Pengpeng Sun]
+    - VMWare: Trigger the post customization script via cc_scripts module.
+      [Xiaofeng Wang]
+    - Cloud-init analyze module: Added ability to analyze boot events.
+      [Sam Gilson]
+    - Update debian eni network configuration location, retain Ubuntu setting
+      [Janos Lenart]
+    - net: skip bond interfaces in get_interfaces [Stanislav Makar]
+    - Fix a couple of issues raised by a coverity scan
+    - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
+    - doc: indicate that netplan is default in Ubuntu now
+    - azure: add region and AZ properties from imds compute location metadata
+    - sysconfig: support more bonding options [Penghui Liao]
+    - cloud-init-generator: use libexec path to ds-identify on redhat systems
+    - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
+    - Allow identification of OpenStack by Asset Tag [Mark T. Voelker]
+    - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
+    - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
+    - netplan: update netplan key mappings for gratuitous-arp
+    - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
+    - freebsd: ability to grow root file system [Gonéri Le Bouder]
+    - freebsd: NoCloud data source support [Gonéri Le Bouder]
+
+ -- Chad Smith <chad.smith@xxxxxxxxxxxxx>  Thu, 22 Aug 2019 11:55:27 -0600
 
 cloud-init (19.1-1-gbaa47854-0ubuntu1~16.04.1) xenial; urgency=medium
 
diff --git a/debian/patches/azure-apply-network-config-false.patch b/debian/patches/azure-apply-network-config-false.patch
index f0c2fcf..d982c1d 100644
--- a/debian/patches/azure-apply-network-config-false.patch
+++ b/debian/patches/azure-apply-network-config-false.patch
@@ -10,7 +10,7 @@ Forwarded: not-needed
 Last-Update: 2018-10-17
 --- a/cloudinit/sources/DataSourceAzure.py
 +++ b/cloudinit/sources/DataSourceAzure.py
-@@ -220,7 +220,7 @@ BUILTIN_DS_CONFIG = {
+@@ -225,7 +225,7 @@ BUILTIN_DS_CONFIG = {
      },
      'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH},
      'dhclient_lease_file': LEASE_FILE,
diff --git a/debian/patches/azure-use-walinux-agent.patch b/debian/patches/azure-use-walinux-agent.patch
index b4ad76c..3e9ddd9 100644
--- a/debian/patches/azure-use-walinux-agent.patch
+++ b/debian/patches/azure-use-walinux-agent.patch
@@ -6,7 +6,7 @@ Forwarded: not-needed
 Author: Scott Moser <smoser@xxxxxxxxxx>
 --- a/cloudinit/sources/DataSourceAzure.py
 +++ b/cloudinit/sources/DataSourceAzure.py
-@@ -209,7 +209,7 @@ if util.is_FreeBSD():
+@@ -214,7 +214,7 @@ if util.is_FreeBSD():
      PLATFORM_ENTROPY_SOURCE = None
  
  BUILTIN_DS_CONFIG = {
diff --git a/debian/patches/ubuntu-advantage-revert-tip.patch b/debian/patches/ubuntu-advantage-revert-tip.patch
index 966d2f3..0730a06 100644
--- a/debian/patches/ubuntu-advantage-revert-tip.patch
+++ b/debian/patches/ubuntu-advantage-revert-tip.patch
@@ -9,10 +9,8 @@ Forwarded: not-needed
 Last-Update: 2019-05-10
 ---
 This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
-Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py
-===================================================================
---- cloud-init.orig/cloudinit/config/cc_ubuntu_advantage.py
-+++ cloud-init/cloudinit/config/cc_ubuntu_advantage.py
+--- a/cloudinit/config/cc_ubuntu_advantage.py
++++ b/cloudinit/config/cc_ubuntu_advantage.py
 @@ -1,143 +1,150 @@
 +# Copyright (C) 2018 Canonical Ltd.
 +#
@@ -294,10 +292,8 @@ Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py
 +    run_commands(cfgin.get('commands', []))
  
  # vi: ts=4 expandtab
-Index: cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
-===================================================================
---- cloud-init.orig/cloudinit/config/tests/test_ubuntu_advantage.py
-+++ cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
+--- a/cloudinit/config/tests/test_ubuntu_advantage.py
++++ b/cloudinit/config/tests/test_ubuntu_advantage.py
 @@ -1,7 +1,10 @@
  # This file is part of cloud-init. See LICENSE file for license information.
  
diff --git a/doc/examples/cloud-config-datasources.txt b/doc/examples/cloud-config-datasources.txt
index 2651c02..52a2476 100644
--- a/doc/examples/cloud-config-datasources.txt
+++ b/doc/examples/cloud-config-datasources.txt
@@ -38,7 +38,7 @@ datasource:
     # these are optional, but allow you to basically provide a datasource
     # right here
     user-data: |
-       # This is the user-data verbatum
+       # This is the user-data verbatim
     meta-data:
        instance-id: i-87018aed
        local-hostname: myhost.internal
diff --git a/doc/examples/cloud-config-user-groups.txt b/doc/examples/cloud-config-user-groups.txt
index 6a363b7..f588bfb 100644
--- a/doc/examples/cloud-config-user-groups.txt
+++ b/doc/examples/cloud-config-user-groups.txt
@@ -1,3 +1,4 @@
+#cloud-config
 # Add groups to the system
 # The following example adds the ubuntu group with members 'root' and 'sys'
 # and the empty group cloud-users.
diff --git a/doc/rtd/conf.py b/doc/rtd/conf.py
index 50eb05c..4174477 100644
--- a/doc/rtd/conf.py
+++ b/doc/rtd/conf.py
@@ -27,16 +27,11 @@ project = 'Cloud-Init'
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = [
-    'sphinx.ext.intersphinx',
     'sphinx.ext.autodoc',
     'sphinx.ext.autosectionlabel',
     'sphinx.ext.viewcode',
 ]
 
-intersphinx_mapping = {
-    'sphinx': ('http://sphinx.pocoo.org', None)
-}
-
 # The suffix of source filenames.
 source_suffix = '.rst'
 
diff --git a/doc/rtd/topics/analyze.rst b/doc/rtd/topics/analyze.rst
new file mode 100644
index 0000000..5cf38bd
--- /dev/null
+++ b/doc/rtd/topics/analyze.rst
@@ -0,0 +1,84 @@
+*************************
+Cloud-init Analyze Module
+*************************
+
+Overview
+========
+The analyze module helps analyze cloud-init boot-time performance. It is
+loosely based on systemd-analyze and provides four main actions:
+show, blame, dump, and boot.
+
+The 'show' action is similar to 'systemd-analyze critical-chain', which prints a list of units, the
+time they started, and how long they took. For cloud-init, we have four stages, and within each stage
+a number of modules may run depending on configuration. 'cloud-init analyze show' will print this
+information and a summary total time for each boot.
+
+The 'blame' action matches 'systemd-analyze blame', printing in descending order
+the units that took the longest to run. This output is useful for examining where cloud-init
+spends its time during execution.
+
+The 'dump' action simply dumps the cloud-init log events that the analysis is based on,
+returning a list of dictionaries that can be consumed for other reporting needs.
+
+The 'boot' action prints kernel-related timestamps that are not included in any of the
+cloud-init logs. Three timestamps are presented to the user:
+kernel start, kernel finish boot, and cloud-init start. This gives additional
+visibility into the parts of the boot process that cloud-init does not control, to aid in
+debugging performance issues related to cloud-init startup and in tracking regressions.
+
+Usage
+=====
+Using each of the printing formats is as easy as running one of the following bash commands:
+
+.. code-block:: shell-session
+
+  cloud-init analyze show
+  cloud-init analyze blame
+  cloud-init analyze dump
+  cloud-init analyze boot
+
+Cloud-init analyze boot Timestamp Gathering
+===========================================
+The following boot-related timestamps are gathered on demand when cloud-init analyze boot runs:
+
+- Kernel startup, which is inferred from system uptime
+- Kernel finishes initialization, which is inferred from the systemd
+  UserspaceTimestampMonotonic property
+- Cloud-init activation, which is inferred from the InactiveExitTimestampMonotonic property
+  of the cloud-init-local systemd unit.
+
+To gather the necessary timestamps using systemd, run the commands
+
+.. code-block:: shell-session
+
+	systemctl show -p UserspaceTimestampMonotonic  
+	systemctl show cloud-init-local -p InactiveExitTimestampMonotonic
+
+These commands gather the UserspaceTimestamp and InactiveExitTimestamp.
+The UserspaceTimestamp tracks when the init system starts, which is used as an indicator that
+the kernel has finished initialization. The InactiveExitTimestamp tracks when a particular
+systemd unit transitions from the Inactive to the Active state, which can be used to mark the
+beginning of systemd's activation of cloud-init.
+
+Currently this only works for distros that use systemd as the init system. Support for other
+distros will be expanded in the future, and this document will be updated accordingly.
+
+If systemd is not present on the system, dmesg is used to attempt to find an event that logs the
+beginning of the init system. However, with this method only the first two timestamps can be
+found; dmesg does not monitor userspace processes, so no cloud-init start timestamp is available
+as it is with systemd.
+
+List of Cloud-init analyze boot supported distros
+=================================================
+- Arch
+- CentOS
+- Debian
+- Fedora
+- OpenSuSE
+- Red Hat Enterprise Linux
+- Ubuntu
+- SUSE Linux Enterprise Server
+- CoreOS
+
+List of Cloud-init analyze boot unsupported distros
+===================================================
+- FreeBSD
+- Gentoo
\ No newline at end of file
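
A rough sketch, assuming systemd is present (this is not cloud-init's
actual implementation), of gathering the two systemd timestamps the
document above describes:

    import subprocess

    def monotonic_seconds(prop, unit=None):
        """Read a systemd *TimestampMonotonic property, in seconds."""
        cmd = ['systemctl', 'show', '-p', prop]
        if unit:
            cmd.insert(2, unit)  # systemctl show <unit> -p <prop>
        out = subprocess.check_output(cmd).decode()
        # Output looks like 'UserspaceTimestampMonotonic=4032327' (usec)
        return int(out.strip().split('=', 1)[1]) / 1e6

    kernel_end = monotonic_seconds('UserspaceTimestampMonotonic')
    ci_start = monotonic_seconds('InactiveExitTimestampMonotonic',
                                 'cloud-init-local')
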
diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst
index 0d8b894..6d85a99 100644
--- a/doc/rtd/topics/capabilities.rst
+++ b/doc/rtd/topics/capabilities.rst
@@ -217,6 +217,7 @@ Get detailed reports of where cloud-init spends most of its time. See
 * **dump** Machine-readable JSON dump of all cloud-init tracked events.
 * **show** show time-ordered report of the cost of operations during each
   boot stage.
+* **boot** show timestamps for kernel start, kernel finish of initialization,
+  and cloud-init start.
 
 .. _cli_devel:
 
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index 648c606..2148cd5 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -155,6 +155,7 @@ Follow for more information.
    datasources/configdrive.rst
    datasources/digitalocean.rst
    datasources/ec2.rst
+   datasources/exoscale.rst
    datasources/maas.rst
    datasources/nocloud.rst
    datasources/opennebula.rst
diff --git a/doc/rtd/topics/datasources/exoscale.rst b/doc/rtd/topics/datasources/exoscale.rst
new file mode 100644
index 0000000..27aec9c
--- /dev/null
+++ b/doc/rtd/topics/datasources/exoscale.rst
@@ -0,0 +1,68 @@
+.. _datasource_exoscale:
+
+Exoscale
+========
+
+This datasource supports reading from the metadata server used on the
+`Exoscale platform <https://exoscale.com>`_.
+
+Use of the Exoscale datasource is recommended to benefit from new features of
+the Exoscale platform.
+
+The datasource relies on the availability of a compatible metadata server
+(``http://169.254.169.254`` is used by default) and its companion password
+server, reachable at the same address (by default on port 8080).
+
+Crawling of metadata
+--------------------
+
+The metadata service and password server are crawled slightly differently:
+
+ * The "metadata service" is crawled every boot.
+ * The password server is also crawled every boot (the Exoscale datasource
+   forces the password module to run with "frequency always").
+
+In the password server case, the following rules apply in order to enable the
+"restore instance password" functionality:
+
+ * If a password is returned by the password server, it is then marked "saved"
+   by the cloud-init datasource. Subsequent boots will skip setting the password
+   (the password server will return "saved_password").
+ * When the instance password is reset (via the Exoscale UI), the password
+   server will return the non-empty password at next boot, therefore causing
+   cloud-init to reset the instance's password.
+
+Configuration
+-------------
+
+Users of this datasource are discouraged from changing the default settings
+unless instructed to by Exoscale support.
+
+The following settings are available and can be set for the datasource in system
+configuration (in `/etc/cloud/cloud.cfg.d/`).
+
+The settings available are:
+
+ * **metadata_url**: The URL for the metadata service (defaults to
+   ``http://169.254.169.254``)
+ * **api_version**: The API version path on which to query the instance metadata
+   (defaults to ``1.0``)
+ * **password_server_port**: The port (on the metadata server) on which the
+   password server listens (defaults to ``8080``).
+ * **timeout**: The timeout value provided to urlopen for each individual HTTP
+   request (defaults to ``10``).
+ * **retries**: The number of retries that should be done for an HTTP request
+   (defaults to ``6``).
+
+
+An example configuration with the default values is provided below:
+
+.. sourcecode:: yaml
+
+   datasource:
+     Exoscale:
+       metadata_url: "http://169.254.169.254"
+       api_version: "1.0"
+       password_server_port: 8080
+       timeout: 10
+       retries: 6
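
To make the password-server rules above concrete, a rough sketch; the
endpoint path and the acknowledgement step are assumptions, not the
datasource's real API:

    from cloudinit.url_helper import readurl

    def fetch_instance_password(server='169.254.169.254', port=8080):
        """Return a freshly set password, or None if nothing changes."""
        password = readurl(
            'http://%s:%d/' % (server, port)).contents.decode()
        if password in ('', 'saved_password'):
            return None  # already saved, or never set: skip this boot
        # A real implementation would now mark the password "saved" so the
        # server returns 'saved_password' on subsequent boots (assumed).
        return password
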
diff --git a/doc/rtd/topics/datasources/oracle.rst b/doc/rtd/topics/datasources/oracle.rst
index f2383ce..98c4657 100644
--- a/doc/rtd/topics/datasources/oracle.rst
+++ b/doc/rtd/topics/datasources/oracle.rst
@@ -8,7 +8,7 @@ This datasource reads metadata, vendor-data and user-data from
 
 Oracle Platform
 ---------------
-OCI provides bare metal and virtual machines.  In both cases, 
+OCI provides bare metal and virtual machines.  In both cases,
 the platform identifies itself via DMI data in the chassis asset tag
 with the string 'OracleCloud.com'.
 
@@ -22,5 +22,28 @@ Cloud-init has a specific datasource for Oracle in order to:
     implementation.
 
 
+Configuration
+-------------
+
+The following configuration can be set for the datasource in system
+configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``).
+
+The settings that may be configured are:
+
+* **configure_secondary_nics**: A boolean, defaulting to False.  If set
+  to True on an OCI Virtual Machine, cloud-init will fetch networking
+  metadata from Oracle's IMDS and use it to configure the non-primary
+  network interface controllers in the system.  If set to True on an
+  OCI Bare Metal Machine, it will have no effect (though this may
+  change in the future).
+
+An example configuration with the default values is provided below:
+
+.. sourcecode:: yaml
+
+  datasource:
+    Oracle:
+      configure_secondary_nics: false
+
 .. _Oracle Compute Infrastructure: https://cloud.oracle.com/
 .. vi: textwidth=78
diff --git a/doc/rtd/topics/debugging.rst b/doc/rtd/topics/debugging.rst
index 51363ea..e13d915 100644
--- a/doc/rtd/topics/debugging.rst
+++ b/doc/rtd/topics/debugging.rst
@@ -68,6 +68,19 @@ subcommands default to reading /var/log/cloud-init.log.
          00.00100s (modules-final/config-rightscale_userdata)
          ...
 
+* ``analyze boot`` Make subprocess calls to gather relevant pre-cloud-init
+  timestamps, such as the kernel start, kernel finish boot, and cloud-init start.
+
+.. code-block:: shell-session
+
+    $ cloud-init analyze boot 
+    -- Most Recent Boot Record --
+    	Kernel Started at: 2019-06-13 15:59:55.809385
+    	Kernel ended boot at: 2019-06-13 16:00:00.944740
+    	Kernel time to boot (seconds): 5.135355
+    	Cloud-init start: 2019-06-13 16:00:05.738396
+    	Time between Kernel boot and Cloud-init start (seconds): 4.793656
+
 
 Analyze quickstart - LXC
 ---------------------------
diff --git a/doc/rtd/topics/format.rst b/doc/rtd/topics/format.rst
index 15234d2..74d1fee 100644
--- a/doc/rtd/topics/format.rst
+++ b/doc/rtd/topics/format.rst
@@ -23,14 +23,15 @@ For example, both a user data script and a cloud-config type could be specified.
 
 Supported content-types:
 
-- text/x-include-once-url
-- text/x-include-url
-- text/cloud-config-archive
-- text/upstart-job
+- text/cloud-boothook
 - text/cloud-config
+- text/cloud-config-archive
+- text/jinja2
 - text/part-handler
+- text/upstart-job
+- text/x-include-once-url
+- text/x-include-url
 - text/x-shellscript
-- text/cloud-boothook
 
 Helper script to generate mime messages
 ---------------------------------------
@@ -38,16 +39,16 @@ Helper script to generate mime messages
 .. code-block:: python
 
    #!/usr/bin/python
-   
+
    import sys
-   
+
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
-   
+
    if len(sys.argv) == 1:
        print("%s input-file:type ..." % (sys.argv[0]))
        sys.exit(1)
-   
+
    combined_message = MIMEMultipart()
    for i in sys.argv[1:]:
        (filename, format_type) = i.split(":", 1)
@@ -56,7 +57,7 @@ Helper script to generate mime messages
        sub_message = MIMEText(contents, format_type, sys.getdefaultencoding())
        sub_message.add_header('Content-Disposition', 'attachment; filename="%s"' % (filename))
        combined_message.attach(sub_message)
-   
+
    print(combined_message)
 
 
@@ -78,10 +79,10 @@ Example
 ::
 
   $ cat myscript.sh
-  
+
   #!/bin/sh
   echo "Hello World.  The time is now $(date -R)!" | tee /root/output.txt
-  
+
   $ euca-run-instances --key mykey --user-data-file myscript.sh ami-a07d95c9 
 
 Include File
diff --git a/doc/rtd/topics/network-config-format-v2.rst b/doc/rtd/topics/network-config-format-v2.rst
index ea370ef..50f5fa6 100644
--- a/doc/rtd/topics/network-config-format-v2.rst
+++ b/doc/rtd/topics/network-config-format-v2.rst
@@ -14,7 +14,7 @@ it must include ``version: 2``  and one or more of possible device
 
 Cloud-init will read this format from system config.
 For example the following could be present in
-``/etc/cloud/cloud.cfg.d/custom-networking.cfg``:
+``/etc/cloud/cloud.cfg.d/custom-networking.cfg``::
 
   network:
     version: 2
diff --git a/doc/rtd/topics/network-config.rst b/doc/rtd/topics/network-config.rst
index 1e99455..51ced4d 100644
--- a/doc/rtd/topics/network-config.rst
+++ b/doc/rtd/topics/network-config.rst
@@ -163,10 +163,11 @@ found in Ubuntu and Debian.
 
 - **Netplan**
 
-Since Ubuntu 16.10, codename Yakkety, the ``netplan`` project has been an
-optional network configuration tool which consumes :ref:`network_config_v2`
-input and renders network configuration for supported backends such as
-``systemd-networkd`` and ``NetworkManager``.
+Introduced in Ubuntu 16.10 (Yakkety Yak), `netplan <https://netplan.io/>`_ has
+been the default network configuration tool in Ubuntu since 17.10 (Artful
+Aardvark).  netplan consumes :ref:`network_config_v2` input and renders
+network configuration for supported backends such as ``systemd-networkd`` and
+``NetworkManager``.
 
 - **Sysconfig**
 
diff --git a/integration-requirements.txt b/integration-requirements.txt
index 880d988..fe5ad45 100644
--- a/integration-requirements.txt
+++ b/integration-requirements.txt
@@ -10,7 +10,8 @@ unittest2
 boto3==1.5.9
 
 # ssh communication
-paramiko==2.4.1
+paramiko==2.4.2
+cryptography==2.4.2
 
 
 # lxd backend
diff --git a/systemd/cloud-init-generator.tmpl b/systemd/cloud-init-generator.tmpl
index cfa5eb5..45efa24 100755
--- a/systemd/cloud-init-generator.tmpl
+++ b/systemd/cloud-init-generator.tmpl
@@ -82,7 +82,12 @@ default() {
 }
 
 check_for_datasource() {
-    local ds_rc="" dsidentify="/usr/lib/cloud-init/ds-identify"
+    local ds_rc=""
+{% if variant in ["redhat", "fedora", "centos"] %}
+    local dsidentify="/usr/libexec/cloud-init/ds-identify"
+{% else %}
+    local dsidentify="/usr/lib/cloud-init/ds-identify"
+{% endif %}
     if [ ! -x "$dsidentify" ]; then
         debug 1 "no ds-identify in $dsidentify. _RET=$FOUND"
         return 0
diff --git a/templates/ntp.conf.debian.tmpl b/templates/ntp.conf.debian.tmpl
index 3f07eea..affe983 100644
--- a/templates/ntp.conf.debian.tmpl
+++ b/templates/ntp.conf.debian.tmpl
@@ -19,7 +19,8 @@ filegen clockstats file clockstats type day enable
 # pool.ntp.org maps to about 1000 low-stratum NTP servers.  Your server will
 # pick a different set every time it starts up.  Please consider joining the
 # pool: <http://www.pool.ntp.org/join.html>
-{% if pools -%}# pools{% endif %}
+{% if pools %}# pools
+{% endif %}
 {% for pool in pools -%}
 pool {{pool}} iburst
 {% endfor %}
diff --git a/tests/cloud_tests/platforms.yaml b/tests/cloud_tests/platforms.yaml
index 448aa98..652a705 100644
--- a/tests/cloud_tests/platforms.yaml
+++ b/tests/cloud_tests/platforms.yaml
@@ -66,5 +66,6 @@ platforms:
                 {{ config_get("user.vendor-data", properties.default) }}
     nocloud-kvm:
         enabled: true
+        cache_mode: cache=none,aio=native
 
 # vi: ts=4 expandtab
diff --git a/tests/cloud_tests/platforms/nocloudkvm/instance.py b/tests/cloud_tests/platforms/nocloudkvm/instance.py
index 33ff3f2..96185b7 100644
--- a/tests/cloud_tests/platforms/nocloudkvm/instance.py
+++ b/tests/cloud_tests/platforms/nocloudkvm/instance.py
@@ -74,6 +74,8 @@ class NoCloudKVMInstance(Instance):
         self.pid_file = None
         self.console_file = None
         self.disk = image_path
+        self.cache_mode = platform.config.get('cache_mode',
+                                              'cache=none,aio=native')
         self.meta_data = meta_data
 
     def shutdown(self, wait=True):
@@ -113,7 +115,10 @@ class NoCloudKVMInstance(Instance):
                 pass
 
         if self.pid_file:
-            os.remove(self.pid_file)
+            try:
+                os.remove(self.pid_file)
+            except Exception:
+                pass
 
         self.pid = None
         self._ssh_close()
@@ -160,13 +165,13 @@ class NoCloudKVMInstance(Instance):
         self.ssh_port = self.get_free_port()
 
         cmd = ['./tools/xkvm',
-               '--disk', '%s,cache=unsafe' % self.disk,
-               '--disk', '%s,cache=unsafe' % seed,
+               '--disk', '%s,%s' % (self.disk, self.cache_mode),
+               '--disk', '%s' % seed,
                '--netdev', ','.join(['user',
                                      'hostfwd=tcp::%s-:22' % self.ssh_port,
                                      'dnssearch=%s' % CI_DOMAIN]),
                '--', '-pidfile', self.pid_file, '-vnc', 'none',
-               '-m', '2G', '-smp', '2', '-nographic',
+               '-m', '2G', '-smp', '2', '-nographic', '-name', self.name,
                '-serial', 'file:' + self.console_file]
         subprocess.Popen(cmd,
                          close_fds=True,
diff --git a/tests/cloud_tests/platforms/platforms.py b/tests/cloud_tests/platforms/platforms.py
index abbfebb..bebdf1c 100644
--- a/tests/cloud_tests/platforms/platforms.py
+++ b/tests/cloud_tests/platforms/platforms.py
@@ -48,7 +48,7 @@ class Platform(object):
         if os.path.exists(filename):
             c_util.del_file(filename)
 
-        c_util.subp(['ssh-keygen', '-t', 'rsa', '-b', '4096',
+        c_util.subp(['ssh-keygen', '-m', 'PEM', '-t', 'rsa', '-b', '4096',
                      '-f', filename, '-P', '',
                      '-C', 'ubuntu@cloud_test'],
                     capture=True)
diff --git a/tests/cloud_tests/setup_image.py b/tests/cloud_tests/setup_image.py
index 39f4517..a8aaba1 100644
--- a/tests/cloud_tests/setup_image.py
+++ b/tests/cloud_tests/setup_image.py
@@ -222,7 +222,8 @@ def setup_image(args, image):
              for name, func, desc in handlers if getattr(args, name, None)]
 
     try:
-        data = yaml.load(image.read_data("/etc/cloud/build.info", decode=True))
+        data = yaml.safe_load(
+            image.read_data("/etc/cloud/build.info", decode=True))
         info = ' '.join(["%s=%s" % (k, data.get(k))
                          for k in ("build_name", "serial") if k in data])
     except Exception as e:
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index afb614e..3547dd9 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -12,6 +12,7 @@ from cloudinit.tests.helpers import (
     HttprettyTestCase, CiTestCase, populate_dir, mock, wrap_and_call,
     ExitStack, resourceLocation)
 
+import copy
 import crypt
 import httpretty
 import json
@@ -84,6 +85,25 @@ def construct_valid_ovf_env(data=None, pubkeys=None,
 
 
 NETWORK_METADATA = {
+    "compute": {
+        "location": "eastus2",
+        "name": "my-hostname",
+        "offer": "UbuntuServer",
+        "osType": "Linux",
+        "placementGroupId": "",
+        "platformFaultDomain": "0",
+        "platformUpdateDomain": "0",
+        "publisher": "Canonical",
+        "resourceGroupName": "srugroup1",
+        "sku": "19.04-DAILY",
+        "subscriptionId": "12aad61c-6de4-4e53-a6c6-5aff52a83777",
+        "tags": "",
+        "version": "19.04.201906190",
+        "vmId": "ff702a6b-cb6a-4fcd-ad68-b4ce38227642",
+        "vmScaleSetName": "",
+        "vmSize": "Standard_DS1_v2",
+        "zone": ""
+    },
     "network": {
         "interface": [
             {
@@ -110,6 +130,26 @@ NETWORK_METADATA = {
     }
 }
 
+SECONDARY_INTERFACE = {
+    "macAddress": "220D3A047598",
+    "ipv6": {
+        "ipAddress": []
+    },
+    "ipv4": {
+        "subnet": [
+            {
+                "prefix": "24",
+                "address": "10.0.1.0"
+            }
+        ],
+        "ipAddress": [
+            {
+                "privateIpAddress": "10.0.1.5",
+            }
+        ]
+    }
+}
+
 MOCKPATH = 'cloudinit.sources.DataSourceAzure.'
 
 
@@ -141,7 +181,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
             self.logs.getvalue())
 
     @mock.patch(MOCKPATH + 'readurl')
-    @mock.patch(MOCKPATH + 'EphemeralDHCPv4')
+    @mock.patch(MOCKPATH + 'EphemeralDHCPv4WithReporting')
     @mock.patch(MOCKPATH + 'net.is_up')
     def test_get_metadata_performs_dhcp_when_network_is_down(
             self, m_net_is_up, m_dhcp, m_readurl):
@@ -155,7 +195,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
             dsaz.get_metadata_from_imds('eth9', retries=2))
 
         m_net_is_up.assert_called_with('eth9')
-        m_dhcp.assert_called_with('eth9')
+        m_dhcp.assert_called_with(mock.ANY, 'eth9')
         self.assertIn(
             "Crawl of Azure Instance Metadata Service (IMDS) took",  # log_time
             self.logs.getvalue())
@@ -478,13 +518,7 @@ scbus-1 on xpt0 bus 0
         expected_metadata = {
             'azure_data': {
                 'configurationsettype': 'LinuxProvisioningConfiguration'},
-            'imds': {'network': {'interface': [{
-                'ipv4': {'ipAddress': [
-                     {'privateIpAddress': '10.0.0.4',
-                      'publicIpAddress': '104.46.124.81'}],
-                      'subnet': [{'address': '10.0.0.0', 'prefix': '24'}]},
-                'ipv6': {'ipAddress': []},
-                'macAddress': '000D3A047598'}]}},
+            'imds': NETWORK_METADATA,
             'instance-id': 'test-instance-id',
             'local-hostname': u'myhost',
             'random_seed': 'wild'}
@@ -518,7 +552,8 @@ scbus-1 on xpt0 bus 0
             dsrc.crawl_metadata()
         self.assertEqual(str(cm.exception), error_msg)
 
-    @mock.patch('cloudinit.sources.DataSourceAzure.EphemeralDHCPv4')
+    @mock.patch(
+        'cloudinit.sources.DataSourceAzure.EphemeralDHCPv4WithReporting')
     @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
     @mock.patch(
         'cloudinit.sources.DataSourceAzure.DataSourceAzure._report_ready')
@@ -606,12 +641,67 @@ scbus-1 on xpt0 bus 0
             'ethernets': {
                 'eth0': {'set-name': 'eth0',
                          'match': {'macaddress': '00:0d:3a:04:75:98'},
-                         'dhcp4': True}},
+                         'dhcp4': True,
+                         'dhcp4-overrides': {'route-metric': 100}}},
             'version': 2}
         dsrc = self._get_ds(data)
         dsrc.get_data()
         self.assertEqual(expected_network_config, dsrc.network_config)
 
+    def test_network_config_set_from_imds_route_metric_for_secondary_nic(self):
+        """Datasource.network_config adds route-metric to secondary nics."""
+        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+        odata = {}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': sys_cfg}
+        expected_network_config = {
+            'ethernets': {
+                'eth0': {'set-name': 'eth0',
+                         'match': {'macaddress': '00:0d:3a:04:75:98'},
+                         'dhcp4': True,
+                         'dhcp4-overrides': {'route-metric': 100}},
+                'eth1': {'set-name': 'eth1',
+                         'match': {'macaddress': '22:0d:3a:04:75:98'},
+                         'dhcp4': True,
+                         'dhcp4-overrides': {'route-metric': 200}},
+                'eth2': {'set-name': 'eth2',
+                         'match': {'macaddress': '33:0d:3a:04:75:98'},
+                         'dhcp4': True,
+                         'dhcp4-overrides': {'route-metric': 300}}},
+            'version': 2}
+        imds_data = copy.deepcopy(NETWORK_METADATA)
+        imds_data['network']['interface'].append(SECONDARY_INTERFACE)
+        third_intf = copy.deepcopy(SECONDARY_INTERFACE)
+        third_intf['macAddress'] = third_intf['macAddress'].replace('22', '33')
+        third_intf['ipv4']['subnet'][0]['address'] = '10.0.2.0'
+        third_intf['ipv4']['ipAddress'][0]['privateIpAddress'] = '10.0.2.6'
+        imds_data['network']['interface'].append(third_intf)
+
+        self.m_get_metadata_from_imds.return_value = imds_data
+        dsrc = self._get_ds(data)
+        dsrc.get_data()
+        self.assertEqual(expected_network_config, dsrc.network_config)
+
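The expected config above encodes a simple ordering policy. A minimal sketch of it (the helper name is hypothetical; the real logic lives in the Azure datasource's network_config path):

    def route_metric_for_nic(nic_index):
        """Sketch: dhcp4-overrides route-metric for the nth IMDS NIC."""
        # eth0 -> 100, eth1 -> 200, eth2 -> 300: lower metric wins, so
        # the primary NIC keeps the default route.
        return 100 * (nic_index + 1)

    assert [route_metric_for_nic(i) for i in range(3)] == [100, 200, 300]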
+    def test_availability_zone_set_from_imds(self):
+        """Datasource.availability returns IMDS platformFaultDomain."""
+        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+        odata = {}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': sys_cfg}
+        dsrc = self._get_ds(data)
+        dsrc.get_data()
+        self.assertEqual('0', dsrc.availability_zone)
+
+    def test_region_set_from_imds(self):
+        """Datasource.region returns IMDS region location."""
+        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
+        odata = {}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': sys_cfg}
+        dsrc = self._get_ds(data)
+        dsrc.get_data()
+        self.assertEqual('eastus2', dsrc.region)
+
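A sketch of how these two properties could surface the IMDS compute fields; the 'imds'/'compute' nesting is an assumption drawn from the crawled-metadata shape above, not a quote of the implementation:

    class ImdsLocationSketch(object):
        """Sketch: expose IMDS compute location as datasource properties."""

        metadata = {'imds': {'compute': {
            'location': 'eastus2', 'platformFaultDomain': '0'}}}

        @property
        def availability_zone(self):
            return self.metadata['imds']['compute'].get('platformFaultDomain')

        @property
        def region(self):
            return self.metadata['imds']['compute'].get('location')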
     def test_user_cfg_set_agent_command(self):
         # set dscfg in via base64 encoded yaml
         cfg = {'agent_command': "my_command"}
@@ -892,6 +982,7 @@ scbus-1 on xpt0 bus 0
         expected_cfg = {
             'ethernets': {
                 'eth0': {'dhcp4': True,
+                         'dhcp4-overrides': {'route-metric': 100},
                          'match': {'macaddress': '00:0d:3a:04:75:98'},
                          'set-name': 'eth0'}},
             'version': 2}
@@ -1218,7 +1309,9 @@ class TestAzureBounce(CiTestCase):
         self.assertEqual(initial_host_name,
                          self.set_hostname.call_args_list[-1][0][0])
 
-    def test_environment_correct_for_bounce_command(self):
+    @mock.patch.object(dsaz, 'get_boot_telemetry')
+    def test_environment_correct_for_bounce_command(
+            self, mock_get_boot_telemetry):
         interface = 'int0'
         hostname = 'my-new-host'
         old_hostname = 'my-old-host'
@@ -1234,7 +1327,9 @@ class TestAzureBounce(CiTestCase):
         self.assertEqual(hostname, bounce_env['hostname'])
         self.assertEqual(old_hostname, bounce_env['old_hostname'])
 
-    def test_default_bounce_command_ifup_used_by_default(self):
+    @mock.patch.object(dsaz, 'get_boot_telemetry')
+    def test_default_bounce_command_ifup_used_by_default(
+            self, mock_get_boot_telemetry):
         cfg = {'hostname_bounce': {'policy': 'force'}}
         data = self.get_ovf_env_with_dscfg('some-hostname', cfg)
         dsrc = self._get_ds(data, agent_command=['not', '__builtin__'])
@@ -1774,7 +1869,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         self.assertEqual(m_dhcp.call_count, 2)
         m_net.assert_any_call(
             broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
-            prefix_or_mask='255.255.255.0', router='192.168.2.1')
+            prefix_or_mask='255.255.255.0', router='192.168.2.1',
+            static_routes=None)
         self.assertEqual(m_net.call_count, 2)
 
     def test__reprovision_calls__poll_imds(self, fake_resp,
@@ -1812,7 +1908,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         self.assertEqual(m_dhcp.call_count, 2)
         m_net.assert_any_call(
             broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
-            prefix_or_mask='255.255.255.0', router='192.168.2.1')
+            prefix_or_mask='255.255.255.0', router='192.168.2.1',
+            static_routes=None)
         self.assertEqual(m_net.call_count, 2)
 
 
diff --git a/tests/unittests/test_datasource/test_common.py b/tests/unittests/test_datasource/test_common.py
index 6b01a4e..61a7a76 100644
--- a/tests/unittests/test_datasource/test_common.py
+++ b/tests/unittests/test_datasource/test_common.py
@@ -13,6 +13,7 @@ from cloudinit.sources import (
     DataSourceConfigDrive as ConfigDrive,
     DataSourceDigitalOcean as DigitalOcean,
     DataSourceEc2 as Ec2,
+    DataSourceExoscale as Exoscale,
     DataSourceGCE as GCE,
     DataSourceHetzner as Hetzner,
     DataSourceIBMCloud as IBMCloud,
@@ -53,6 +54,7 @@ DEFAULT_NETWORK = [
     CloudStack.DataSourceCloudStack,
     DSNone.DataSourceNone,
     Ec2.DataSourceEc2,
+    Exoscale.DataSourceExoscale,
     GCE.DataSourceGCE,
     MAAS.DataSourceMAAS,
     NoCloud.DataSourceNoCloudNet,
@@ -83,4 +85,15 @@ class ExpectedDataSources(test_helpers.TestCase):
         self.assertEqual(set([AliYun.DataSourceAliYun]), set(found))
 
 
+class TestDataSourceInvariants(test_helpers.TestCase):
+
+    def test_data_sources_have_valid_network_config_sources(self):
+        for ds in DEFAULT_LOCAL + DEFAULT_NETWORK:
+            for cfg_src in ds.network_config_sources:
+                fail_msg = ('{} has an invalid network_config_sources entry:'
+                            ' {}'.format(str(ds), cfg_src))
+                self.assertTrue(hasattr(sources.NetworkConfigSource, cfg_src),
+                                fail_msg)
+
+
 # vi: ts=4 expandtab
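A concrete illustration of what the invariant admits (the entries below are illustrative; each must be an attribute of sources.NetworkConfigSource):

    from cloudinit import sources

    # Illustrative declaration the invariant would accept: every entry
    # names an attribute on sources.NetworkConfigSource.
    network_config_sources = ('cmdline', 'system_cfg')
    for entry in network_config_sources:
        assert hasattr(sources.NetworkConfigSource, entry), entry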
diff --git a/tests/unittests/test_datasource/test_ec2.py b/tests/unittests/test_datasource/test_ec2.py
index 20d59bf..1ec8e00 100644
--- a/tests/unittests/test_datasource/test_ec2.py
+++ b/tests/unittests/test_datasource/test_ec2.py
@@ -538,7 +538,8 @@ class TestEc2(test_helpers.HttprettyTestCase):
         m_dhcp.assert_called_once_with('eth9')
         m_net.assert_called_once_with(
             broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
-            prefix_or_mask='255.255.255.0', router='192.168.2.1')
+            prefix_or_mask='255.255.255.0', router='192.168.2.1',
+            static_routes=None)
         self.assertIn('Crawl of metadata service took', self.logs.getvalue())
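The new static_routes keyword carries (destination, gateway) pairs decoded from DHCP option 121 (RFC 3442 classless static routes). A standalone sketch of that wire-format decoding, assuming raw option bytes (cloud-init's own parser works from dhclient lease text instead):

    def decode_rfc3442(data):
        """Sketch: decode RFC 3442 routes from raw option bytes."""
        routes, i = [], 0
        while i < len(data):
            plen = data[i]                    # destination prefix length
            octets = (plen + 7) // 8          # significant dest octets
            dest = list(data[i + 1:i + 1 + octets]) + [0] * (4 - octets)
            gw = list(data[i + 1 + octets:i + 5 + octets])
            i += 5 + octets
            routes.append(('%d.%d.%d.%d/%d' % tuple(dest + [plen]),
                           '%d.%d.%d.%d' % tuple(gw)))
        return routes

    # default route plus 10.0.0.0/8, both via 192.168.2.1:
    assert decode_rfc3442(bytes([0, 192, 168, 2, 1,
                                 8, 10, 192, 168, 2, 1])) == [
        ('0.0.0.0/0', '192.168.2.1'), ('10.0.0.0/8', '192.168.2.1')]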
 
 
diff --git a/tests/unittests/test_datasource/test_exoscale.py b/tests/unittests/test_datasource/test_exoscale.py
new file mode 100644
index 0000000..350c330
--- /dev/null
+++ b/tests/unittests/test_datasource/test_exoscale.py
@@ -0,0 +1,203 @@
+# Author: Mathieu Corbin <mathieu.corbin@xxxxxxxxxxxx>
+# Author: Christopher Glass <christopher.glass@xxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+from cloudinit import helpers
+from cloudinit.sources.DataSourceExoscale import (
+    API_VERSION,
+    DataSourceExoscale,
+    METADATA_URL,
+    get_password,
+    PASSWORD_SERVER_PORT,
+    read_metadata)
+from cloudinit.tests.helpers import HttprettyTestCase, mock
+
+import httpretty
+import requests
+
+
+TEST_PASSWORD_URL = "{}:{}/{}/".format(METADATA_URL,
+                                       PASSWORD_SERVER_PORT,
+                                       API_VERSION)
+
+TEST_METADATA_URL = "{}/{}/meta-data/".format(METADATA_URL,
+                                              API_VERSION)
+
+TEST_USERDATA_URL = "{}/{}/user-data".format(METADATA_URL,
+                                             API_VERSION)
+
+
+@httpretty.activate
+class TestDatasourceExoscale(HttprettyTestCase):
+
+    def setUp(self):
+        super(TestDatasourceExoscale, self).setUp()
+        self.tmp = self.tmp_dir()
+        self.password_url = TEST_PASSWORD_URL
+        self.metadata_url = TEST_METADATA_URL
+        self.userdata_url = TEST_USERDATA_URL
+
+    def test_password_saved(self):
+        """The password is not set when it is not found
+        in the metadata service."""
+        httpretty.register_uri(httpretty.GET,
+                               self.password_url,
+                               body="saved_password")
+        self.assertFalse(get_password())
+
+    def test_password_empty(self):
+        """No password is set if the metadata service returns
+        an empty string."""
+        httpretty.register_uri(httpretty.GET,
+                               self.password_url,
+                               body="")
+        self.assertFalse(get_password())
+
+    def test_password(self):
+        """The password is set to what is found in the metadata
+        service."""
+        expected_password = "p@ssw0rd"
+        httpretty.register_uri(httpretty.GET,
+                               self.password_url,
+                               body=expected_password)
+        password = get_password()
+        self.assertEqual(expected_password, password)
+
+    def test_get_data(self):
+        """The datasource conforms to expected behavior when supplied
+        full test data."""
+        path = helpers.Paths({'run_dir': self.tmp})
+        ds = DataSourceExoscale({}, None, path)
+        ds._is_platform_viable = lambda: True
+        expected_password = "p@ssw0rd"
+        expected_id = "12345"
+        expected_hostname = "myname"
+        expected_userdata = "#cloud-config"
+        httpretty.register_uri(httpretty.GET,
+                               self.userdata_url,
+                               body=expected_userdata)
+        httpretty.register_uri(httpretty.GET,
+                               self.password_url,
+                               body=expected_password)
+        httpretty.register_uri(httpretty.GET,
+                               self.metadata_url,
+                               body="instance-id\nlocal-hostname")
+        httpretty.register_uri(httpretty.GET,
+                               "{}local-hostname".format(self.metadata_url),
+                               body=expected_hostname)
+        httpretty.register_uri(httpretty.GET,
+                               "{}instance-id".format(self.metadata_url),
+                               body=expected_id)
+        self.assertTrue(ds._get_data())
+        self.assertEqual(ds.userdata_raw.decode("utf-8"), "#cloud-config")
+        self.assertEqual(ds.metadata, {"instance-id": expected_id,
+                                       "local-hostname": expected_hostname})
+        self.assertEqual(ds.get_config_obj(),
+                         {'ssh_pwauth': True,
+                          'password': expected_password,
+                          'cloud_config_modules': [
+                              ["set-passwords", "always"]],
+                          'chpasswd': {
+                              'expire': False,
+                          }})
+
+    def test_get_data_saved_password(self):
+        """The datasource conforms to expected behavior when saved_password is
+        returned by the password server."""
+        path = helpers.Paths({'run_dir': self.tmp})
+        ds = DataSourceExoscale({}, None, path)
+        ds._is_platform_viable = lambda: True
+        expected_answer = "saved_password"
+        expected_id = "12345"
+        expected_hostname = "myname"
+        expected_userdata = "#cloud-config"
+        httpretty.register_uri(httpretty.GET,
+                               self.userdata_url,
+                               body=expected_userdata)
+        httpretty.register_uri(httpretty.GET,
+                               self.password_url,
+                               body=expected_answer)
+        httpretty.register_uri(httpretty.GET,
+                               self.metadata_url,
+                               body="instance-id\nlocal-hostname")
+        httpretty.register_uri(httpretty.GET,
+                               "{}local-hostname".format(self.metadata_url),
+                               body=expected_hostname)
+        httpretty.register_uri(httpretty.GET,
+                               "{}instance-id".format(self.metadata_url),
+                               body=expected_id)
+        self.assertTrue(ds._get_data())
+        self.assertEqual(ds.userdata_raw.decode("utf-8"), "#cloud-config")
+        self.assertEqual(ds.metadata, {"instance-id": expected_id,
+                                       "local-hostname": expected_hostname})
+        self.assertEqual(ds.get_config_obj(),
+                         {'cloud_config_modules': [
+                             ["set-passwords", "always"]]})
+
+    def test_get_data_no_password(self):
+        """The datasource conforms to expected behavior when no password is
+        returned by the password server."""
+        path = helpers.Paths({'run_dir': self.tmp})
+        ds = DataSourceExoscale({}, None, path)
+        ds._is_platform_viable = lambda: True
+        expected_answer = ""
+        expected_id = "12345"
+        expected_hostname = "myname"
+        expected_userdata = "#cloud-config"
+        httpretty.register_uri(httpretty.GET,
+                               self.userdata_url,
+                               body=expected_userdata)
+        httpretty.register_uri(httpretty.GET,
+                               self.password_url,
+                               body=expected_answer)
+        httpretty.register_uri(httpretty.GET,
+                               self.metadata_url,
+                               body="instance-id\nlocal-hostname")
+        httpretty.register_uri(httpretty.GET,
+                               "{}local-hostname".format(self.metadata_url),
+                               body=expected_hostname)
+        httpretty.register_uri(httpretty.GET,
+                               "{}instance-id".format(self.metadata_url),
+                               body=expected_id)
+        self.assertTrue(ds._get_data())
+        self.assertEqual(ds.userdata_raw.decode("utf-8"), "#cloud-config")
+        self.assertEqual(ds.metadata, {"instance-id": expected_id,
+                                       "local-hostname": expected_hostname})
+        self.assertEqual(ds.get_config_obj(),
+                         {'cloud_config_modules': [
+                             ["set-passwords", "always"]]})
+
+    @mock.patch('cloudinit.sources.DataSourceExoscale.get_password')
+    def test_read_metadata_when_password_server_unreachable(self, m_password):
+        """The read_metadata function returns partial results in case the
+        password server (only) is unreachable."""
+        expected_id = "12345"
+        expected_hostname = "myname"
+        expected_userdata = "#cloud-config"
+
+        m_password.side_effect = requests.Timeout('Fake Connection Timeout')
+        httpretty.register_uri(httpretty.GET,
+                               self.userdata_url,
+                               body=expected_userdata)
+        httpretty.register_uri(httpretty.GET,
+                               self.metadata_url,
+                               body="instance-id\nlocal-hostname")
+        httpretty.register_uri(httpretty.GET,
+                               "{}local-hostname".format(self.metadata_url),
+                               body=expected_hostname)
+        httpretty.register_uri(httpretty.GET,
+                               "{}instance-id".format(self.metadata_url),
+                               body=expected_id)
+
+        result = read_metadata()
+
+        self.assertIsNone(result.get("password"))
+        self.assertEqual(result.get("user-data").decode("utf-8"),
+                         expected_userdata)
+
+    def test_non_viable_platform(self):
+        """The datasource fails fast when the platform is not viable."""
+        path = helpers.Paths({'run_dir': self.tmp})
+        ds = DataSourceExoscale({}, None, path)
+        ds._is_platform_viable = lambda: False
+        self.assertFalse(ds._get_data())
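A behavioral sketch of get_password() as these tests pin it down; the URL shape comes from TEST_PASSWORD_URL above, while any acknowledgement handshake with the password server is deliberately omitted:

    import requests

    def get_password_sketch(url=TEST_PASSWORD_URL):
        """Sketch: fetch the one-shot password, treating the
        'saved_password' marker and empty bodies as nothing-to-set."""
        password = requests.get(url).text
        if not password or password == "saved_password":
            return None
        return password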
diff --git a/tests/unittests/test_datasource/test_gce.py b/tests/unittests/test_datasource/test_gce.py
index 41176c6..67744d3 100644
--- a/tests/unittests/test_datasource/test_gce.py
+++ b/tests/unittests/test_datasource/test_gce.py
@@ -55,6 +55,8 @@ GCE_USER_DATA_TEXT = {
 HEADERS = {'Metadata-Flavor': 'Google'}
 MD_URL_RE = re.compile(
     r'http://metadata.google.internal/computeMetadata/v1/.*')
+GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/'
+                        'v1/instance/guest-attributes/hostkeys/')
 
 
 def _set_mock_metadata(gce_meta=None):
@@ -341,4 +343,20 @@ class TestDataSourceGCE(test_helpers.HttprettyTestCase):
             public_key_data, default_user='default')
         self.assertEqual(sorted(found), sorted(expected))
 
+    @mock.patch("cloudinit.url_helper.readurl")
+    def test_publish_host_keys(self, m_readurl):
+        hostkeys = [('ssh-rsa', 'asdfasdf'),
+                    ('ssh-ed25519', 'qwerqwer')]
+        readurl_expected_calls = [
+            mock.call(check_status=False, data=b'asdfasdf', headers=HEADERS,
+                      request_method='PUT',
+                      url='%s%s' % (GUEST_ATTRIBUTES_URL, 'ssh-rsa')),
+            mock.call(check_status=False, data=b'qwerqwer', headers=HEADERS,
+                      request_method='PUT',
+                      url='%s%s' % (GUEST_ATTRIBUTES_URL, 'ssh-ed25519')),
+        ]
+        self.ds.publish_host_keys(hostkeys)
+        m_readurl.assert_has_calls(readurl_expected_calls, any_order=True)
+
+
 # vi: ts=4 expandtab
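A sketch mirroring the readurl() calls the mock expects; GUEST_ATTRIBUTES_URL and HEADERS are the constants defined above in this file:

    from cloudinit import url_helper

    def publish_host_keys_sketch(hostkeys):
        """Sketch: PUT each host key under its type in guest attributes."""
        for key_type, key_value in hostkeys:
            url_helper.readurl(
                url=GUEST_ATTRIBUTES_URL + key_type,
                data=key_value.encode('utf-8'), headers=HEADERS,
                request_method='PUT', check_status=False)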
diff --git a/tests/unittests/test_distros/test_netconfig.py b/tests/unittests/test_distros/test_netconfig.py
index c3c0c8c..07b5c0a 100644
--- a/tests/unittests/test_distros/test_netconfig.py
+++ b/tests/unittests/test_distros/test_netconfig.py
@@ -614,6 +614,92 @@ class TestNetCfgDistroOpensuse(TestNetCfgDistroBase):
                                expected_cfgs=expected_cfgs.copy())
 
 
+class TestNetCfgDistroArch(TestNetCfgDistroBase):
+    def setUp(self):
+        super(TestNetCfgDistroArch, self).setUp()
+        self.distro = self._get_distro('arch', renderers=['netplan'])
+
+    def _apply_and_verify(self, apply_fn, config, expected_cfgs=None,
+                          bringup=False, with_netplan=False):
+        if not expected_cfgs:
+            raise ValueError('expected_cfgs must not be None')
+
+        tmpd = None
+        with mock.patch('cloudinit.net.netplan.available',
+                        return_value=with_netplan):
+            with self.reRooted(tmpd) as tmpd:
+                apply_fn(config, bringup)
+
+        results = dir2dict(tmpd)
+        for cfgpath, expected in expected_cfgs.items():
+            print("----------")
+            print(expected)
+            print("^^^^ expected | rendered VVVVVVV")
+            print(results[cfgpath])
+            print("----------")
+            self.assertEqual(expected, results[cfgpath])
+            self.assertEqual(0o644, get_mode(cfgpath, tmpd))
+
+    def netctl_path(self, iface):
+        return '/etc/netctl/%s' % iface
+
+    def netplan_path(self):
+        return '/etc/netplan/50-cloud-init.yaml'
+
+    def test_apply_network_config_v1_without_netplan(self):
+        # Note that this is in fact an invalid netctl config:
+        #  "Address=None/None"
+        # But this is what the renderer has been writing out for a long time,
+        # and the test's purpose is to assert that the netctl renderer is
+        # still being used in the absence of netplan, not the correctness
+        # of the rendered netctl config.
+        expected_cfgs = {
+            self.netctl_path('eth0'): dedent("""\
+                Address=192.168.1.5/255.255.255.0
+                Connection=ethernet
+                DNS=()
+                Gateway=192.168.1.254
+                IP=static
+                Interface=eth0
+                """),
+            self.netctl_path('eth1'): dedent("""\
+                Address=None/None
+                Connection=ethernet
+                DNS=()
+                Gateway=
+                IP=dhcp
+                Interface=eth1
+                """),
+            }
+
+        # ub_distro.apply_network_config(V1_NET_CFG, False)
+        self._apply_and_verify(self.distro.apply_network_config,
+                               V1_NET_CFG,
+                               expected_cfgs=expected_cfgs.copy(),
+                               with_netplan=False)
+
+    def test_apply_network_config_v1_with_netplan(self):
+        expected_cfgs = {
+            self.netplan_path(): dedent("""\
+                # generated by cloud-init
+                network:
+                    version: 2
+                    ethernets:
+                        eth0:
+                            addresses:
+                            - 192.168.1.5/24
+                            gateway4: 192.168.1.254
+                        eth1:
+                            dhcp4: true
+                """),
+        }
+
+        self._apply_and_verify(self.distro.apply_network_config,
+                               V1_NET_CFG,
+                               expected_cfgs=expected_cfgs.copy(),
+                               with_netplan=True)
+
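Both Arch tests hinge on cloudinit.net.netplan.available(); a sketch of the priority they assert (the function name is hypothetical; the real selection goes through the distro's renderer search):

    def pick_arch_renderer(netplan_available):
        """Sketch: prefer netplan when present, else fall back to netctl."""
        return 'netplan' if netplan_available else 'netctl'

    assert pick_arch_renderer(True) == 'netplan'
    assert pick_arch_renderer(False) == 'netctl'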
+
 def get_mode(path, target=None):
     return os.stat(util.target_path(target, path)).st_mode & 0o777
 
diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py
index 7575223..587e699 100644
--- a/tests/unittests/test_ds_identify.py
+++ b/tests/unittests/test_ds_identify.py
@@ -524,6 +524,30 @@ class TestDsIdentify(DsIdentifyBase):
             self._check_via_dict(
                 ovf_cdrom_by_label, rc=RC_FOUND, dslist=['OVF', DS_NONE])
 
+    def test_ovf_on_vmware_iso_found_by_cdrom_with_different_size(self):
+        """OVF is identified by well-known iso9660 labels."""
+        ovf_cdrom_with_size = copy.deepcopy(VALID_CFG['OVF'])
+
+        # Set cdrom size to 20480 (10MB in 512 byte units)
+        ovf_cdrom_with_size['files']['sys/class/block/sr0/size'] = '20480\n'
+        self._check_via_dict(
+            ovf_cdrom_with_size, rc=RC_NOT_FOUND, policy_dmi="disabled")
+
+        # Set cdrom size to 204800 (100MB in 512 byte units)
+        ovf_cdrom_with_size['files']['sys/class/block/sr0/size'] = '204800\n'
+        self._check_via_dict(
+            ovf_cdrom_with_size, rc=RC_NOT_FOUND, policy_dmi="disabled")
+
+        # Set cdrom size to 18432 (9MB in 512 byte units)
+        ovf_cdrom_with_size['files']['sys/class/block/sr0/size'] = '18432\n'
+        self._check_via_dict(
+            ovf_cdrom_with_size, rc=RC_FOUND, dslist=['OVF', DS_NONE])
+
+        # Set cdrom size to 2048 (1MB in 512 byte units)
+        ovf_cdrom_with_size['files']['sys/class/block/sr0/size'] = '2048\n'
+        self._check_via_dict(
+            ovf_cdrom_with_size, rc=RC_FOUND, dslist=['OVF', DS_NONE])
+
     def test_default_nocloud_as_vdb_iso9660(self):
         """NoCloud is found with iso9660 filesystem on non-cdrom disk."""
         self._test_ds_found('NoCloud')
@@ -815,6 +839,7 @@ VALID_CFG = {
         ],
         'files': {
             'dev/sr0': 'pretend ovf iso has ' + OVF_MATCH_STRING + '\n',
+            'sys/class/block/sr0/size': '2048\n',
         }
     },
     'OVF-guestinfo': {
diff --git a/tests/unittests/test_handler/test_handler_apt_source_v3.py b/tests/unittests/test_handler/test_handler_apt_source_v3.py
index 90fe6ee..2f21b6d 100644
--- a/tests/unittests/test_handler/test_handler_apt_source_v3.py
+++ b/tests/unittests/test_handler/test_handler_apt_source_v3.py
@@ -998,6 +998,17 @@ deb http://ubuntu.com/ubuntu/ xenial-proposed main""")
 
 class TestDebconfSelections(TestCase):
 
+    @mock.patch("cloudinit.config.cc_apt_configure.util.subp")
+    def test_set_sel_appends_newline_if_absent(self, m_subp):
+        """Automatically append a newline to debconf-set-selections config."""
+        selections = b'some/setting boolean true'
+        cc_apt_configure.debconf_set_selections(selections=selections)
+        cc_apt_configure.debconf_set_selections(selections=selections + b'\n')
+        m_call = mock.call(
+            ['debconf-set-selections'], data=selections + b'\n', capture=True,
+            target=None)
+        self.assertEqual([m_call, m_call], m_subp.call_args_list)
+
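In isolation, the normalization the new test pins down (helper name hypothetical):

    def ensure_trailing_newline(selections):
        """Sketch: debconf-set-selections input always ends in a newline."""
        return selections if selections.endswith(b'\n') else selections + b'\n'

    assert (ensure_trailing_newline(b'some/setting boolean true') ==
            ensure_trailing_newline(b'some/setting boolean true\n'))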
     @mock.patch("cloudinit.config.cc_apt_configure.debconf_set_selections")
     def test_no_set_sel_if_none_to_set(self, m_set_sel):
         cc_apt_configure.apply_debconf_selections({'foo': 'bar'})
diff --git a/tests/unittests/test_handler/test_handler_ntp.py b/tests/unittests/test_handler/test_handler_ntp.py
index 0f22e57..463d892 100644
--- a/tests/unittests/test_handler/test_handler_ntp.py
+++ b/tests/unittests/test_handler/test_handler_ntp.py
@@ -268,17 +268,22 @@ class TestNtp(FilesystemMockingTestCase):
                                                  template_fn=template_fn)
                 content = util.load_file(confpath)
                 if client in ['ntp', 'chrony']:
-                    expected_servers = '\n'.join([
-                        'server {0} iburst'.format(srv) for srv in servers])
+                    content_lines = content.splitlines()
+                    expected_servers = [
+                        'server {0} iburst'.format(srv) for srv in servers]
                     print('distro=%s client=%s' % (distro, client))
-                    self.assertIn(expected_servers, content,
-                                  ('failed to render {0} conf'
-                                   ' for distro:{1}'.format(client, distro)))
-                    expected_pools = '\n'.join([
-                        'pool {0} iburst'.format(pool) for pool in pools])
-                    self.assertIn(expected_pools, content,
-                                  ('failed to render {0} conf'
-                                   ' for distro:{1}'.format(client, distro)))
+                    for sline in expected_servers:
+                        self.assertIn(sline, content_lines,
+                                      ('failed to render {0} conf'
+                                       ' for distro:{1}'.format(client,
+                                                                distro)))
+                    expected_pools = [
+                        'pool {0} iburst'.format(pool) for pool in pools]
+                    for pline in expected_pools:
+                        self.assertIn(pline, content_lines,
+                                      ('failed to render {0} conf'
+                                       ' for distro:{1}'.format(client,
+                                                                distro)))
                 elif client == 'systemd-timesyncd':
                     expected_content = (
                         "# cloud-init generated file\n" +
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index b936bc9..4f7e420 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -515,7 +515,7 @@ nameserver 172.19.0.12
 [main]
 dns = none
 """.lstrip()),
-            ('etc/udev/rules.d/70-persistent-net.rules',
+            ('etc/udev/rules.d/85-persistent-net-cloud-init.rules',
              "".join(['SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ',
                       'ATTR{address}=="fa:16:3e:ed:9a:59", NAME="eth0"\n']))],
         'out_sysconfig_rhel': [
@@ -619,7 +619,7 @@ nameserver 172.19.0.12
 [main]
 dns = none
 """.lstrip()),
-            ('etc/udev/rules.d/70-persistent-net.rules',
+            ('etc/udev/rules.d/85-persistent-net-cloud-init.rules',
              "".join(['SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ',
                       'ATTR{address}=="fa:16:3e:ed:9a:59", NAME="eth0"\n']))],
         'out_sysconfig_rhel': [
@@ -750,7 +750,7 @@ nameserver 172.19.0.12
 [main]
 dns = none
 """.lstrip()),
-            ('etc/udev/rules.d/70-persistent-net.rules',
+            ('etc/udev/rules.d/85-persistent-net-cloud-init.rules',
              "".join(['SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ',
                       'ATTR{address}=="fa:16:3e:ed:9a:59", NAME="eth0"\n']))],
         'out_sysconfig_rhel': [
@@ -1540,6 +1540,12 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
                   bond-mode: active-backup
                   bond_miimon: 100
                   bond-xmit-hash-policy: "layer3+4"
+                  bond-num-grat-arp: 5
+                  bond-downdelay: 10
+                  bond-updelay: 20
+                  bond-fail-over-mac: active
+                  bond-primary: bond0s0
+                  bond-primary-reselect: always
                 subnets:
                   - type: static
                     address: 192.168.0.2/24
@@ -1586,9 +1592,15 @@ pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true
                      macaddress: aa:bb:cc:dd:e8:ff
                      mtu: 9000
                      parameters:
+                         down-delay: 10
+                         fail-over-mac-policy: active
+                         gratuitious-arp: 5
                          mii-monitor-interval: 100
                          mode: active-backup
+                         primary: bond0s0
+                         primary-reselect-policy: always
                          transmit-hash-policy: layer3+4
+                         up-delay: 20
                      routes:
                      -   to: 10.1.3.0/24
                          via: 192.168.0.3
@@ -1604,15 +1616,27 @@ iface lo inet loopback
 
 auto bond0s0
 iface bond0s0 inet manual
+    bond-downdelay 10
+    bond-fail-over-mac active
     bond-master bond0
     bond-mode active-backup
+    bond-num-grat-arp 5
+    bond-primary bond0s0
+    bond-primary-reselect always
+    bond-updelay 20
     bond-xmit-hash-policy layer3+4
     bond_miimon 100
 
 auto bond0s1
 iface bond0s1 inet manual
+    bond-downdelay 10
+    bond-fail-over-mac active
     bond-master bond0
     bond-mode active-backup
+    bond-num-grat-arp 5
+    bond-primary bond0s0
+    bond-primary-reselect always
+    bond-updelay 20
     bond-xmit-hash-policy layer3+4
     bond_miimon 100
 
@@ -1620,8 +1644,14 @@ auto bond0
 iface bond0 inet static
     address 192.168.0.2/24
     gateway 192.168.0.1
+    bond-downdelay 10
+    bond-fail-over-mac active
     bond-mode active-backup
+    bond-num-grat-arp 5
+    bond-primary bond0s0
+    bond-primary-reselect always
     bond-slaves none
+    bond-updelay 20
     bond-xmit-hash-policy layer3+4
     bond_miimon 100
     hwaddress aa:bb:cc:dd:e8:ff
@@ -1666,10 +1696,15 @@ iface bond0 inet6 static
                 - eth0
                 - vf0
                 parameters:
+                    down-delay: 10
+                    fail-over-mac-policy: active
+                    gratuitious-arp: 5
                     mii-monitor-interval: 100
                     mode: active-backup
-                    primary: vf0
-                    transmit-hash-policy: "layer3+4"
+                    primary: bond0s0
+                    primary-reselect-policy: always
+                    transmit-hash-policy: layer3+4
+                    up-delay: 20
                 routes:
                 -   to: 10.1.3.0/24
                     via: 192.168.0.3
@@ -1692,10 +1727,15 @@ iface bond0 inet6 static
                      - eth0
                      - vf0
                      parameters:
+                         down-delay: 10
+                         fail-over-mac-policy: active
+                         gratuitious-arp: 5
                          mii-monitor-interval: 100
                          mode: active-backup
-                         primary: vf0
+                         primary: bond0s0
+                         primary-reselect-policy: always
                          transmit-hash-policy: layer3+4
+                         up-delay: 20
                      routes:
                      -   to: 10.1.3.0/24
                          via: 192.168.0.3
@@ -1720,7 +1760,12 @@ iface bond0 inet6 static
         'expected_sysconfig_opensuse': {
             'ifcfg-bond0': textwrap.dedent("""\
         BONDING_MASTER=yes
-        BONDING_OPTS="mode=active-backup xmit_hash_policy=layer3+4 miimon=100"
+        BONDING_OPTS="mode=active-backup xmit_hash_policy=layer3+4 """
+                                           """miimon=100 num_grat_arp=5 """
+                                           """downdelay=10 updelay=20 """
+                                           """fail_over_mac=active """
+                                           """primary=bond0s0 """
+                                           """primary_reselect=always"
         BONDING_SLAVE0=bond0s0
         BONDING_SLAVE1=bond0s1
         BOOTPROTO=none
@@ -1776,7 +1821,12 @@ iface bond0 inet6 static
         'expected_sysconfig_rhel': {
             'ifcfg-bond0': textwrap.dedent("""\
         BONDING_MASTER=yes
-        BONDING_OPTS="mode=active-backup xmit_hash_policy=layer3+4 miimon=100"
+        BONDING_OPTS="mode=active-backup xmit_hash_policy=layer3+4 """
+                                           """miimon=100 num_grat_arp=5 """
+                                           """downdelay=10 updelay=20 """
+                                           """fail_over_mac=active """
+                                           """primary=bond0s0 """
+                                           """primary_reselect=always"
         BONDING_SLAVE0=bond0s0
         BONDING_SLAVE1=bond0s1
         BOOTPROTO=none
@@ -2106,7 +2156,7 @@ DEFAULT_DEV_ATTRS = {
         "carrier": False,
         "dormant": False,
         "operstate": "down",
-        "address": "07-1C-C6-75-A4-BE",
+        "address": "07-1c-c6-75-a4-be",
         "device/driver": None,
         "device/device": None,
         "name_assign_type": "4",
@@ -2157,6 +2207,39 @@ class TestGenerateFallbackConfig(CiTestCase):
     @mock.patch("cloudinit.net.sys_dev_path")
     @mock.patch("cloudinit.net.read_sys_net")
     @mock.patch("cloudinit.net.get_devicelist")
+    def test_device_driver_v2(self, mock_get_devicelist, mock_read_sys_net,
+                              mock_sys_dev_path):
+        """Network configuration for generate_fallback_config is version 2."""
+        devices = {
+            'eth0': {
+                'bridge': False, 'carrier': False, 'dormant': False,
+                'operstate': 'down', 'address': '00:11:22:33:44:55',
+                'device/driver': 'hv_netsvc', 'device/device': '0x3',
+                'name_assign_type': '4'},
+            'eth1': {
+                'bridge': False, 'carrier': False, 'dormant': False,
+                'operstate': 'down', 'address': '00:11:22:33:44:55',
+                'device/driver': 'mlx4_core', 'device/device': '0x7',
+                'name_assign_type': '4'},
+
+        }
+
+        tmp_dir = self.tmp_dir()
+        _setup_test(tmp_dir, mock_get_devicelist,
+                    mock_read_sys_net, mock_sys_dev_path,
+                    dev_attrs=devices)
+
+        network_cfg = net.generate_fallback_config(config_driver=True)
+        expected = {
+            'ethernets': {'eth0': {'dhcp4': True, 'set-name': 'eth0',
+                                   'match': {'macaddress': '00:11:22:33:44:55',
+                                             'driver': 'hv_netsvc'}}},
+            'version': 2}
+        self.assertEqual(expected, network_cfg)
+
+    @mock.patch("cloudinit.net.sys_dev_path")
+    @mock.patch("cloudinit.net.read_sys_net")
+    @mock.patch("cloudinit.net.get_devicelist")
     def test_device_driver(self, mock_get_devicelist, mock_read_sys_net,
                            mock_sys_dev_path):
         devices = {
@@ -2436,7 +2519,7 @@ class TestRhelSysConfigRendering(CiTestCase):
 #
 BOOTPROTO=dhcp
 DEVICE=eth1000
-HWADDR=07-1C-C6-75-A4-BE
+HWADDR=07-1c-c6-75-a4-be
 NM_CONTROLLED=no
 ONBOOT=yes
 STARTMODE=auto
@@ -2806,6 +2889,97 @@ USERCTL=no
         self._compare_files_to_expected(entry['expected_sysconfig'], found)
         self._assert_headers(found)
 
+    def test_from_v2_vlan_mtu(self):
+        """verify mtu gets rendered on bond when source is netplan."""
+        v2data = {
+            'version': 2,
+            'ethernets': {'eno1': {}},
+            'vlans': {
+                'eno1.1000': {
+                    'addresses': ["192.6.1.9/24"],
+                    'id': 1000, 'link': 'eno1', 'mtu': 1495}}}
+        expected = {
+            'ifcfg-eno1': textwrap.dedent("""\
+                BOOTPROTO=none
+                DEVICE=eno1
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                STARTMODE=auto
+                TYPE=Ethernet
+                USERCTL=no
+                """),
+            'ifcfg-eno1.1000': textwrap.dedent("""\
+                BOOTPROTO=none
+                DEVICE=eno1.1000
+                IPADDR=192.6.1.9
+                MTU=1495
+                NETMASK=255.255.255.0
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                PHYSDEV=eno1
+                STARTMODE=auto
+                TYPE=Ethernet
+                USERCTL=no
+                VLAN=yes
+                """)
+            }
+        self._compare_files_to_expected(
+            expected, self._render_and_read(network_config=v2data))
+
+    def test_from_v2_bond_mtu(self):
+        """verify mtu gets rendered on bond when source is netplan."""
+        v2data = {
+            'version': 2,
+            'bonds': {
+                'bond0': {'addresses': ['10.101.8.65/26'],
+                          'interfaces': ['enp0s0', 'enp0s1'],
+                          'mtu': 1334,
+                          'parameters': {}}}
+        }
+        expected = {
+            'ifcfg-bond0': textwrap.dedent("""\
+                BONDING_MASTER=yes
+                BONDING_SLAVE0=enp0s0
+                BONDING_SLAVE1=enp0s1
+                BOOTPROTO=none
+                DEVICE=bond0
+                IPADDR=10.101.8.65
+                MTU=1334
+                NETMASK=255.255.255.192
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                STARTMODE=auto
+                TYPE=Bond
+                USERCTL=no
+                """),
+            'ifcfg-enp0s0': textwrap.dedent("""\
+                BONDING_MASTER=yes
+                BOOTPROTO=none
+                DEVICE=enp0s0
+                MASTER=bond0
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                SLAVE=yes
+                STARTMODE=auto
+                TYPE=Bond
+                USERCTL=no
+                """),
+            'ifcfg-enp0s1': textwrap.dedent("""\
+                BONDING_MASTER=yes
+                BOOTPROTO=none
+                DEVICE=enp0s1
+                MASTER=bond0
+                NM_CONTROLLED=no
+                ONBOOT=yes
+                SLAVE=yes
+                STARTMODE=auto
+                TYPE=Bond
+                USERCTL=no
+                """)
+        }
+        self._compare_files_to_expected(
+            expected, self._render_and_read(network_config=v2data))
+
 
 class TestOpenSuseSysConfigRendering(CiTestCase):
 
@@ -2889,7 +3063,7 @@ class TestOpenSuseSysConfigRendering(CiTestCase):
 #
 BOOTPROTO=dhcp
 DEVICE=eth1000
-HWADDR=07-1C-C6-75-A4-BE
+HWADDR=07-1c-c6-75-a4-be
 NM_CONTROLLED=no
 ONBOOT=yes
 STARTMODE=auto
@@ -3201,13 +3375,13 @@ class TestNetplanNetRendering(CiTestCase):
 
         expected = """
 network:
-    version: 2
     ethernets:
         eth1000:
             dhcp4: true
             match:
                 macaddress: 07-1c-c6-75-a4-be
             set-name: eth1000
+    version: 2
 """
         self.assertEqual(expected.lstrip(), contents.lstrip())
         self.assertEqual(1, mock_clean_default.call_count)
@@ -3417,13 +3591,13 @@ class TestCmdlineConfigParsing(CiTestCase):
         self.assertEqual(found, self.simple_cfg)
 
 
-class TestCmdlineReadKernelConfig(FilesystemMockingTestCase):
+class TestCmdlineReadInitramfsConfig(FilesystemMockingTestCase):
     macs = {
         'eth0': '14:02:ec:42:48:00',
         'eno1': '14:02:ec:42:48:01',
     }
 
-    def test_ip_cmdline_without_ip(self):
+    def test_without_ip(self):
         content = {'/run/net-eth0.conf': DHCP_CONTENT_1,
                    cmdline._OPEN_ISCSI_INTERFACE_FILE: "eth0\n"}
         exp1 = copy.deepcopy(DHCP_EXPECTED_1)
@@ -3433,12 +3607,12 @@ class TestCmdlineReadKernelConfig(FilesystemMockingTestCase):
         populate_dir(root, content)
         self.reRoot(root)
 
-        found = cmdline.read_kernel_cmdline_config(
+        found = cmdline.read_initramfs_config(
             cmdline='foo root=/root/bar', mac_addrs=self.macs)
         self.assertEqual(found['version'], 1)
         self.assertEqual(found['config'], [exp1])
 
-    def test_ip_cmdline_read_kernel_cmdline_ip(self):
+    def test_with_ip(self):
         content = {'/run/net-eth0.conf': DHCP_CONTENT_1}
         exp1 = copy.deepcopy(DHCP_EXPECTED_1)
         exp1['mac_address'] = self.macs['eth0']
@@ -3447,18 +3621,18 @@ class TestCmdlineReadKernelConfig(FilesystemMockingTestCase):
         populate_dir(root, content)
         self.reRoot(root)
 
-        found = cmdline.read_kernel_cmdline_config(
+        found = cmdline.read_initramfs_config(
             cmdline='foo ip=dhcp', mac_addrs=self.macs)
         self.assertEqual(found['version'], 1)
         self.assertEqual(found['config'], [exp1])
 
-    def test_ip_cmdline_read_kernel_cmdline_ip6(self):
+    def test_with_ip6(self):
         content = {'/run/net6-eno1.conf': DHCP6_CONTENT_1}
         root = self.tmp_dir()
         populate_dir(root, content)
         self.reRoot(root)
 
-        found = cmdline.read_kernel_cmdline_config(
+        found = cmdline.read_initramfs_config(
             cmdline='foo ip6=dhcp root=/dev/sda',
             mac_addrs=self.macs)
         self.assertEqual(
@@ -3470,15 +3644,15 @@ class TestCmdlineReadKernelConfig(FilesystemMockingTestCase):
                   {'dns_nameservers': ['2001:67c:1562:8010::2:1'],
                    'control': 'manual', 'type': 'dhcp6', 'netmask': '64'}]}]})
 
-    def test_ip_cmdline_read_kernel_cmdline_none(self):
+    def test_with_no_ip_or_ip6(self):
         # if there is no ip= or ip6= on cmdline, return value should be None
         content = {'net6-eno1.conf': DHCP6_CONTENT_1}
         files = sorted(populate_dir(self.tmp_dir(), content))
-        found = cmdline.read_kernel_cmdline_config(
+        found = cmdline.read_initramfs_config(
             files=files, cmdline='foo root=/dev/sda', mac_addrs=self.macs)
         self.assertIsNone(found)
 
-    def test_ip_cmdline_both_ip_ip6(self):
+    def test_with_both_ip_ip6(self):
         content = {
             '/run/net-eth0.conf': DHCP_CONTENT_1,
             '/run/net6-eth0.conf': DHCP6_CONTENT_1.replace('eno1', 'eth0')}
@@ -3493,7 +3667,7 @@ class TestCmdlineReadKernelConfig(FilesystemMockingTestCase):
         populate_dir(root, content)
         self.reRoot(root)
 
-        found = cmdline.read_kernel_cmdline_config(
+        found = cmdline.read_initramfs_config(
             cmdline='foo ip=dhcp ip6=dhcp', mac_addrs=self.macs)
 
         self.assertEqual(found['version'], 1)
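Read together, the bond hunks above show one v1 option fanning out to three renderers. Distilled as data (eni key, netplan parameter, sysconfig BONDING_OPTS token); netplan's 'gratuitious-arp' spelling is deliberate, matching the key netplan accepts:

    BOND_OPTION_RENDERINGS = {
        # v1 key:             (eni, netplan, sysconfig)
        'bond-num-grat-arp': ('bond-num-grat-arp', 'gratuitious-arp',
                              'num_grat_arp'),
        'bond-downdelay': ('bond-downdelay', 'down-delay', 'downdelay'),
        'bond-updelay': ('bond-updelay', 'up-delay', 'updelay'),
        'bond-fail-over-mac': ('bond-fail-over-mac', 'fail-over-mac-policy',
                               'fail_over_mac'),
        'bond-primary': ('bond-primary', 'primary', 'primary'),
        'bond-primary-reselect': ('bond-primary-reselect',
                                  'primary-reselect-policy',
                                  'primary_reselect'),
    }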
diff --git a/tests/unittests/test_reporting_hyperv.py b/tests/unittests/test_reporting_hyperv.py
old mode 100755
new mode 100644
index d01ed5b..640895a
--- a/tests/unittests/test_reporting_hyperv.py
+++ b/tests/unittests/test_reporting_hyperv.py
@@ -7,9 +7,12 @@ import json
 import os
 import struct
 import time
+import re
+import mock
 
 from cloudinit import util
 from cloudinit.tests.helpers import CiTestCase
+from cloudinit.sources.helpers import azure
 
 
 class TestKvpEncoding(CiTestCase):
@@ -126,3 +129,65 @@ class TextKvpReporter(CiTestCase):
         reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
         kvps = list(reporter._iterate_kvps(0))
         self.assertEqual(0, len(kvps))
+
+    @mock.patch('cloudinit.distros.uses_systemd')
+    @mock.patch('cloudinit.util.subp')
+    def test_get_boot_telemetry(self, m_subp, m_sysd):
+        reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
+        datetime_pattern = r"\d{4}-[01]\d-[0-3]\dT[0-2]\d:[0-5]"
+        r"\d:[0-5]\d\.\d+([+-][0-2]\d:[0-5]\d|Z)"
+
+        # get_boot_telemetry makes two subp calls to systemctl. We provide
+        # a list of values that the subp calls should return
+        m_subp.side_effect = [
+            ('UserspaceTimestampMonotonic=1844838', ''),
+            ('InactiveExitTimestampMonotonic=3068203', '')]
+        m_sysd.return_value = True
+
+        reporter.publish_event(azure.get_boot_telemetry())
+        reporter.q.join()
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(1, len(kvps))
+
+        evt_msg = kvps[0]['value']
+        if not re.search("kernel_start=" + datetime_pattern, evt_msg):
+            raise AssertionError("missing kernel_start timestamp")
+        if not re.search("user_start=" + datetime_pattern, evt_msg):
+            raise AssertionError("missing user_start timestamp")
+        if not re.search("cloudinit_activation=" + datetime_pattern,
+                         evt_msg):
+            raise AssertionError(
+                "missing cloudinit_activation timestamp")
+
+    def test_get_system_info(self):
+        reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
+        pattern = r"[^=\s]+"
+
+        reporter.publish_event(azure.get_system_info())
+        reporter.q.join()
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(1, len(kvps))
+        evt_msg = kvps[0]['value']
+
+        # the most important information is cloudinit version,
+        # kernel_version, and the distro variant. It is ok if
+        # the rest is not available
+        if not re.search("cloudinit_version=" + pattern, evt_msg):
+            raise AssertionError("missing cloudinit_version string")
+        if not re.search("kernel_version=" + pattern, evt_msg):
+            raise AssertionError("missing kernel_version string")
+        if not re.search("variant=" + pattern, evt_msg):
+            raise AssertionError("missing distro variant string")
+
+    def test_report_diagnostic_event(self):
+        reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
+
+        reporter.publish_event(
+            azure.report_diagnostic_event("test_diagnostic"))
+        reporter.q.join()
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(1, len(kvps))
+        evt_msg = kvps[0]['value']
+
+        if "test_diagnostic" not in evt_msg:
+            raise AssertionError("missing expected diagnostic message")
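get_boot_telemetry composes three timestamps from the two mocked systemctl properties (microotonic offsets in microseconds since kernel start). A sketch of the arithmetic; deriving kernel start from uptime is an assumption about the implementation:

    import time

    def boot_timestamps_sketch(userspace_usec, inactive_exit_usec, uptime):
        """Sketch: monotonic usec offsets -> wall-clock epoch seconds."""
        kernel_start = time.time() - uptime
        user_start = kernel_start + userspace_usec / 1000000.0
        cloudinit_activation = kernel_start + inactive_exit_usec / 1000000.0
        return kernel_start, user_start, cloudinit_activation

    # With the values fed through subp above: userspace came up ~1.84s
    # after kernel start, cloud-init activated ~3.07s after.
    boot_timestamps_sketch(1844838, 3068203, uptime=100.0)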
diff --git a/tests/unittests/test_vmware/test_custom_script.py b/tests/unittests/test_vmware/test_custom_script.py
index 2d9519b..f89f815 100644
--- a/tests/unittests/test_vmware/test_custom_script.py
+++ b/tests/unittests/test_vmware/test_custom_script.py
@@ -1,10 +1,12 @@
 # Copyright (C) 2015 Canonical Ltd.
-# Copyright (C) 2017 VMware INC.
+# Copyright (C) 2017-2019 VMware INC.
 #
 # Author: Maitreyee Saikia <msaikia@xxxxxxxxxx>
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import os
+import stat
 from cloudinit import util
 from cloudinit.sources.helpers.vmware.imc.config_custom_script import (
     CustomScriptConstant,
@@ -18,6 +20,10 @@ from cloudinit.tests.helpers import CiTestCase, mock
 class TestVmwareCustomScript(CiTestCase):
     def setUp(self):
         self.tmpDir = self.tmp_dir()
+        # Mock the tmpDir as the root dir in the VM.
+        self.execDir = os.path.join(self.tmpDir, ".customization")
+        self.execScript = os.path.join(self.execDir,
+                                       ".customize.sh")
 
     def test_prepare_custom_script(self):
         """
@@ -37,63 +43,67 @@ class TestVmwareCustomScript(CiTestCase):
 
         # Custom script exists.
         custScript = self.tmp_path("test-cust", self.tmpDir)
-        util.write_file(custScript, "test-CR-strip/r/r")
-        postCust = PostCustomScript("test-cust", self.tmpDir)
-        self.assertEqual("test-cust", postCust.scriptname)
-        self.assertEqual(self.tmpDir, postCust.directory)
-        self.assertEqual(custScript, postCust.scriptpath)
-        self.assertFalse(postCust.postreboot)
-        postCust.prepare_script()
-        # Check if all carraige returns are stripped from script.
-        self.assertFalse("/r" in custScript)
+        util.write_file(custScript, "test-CR-strip\r\r")
+        with mock.patch.object(CustomScriptConstant,
+                               "CUSTOM_TMP_DIR",
+                               self.execDir):
+            with mock.patch.object(CustomScriptConstant,
+                                   "CUSTOM_SCRIPT",
+                                   self.execScript):
+                postCust = PostCustomScript("test-cust",
+                                            self.tmpDir,
+                                            self.tmpDir)
+                self.assertEqual("test-cust", postCust.scriptname)
+                self.assertEqual(self.tmpDir, postCust.directory)
+                self.assertEqual(custScript, postCust.scriptpath)
+                postCust.prepare_script()
 
-    def test_rc_local_exists(self):
-        """
-        This test is designed to verify the different scenarios associated
-        with the presence of rclocal.
-        """
-        # test when rc local does not exist
-        postCust = PostCustomScript("test-cust", self.tmpDir)
-        with mock.patch.object(CustomScriptConstant, "RC_LOCAL", "/no/path"):
-            rclocal = postCust.find_rc_local()
-            self.assertEqual("", rclocal)
-
-        # test when rc local exists
-        rclocalFile = self.tmp_path("vmware-rclocal", self.tmpDir)
-        util.write_file(rclocalFile, "# Run post-reboot guest customization",
-                        omode="w")
-        with mock.patch.object(CustomScriptConstant, "RC_LOCAL", rclocalFile):
-            rclocal = postCust.find_rc_local()
-            self.assertEqual(rclocalFile, rclocal)
-            self.assertTrue(postCust.has_previous_agent, rclocal)
-
-        # test when rc local is a symlink
-        rclocalLink = self.tmp_path("dummy-rclocal-link", self.tmpDir)
-        util.sym_link(rclocalFile, rclocalLink, True)
-        with mock.patch.object(CustomScriptConstant, "RC_LOCAL", rclocalLink):
-            rclocal = postCust.find_rc_local()
-            self.assertEqual(rclocalFile, rclocal)
+                # Custom script is copied with exec privilege
+                self.assertTrue(os.path.exists(self.execScript))
+                st = os.stat(self.execScript)
+                self.assertTrue(st.st_mode & stat.S_IEXEC)
+                with open(self.execScript, "r") as f:
+                    content = f.read()
+                self.assertEqual(content, "test-CR-strip")
+                # Check if all carriage returns are stripped from script.
+                self.assertFalse("\r" in content)
 
     def test_execute_post_cust(self):
         """
-        This test is to identify if rclocal was properly populated to be
-        run after reboot.
+        This test is designed to verify the behavior of executing the
+        post-customization script.
         """
-        customscript = self.tmp_path("vmware-post-cust-script", self.tmpDir)
-        rclocal = self.tmp_path("vmware-rclocal", self.tmpDir)
-        # Create a temporary rclocal file
-        open(customscript, "w")
-        util.write_file(rclocal, "tests\nexit 0", omode="w")
-        postCust = PostCustomScript("vmware-post-cust-script", self.tmpDir)
-        with mock.patch.object(CustomScriptConstant, "RC_LOCAL", rclocal):
-            # Test that guest customization agent is not installed initially.
-            self.assertFalse(postCust.postreboot)
-            self.assertIs(postCust.has_previous_agent(rclocal), False)
-            postCust.install_agent()
+        # Prepare the customize package
+        postCustRun = self.tmp_path("post-customize-guest.sh", self.tmpDir)
+        util.write_file(postCustRun, "This is the script to run post cust")
+        userScript = self.tmp_path("test-cust", self.tmpDir)
+        util.write_file(userScript, "This is the post cust script")
 
-            # Assert rclocal has been modified to have guest customization
-            # agent.
-            self.assertTrue(postCust.postreboot)
-            self.assertTrue(postCust.has_previous_agent, rclocal)
+        # Mock the cc_scripts_per_instance dir and marker file.
+        # Create another tmp dir for cc_scripts_per_instance.
+        ccScriptDir = self.tmp_dir()
+        ccScript = os.path.join(ccScriptDir, "post-customize-guest.sh")
+        markerFile = os.path.join(self.tmpDir, ".markerFile")
+        with mock.patch.object(CustomScriptConstant,
+                               "CUSTOM_TMP_DIR",
+                               self.execDir):
+            with mock.patch.object(CustomScriptConstant,
+                                   "CUSTOM_SCRIPT",
+                                   self.execScript):
+                with mock.patch.object(CustomScriptConstant,
+                                       "POST_CUSTOM_PENDING_MARKER",
+                                       markerFile):
+                    postCust = PostCustomScript("test-cust",
+                                                self.tmpDir,
+                                                ccScriptDir)
+                    postCust.execute()
+                    # Check cc_scripts_per_instance and marker file
+                    # are created.
+                    self.assertTrue(os.path.exists(ccScript))
+                    with open(ccScript, "r") as f:
+                        content = f.read()
+                    self.assertEqual(content,
+                                     "This is the script to run post cust")
+                    self.assertTrue(os.path.exists(markerFile))
 
 # vi: ts=4 expandtab
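A sketch of the staging step the first test asserts, reproducing only its observable effects (carriage returns stripped, exec bit set); the real logic lives in config_custom_script:

    import os
    import stat

    from cloudinit import util

    def prepare_script_sketch(src_path, dest_path):
        """Sketch: stage the user script CR-free and executable."""
        content = util.load_file(src_path).replace('\r', '')
        util.write_file(dest_path, content, mode=0o755)
        assert os.stat(dest_path).st_mode & stat.S_IEXEC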
diff --git a/tools/build-on-freebsd b/tools/build-on-freebsd
index dc3b974..8ae6456 100755
--- a/tools/build-on-freebsd
+++ b/tools/build-on-freebsd
@@ -3,36 +3,43 @@
 # installing cloud-init. This script takes care of building and installing. It
 # will optionally make a first run at the end.
 
+set -eux
+
 fail() { echo "FAILED:" "$@" 1>&2; exit 1; }
 
+PYTHON="${PYTHON:-python3}"
+if ! command -v "${PYTHON}" >/dev/null 2>&1; then
+    echo "Please install python first."
+    exit 1
+fi
+py_prefix=$(${PYTHON} -c 'import sys; print("py%d%d" % (sys.version_info.major, sys.version_info.minor))')
+
 # Check dependencies:
 depschecked=/tmp/c-i.dependencieschecked
 pkgs="
-   bash
-   chpasswd
-   dmidecode
-   e2fsprogs
-   py27-Jinja2
-   py27-boto
-   py27-cheetah
-   py27-configobj
-   py27-jsonpatch
-   py27-jsonpointer
-   py27-jsonschema
-   py27-oauthlib
-   py27-requests
-   py27-serial
-   py27-six
-   py27-yaml
-   python
-   sudo
+    bash
+    chpasswd
+    dmidecode
+    e2fsprogs
+    $py_prefix-Jinja2
+    $py_prefix-boto
+    $py_prefix-configobj
+    $py_prefix-jsonpatch
+    $py_prefix-jsonpointer
+    $py_prefix-jsonschema
+    $py_prefix-oauthlib
+    $py_prefix-requests
+    $py_prefix-serial
+    $py_prefix-six
+    $py_prefix-yaml
+    sudo
 "
-[ -f "$depschecked" ] || pkg install ${pkgs} || fail "install packages"
+[ -f "$depschecked" ] || pkg install --yes ${pkgs} || fail "install packages"
 touch $depschecked
 
 # Build the code and install in /usr/local/:
-python2.7 setup.py build
-python2.7 setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd
+${PYTHON} setup.py build
+${PYTHON} setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd
 
 # Enable cloud-init in /etc/rc.conf:
 sed -i.bak -e "/cloudinit_enable=.*/d" /etc/rc.conf
@@ -40,21 +47,21 @@ echo 'cloudinit_enable="YES"' >> /etc/rc.conf
 
 echo "Installation completed."
 
-if [ "$1" = "run" ]; then
-	echo "Ok, now let's see if it works."
+if [ "$#" -gt 1 ] && [ "$1" = "run" ]; then
+    echo "Ok, now let's see if it works."
 
-	# Backup SSH keys
-	mv /etc/ssh/ssh_host_* /tmp/
+    # Backup SSH keys
+    mv /etc/ssh/ssh_host_* /tmp/
 
-	# Remove old metadata
-	rm -rf /var/lib/cloud
+    # Remove old metadata
+    rm -rf /var/lib/cloud
 
-	# Just log everything, quick&dirty
-	rm /usr/local/etc/cloud/cloud.cfg.d/05_logging.cfg 
+    # Just log everything, quick&dirty
+    rm /usr/local/etc/cloud/cloud.cfg.d/05_logging.cfg
 
-	# Start:
-	/usr/local/etc/rc.d/cloudinit start
+    # Start:
+    /usr/local/etc/rc.d/cloudinit start
 
-	# Restore SSH keys
-	mv /tmp/ssh_host_* /etc/ssh/
+    # Restore SSH keys
+    mv /tmp/ssh_host_* /etc/ssh/
 fi
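With the interpreter now configurable, a typical invocation might look like this (python3.7 is only an example):

    # build with a specific interpreter, then do the optional first run
    PYTHON=python3.7 ./tools/build-on-freebsd run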
diff --git a/tools/ds-identify b/tools/ds-identify
index e16708f..e0d4865 100755
--- a/tools/ds-identify
+++ b/tools/ds-identify
@@ -124,7 +124,7 @@ DI_DSNAME=""
 # be searched if there is no setting found in config.
 DI_DSLIST_DEFAULT="MAAS ConfigDrive NoCloud AltCloud Azure Bigstep \
 CloudSigma CloudStack DigitalOcean AliYun Ec2 GCE OpenNebula OpenStack \
-OVF SmartOS Scaleway Hetzner IBMCloud Oracle"
+OVF SmartOS Scaleway Hetzner IBMCloud Oracle Exoscale"
 DI_DSLIST=""
 DI_MODE=""
 DI_ON_FOUND=""
@@ -553,6 +553,11 @@ dscheck_CloudStack() {
     return $DS_NOT_FOUND
 }
 
+dscheck_Exoscale() {
+    dmi_product_name_matches "Exoscale*" && return $DS_FOUND
+    return $DS_NOT_FOUND
+}
+
 dscheck_CloudSigma() {
     # http://paste.ubuntu.com/23624795/
     dmi_product_name_matches "CloudSigma" && return $DS_FOUND
@@ -766,10 +771,34 @@ is_cdrom_ovf() {
         config-2|CONFIG-2|rd_rdfe_stable*|cidata|CIDATA) return 1;;
     esac
 
+    # skip devices whose size is 10MB or larger
+    local size="" sfile="${PATH_SYS_CLASS_BLOCK}/${dev##*/}/size"
+    [ -f "$sfile" ] || return 1
+    read size <"$sfile" || { warn "failed reading from $sfile"; return 1; }
+    # size is in 512-byte units, so convert to MB (integer division)
+    if [ $((size/2048)) -ge 10 ]; then
+        debug 2 "$dev: size $((size/2048))MB is considered too large for OVF"
+        return 1
+    fi
+
     local idstr="http://schemas.dmtf.org/ovf/environment/1";
     grep --quiet --ignore-case "$idstr" "${PATH_ROOT}$dev"
 }
 
+has_ovf_cdrom() {
+    # DI_ISO9660_DEVS is <device>=label,<device>=label2
+    # like /dev/sr0=OVF-TRANSPORT,/dev/other=with spaces
+    if [ "${DI_ISO9660_DEVS#${UNAVAILABLE}:}" = "${DI_ISO9660_DEVS}" ]; then
+        local oifs="$IFS"
+        # shellcheck disable=2086
+        { IFS=","; set -- ${DI_ISO9660_DEVS}; IFS="$oifs"; }
+        for tok in "$@"; do
+            is_cdrom_ovf "${tok%%=*}" "${tok#*=}" && return 0
+        done
+    fi
+    return 1
+}
+
 dscheck_OVF() {
     check_seed_dir ovf ovf-env.xml && return "${DS_FOUND}"
 
@@ -780,20 +809,9 @@ dscheck_OVF() {
 
     ovf_vmware_transport_guestinfo && return "${DS_FOUND}"
 
-    # DI_ISO9660_DEVS is <device>=label,<device>=label2
-    # like /dev/sr0=OVF-TRANSPORT,/dev/other=with spaces
-    if [ "${DI_ISO9660_DEVS#${UNAVAILABLE}:}" = "${DI_ISO9660_DEVS}" ]; then
-        local oifs="$IFS"
-        # shellcheck disable=2086
-        { IFS=","; set -- ${DI_ISO9660_DEVS}; IFS="$oifs"; }
-        for tok in "$@"; do
-            is_cdrom_ovf "${tok%%=*}" "${tok#*=}" && return $DS_FOUND
-        done
-    fi
+    has_ovf_cdrom && return "${DS_FOUND}"
 
-    if ovf_vmware_guest_customization; then
-        return ${DS_FOUND}
-    fi
+    ovf_vmware_guest_customization && return "${DS_FOUND}"
 
     return ${DS_NOT_FOUND}
 }
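
The size guard added to is_cdrom_ovf works on the 512-byte sector count read
from /sys/class/block/<dev>/size. A sketch of the arithmetic with two
hypothetical devices:

    size=4096              # 4096 sectors * 512B = 2MB
    echo $((size/2048))    # -> 2, below the 10MB cutoff: device is probed
    size=1433600           # roughly a 700MB install CD
    echo $((size/2048))    # -> 700, 10MB or larger: device is skipped
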
diff --git a/tools/xkvm b/tools/xkvm
index a30ba91..8d44cad 100755
--- a/tools/xkvm
+++ b/tools/xkvm
@@ -1,4 +1,6 @@
 #!/bin/bash
+# This file is part of cloud-init.
+# See LICENSE file for copyright and license info.
 
 set -f
 
@@ -11,6 +13,8 @@ TAPDEVS=( )
 # OVS_CLEANUP gets populated with bridge:devname pairs used with ovs
 OVS_CLEANUP=( )
 MAC_PREFIX="52:54:00:12:34"
+# allow this to be set externally.
+_QEMU_SUPPORTS_FILE_LOCKING="${_QEMU_SUPPORTS_FILE_LOCKING}"
 KVM="kvm"
 declare -A KVM_DEVOPTS
 
@@ -119,6 +123,21 @@ isdevopt() {
     return 1
 }
 
+qemu_supports_file_locking() {
+    # hackily check if qemu has file.locking in -drive params (LP: #1716028)
+    if [ -z "$_QEMU_SUPPORTS_FILE_LOCKING" ]; then
+        # The only way we could find to check presence of file.locking is
+        # qmp (query-qmp-schema).  Simply checking if the virtio-blk driver
+        # supports 'share-rw' is expected to be equivalent and simpler.
+        isdevopt virtio-blk share-rw &&
+            _QEMU_SUPPORTS_FILE_LOCKING=true ||
+            _QEMU_SUPPORTS_FILE_LOCKING=false
+        debug 1 "qemu supports file locking = ${_QEMU_SUPPORTS_FILE_LOCKING}"
+    fi
+    [ "$_QEMU_SUPPORTS_FILE_LOCKING" = "true" ]
+    return
+}
+
 padmac() {
     # return a full mac, given a subset.
     # assume whatever is input is the last portion to be
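
qemu_supports_file_locking caches its probe result in
_QEMU_SUPPORTS_FILE_LOCKING, which the patch also allows to be seeded from
the environment. A hedged example of forcing the cached value externally
(the disk path is hypothetical):

    # pretend qemu lacks file locking; xkvm then drops the token below:
    _QEMU_SUPPORTS_FILE_LOCKING=false \
        ./tools/xkvm --disk my-disk.img,file.locking=off -- -nographic
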
@@ -367,7 +386,7 @@ main() {
     [ ${#netdevs[@]} -eq 0 ] && netdevs=( "${DEF_BRIDGE}" )
     pt=( "$@" )
 
-    local kvm_pkg="" virtio_scsi_bus="virtio-scsi-pci"
+    local kvm_pkg="" virtio_scsi_bus="virtio-scsi-pci" virtio_rng_device="virtio-rng-pci"
     [ -n "$kvm" ] && kvm_pkg="none"
     case $(uname -m) in
         i?86)
@@ -382,7 +401,10 @@ main() {
             [ -n "$kvm" ] ||
                 { kvm="qemu-system-s390x"; kvm_pkg="qemu-system-misc"; }
             def_netmodel=${DEF_NETMODEL:-"virtio-net-ccw"}
+            # use the ccw variants of the virtio devices on s390x
             virtio_scsi_bus="virtio-scsi-ccw"
+            virtio_blk_bus="virtio-blk-ccw"
+            virtio_rng_device="virtio-rng-ccw"
             ;;
         ppc64*)
             [ -n "$kvm" ] ||
@@ -408,7 +430,7 @@ main() {
     bios_opts=( "${_RET[@]}" )
 
     local out="" fmt="" bus="" unit="" index="" serial="" driver="" devopts=""
-    local busorindex="" driveopts="" cur="" val="" file=""
+    local busorindex="" driveopts="" cur="" val="" file="" wwn=""
     for((i=0;i<${#diskdevs[@]};i++)); do
         cur=${diskdevs[$i]}
         IFS=","; set -- $cur; IFS="$oifs"
@@ -420,6 +442,7 @@ main() {
         unit=""
         index=""
         serial=""
+        wwn=""
         for tok in "$@"; do
             [ "${tok#*=}" = "${tok}" -a -f "${tok}" -a -z "$file" ] && file="$tok"
             val=${tok#*=}
@@ -433,6 +456,7 @@ main() {
                 file=*) file=$val;;
                 fmt=*|format=*) fmt=$val;;
                 serial=*) serial=$val;;
+                wwn=*) wwn=$val;;
                 bus=*) bus=$val;;
                 unit=*) unit=$val;;
                 index=*) index=$val;;
@@ -443,14 +467,19 @@ main() {
             out=$(LANG=C qemu-img info "$file") &&
                 fmt=$(echo "$out" | awk '$0 ~ /^file format:/ { print $3 }') ||
                 { error "failed to determine format of $file"; return 1; }
-        else
+        elif [ -z "$fmt" ]; then
             fmt=raw
         fi
         if [ -z "$driver" ]; then
             driver="$def_disk_driver"
         fi
         if [ -z "$serial" ]; then
-            serial="${file##*/}"
+            # use the wwn as serial if provided, else fall back to the filename
+            if [ -n "$wwn" ]; then
+                serial="$wwn"
+            else
+                serial="${file##*/}"
+            fi
         fi
 
         # make sure we add either bus= or index=
@@ -470,11 +499,21 @@ main() {
                 id=*|if=*|driver=*|$file|file=*) continue;;
                 fmt=*|format=*) continue;;
                 serial=*|bus=*|unit=*|index=*) continue;;
+                file.locking=*)
+                    qemu_supports_file_locking || {
+                        debug 2 "qemu has no file locking." \
+                            "Dropping '$tok' from: $cur"
+                        continue
+                    };;
             esac
             isdevopt "$driver" "$tok" && devopts="${devopts},$tok" ||
                 diskopts="${diskopts},${tok}"
         done
-
+        case $driver in
+            virtio-blk-ccw)
+                # disable scsi when using virtio-blk-ccw
+                devopts="${devopts},scsi=off";;
+        esac
         diskargs=( "${diskargs[@]}" -drive "$diskopts" -device "$devopts" )
     done
 
@@ -623,10 +662,16 @@ main() {
     done
 
     local bus_devices
-    bus_devices=( -device "$virtio_scsi_bus,id=virtio-scsi-xkvm" )
-    cmd=( "${kvmcmd[@]}" "${archopts[@]}" 
+    if [ -n "${virtio_scsi_bus}" ]; then
+        bus_devices=( -device "$virtio_scsi_bus,id=virtio-scsi-xkvm" )
+    fi
+    local rng_devices
+    rng_devices=( -object "rng-random,filename=/dev/urandom,id=objrng0"
+                  -device "$virtio_rng_device,rng=objrng0,id=rng0" )
+    cmd=( "${kvmcmd[@]}" "${archopts[@]}"
           "${bios_opts[@]}"
           "${bus_devices[@]}"
+          "${rng_devices[@]}"
           "${netargs[@]}"
           "${diskargs[@]}" "${pt[@]}" )
     local pcmd=$(quote_cmd "${cmd[@]}")
@@ -661,4 +706,4 @@ else
     main "$@"
 fi
 
-# vi: ts=4 expandtab
+# vi: ts=4 expandtab syntax=sh
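
With the wwn= disk token introduced above, the drive serial can be pinned to
a WWN instead of falling back to the image filename, and every guest now gets
a virtio-rng device backed by /dev/urandom. A usage sketch (values
hypothetical):

    # serial defaults to the wwn when one is given:
    ./tools/xkvm --disk disk1.img,wwn=0x5000c50015ea71ac -- -curses
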
