cloud-init-dev team mailing list archive

[Merge] ~chad.smith/cloud-init:ubuntu/cosmic into cloud-init:ubuntu/cosmic

 

Chad Smith has proposed merging ~chad.smith/cloud-init:ubuntu/cosmic into cloud-init:ubuntu/cosmic.

Commit message:
sync new-upstream snapshot for release into cosmic via SRU

Requested reviews:
  cloud-init commiters (cloud-init-dev)
Related bugs:
  Bug #1799779 in cloud-init: "LXD module installs the wrong ZFS package if it's missing"
  https://bugs.launchpad.net/cloud-init/+bug/1799779
  Bug #1811446 in cloud-init: "chpasswd: is mangling certain password hashes"
  https://bugs.launchpad.net/cloud-init/+bug/1811446
  Bug #1813361 in cloud-init: "disco: python37 unittest/tox support "
  https://bugs.launchpad.net/cloud-init/+bug/1813361
  Bug #1813383 in cloud-init: "opennebula: fail to sbuild, bash environment var failure EPOCHREALTIME"
  https://bugs.launchpad.net/cloud-init/+bug/1813383

For more details, see:
https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/362280
-- 
Your team cloud-init commiters is requested to review the proposed merge of ~chad.smith/cloud-init:ubuntu/cosmic into cloud-init:ubuntu/cosmic.
diff --git a/ChangeLog b/ChangeLog
index 9c043b0..8fa6fdd 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,57 @@
+18.5:
+ - tests: add Disco release [Joshua Powers]
+ - net: render 'metric' values in per-subnet routes (LP: #1805871)
+ - write_files: add support for appending to files. [James Baxter]
+ - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64.
+   (LP: #1805854)
+ - dhclient-hook: cleanups, tests and fix a bug on 'down' event.
+ - NoCloud: Allow top level 'network' key in network-config. (LP: #1798117)
+ - ovf: Fix ovf network config generation gateway/routes (LP: #1806103)
+ - azure: detect vnet migration via netlink media change event
+   [Tamilmani Manoharan]
+ - Azure: fix copy/paste error in error handling when reading azure ovf.
+   [Adam DePue]
+ - tests: fix incorrect order of mocks in test_handle_zfs_root.
+ - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen]
+ - OVF: identify label iso9660 filesystems with label 'OVF ENV'.
+ - logs: collect-logs ignore instance-data-sensitive.json on non-root user
+   (LP: #1805201)
+ - net: Ephemeral*Network: add connectivity check via URL
+ - azure: _poll_imds only retry on 404. Fail on Timeout (LP: #1803598)
+ - resizefs: Prefix discovered devpath with '/dev/' when path does not
+   exist [Igor Galić]
+ - azure: retry imds polling on requests.Timeout (LP: #1800223)
+ - azure: Accept variation in error msg from mount for ntfs volumes
+   [Jason Zions] (LP: #1799338)
+ - azure: fix regression introduced when persisting ephemeral dhcp lease
+   [asakkurr]
+ - azure: add udev rules to create cloud-init Gen2 disk name symlinks
+   (LP: #1797480)
+ - tests: ec2 mock missing httpretty user-data and instance-identity routes
+ - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS
+ - azure: report ready to fabric after reprovision and reduce logging
+   [asakkurr] (LP: #1799594)
+ - query: better error when missing read permission on instance-data
+ - instance-data: fallback to instance-data.json if sensitive is absent.
+   (LP: #1798189)
+ - docs: remove colon from network v1 config example. [Tomer Cohen]
+ - Add cloud-id binary to packages for SUSE [Jason Zions]
+ - systemd: On SUSE ensure cloud-init.service runs before wicked
+   [Robert Schweikert] (LP: #1799709)
+ - update detection of openSUSE variants [Robert Schweikert]
+ - azure: Add apply_network_config option to disable network from IMDS
+   (LP: #1798424)
+ - Correct spelling in an error message (udevadm). [Katie McLaughlin]
+ - tests: meta_data key changed to meta-data in ec2 instance-data.json
+   (LP: #1797231)
+ - tests: fix kvm integration test to assert flexible config-disk path
+   (LP: #1797199)
+ - tools: Add cloud-id command line utility
+ - instance-data: Add standard keys platform and subplatform. Refactor ec2.
+ - net: ignore nics that have "zero" mac address. (LP: #1796917)
+ - tests: fix apt_configure_primary to be more flexible
+ - Ubuntu: update sources.list to comment out deb-src entries. (LP: #74747)
+
 18.4:
  - add rtd example docs about new standardized keys
  - use ds._crawled_metadata instance attribute if set when writing
diff --git a/HACKING.rst b/HACKING.rst
index 3bb555c..fcdfa4f 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -11,10 +11,10 @@ Do these things once
 
 * To contribute, you must sign the Canonical `contributor license agreement`_
 
-  If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group.  Unfortunately there is no easy way to check if an organization or company you are doing work for has signed.  If you are unsure or have questions, email `Scott Moser <mailto:scott.moser@xxxxxxxxxxxxx>`_ or ping smoser in ``#cloud-init`` channel via freenode.
+  If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group.  Unfortunately there is no easy way to check if an organization or company you are doing work for has signed.  If you are unsure or have questions, email `Josh Powers <mailto:josh.powers@xxxxxxxxxxxxx>`_ or ping powersj in ``#cloud-init`` channel via freenode.
 
   When prompted for 'Project contact' or 'Canonical Project Manager' enter
-  'Scott Moser'.
+  'Josh Powers'.
 
 * Configure git with your email and name for commit messages.
 
diff --git a/bash_completion/cloud-init b/bash_completion/cloud-init
index 8c25032..a9577e9 100644
--- a/bash_completion/cloud-init
+++ b/bash_completion/cloud-init
@@ -30,7 +30,10 @@ _cloudinit_complete()
                 devel)
                     COMPREPLY=($(compgen -W "--help schema net-convert" -- $cur_word))
                     ;;
-                dhclient-hook|features)
+                dhclient-hook)
+                    COMPREPLY=($(compgen -W "--help up down" -- $cur_word))
+                    ;;
+                features)
                     COMPREPLY=($(compgen -W "--help" -- $cur_word))
                     ;;
                 init)
diff --git a/cloudinit/cmd/devel/logs.py b/cloudinit/cmd/devel/logs.py
index df72520..4c086b5 100644
--- a/cloudinit/cmd/devel/logs.py
+++ b/cloudinit/cmd/devel/logs.py
@@ -5,14 +5,16 @@
 """Define 'collect-logs' utility and handler to include in cloud-init cmd."""
 
 import argparse
-from cloudinit.util import (
-    ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file)
-from cloudinit.temp_utils import tempdir
 from datetime import datetime
 import os
 import shutil
 import sys
 
+from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE
+from cloudinit.temp_utils import tempdir
+from cloudinit.util import (
+    ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file)
+
 
 CLOUDINIT_LOGS = ['/var/log/cloud-init.log', '/var/log/cloud-init-output.log']
 CLOUDINIT_RUN_DIR = '/run/cloud-init'
@@ -46,6 +48,13 @@ def get_parser(parser=None):
     return parser
 
 
+def _copytree_ignore_sensitive_files(curdir, files):
+    """Return a list of files to ignore if we are non-root"""
+    if os.getuid() == 0:
+        return ()
+    return (INSTANCE_JSON_SENSITIVE_FILE,)  # Ignore root-permissioned files
+
+
 def _write_command_output_to_file(cmd, filename, msg, verbosity):
     """Helper which runs a command and writes output or error to filename."""
     try:
@@ -78,6 +87,11 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
     @param tarfile: The path of the tar-gzipped file to create.
     @param include_userdata: Boolean, true means include user-data.
     """
+    if include_userdata and os.getuid() != 0:
+        sys.stderr.write(
+            "To include userdata, root user is required."
+            " Try sudo cloud-init collect-logs\n")
+        return 1
     tarfile = os.path.abspath(tarfile)
     date = datetime.utcnow().date().strftime('%Y-%m-%d')
     log_dir = 'cloud-init-logs-{0}'.format(date)
@@ -110,7 +124,8 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
         ensure_dir(run_dir)
         if os.path.exists(CLOUDINIT_RUN_DIR):
             shutil.copytree(CLOUDINIT_RUN_DIR,
-                            os.path.join(run_dir, 'cloud-init'))
+                            os.path.join(run_dir, 'cloud-init'),
+                            ignore=_copytree_ignore_sensitive_files)
             _debug("collected dir %s\n" % CLOUDINIT_RUN_DIR, 1, verbosity)
         else:
             _debug("directory '%s' did not exist\n" % CLOUDINIT_RUN_DIR, 1,
@@ -118,21 +133,21 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
         with chdir(tmp_dir):
             subp(['tar', 'czvf', tarfile, log_dir.replace(tmp_dir + '/', '')])
     sys.stderr.write("Wrote %s\n" % tarfile)
+    return 0
 
 
 def handle_collect_logs_args(name, args):
     """Handle calls to 'cloud-init collect-logs' as a subcommand."""
-    collect_logs(args.tarfile, args.userdata, args.verbosity)
+    return collect_logs(args.tarfile, args.userdata, args.verbosity)
 
 
 def main():
     """Tool to collect and tar all cloud-init related logs."""
     parser = get_parser()
-    handle_collect_logs_args('collect-logs', parser.parse_args())
-    return 0
+    return handle_collect_logs_args('collect-logs', parser.parse_args())
 
 
 if __name__ == '__main__':
-    main()
+    sys.exit(main())
 
 # vi: ts=4 expandtab
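[Editor's note: the `_copytree_ignore_sensitive_files` hook in the logs.py hunk above relies on `shutil.copytree`'s `ignore` callable, which is invoked once per visited directory with the directory path and its entry names, and returns the names to skip. A minimal standalone sketch, with file names assumed for illustration:

```python
import os
import shutil
import tempfile

SENSITIVE_FILE = 'instance-data-sensitive.json'  # name used for illustration

def ignore_sensitive(curdir, names):
    # shutil.copytree calls this for each directory; returned names are skipped.
    return tuple(n for n in names if n == SENSITIVE_FILE)

src = tempfile.mkdtemp()
for name in ('results.json', SENSITIVE_FILE):
    with open(os.path.join(src, name), 'w') as f:
        f.write('data')

dst = os.path.join(tempfile.mkdtemp(), 'cloud-init')
shutil.copytree(src, dst, ignore=ignore_sensitive)

print(sorted(os.listdir(dst)))  # ['results.json']
```

The same callable shape lets the real code decide at copy time (via `os.getuid()`) whether the sensitive file should be excluded.]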
diff --git a/cloudinit/cmd/devel/net_convert.py b/cloudinit/cmd/devel/net_convert.py
index a0f58a0..1ad7e0b 100755
--- a/cloudinit/cmd/devel/net_convert.py
+++ b/cloudinit/cmd/devel/net_convert.py
@@ -9,6 +9,7 @@ import yaml
 
 from cloudinit.sources.helpers import openstack
 from cloudinit.sources import DataSourceAzure as azure
+from cloudinit.sources import DataSourceOVF as ovf
 
 from cloudinit import distros
 from cloudinit.net import eni, netplan, network_state, sysconfig
@@ -31,7 +32,7 @@ def get_parser(parser=None):
                         metavar="PATH", required=True)
     parser.add_argument("-k", "--kind",
                         choices=['eni', 'network_data.json', 'yaml',
-                                 'azure-imds'],
+                                 'azure-imds', 'vmware-imc'],
                         required=True)
     parser.add_argument("-d", "--directory",
                         metavar="PATH",
@@ -76,7 +77,6 @@ def handle_args(name, args):
     net_data = args.network_data.read()
     if args.kind == "eni":
         pre_ns = eni.convert_eni_data(net_data)
-        ns = network_state.parse_net_config_data(pre_ns)
     elif args.kind == "yaml":
         pre_ns = yaml.load(net_data)
         if 'network' in pre_ns:
@@ -85,15 +85,16 @@ def handle_args(name, args):
             sys.stderr.write('\n'.join(
                 ["Input YAML",
                  yaml.dump(pre_ns, default_flow_style=False, indent=4), ""]))
-        ns = network_state.parse_net_config_data(pre_ns)
     elif args.kind == 'network_data.json':
         pre_ns = openstack.convert_net_json(
             json.loads(net_data), known_macs=known_macs)
-        ns = network_state.parse_net_config_data(pre_ns)
     elif args.kind == 'azure-imds':
         pre_ns = azure.parse_network_config(json.loads(net_data))
-        ns = network_state.parse_net_config_data(pre_ns)
+    elif args.kind == 'vmware-imc':
+        config = ovf.Config(ovf.ConfigFile(args.network_data.name))
+        pre_ns = ovf.get_network_config_from_conf(config, False)
 
+    ns = network_state.parse_net_config_data(pre_ns)
     if not ns:
         raise RuntimeError("No valid network_state object created from"
                            "input data")
@@ -111,6 +112,10 @@ def handle_args(name, args):
     elif args.output_kind == "netplan":
         r_cls = netplan.Renderer
         config = distro.renderer_configs.get('netplan')
+        # don't run netplan generate/apply
+        config['postcmds'] = False
+        # trim leading slash
+        config['netplan_path'] = config['netplan_path'][1:]
     else:
         r_cls = sysconfig.Renderer
         config = distro.renderer_configs.get('sysconfig')
diff --git a/cloudinit/cmd/devel/render.py b/cloudinit/cmd/devel/render.py
index 2ba6b68..1bc2240 100755
--- a/cloudinit/cmd/devel/render.py
+++ b/cloudinit/cmd/devel/render.py
@@ -8,11 +8,10 @@ import sys
 
 from cloudinit.handlers.jinja_template import render_jinja_payload_from_file
 from cloudinit import log
-from cloudinit.sources import INSTANCE_JSON_FILE
+from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE
 from . import addLogHandlerCLI, read_cfg_paths
 
 NAME = 'render'
-DEFAULT_INSTANCE_DATA = '/run/cloud-init/instance-data.json'
 
 LOG = log.getLogger(NAME)
 
@@ -47,12 +46,22 @@ def handle_args(name, args):
     @return 0 on success, 1 on failure.
     """
     addLogHandlerCLI(LOG, log.DEBUG if args.debug else log.WARNING)
-    if not args.instance_data:
-        paths = read_cfg_paths()
-        instance_data_fn = os.path.join(
-            paths.run_dir, INSTANCE_JSON_FILE)
-    else:
+    if args.instance_data:
         instance_data_fn = args.instance_data
+    else:
+        paths = read_cfg_paths()
+        uid = os.getuid()
+        redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE)
+        if uid == 0:
+            instance_data_fn = os.path.join(
+                paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+            if not os.path.exists(instance_data_fn):
+                LOG.warning(
+                     'Missing root-readable %s. Using redacted %s instead.',
+                     instance_data_fn, redacted_data_fn)
+                instance_data_fn = redacted_data_fn
+        else:
+            instance_data_fn = redacted_data_fn
     if not os.path.exists(instance_data_fn):
         LOG.error('Missing instance-data.json file: %s', instance_data_fn)
         return 1
@@ -62,10 +71,14 @@ def handle_args(name, args):
     except IOError:
         LOG.error('Missing user-data file: %s', args.user_data)
         return 1
-    rendered_payload = render_jinja_payload_from_file(
-        payload=user_data, payload_fn=args.user_data,
-        instance_data_file=instance_data_fn,
-        debug=True if args.debug else False)
+    try:
+        rendered_payload = render_jinja_payload_from_file(
+            payload=user_data, payload_fn=args.user_data,
+            instance_data_file=instance_data_fn,
+            debug=True if args.debug else False)
+    except RuntimeError as e:
+        LOG.error('Cannot render from instance data: %s', str(e))
+        return 1
     if not rendered_payload:
         LOG.error('Unable to render user-data file: %s', args.user_data)
         return 1
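[Editor's note: the render.py hunk above selects the instance-data path by privilege: root prefers the sensitive file and falls back, with a warning, to the redacted world-readable one; non-root always gets the redacted file. That selection logic can be sketched on its own; the helper name is invented and the file names mirror cloud-init's constants:

```python
import os

# Values assumed for illustration; cloud-init defines these in cloudinit.sources.
INSTANCE_JSON_FILE = 'instance-data.json'
INSTANCE_JSON_SENSITIVE_FILE = 'instance-data-sensitive.json'

def pick_instance_data(run_dir, uid, warn=print):
    """Choose which instance-data file to read for the given uid."""
    redacted = os.path.join(run_dir, INSTANCE_JSON_FILE)
    if uid != 0:
        return redacted  # non-root never reads the sensitive file
    sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
    if os.path.exists(sensitive):
        return sensitive
    warn('Missing root-readable %s. Using redacted %s instead.'
         % (sensitive, redacted))
    return redacted

# Non-root always gets the redacted path:
print(pick_instance_data('/run/cloud-init', uid=1000))
```

]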
diff --git a/cloudinit/cmd/devel/tests/test_logs.py b/cloudinit/cmd/devel/tests/test_logs.py
index 98b4756..4951797 100644
--- a/cloudinit/cmd/devel/tests/test_logs.py
+++ b/cloudinit/cmd/devel/tests/test_logs.py
@@ -1,13 +1,17 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.cmd.devel import logs
-from cloudinit.util import ensure_dir, load_file, subp, write_file
-from cloudinit.tests.helpers import FilesystemMockingTestCase, wrap_and_call
 from datetime import datetime
-import mock
 import os
+from six import StringIO
+
+from cloudinit.cmd.devel import logs
+from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE
+from cloudinit.tests.helpers import (
+    FilesystemMockingTestCase, mock, wrap_and_call)
+from cloudinit.util import ensure_dir, load_file, subp, write_file
 
 
+@mock.patch('cloudinit.cmd.devel.logs.os.getuid')
 class TestCollectLogs(FilesystemMockingTestCase):
 
     def setUp(self):
@@ -15,14 +19,29 @@ class TestCollectLogs(FilesystemMockingTestCase):
         self.new_root = self.tmp_dir()
         self.run_dir = self.tmp_path('run', self.new_root)
 
-    def test_collect_logs_creates_tarfile(self):
+    def test_collect_logs_with_userdata_requires_root_user(self, m_getuid):
+        """collect-logs errors when a non-root user collects userdata."""
+        m_getuid.return_value = 100  # non-root
+        output_tarfile = self.tmp_path('logs.tgz')
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            self.assertEqual(
+                1, logs.collect_logs(output_tarfile, include_userdata=True))
+        self.assertEqual(
+            'To include userdata, root user is required.'
+            ' Try sudo cloud-init collect-logs\n',
+            m_stderr.getvalue())
+
+    def test_collect_logs_creates_tarfile(self, m_getuid):
         """collect-logs creates a tarfile with all related cloud-init info."""
+        m_getuid.return_value = 100
         log1 = self.tmp_path('cloud-init.log', self.new_root)
         write_file(log1, 'cloud-init-log')
         log2 = self.tmp_path('cloud-init-output.log', self.new_root)
         write_file(log2, 'cloud-init-output-log')
         ensure_dir(self.run_dir)
         write_file(self.tmp_path('results.json', self.run_dir), 'results')
+        write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir),
+                   'sensitive')
         output_tarfile = self.tmp_path('logs.tgz')
 
         date = datetime.utcnow().date().strftime('%Y-%m-%d')
@@ -59,6 +78,11 @@ class TestCollectLogs(FilesystemMockingTestCase):
         # unpack the tarfile and check file contents
         subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root])
         out_logdir = self.tmp_path(date_logdir, self.new_root)
+        self.assertFalse(
+            os.path.exists(
+                os.path.join(out_logdir, 'run', 'cloud-init',
+                             INSTANCE_JSON_SENSITIVE_FILE)),
+            'Unexpected file found: %s' % INSTANCE_JSON_SENSITIVE_FILE)
         self.assertEqual(
             '0.7fake\n',
             load_file(os.path.join(out_logdir, 'dpkg-version')))
@@ -82,8 +106,9 @@ class TestCollectLogs(FilesystemMockingTestCase):
                 os.path.join(out_logdir, 'run', 'cloud-init', 'results.json')))
         fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
 
-    def test_collect_logs_includes_optional_userdata(self):
+    def test_collect_logs_includes_optional_userdata(self, m_getuid):
         """collect-logs include userdata when --include-userdata is set."""
+        m_getuid.return_value = 0
         log1 = self.tmp_path('cloud-init.log', self.new_root)
         write_file(log1, 'cloud-init-log')
         log2 = self.tmp_path('cloud-init-output.log', self.new_root)
@@ -92,6 +117,8 @@ class TestCollectLogs(FilesystemMockingTestCase):
         write_file(userdata, 'user-data')
         ensure_dir(self.run_dir)
         write_file(self.tmp_path('results.json', self.run_dir), 'results')
+        write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir),
+                   'sensitive')
         output_tarfile = self.tmp_path('logs.tgz')
 
         date = datetime.utcnow().date().strftime('%Y-%m-%d')
@@ -132,4 +159,8 @@ class TestCollectLogs(FilesystemMockingTestCase):
         self.assertEqual(
             'user-data',
             load_file(os.path.join(out_logdir, 'user-data.txt')))
+        self.assertEqual(
+            'sensitive',
+            load_file(os.path.join(out_logdir, 'run', 'cloud-init',
+                                   INSTANCE_JSON_SENSITIVE_FILE)))
         fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
diff --git a/cloudinit/cmd/devel/tests/test_render.py b/cloudinit/cmd/devel/tests/test_render.py
index fc5d2c0..988bba0 100644
--- a/cloudinit/cmd/devel/tests/test_render.py
+++ b/cloudinit/cmd/devel/tests/test_render.py
@@ -6,7 +6,7 @@ import os
 from collections import namedtuple
 from cloudinit.cmd.devel import render
 from cloudinit.helpers import Paths
-from cloudinit.sources import INSTANCE_JSON_FILE
+from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE
 from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJinja
 from cloudinit.util import ensure_dir, write_file
 
@@ -63,6 +63,49 @@ class TestRender(CiTestCase):
             'Missing instance-data.json file: %s' % json_file,
             self.logs.getvalue())
 
+    def test_handle_args_root_fallback_from_sensitive_instance_data(self):
+        """When root user and sensitive.json is absent, fall back to redacted json."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        args = self.args(
+            user_data=user_data, instance_data=None, debug=False)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            with mock.patch('os.getuid') as m_getuid:
+                m_getuid.return_value = 0
+                self.assertEqual(1, render.handle_args('anyname', args))
+        json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
+        json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        self.assertIn(
+            'WARNING: Missing root-readable %s. Using redacted %s' % (
+                json_sensitive, json_file), self.logs.getvalue())
+        self.assertIn(
+            'ERROR: Missing instance-data.json file: %s' % json_file,
+            self.logs.getvalue())
+
+    def test_handle_args_root_uses_sensitive_instance_data(self):
+        """When root user, and no instance-data arg, use sensitive.json."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        write_file(user_data, '##template: jinja\nrendering: {{ my_var }}')
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        write_file(json_sensitive, '{"my-var": "jinja worked"}')
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        args = self.args(
+            user_data=user_data, instance_data=None, debug=False)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                with mock.patch('os.getuid') as m_getuid:
+                    m_getuid.return_value = 0
+                    self.assertEqual(0, render.handle_args('anyname', args))
+        self.assertIn('rendering: jinja worked', m_stdout.getvalue())
+
     @skipUnlessJinja()
     def test_handle_args_renders_instance_data_vars_in_template(self):
         """If user_data file is a jinja template render instance-data vars."""
diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
index 5a43702..933c019 100644
--- a/cloudinit/cmd/main.py
+++ b/cloudinit/cmd/main.py
@@ -41,7 +41,7 @@ from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE,
 from cloudinit import atomic_helper
 
 from cloudinit.config import cc_set_hostname
-from cloudinit.dhclient_hook import LogDhclient
+from cloudinit import dhclient_hook
 
 
 # Welcome message template
@@ -586,12 +586,6 @@ def main_single(name, args):
         return 0
 
 
-def dhclient_hook(name, args):
-    record = LogDhclient(args)
-    record.check_hooks_dir()
-    record.record()
-
-
 def status_wrapper(name, args, data_d=None, link_d=None):
     if data_d is None:
         data_d = os.path.normpath("/var/lib/cloud/data")
@@ -795,15 +789,9 @@ def main(sysv_args=None):
         'query',
         help='Query standardized instance metadata from the command line.')
 
-    parser_dhclient = subparsers.add_parser('dhclient-hook',
-                                            help=('run the dhclient hook'
-                                                  'to record network info'))
-    parser_dhclient.add_argument("net_action",
-                                 help=('action taken on the interface'))
-    parser_dhclient.add_argument("net_interface",
-                                 help=('the network interface being acted'
-                                       ' upon'))
-    parser_dhclient.set_defaults(action=('dhclient_hook', dhclient_hook))
+    parser_dhclient = subparsers.add_parser(
+        dhclient_hook.NAME, help=dhclient_hook.__doc__)
+    dhclient_hook.get_parser(parser_dhclient)
 
     parser_features = subparsers.add_parser('features',
                                             help=('list defined features'))
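[Editor's note: the main.py hunk above replaces an inline dhclient-hook handler with the pattern cloud-init uses for subcommands: the subcommand module exposes `NAME`, a module docstring for help text, and a `get_parser()` that attaches its arguments and default `action`. A self-contained sketch of that delegation, with a stand-in module namespace and illustrative arguments:

```python
import argparse

# Minimal stand-in for a subcommand module such as cloudinit.dhclient_hook.
NAME = 'dhclient-hook'

def get_parser(parser=None):
    """Attach dhclient-hook arguments and a default action to a parser."""
    if parser is None:
        parser = argparse.ArgumentParser(prog=NAME)
    parser.add_argument('event', choices=['up', 'down'])
    parser.add_argument('interface')
    parser.set_defaults(action=(NAME, lambda name, args: 0))
    return parser

# The top-level CLI only needs the module's NAME and get_parser():
top = argparse.ArgumentParser(prog='cloud-init')
subparsers = top.add_subparsers()
sub = subparsers.add_parser(NAME, help='run the dhclient hook')
get_parser(sub)

args = top.parse_args([NAME, 'up', 'eth0'])
print(args.event, args.interface)  # up eth0
```

This keeps argument definitions next to the code that handles them, rather than in `main.py`.]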
diff --git a/cloudinit/cmd/query.py b/cloudinit/cmd/query.py
index 7d2d4fe..1d888b9 100644
--- a/cloudinit/cmd/query.py
+++ b/cloudinit/cmd/query.py
@@ -3,6 +3,7 @@
 """Query standardized instance metadata from the command line."""
 
 import argparse
+from errno import EACCES
 import os
 import six
 import sys
@@ -79,27 +80,38 @@ def handle_args(name, args):
     uid = os.getuid()
     if not all([args.instance_data, args.user_data, args.vendor_data]):
         paths = read_cfg_paths()
-    if not args.instance_data:
+    if args.instance_data:
+        instance_data_fn = args.instance_data
+    else:
+        redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE)
         if uid == 0:
-            default_json_fn = INSTANCE_JSON_SENSITIVE_FILE
+            sensitive_data_fn = os.path.join(
+                paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+            if os.path.exists(sensitive_data_fn):
+                instance_data_fn = sensitive_data_fn
+            else:
+                LOG.warning(
+                     'Missing root-readable %s. Using redacted %s instead.',
+                     sensitive_data_fn, redacted_data_fn)
+                instance_data_fn = redacted_data_fn
         else:
-            default_json_fn = INSTANCE_JSON_FILE  # World readable
-        instance_data_fn = os.path.join(paths.run_dir, default_json_fn)
+            instance_data_fn = redacted_data_fn
+    if args.user_data:
+        user_data_fn = args.user_data
     else:
-        instance_data_fn = args.instance_data
-    if not args.user_data:
         user_data_fn = os.path.join(paths.instance_link, 'user-data.txt')
+    if args.vendor_data:
+        vendor_data_fn = args.vendor_data
     else:
-        user_data_fn = args.user_data
-    if not args.vendor_data:
         vendor_data_fn = os.path.join(paths.instance_link, 'vendor-data.txt')
-    else:
-        vendor_data_fn = args.vendor_data
 
     try:
         instance_json = util.load_file(instance_data_fn)
-    except IOError:
-        LOG.error('Missing instance-data.json file: %s', instance_data_fn)
+    except (IOError, OSError) as e:
+        if e.errno == EACCES:
+            LOG.error("No read permission on '%s'. Try sudo", instance_data_fn)
+        else:
+            LOG.error('Missing instance-data file: %s', instance_data_fn)
         return 1
 
     instance_data = util.load_json(instance_json)
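[Editor's note: the query.py hunk above distinguishes "file missing" from "permission denied" by inspecting `errno` on the caught exception, since both surface as `IOError`/`OSError`. A standalone sketch of that discrimination; the message strings mirror the patch, while the helper names are invented for illustration:

```python
import errno

def read_error_message(path, exc):
    # EACCES means the file exists but we lack permission; suggest sudo.
    if exc.errno == errno.EACCES:
        return "No read permission on '%s'. Try sudo" % path
    return 'Missing instance-data file: %s' % path

def load_instance_data(path):
    try:
        with open(path) as f:
            return f.read()
    except (IOError, OSError) as e:
        print('ERROR: ' + read_error_message(path, e))
        return None

load_instance_data('/definitely/not/there.json')  # prints the "Missing" error
```

]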
diff --git a/cloudinit/cmd/tests/test_query.py b/cloudinit/cmd/tests/test_query.py
index fb87c6a..28738b1 100644
--- a/cloudinit/cmd/tests/test_query.py
+++ b/cloudinit/cmd/tests/test_query.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import errno
 from six import StringIO
 from textwrap import dedent
 import os
@@ -7,7 +8,8 @@ import os
 from collections import namedtuple
 from cloudinit.cmd import query
 from cloudinit.helpers import Paths
-from cloudinit.sources import REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE
+from cloudinit.sources import (
+    REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE)
 from cloudinit.tests.helpers import CiTestCase, mock
 from cloudinit.util import ensure_dir, write_file
 
@@ -50,10 +52,28 @@ class TestQuery(CiTestCase):
         with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
             self.assertEqual(1, query.handle_args('anyname', args))
         self.assertIn(
-            'ERROR: Missing instance-data.json file: %s' % absent_fn,
+            'ERROR: Missing instance-data file: %s' % absent_fn,
             self.logs.getvalue())
         self.assertIn(
-            'ERROR: Missing instance-data.json file: %s' % absent_fn,
+            'ERROR: Missing instance-data file: %s' % absent_fn,
+            m_stderr.getvalue())
+
+    def test_handle_args_error_when_no_read_permission_instance_data(self):
+        """When instance_data file is unreadable, log an error."""
+        noread_fn = self.tmp_path('unreadable', dir=self.tmp)
+        write_file(noread_fn, 'thou shall not pass')
+        args = self.args(
+            debug=False, dump_all=True, format=None, instance_data=noread_fn,
+            list_keys=False, user_data='ud', vendor_data='vd', varname=None)
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            with mock.patch('cloudinit.cmd.query.util.load_file') as m_load:
+                m_load.side_effect = OSError(errno.EACCES, 'Not allowed')
+                self.assertEqual(1, query.handle_args('anyname', args))
+        self.assertIn(
+            "ERROR: No read permission on '%s'. Try sudo" % noread_fn,
+            self.logs.getvalue())
+        self.assertIn(
+            "ERROR: No read permission on '%s'. Try sudo" % noread_fn,
             m_stderr.getvalue())
 
     def test_handle_args_defaults_instance_data(self):
@@ -70,12 +90,58 @@ class TestQuery(CiTestCase):
             self.assertEqual(1, query.handle_args('anyname', args))
         json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
         self.assertIn(
-            'ERROR: Missing instance-data.json file: %s' % json_file,
+            'ERROR: Missing instance-data file: %s' % json_file,
             self.logs.getvalue())
         self.assertIn(
-            'ERROR: Missing instance-data.json file: %s' % json_file,
+            'ERROR: Missing instance-data file: %s' % json_file,
             m_stderr.getvalue())
 
+    def test_handle_args_root_fallsback_to_instance_data(self):
+        """When no instance_data argument, root falls back to redacted json."""
+        args = self.args(
+            debug=False, dump_all=True, format=None, instance_data=None,
+            list_keys=False, user_data=None, vendor_data=None, varname=None)
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            with mock.patch('os.getuid') as m_getuid:
+                m_getuid.return_value = 0
+                self.assertEqual(1, query.handle_args('anyname', args))
+        json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
+        sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        self.assertIn(
+            'WARNING: Missing root-readable %s. Using redacted %s instead.' % (
+                sensitive_file, json_file),
+            m_stderr.getvalue())
+
+    def test_handle_args_root_uses_instance_sensitive_data(self):
+        """When no instance_data argument, root uses sensitive json."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        vendor_data = self.tmp_path('vendor-data', dir=self.tmp)
+        write_file(user_data, 'ud')
+        write_file(vendor_data, 'vd')
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        write_file(sensitive_file, '{"my-var": "it worked"}')
+        ensure_dir(run_dir)
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        args = self.args(
+            debug=False, dump_all=True, format=None, instance_data=None,
+            list_keys=False, user_data=user_data, vendor_data=vendor_data,
+            varname=None)
+        with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+            with mock.patch('os.getuid') as m_getuid:
+                m_getuid.return_value = 0
+                self.assertEqual(0, query.handle_args('anyname', args))
+        self.assertEqual(
+            '{\n "my_var": "it worked",\n "userdata": "ud",\n '
+            '"vendordata": "vd"\n}\n', m_stdout.getvalue())
+
     def test_handle_args_dumps_all_instance_data(self):
         """When --all is specified query will dump all instance data vars."""
         write_file(self.instance_data, '{"my-var": "it worked"}')
diff --git a/cloudinit/config/cc_disk_setup.py b/cloudinit/config/cc_disk_setup.py
index 943089e..29e192e 100644
--- a/cloudinit/config/cc_disk_setup.py
+++ b/cloudinit/config/cc_disk_setup.py
@@ -743,7 +743,7 @@ def assert_and_settle_device(device):
         util.udevadm_settle()
         if not os.path.exists(device):
             raise RuntimeError("Device %s did not exist and was not created "
-                               "with a udevamd settle." % device)
+                               "with a udevadm settle." % device)
 
     # Whether or not the device existed above, it is possible that udev
     # events that would populate udev database (for reading by lsdname) have
diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
index 24a8ebe..71d13ed 100644
--- a/cloudinit/config/cc_lxd.py
+++ b/cloudinit/config/cc_lxd.py
@@ -89,7 +89,7 @@ def handle(name, cfg, cloud, log, args):
         packages.append('lxd')
 
     if init_cfg.get("storage_backend") == "zfs" and not util.which('zfs'):
-        packages.append('zfs')
+        packages.append('zfsutils-linux')
 
     if len(packages):
         try:
diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
index 2edddd0..076b9d5 100644
--- a/cloudinit/config/cc_resizefs.py
+++ b/cloudinit/config/cc_resizefs.py
@@ -197,6 +197,13 @@ def maybe_get_writable_device_path(devpath, info, log):
     if devpath.startswith('gpt/'):
         log.debug('We have a gpt label - just go ahead')
         return devpath
+    # Alternatively, our device could simply be a name as returned by gpart,
+    # such as da0p3
+    if not devpath.startswith('/dev/') and not os.path.exists(devpath):
+        fulldevpath = '/dev/' + devpath.lstrip('/')
+        log.debug("'%s' doesn't appear to be a valid device path. Trying '%s'",
+                  devpath, fulldevpath)
+        devpath = fulldevpath
 
     try:
         statret = os.stat(devpath)
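The new fallback in maybe_get_writable_device_path can be exercised in isolation; this sketch uses a hypothetical function name and an injected `exists` hook (stand-in for os.path.exists) so it is testable without real devices:

```python
import os

def maybe_prefix_dev(devpath, exists=os.path.exists):
    # Sketch of the new fallback: a bare gpart-style name such as 'da0p3'
    # gets '/dev/' prepended when the given path does not already exist.
    if not devpath.startswith('/dev/') and not exists(devpath):
        return '/dev/' + devpath.lstrip('/')
    return devpath

# Bare device names are normalized; real /dev paths pass through untouched.
assert maybe_prefix_dev('da0p3', exists=lambda p: False) == '/dev/da0p3'
assert maybe_prefix_dev('/dev/sda1', exists=lambda p: True) == '/dev/sda1'
```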
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index 5ef9737..4585e4d 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -160,7 +160,7 @@ def handle(_name, cfg, cloud, log, args):
         hashed_users = []
         randlist = []
         users = []
-        prog = re.compile(r'\$[1,2a,2y,5,6](\$.+){2}')
+        prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
         for line in plist:
             u, p = line.split(':', 1)
             if prog.match(p) is not None and ":" not in p:
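This regex change is the LP: #1811446 fix. The old pattern used a character class, which matches exactly one character from the set {1, ',', 2, a, y, 5, 6}, so the two-character bcrypt scheme ids $2a$/$2y$ could never match and those hashes were treated as plain text. A quick check:

```python
import re

OLD = re.compile(r'\$[1,2a,2y,5,6](\$.+){2}')  # one char from {1 , 2 a y 5 6}
NEW = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')  # the actual scheme ids

bcrypt_hash = '$2y$10$abcdefghijklmnopqrstuv'

# Old: the class consumes only '2', then demands '$' where 'y' sits, so
# valid bcrypt hashes failed to match and were re-hashed as plain text.
assert OLD.match(bcrypt_hash) is None
assert NEW.match(bcrypt_hash) is not None

# Old: a literal ',' also counted as a valid scheme id.
assert OLD.match('$,$salt$hash') is not None
assert NEW.match('$,$salt$hash') is None
```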
diff --git a/cloudinit/config/cc_write_files.py b/cloudinit/config/cc_write_files.py
index 31d1db6..0b6546e 100644
--- a/cloudinit/config/cc_write_files.py
+++ b/cloudinit/config/cc_write_files.py
@@ -49,6 +49,10 @@ binary gzip data can be specified and will be decoded before being written.
             ...
           path: /bin/arch
           permissions: '0555'
+        - content: |
+            15 * * * * root ship_logs
+          path: /etc/crontab
+          append: true
 """
 
 import base64
@@ -113,7 +117,8 @@ def write_files(name, files):
         contents = extract_contents(f_info.get('content', ''), extractions)
         (u, g) = util.extract_usergroup(f_info.get('owner', DEFAULT_OWNER))
         perms = decode_perms(f_info.get('permissions'), DEFAULT_PERMS)
-        util.write_file(path, contents, mode=perms)
+        omode = 'ab' if util.get_cfg_option_bool(f_info, 'append') else 'wb'
+        util.write_file(path, contents, omode=omode, mode=perms)
         util.chownbyname(path, u, g)
 
 
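The append support reduces to choosing 'ab' over 'wb' when opening the target file. A minimal sketch; the write_file helper here is a simplified stand-in for util.write_file:

```python
import os
import tempfile

def write_file(path, content, omode='wb'):
    # Simplified stand-in for util.write_file: honor the open mode only.
    with open(path, omode) as f:
        f.write(content)

path = os.path.join(tempfile.mkdtemp(), 'crontab')
write_file(path, b'# existing\n')
# With 'append: true' in the write_files entry, the module now opens 'ab'.
write_file(path, b'15 * * * * root ship_logs\n', omode='ab')
with open(path, 'rb') as f:
    data = f.read()
assert data == b'# existing\n15 * * * * root ship_logs\n'
```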
diff --git a/cloudinit/config/tests/test_set_passwords.py b/cloudinit/config/tests/test_set_passwords.py
index b051ec8..a2ea5ec 100644
--- a/cloudinit/config/tests/test_set_passwords.py
+++ b/cloudinit/config/tests/test_set_passwords.py
@@ -68,4 +68,44 @@ class TestHandleSshPwauth(CiTestCase):
                 m_update.assert_called_with({optname: optval})
         m_subp.assert_not_called()
 
+
+class TestSetPasswordsHandle(CiTestCase):
+    """Test cc_set_passwords.handle"""
+
+    with_logs = True
+
+    def test_handle_on_empty_config(self):
+        """handle logs that no password has changed when config is empty."""
+        cloud = self.tmp_cloud(distro='ubuntu')
+        setpass.handle(
+            'IGNORED', cfg={}, cloud=cloud, log=self.logger, args=[])
+        self.assertEqual(
+            "DEBUG: Leaving ssh config 'PasswordAuthentication' unchanged. "
+            'ssh_pwauth=None\n',
+            self.logs.getvalue())
+
+    @mock.patch(MODPATH + "util.subp")
+    def test_handle_on_chpasswd_list_parses_common_hashes(self, m_subp):
+        """handle parses common password hashes."""
+        cloud = self.tmp_cloud(distro='ubuntu')
+        valid_hashed_pwds = [
+            'root:$2y$10$8BQjxjVByHA/Ee.O1bCXtO8S7Y5WojbXWqnqYpUW.BrPx/'
+            'Dlew1Va',
+            'ubuntu:$6$5hOurLPO$naywm3Ce0UlmZg9gG2Fl9acWCVEoakMMC7dR52q'
+            'SDexZbrN9z8yHxhUM2b.sxpguSwOlbOQSW/HpXazGGx3oo1']
+        cfg = {'chpasswd': {'list': valid_hashed_pwds}}
+        with mock.patch(MODPATH + 'util.subp') as m_subp:
+            setpass.handle(
+                'IGNORED', cfg=cfg, cloud=cloud, log=self.logger, args=[])
+        self.assertIn(
+            'DEBUG: Handling input for chpasswd as list.',
+            self.logs.getvalue())
+        self.assertIn(
+            "DEBUG: Setting hashed password for ['root', 'ubuntu']",
+            self.logs.getvalue())
+        self.assertEqual(
+            [mock.call(['chpasswd', '-e'],
+             '\n'.join(valid_hashed_pwds) + '\n')],
+            m_subp.call_args_list)
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/dhclient_hook.py b/cloudinit/dhclient_hook.py
index 7f02d7f..72b51b6 100644
--- a/cloudinit/dhclient_hook.py
+++ b/cloudinit/dhclient_hook.py
@@ -1,5 +1,8 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+"""Run the dhclient hook to record network info."""
+
+import argparse
 import os
 
 from cloudinit import atomic_helper
@@ -8,44 +11,75 @@ from cloudinit import stages
 
 LOG = logging.getLogger(__name__)
 
+NAME = "dhclient-hook"
+UP = "up"
+DOWN = "down"
+EVENTS = (UP, DOWN)
+
+
+def _get_hooks_dir():
+    i = stages.Init()
+    return os.path.join(i.paths.get_runpath(), 'dhclient.hooks')
+
+
+def _filter_env_vals(info):
+    """Given info (os.environ), return a dictionary with
+    lower case keys for each entry starting with DHCP4_ or new_."""
+    new_info = {}
+    for k, v in info.items():
+        if k.startswith("DHCP4_") or k.startswith("new_"):
+            key = (k.replace('DHCP4_', '').replace('new_', '')).lower()
+            new_info[key] = v
+    return new_info
+
+
+def run_hook(interface, event, data_d=None, env=None):
+    if event not in EVENTS:
+        raise ValueError("Unexpected event '%s'. Expected one of: %s" %
+                         (event, EVENTS))
+    if data_d is None:
+        data_d = _get_hooks_dir()
+    if env is None:
+        env = os.environ
+    hook_file = os.path.join(data_d, interface + ".json")
+
+    if event == UP:
+        if not os.path.exists(data_d):
+            os.makedirs(data_d)
+        atomic_helper.write_json(hook_file, _filter_env_vals(env))
+        LOG.debug("Wrote dhclient options in %s", hook_file)
+    elif event == DOWN:
+        if os.path.exists(hook_file):
+            os.remove(hook_file)
+            LOG.debug("Removed dhclient options file %s", hook_file)
+
+
+def get_parser(parser=None):
+    if parser is None:
+        parser = argparse.ArgumentParser(prog=NAME, description=__doc__)
+    parser.add_argument(
+        "event", help='event taken on the interface', choices=EVENTS)
+    parser.add_argument(
+        "interface", help='the network interface being acted upon')
+    # cloud-init main uses 'action'
+    parser.set_defaults(action=(NAME, handle_args))
+    return parser
+
+
+def handle_args(name, args, data_d=None):
+    """Handle the Namespace args.
+    Takes 'name' as passed by cloud-init main; it is not used here."""
+    return run_hook(interface=args.interface, event=args.event, data_d=data_d)
+
+
+if __name__ == '__main__':
+    import sys
+    parser = get_parser()
+    args = parser.parse_args(args=sys.argv[1:])
+    return_value = handle_args(
+        NAME, args, data_d=os.environ.get('_CI_DHCP_HOOK_DATA_D'))
+    if return_value:
+        sys.exit(return_value)
 
-class LogDhclient(object):
-
-    def __init__(self, cli_args):
-        self.hooks_dir = self._get_hooks_dir()
-        self.net_interface = cli_args.net_interface
-        self.net_action = cli_args.net_action
-        self.hook_file = os.path.join(self.hooks_dir,
-                                      self.net_interface + ".json")
-
-    @staticmethod
-    def _get_hooks_dir():
-        i = stages.Init()
-        return os.path.join(i.paths.get_runpath(), 'dhclient.hooks')
-
-    def check_hooks_dir(self):
-        if not os.path.exists(self.hooks_dir):
-            os.makedirs(self.hooks_dir)
-        else:
-            # If the action is down and the json file exists, we need to
-            # delete the file
-            if self.net_action is 'down' and os.path.exists(self.hook_file):
-                os.remove(self.hook_file)
-
-    @staticmethod
-    def get_vals(info):
-        new_info = {}
-        for k, v in info.items():
-            if k.startswith("DHCP4_") or k.startswith("new_"):
-                key = (k.replace('DHCP4_', '').replace('new_', '')).lower()
-                new_info[key] = v
-        return new_info
-
-    def record(self):
-        envs = os.environ
-        if self.hook_file is None:
-            return
-        atomic_helper.write_json(self.hook_file, self.get_vals(envs))
-        LOG.debug("Wrote dhclient options in %s", self.hook_file)
 
 # vi: ts=4 expandtab
diff --git a/cloudinit/handlers/jinja_template.py b/cloudinit/handlers/jinja_template.py
index 3fa4097..ce3accf 100644
--- a/cloudinit/handlers/jinja_template.py
+++ b/cloudinit/handlers/jinja_template.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+from errno import EACCES
 import os
 import re
 
@@ -76,7 +77,14 @@ def render_jinja_payload_from_file(
         raise RuntimeError(
             'Cannot render jinja template vars. Instance data not yet'
             ' present at %s' % instance_data_file)
-    instance_data = load_json(load_file(instance_data_file))
+    try:
+        instance_data = load_json(load_file(instance_data_file))
+    except (IOError, OSError) as e:
+        if e.errno != EACCES:
+            raise
+        raise RuntimeError(
+            'Cannot render jinja template vars. No read permission on'
+            " '%s'. Try sudo" % instance_data_file)
     rendered_payload = render_jinja_payload(
         payload, payload_fn, instance_data, debug)
     if not rendered_payload:
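The errno-guarded handling can be sketched as follows (names hypothetical). Any non-EACCES error should propagate; otherwise instance_data would be left silently undefined and fail later with a NameError:

```python
from errno import EACCES, ENOENT

def load_instance_data(read):
    # Hypothetical wrapper: only permission errors become the friendly
    # RuntimeError; anything else propagates unchanged.
    try:
        return read()
    except (IOError, OSError) as e:
        if e.errno != EACCES:
            raise
        raise RuntimeError("No read permission on instance data. Try sudo")

def denied():
    raise OSError(EACCES, 'Permission denied')

def missing():
    raise OSError(ENOENT, 'No such file')

try:
    load_instance_data(denied)
except RuntimeError as e:
    msg = str(e)
assert 'Try sudo' in msg

try:
    load_instance_data(missing)
except OSError as e:
    passthrough_errno = e.errno
assert passthrough_errno == ENOENT
```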
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index ad98a59..3642fb1 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -12,6 +12,7 @@ import re
 
 from cloudinit.net.network_state import mask_to_net_prefix
 from cloudinit import util
+from cloudinit.url_helper import UrlError, readurl
 
 LOG = logging.getLogger(__name__)
 SYS_CLASS_NET = "/sys/class/net/"
@@ -647,16 +648,36 @@ def get_ib_hwaddrs_by_interface():
     return ret
 
 
+def has_url_connectivity(url):
+    """Return true when the instance has access to the provided URL
+
+    Logs a warning if the url is not in the expected format.
+    """
+    if not any([url.startswith('http://'), url.startswith('https://')]):
+        LOG.warning(
+            "Ignoring connectivity check. Expected URL beginning with http*://"
+            " received '%s'", url)
+        return False
+    try:
+        readurl(url, timeout=5)
+    except UrlError:
+        return False
+    return True
+
+
 class EphemeralIPv4Network(object):
     """Context manager which sets up temporary static network configuration.
 
-    No operations are performed if the provided interface is already connected.
+    No operations are performed if the provided interface already has the
+    specified configuration; when a connectivity_url is given, reaching
+    that URL is used as the check for an existing usable connection.
     If unconnected, bring up the interface with valid ip, prefix and broadcast.
     If router is provided setup a default route for that interface. Upon
     context exit, clean up the interface leaving no configuration behind.
     """
 
-    def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None):
+    def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None,
+                 connectivity_url=None):
         """Setup context manager and validate call signature.
 
         @param interface: Name of the network interface to bring up.
@@ -665,6 +686,8 @@ class EphemeralIPv4Network(object):
             prefix.
         @param broadcast: Broadcast address for the IPv4 network.
         @param router: Optionally the default gateway IP.
+        @param connectivity_url: Optionally, a URL to verify if a usable
+           connection already exists.
         """
         if not all([interface, ip, prefix_or_mask, broadcast]):
             raise ValueError(
@@ -675,6 +698,8 @@ class EphemeralIPv4Network(object):
         except ValueError as e:
             raise ValueError(
                 'Cannot setup network: {0}'.format(e))
+
+        self.connectivity_url = connectivity_url
         self.interface = interface
         self.ip = ip
         self.broadcast = broadcast
@@ -683,6 +708,13 @@ class EphemeralIPv4Network(object):
 
     def __enter__(self):
         """Perform ephemeral network setup if interface is not connected."""
+        if self.connectivity_url:
+            if has_url_connectivity(self.connectivity_url):
+                LOG.debug(
+                    'Skip ephemeral network setup, instance has connectivity'
+                    ' to %s', self.connectivity_url)
+                return
+
         self._bringup_device()
         if self.router:
             self._bringup_router()
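A sketch of the has_url_connectivity flow, with `probe` standing in for url_helper.readurl and a plain IOError standing in for UrlError so the sketch runs with no network access:

```python
def has_url_connectivity_sketch(url, probe):
    # `probe` is a stand-in for readurl; IOError stands in for UrlError.
    if not any([url.startswith('http://'), url.startswith('https://')]):
        return False  # non-http(s) URLs are never probed
    try:
        probe(url)
    except IOError:
        return False
    return True

def unreachable(url):
    raise IOError('connection refused')

assert not has_url_connectivity_sketch('ftp://example.com', unreachable)
assert not has_url_connectivity_sketch('http://169.254.169.254/', unreachable)
assert has_url_connectivity_sketch('https://example.com/', lambda u: b'ok')
```

This is the same short-circuit EphemeralIPv4Network.__enter__ uses: when the URL is already reachable, no ephemeral configuration is brought up at all.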
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
index 12cf509..c98a97c 100644
--- a/cloudinit/net/dhcp.py
+++ b/cloudinit/net/dhcp.py
@@ -9,9 +9,11 @@ import logging
 import os
 import re
 import signal
+import time
 
 from cloudinit.net import (
-    EphemeralIPv4Network, find_fallback_nic, get_devicelist)
+    EphemeralIPv4Network, find_fallback_nic, get_devicelist,
+    has_url_connectivity)
 from cloudinit.net.network_state import mask_and_ipv4_to_bcast_addr as bcip
 from cloudinit import temp_utils
 from cloudinit import util
@@ -37,37 +39,69 @@ class NoDHCPLeaseError(Exception):
 
 
 class EphemeralDHCPv4(object):
-    def __init__(self, iface=None):
+    def __init__(self, iface=None, connectivity_url=None):
         self.iface = iface
         self._ephipv4 = None
+        self.lease = None
+        self.connectivity_url = connectivity_url
 
     def __enter__(self):
+        """Set up a sandboxed dhcp context, unless connectivity_url can
+        already be reached."""
+        if self.connectivity_url:
+            if has_url_connectivity(self.connectivity_url):
+                LOG.debug(
+                    'Skip ephemeral DHCP setup, instance has connectivity'
+                    ' to %s', self.connectivity_url)
+                return
+        return self.obtain_lease()
+
+    def __exit__(self, excp_type, excp_value, excp_traceback):
+        """Tear down the sandboxed dhcp context."""
+        self.clean_network()
+
+    def clean_network(self):
+        """Exit the _ephipv4 context to tear down any ip configuration."""
+        if self.lease:
+            self.lease = None
+        if not self._ephipv4:
+            return
+        self._ephipv4.__exit__(None, None, None)
+
+    def obtain_lease(self):
+        """Perform dhcp discovery in a sandboxed environment if possible.
+
+        @return: A dict representing dhcp options from the most recent
+            lease obtained by dhclient discovery, if one was run; otherwise
+            an error is raised.
+
+        @raises: NoDHCPLeaseError if no leases could be obtained.
+        """
+        if self.lease:
+            return self.lease
         try:
             leases = maybe_perform_dhcp_discovery(self.iface)
         except InvalidDHCPLeaseFileError:
             raise NoDHCPLeaseError()
         if not leases:
             raise NoDHCPLeaseError()
-        lease = leases[-1]
+        self.lease = leases[-1]
         LOG.debug("Received dhcp lease on %s for %s/%s",
-                  lease['interface'], lease['fixed-address'],
-                  lease['subnet-mask'])
+                  self.lease['interface'], self.lease['fixed-address'],
+                  self.lease['subnet-mask'])
         nmap = {'interface': 'interface', 'ip': 'fixed-address',
                 'prefix_or_mask': 'subnet-mask',
                 'broadcast': 'broadcast-address',
                 'router': 'routers'}
-        kwargs = dict([(k, lease.get(v)) for k, v in nmap.items()])
+        kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()])
         if not kwargs['broadcast']:
             kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip'])
+        if self.connectivity_url:
+            kwargs['connectivity_url'] = self.connectivity_url
         ephipv4 = EphemeralIPv4Network(**kwargs)
         ephipv4.__enter__()
         self._ephipv4 = ephipv4
-        return lease
-
-    def __exit__(self, excp_type, excp_value, excp_traceback):
-        if not self._ephipv4:
-            return
-        self._ephipv4.__exit__(excp_type, excp_value, excp_traceback)
+        return self.lease
 
 
 def maybe_perform_dhcp_discovery(nic=None):
@@ -94,7 +128,9 @@ def maybe_perform_dhcp_discovery(nic=None):
     if not dhclient_path:
         LOG.debug('Skip dhclient configuration: No dhclient command found.')
         return []
-    with temp_utils.tempdir(prefix='cloud-init-dhcp-', needs_exe=True) as tdir:
+    with temp_utils.tempdir(rmtree_ignore_errors=True,
+                            prefix='cloud-init-dhcp-',
+                            needs_exe=True) as tdir:
         # Use /var/tmp because /run/cloud-init/tmp is mounted noexec
         return dhcp_discovery(dhclient_path, nic, tdir)
 
@@ -162,24 +198,39 @@ def dhcp_discovery(dhclient_cmd_path, interface, cleandir):
            '-pf', pid_file, interface, '-sf', '/bin/true']
     util.subp(cmd, capture=True)
 
-    # dhclient doesn't write a pid file until after it forks when it gets a
-    # proper lease response. Since cleandir is a temp directory that gets
-    # removed, we need to wait for that pidfile creation before the
-    # cleandir is removed, otherwise we get FileNotFound errors.
+    # Wait for pid file and lease file to appear, and for the process
+    # named by the pid file to daemonize (have pid 1 as its parent). If we
+    # try to read the lease file before daemonization happens, we might try
+    # to read it before the dhclient has actually written it. We also have
+    # to wait until the dhclient has become a daemon so we can be sure to
+    # kill the correct process, thus freeing cleandir to be deleted back
+    # up the callstack.
     missing = util.wait_for_files(
         [pid_file, lease_file], maxwait=5, naplen=0.01)
     if missing:
         LOG.warning("dhclient did not produce expected files: %s",
                     ', '.join(os.path.basename(f) for f in missing))
         return []
-    pid_content = util.load_file(pid_file).strip()
-    try:
-        pid = int(pid_content)
-    except ValueError:
-        LOG.debug(
-            "pid file contains non-integer content '%s'", pid_content)
-    else:
-        os.kill(pid, signal.SIGKILL)
+
+    ppid = 'unknown'
+    for _ in range(0, 1000):
+        pid_content = util.load_file(pid_file).strip()
+        try:
+            pid = int(pid_content)
+        except ValueError:
+            pass
+        else:
+            ppid = util.get_proc_ppid(pid)
+            if ppid == 1:
+                LOG.debug('killing dhclient with pid=%s', pid)
+                os.kill(pid, signal.SIGKILL)
+                return parse_dhcp_lease_file(lease_file)
+        time.sleep(0.01)
+
+    LOG.error(
+        'dhclient(pid=%s, parentpid=%s) failed to daemonize after %s seconds',
+        pid_content, ppid, 0.01 * 1000
+    )
     return parse_dhcp_lease_file(lease_file)
 
 
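The new wait loop in dhcp_discovery can be modeled with injected readers; wait_for_daemonize, read_pid and get_ppid are hypothetical stand-ins for the inline loop, util.load_file and the new util.get_proc_ppid:

```python
import time

def wait_for_daemonize(read_pid, get_ppid, naplen=0.01, tries=1000):
    # Poll until the pid file holds an integer AND that process has been
    # re-parented to init (ppid == 1), i.e. dhclient has daemonized.
    for _ in range(tries):
        try:
            pid = int(read_pid())
        except ValueError:
            pass  # pid file present but not fully written yet
        else:
            if get_ppid(pid) == 1:
                return pid
        time.sleep(naplen)
    return None

# First read races an empty pid file; dhclient daemonizes on the 2nd poll.
reads = iter(['', '421', '421'])
ppids = iter([7, 1])
got = wait_for_daemonize(lambda: next(reads), lambda p: next(ppids), naplen=0)
assert got == 421
```

Killing only after ppid == 1 guarantees the signal hits the daemonized dhclient rather than its short-lived parent, which is what frees the sandbox tempdir for deletion.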
diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py
index c6f631a..6423632 100644
--- a/cloudinit/net/eni.py
+++ b/cloudinit/net/eni.py
@@ -371,22 +371,23 @@ class Renderer(renderer.Renderer):
             'gateway': 'gw',
             'metric': 'metric',
         }
+
+        default_gw = ''
         if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
-            default_gw = " default gw %s" % route['gateway']
-            content.append(up + default_gw + or_true)
-            content.append(down + default_gw + or_true)
+            default_gw = ' default'
         elif route['network'] == '::' and route['prefix'] == 0:
-            # ipv6!
-            default_gw = " -A inet6 default gw %s" % route['gateway']
-            content.append(up + default_gw + or_true)
-            content.append(down + default_gw + or_true)
-        else:
-            route_line = ""
-            for k in ['network', 'netmask', 'gateway', 'metric']:
-                if k in route:
-                    route_line += " %s %s" % (mapping[k], route[k])
-            content.append(up + route_line + or_true)
-            content.append(down + route_line + or_true)
+            default_gw = ' -A inet6 default'
+
+        route_line = ''
+        for k in ['network', 'netmask', 'gateway', 'metric']:
+            if default_gw and k in ['network', 'netmask']:
+                continue
+            if k == 'gateway':
+                route_line += '%s %s %s' % (default_gw, mapping[k], route[k])
+            elif k in route:
+                route_line += ' %s %s' % (mapping[k], route[k])
+        content.append(up + route_line + or_true)
+        content.append(down + route_line + or_true)
         return content
 
     def _render_iface(self, iface, render_hwaddress=False):
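The refactored _render_route body now emits metrics for default routes too (LP: #1805871). A sketch of the new loop; the network/-net and netmask mapping entries are assumed from the surrounding upstream code, since only gateway/metric are visible in this hunk:

```python
def route_line_for(route):
    # Mapping assumed from the surrounding upstream _render_route code.
    mapping = {'network': '-net', 'netmask': 'netmask',
               'gateway': 'gw', 'metric': 'metric'}
    default_gw = ''
    if route.get('network') == '0.0.0.0' and route.get('netmask') == '0.0.0.0':
        default_gw = ' default'
    elif route.get('network') == '::' and route.get('prefix') == 0:
        default_gw = ' -A inet6 default'
    line = ''
    for k in ['network', 'netmask', 'gateway', 'metric']:
        if default_gw and k in ['network', 'netmask']:
            continue
        if k == 'gateway':
            line += '%s %s %s' % (default_gw, mapping[k], route[k])
        elif k in route:
            line += ' %s %s' % (mapping[k], route[k])
    return line

# Default routes now carry their metric instead of being special-cased.
assert route_line_for({'network': '0.0.0.0', 'netmask': '0.0.0.0',
                       'gateway': '192.168.0.1', 'metric': 2}) == \
    ' default gw 192.168.0.1 metric 2'
assert route_line_for({'network': '10.0.0.0', 'netmask': '255.0.0.0',
                       'gateway': '10.0.0.1'}) == \
    ' -net 10.0.0.0 netmask 255.0.0.0 gw 10.0.0.1'
```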
diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py
index bc1087f..21517fd 100644
--- a/cloudinit/net/netplan.py
+++ b/cloudinit/net/netplan.py
@@ -114,13 +114,13 @@ def _extract_addresses(config, entry, ifname):
             for route in subnet.get('routes', []):
                 to_net = "%s/%s" % (route.get('network'),
                                     route.get('prefix'))
-                route = {
+                new_route = {
                     'via': route.get('gateway'),
                     'to': to_net,
                 }
                 if 'metric' in route:
-                    route.update({'metric': route.get('metric', 100)})
-                routes.append(route)
+                    new_route.update({'metric': route.get('metric', 100)})
+                routes.append(new_route)
 
             addresses.append(addr)
 
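The netplan change fixes a loop-variable shadowing bug: rebinding `route` meant the `'metric' in route` test consulted the freshly built dict, never the source route, so metrics were silently dropped. Side by side:

```python
def render_routes_buggy(subnet_routes):
    routes = []
    for route in subnet_routes:
        to_net = '%s/%s' % (route.get('network'), route.get('prefix'))
        route = {'via': route.get('gateway'), 'to': to_net}  # shadows!
        if 'metric' in route:  # checks the NEW dict: never true
            route.update({'metric': route.get('metric', 100)})
        routes.append(route)
    return routes

def render_routes_fixed(subnet_routes):
    routes = []
    for route in subnet_routes:
        to_net = '%s/%s' % (route.get('network'), route.get('prefix'))
        new_route = {'via': route.get('gateway'), 'to': to_net}
        if 'metric' in route:  # checks the source route
            new_route.update({'metric': route.get('metric', 100)})
        routes.append(new_route)
    return routes

src = [{'network': '10.0.0.0', 'prefix': 8, 'gateway': '10.0.0.1',
        'metric': 50}]
assert render_routes_buggy(src) == [{'via': '10.0.0.1', 'to': '10.0.0.0/8'}]
assert render_routes_fixed(src) == [
    {'via': '10.0.0.1', 'to': '10.0.0.0/8', 'metric': 50}]
```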
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 9c16d3a..fd8e501 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -10,11 +10,14 @@ from cloudinit.distros.parsers import resolv_conf
 from cloudinit import log as logging
 from cloudinit import util
 
+from configobj import ConfigObj
+
 from . import renderer
 from .network_state import (
     is_ipv6_addr, net_prefix_to_ipv4_mask, subnet_is_ipv6)
 
 LOG = logging.getLogger(__name__)
+NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf"
 
 
 def _make_header(sep='#'):
@@ -46,6 +49,24 @@ def _quote_value(value):
         return value
 
 
+def enable_ifcfg_rh(path):
+    """Add ifcfg-rh to NetworkManager.conf plugins if main section is present"""
+    config = ConfigObj(path)
+    if 'main' in config:
+        if 'plugins' in config['main']:
+            if 'ifcfg-rh' in config['main']['plugins']:
+                return
+        else:
+            config['main']['plugins'] = []
+
+        if isinstance(config['main']['plugins'], list):
+            config['main']['plugins'].append('ifcfg-rh')
+        else:
+            config['main']['plugins'] = [config['main']['plugins'], 'ifcfg-rh']
+        config.write()
+        LOG.debug('Enabled ifcfg-rh NetworkManager plugins')
+
+
 class ConfigMap(object):
     """Sysconfig like dictionary object."""
 
@@ -156,13 +177,23 @@ class Route(ConfigMap):
                                            _quote_value(gateway_value)))
                     buf.write("%s=%s\n" % ('NETMASK' + str(reindex),
                                            _quote_value(netmask_value)))
+                    metric_key = 'METRIC' + index
+                    if metric_key in self._conf:
+                        metric_value = str(self._conf['METRIC' + index])
+                        buf.write("%s=%s\n" % ('METRIC' + str(reindex),
+                                               _quote_value(metric_value)))
                 elif proto == "ipv6" and self.is_ipv6_route(address_value):
                     netmask_value = str(self._conf['NETMASK' + index])
                     gateway_value = str(self._conf['GATEWAY' + index])
-                    buf.write("%s/%s via %s dev %s\n" % (address_value,
-                                                         netmask_value,
-                                                         gateway_value,
-                                                         self._route_name))
+                    metric_value = (
+                        'metric ' + str(self._conf['METRIC' + index])
+                        if 'METRIC' + index in self._conf else '')
+                    buf.write(
+                        "%s/%s via %s %s dev %s\n" % (address_value,
+                                                      netmask_value,
+                                                      gateway_value,
+                                                      metric_value,
+                                                      self._route_name))
 
         return buf.getvalue()
 
@@ -370,6 +401,9 @@ class Renderer(renderer.Renderer):
                     else:
                         iface_cfg['GATEWAY'] = subnet['gateway']
 
+                if 'metric' in subnet:
+                    iface_cfg['METRIC'] = subnet['metric']
+
                 if 'dns_search' in subnet:
                     iface_cfg['DOMAIN'] = ' '.join(subnet['dns_search'])
 
@@ -414,15 +448,19 @@ class Renderer(renderer.Renderer):
                         else:
                             iface_cfg['GATEWAY'] = route['gateway']
                             route_cfg.has_set_default_ipv4 = True
+                    if 'metric' in route:
+                        iface_cfg['METRIC'] = route['metric']
 
                 else:
                     gw_key = 'GATEWAY%s' % route_cfg.last_idx
                     nm_key = 'NETMASK%s' % route_cfg.last_idx
                     addr_key = 'ADDRESS%s' % route_cfg.last_idx
+                    metric_key = 'METRIC%s' % route_cfg.last_idx
                     route_cfg.last_idx += 1
                     # add default routes only to ifcfg files, not
                     # to route-* or route6-*
                     for (old_key, new_key) in [('gateway', gw_key),
+                                               ('metric', metric_key),
                                                ('netmask', nm_key),
                                                ('network', addr_key)]:
                         if old_key in route:
@@ -519,6 +557,8 @@ class Renderer(renderer.Renderer):
             content.add_nameserver(nameserver)
         for searchdomain in network_state.dns_searchdomains:
             content.add_search_domain(searchdomain)
+        if not str(content):
+            return None
         header = _make_header(';')
         content_str = str(content)
         if not content_str.startswith(header):
@@ -628,7 +668,8 @@ class Renderer(renderer.Renderer):
             dns_path = util.target_path(target, self.dns_path)
             resolv_content = self._render_dns(network_state,
                                               existing_dns_path=dns_path)
-            util.write_file(dns_path, resolv_content, file_mode)
+            if resolv_content:
+                util.write_file(dns_path, resolv_content, file_mode)
         if self.networkmanager_conf_path:
             nm_conf_path = util.target_path(target,
                                             self.networkmanager_conf_path)
@@ -640,6 +681,8 @@ class Renderer(renderer.Renderer):
             netrules_content = self._render_persistent_net(network_state)
             netrules_path = util.target_path(target, self.netrules_path)
             util.write_file(netrules_path, netrules_content, file_mode)
+        if available_nm(target=target):
+            enable_ifcfg_rh(util.target_path(target, path=NM_CFG_FILE))
 
         sysconfig_path = util.target_path(target, templates.get('control'))
         # Distros configuring /etc/sysconfig/network as a file e.g. Centos
@@ -654,6 +697,13 @@ class Renderer(renderer.Renderer):
 
 
 def available(target=None):
+    sysconfig = available_sysconfig(target=target)
+    nm = available_nm(target=target)
+
+    return any([nm, sysconfig])
+
+
+def available_sysconfig(target=None):
     expected = ['ifup', 'ifdown']
     search = ['/sbin', '/usr/sbin']
     for p in expected:
@@ -669,4 +719,10 @@ def available(target=None):
     return True
 
 
+def available_nm(target=None):
+    if not os.path.isfile(util.target_path(target, path=NM_CFG_FILE)):
+        return False
+    return True
+
+
 # vi: ts=4 expandtab
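In enable_ifcfg_rh, ConfigObj hands back a plain string for `plugins=keyfile` but a list for `plugins=keyfile,foo`, hence the isinstance branch. The merge logic, sketched without ConfigObj (merged_plugins is a hypothetical name):

```python
def merged_plugins(plugins):
    # ConfigObj yields a str for 'plugins=keyfile' and a list for
    # 'plugins=keyfile,foo'; both shapes must end up containing ifcfg-rh.
    if isinstance(plugins, list):
        if 'ifcfg-rh' in plugins:
            return plugins
        return plugins + ['ifcfg-rh']
    return [plugins, 'ifcfg-rh']

assert merged_plugins('keyfile') == ['keyfile', 'ifcfg-rh']
assert merged_plugins(['keyfile', 'ibft']) == ['keyfile', 'ibft', 'ifcfg-rh']
assert merged_plugins(['ifcfg-rh']) == ['ifcfg-rh']
```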
diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
index db25b6f..79e8842 100644
--- a/cloudinit/net/tests/test_dhcp.py
+++ b/cloudinit/net/tests/test_dhcp.py
@@ -1,15 +1,17 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import httpretty
 import os
 import signal
 from textwrap import dedent
 
+import cloudinit.net as net
 from cloudinit.net.dhcp import (
     InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery,
     parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases)
 from cloudinit.util import ensure_file, write_file
 from cloudinit.tests.helpers import (
-    CiTestCase, mock, populate_dir, wrap_and_call)
+    CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call)
 
 
 class TestParseDHCPLeasesFile(CiTestCase):
@@ -143,16 +145,20 @@ class TestDHCPDiscoveryClean(CiTestCase):
               'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}],
             dhcp_discovery(dhclient_script, 'eth9', tmpdir))
         self.assertIn(
-            "pid file contains non-integer content ''", self.logs.getvalue())
+            "dhclient(pid=, parentpid=unknown) failed "
+            "to daemonize after 10.0 seconds",
+            self.logs.getvalue())
         m_kill.assert_not_called()
 
+    @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
     @mock.patch('cloudinit.net.dhcp.os.kill')
     @mock.patch('cloudinit.net.dhcp.util.wait_for_files')
     @mock.patch('cloudinit.net.dhcp.util.subp')
     def test_dhcp_discovery_run_in_sandbox_waits_on_lease_and_pid(self,
                                                                   m_subp,
                                                                   m_wait,
-                                                                  m_kill):
+                                                                  m_kill,
+                                                                  m_getppid):
         """dhcp_discovery waits for the presence of pidfile and dhcp.leases."""
         tmpdir = self.tmp_dir()
         dhclient_script = os.path.join(tmpdir, 'dhclient.orig')
@@ -162,6 +168,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
         pidfile = self.tmp_path('dhclient.pid', tmpdir)
         leasefile = self.tmp_path('dhcp.leases', tmpdir)
         m_wait.return_value = [pidfile]  # Return the missing pidfile wait for
+        m_getppid.return_value = 1  # Indicate that dhclient has daemonized
         self.assertEqual([], dhcp_discovery(dhclient_script, 'eth9', tmpdir))
         self.assertEqual(
             mock.call([pidfile, leasefile], maxwait=5, naplen=0.01),
@@ -171,9 +178,10 @@ class TestDHCPDiscoveryClean(CiTestCase):
             self.logs.getvalue())
         m_kill.assert_not_called()
 
+    @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
     @mock.patch('cloudinit.net.dhcp.os.kill')
     @mock.patch('cloudinit.net.dhcp.util.subp')
-    def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill):
+    def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill, m_getppid):
         """dhcp_discovery brings up the interface and runs dhclient.
 
         It also returns the parsed dhcp.leases file generated in the sandbox.
@@ -195,6 +203,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
         pid_file = os.path.join(tmpdir, 'dhclient.pid')
         my_pid = 1
         write_file(pid_file, "%d\n" % my_pid)
+        m_getppid.return_value = 1  # Indicate that dhclient has daemonized
 
         self.assertItemsEqual(
             [{'interface': 'eth9', 'fixed-address': '192.168.2.74',
@@ -321,3 +330,37 @@ class TestSystemdParseLeases(CiTestCase):
                                     '9': self.lxd_lease})
         self.assertEqual({'1': self.azure_parsed, '9': self.lxd_parsed},
                          networkd_load_leases(self.lease_d))
+
+
+class TestEphemeralDhcpNoNetworkSetup(HttprettyTestCase):
+
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_ephemeral_dhcp_no_network_if_url_connectivity(self, m_dhcp):
+        """No EphemeralDhcp4 network setup when connectivity_url succeeds."""
+        url = 'http://example.org/index.html'
+
+        httpretty.register_uri(httpretty.GET, url)
+        with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease:
+            self.assertIsNone(lease)
+        # Ensure that no dhcp discovery was performed:
+        m_dhcp.assert_not_called()
+
+    @mock.patch('cloudinit.net.dhcp.util.subp')
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_ephemeral_dhcp_setup_network_if_url_connectivity(
+            self, m_dhcp, m_subp):
+        """No EphemeralDhcp4 network setup when connectivity_url succeeds."""
+        url = 'http://example.org/index.html'
+        fake_lease = {
+            'interface': 'eth9', 'fixed-address': '192.168.2.2',
+            'subnet-mask': '255.255.0.0'}
+        m_dhcp.return_value = [fake_lease]
+        m_subp.return_value = ('', '')
+
+        httpretty.register_uri(httpretty.GET, url, body={}, status=404)
+        with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease:
+            self.assertEqual(fake_lease, lease)
+        # Ensure that dhcp discovery occurs
+        m_dhcp.assert_called_once()
+
+# vi: ts=4 expandtab
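The new `m_getppid` mocks stand in for a daemonization check: dhclient counts as daemonized once its pidfile holds an integer and its parent pid is 1 (reparented to init). A hypothetical condensation of that check (the real logic lives in `cloudinit.net.dhcp.dhcp_discovery` via `util.get_proc_ppid`):

```python
def dhclient_daemonized(pidfile_content, ppid):
    # Hypothetical condensation of the check the mocked get_proc_ppid
    # feeds: a parseable integer pid plus reparenting to init.
    try:
        int(pidfile_content.strip())
    except ValueError:
        return False  # pidfile empty or garbage: not daemonized yet
    return ppid == 1
```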
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 58e0a59..f55c31e 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -2,14 +2,16 @@
 
 import copy
 import errno
+import httpretty
 import mock
 import os
+import requests
 import textwrap
 import yaml
 
 import cloudinit.net as net
 from cloudinit.util import ensure_file, write_file, ProcessExecutionError
-from cloudinit.tests.helpers import CiTestCase
+from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase
 
 
 class TestSysDevPath(CiTestCase):
@@ -458,6 +460,22 @@ class TestEphemeralIPV4Network(CiTestCase):
             self.assertEqual(expected_setup_calls, m_subp.call_args_list)
         m_subp.assert_has_calls(expected_teardown_calls)
 
+    @mock.patch('cloudinit.net.readurl')
+    def test_ephemeral_ipv4_no_network_if_url_connectivity(
+            self, m_readurl, m_subp):
+        """No network setup is performed if we can successfully connect to
+        connectivity_url."""
+        params = {
+            'interface': 'eth0', 'ip': '192.168.2.2',
+            'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255',
+            'connectivity_url': 'http://example.org/index.html'}
+
+        with net.EphemeralIPv4Network(**params):
+            self.assertEqual([mock.call('http://example.org/index.html',
+                                        timeout=5)], m_readurl.call_args_list)
+        # Ensure that no network setup or teardown commands were run:
+        m_subp.assert_not_called()
+
     def test_ephemeral_ipv4_network_noop_when_configured(self, m_subp):
         """EphemeralIPv4Network handles exception when address is setup.
 
@@ -619,3 +637,35 @@ class TestApplyNetworkCfgNames(CiTestCase):
     def test_apply_v2_renames_raises_runtime_error_on_unknown_version(self):
         with self.assertRaises(RuntimeError):
             net.apply_network_config_names(yaml.load("version: 3"))
+
+
+class TestHasURLConnectivity(HttprettyTestCase):
+
+    def setUp(self):
+        super(TestHasURLConnectivity, self).setUp()
+        self.url = 'http://fake/'
+        self.kwargs = {'allow_redirects': True, 'timeout': 5.0}
+
+    @mock.patch('cloudinit.net.readurl')
+    def test_url_timeout_on_connectivity_check(self, m_readurl):
+        """A timeout of 5 seconds is provided when reading a url."""
+        self.assertTrue(
+            net.has_url_connectivity(self.url), 'Expected True on url connect')
+
+    def test_true_on_url_connectivity_success(self):
+        httpretty.register_uri(httpretty.GET, self.url)
+        self.assertTrue(
+            net.has_url_connectivity(self.url), 'Expected True on url connect')
+
+    @mock.patch('requests.Session.request')
+    def test_false_on_url_connectivity_timeout(self, m_request):
+        """A timeout raised accessing the url will return False."""
+        m_request.side_effect = requests.Timeout('Fake Connection Timeout')
+        self.assertFalse(
+            net.has_url_connectivity(self.url),
+            'Expected False on url timeout')
+
+    def test_false_on_url_connectivity_failure(self):
+        httpretty.register_uri(httpretty.GET, self.url, body={}, status=404)
+        self.assertFalse(
+            net.has_url_connectivity(self.url), 'Expected False on url fail')
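The semantics these tests pin down can be restated standalone: connectivity means a GET of the url succeeds (2xx) within the timeout, while timeouts, network errors and HTTP errors all mean False. A sketch using urllib rather than the `readurl`/requests stack the real helper is built on:

```python
from urllib.error import URLError
from urllib.request import urlopen


def has_url_connectivity(url, timeout=5.0):
    # True iff the url answers with a 2xx within the timeout; any
    # network error, timeout or HTTP error counts as "no connectivity".
    # HTTPError (e.g. 404) is a URLError subclass, so it is caught too.
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        return False
```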
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 39391d0..a4f998b 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -22,7 +22,8 @@ from cloudinit.event import EventType
 from cloudinit.net.dhcp import EphemeralDHCPv4
 from cloudinit import sources
 from cloudinit.sources.helpers.azure import get_metadata_from_fabric
-from cloudinit.url_helper import readurl, UrlError
+from cloudinit.sources.helpers import netlink
+from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
 from cloudinit import util
 
 LOG = logging.getLogger(__name__)
@@ -57,7 +58,7 @@ IMDS_URL = "http://169.254.169.254/metadata/";
 # List of static scripts and network config artifacts created by
 # stock ubuntu supported images.
 UBUNTU_EXTENDED_NETWORK_SCRIPTS = [
-    '/etc/netplan/90-azure-hotplug.yaml',
+    '/etc/netplan/90-hotplug-azure.yaml',
     '/usr/local/sbin/ephemeral_eth.sh',
     '/etc/udev/rules.d/10-net-device-added.rules',
     '/run/network/interfaces.ephemeral.d',
@@ -207,7 +208,9 @@ BUILTIN_DS_CONFIG = {
     },
     'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH},
     'dhclient_lease_file': LEASE_FILE,
+    'apply_network_config': True,  # Use IMDS published network configuration
 }
+# RELEASE_BLOCKER: Xenial and earlier apply_network_config default is False
 
 BUILTIN_CLOUD_CONFIG = {
     'disk_setup': {
@@ -278,6 +281,7 @@ class DataSourceAzure(sources.DataSource):
         self._network_config = None
         # Regenerate network config new_instance boot and every boot
         self.update_events['network'].add(EventType.BOOT)
+        self._ephemeral_dhcp_ctx = None
 
     def __str__(self):
         root = sources.DataSource.__str__(self)
@@ -404,10 +408,15 @@ class DataSourceAzure(sources.DataSource):
                 LOG.warning("%s was not mountable", cdev)
                 continue
 
-            if reprovision or self._should_reprovision(ret):
+            perform_reprovision = reprovision or self._should_reprovision(ret)
+            if perform_reprovision:
+                if util.is_FreeBSD():
+                    msg = "Free BSD is not supported for PPS VMs"
+                    LOG.error(msg)
+                    raise sources.InvalidMetaDataException(msg)
                 ret = self._reprovision()
             imds_md = get_metadata_from_imds(
-                self.fallback_interface, retries=3)
+                self.fallback_interface, retries=10)
             (md, userdata_raw, cfg, files) = ret
             self.seed = cdev
             crawled_data.update({
@@ -432,6 +441,18 @@ class DataSourceAzure(sources.DataSource):
             crawled_data['metadata']['random_seed'] = seed
         crawled_data['metadata']['instance-id'] = util.read_dmi_data(
             'system-uuid')
+
+        if perform_reprovision:
+            LOG.info("Reporting ready to Azure after getting ReprovisionData")
+            use_cached_ephemeral = (net.is_up(self.fallback_interface) and
+                                    getattr(self, '_ephemeral_dhcp_ctx', None))
+            if use_cached_ephemeral:
+                self._report_ready(lease=self._ephemeral_dhcp_ctx.lease)
+                self._ephemeral_dhcp_ctx.clean_network()  # Teardown ephemeral
+            else:
+                with EphemeralDHCPv4() as lease:
+                    self._report_ready(lease=lease)
+
         return crawled_data
 
     def _is_platform_viable(self):
@@ -458,7 +479,8 @@ class DataSourceAzure(sources.DataSource):
         except sources.InvalidMetaDataException as e:
             LOG.warning('Could not crawl Azure metadata: %s', e)
             return False
-        if self.distro and self.distro.name == 'ubuntu':
+        if (self.distro and self.distro.name == 'ubuntu' and
+                self.ds_cfg.get('apply_network_config')):
             maybe_remove_ubuntu_network_config_scripts()
 
         # Process crawled data and augment with various config defaults
@@ -506,8 +528,8 @@ class DataSourceAzure(sources.DataSource):
         response. Then return the returned JSON object."""
         url = IMDS_URL + "reprovisiondata?api-version=2017-04-02"
         headers = {"Metadata": "true"}
+        nl_sock = None
         report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
-        LOG.debug("Start polling IMDS")
 
         def exc_cb(msg, exception):
             if isinstance(exception, UrlError) and exception.code == 404:
@@ -516,25 +538,47 @@ class DataSourceAzure(sources.DataSource):
             # call DHCP and setup the ephemeral network to acquire the new IP.
             return False
 
+        LOG.debug("Wait for vnetswitch to happen")
         while True:
             try:
-                with EphemeralDHCPv4() as lease:
-                    if report_ready:
-                        path = REPORTED_READY_MARKER_FILE
-                        LOG.info(
-                            "Creating a marker file to report ready: %s", path)
-                        util.write_file(path, "{pid}: {time}\n".format(
-                            pid=os.getpid(), time=time()))
-                        self._report_ready(lease=lease)
-                        report_ready = False
+                # Save our EphemeralDHCPv4 context so we avoid repeated dhcp
+                self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
+                lease = self._ephemeral_dhcp_ctx.obtain_lease()
+                if report_ready:
+                    try:
+                        nl_sock = netlink.create_bound_netlink_socket()
+                    except netlink.NetlinkCreateSocketError as e:
+                        LOG.warning(e)
+                        self._ephemeral_dhcp_ctx.clean_network()
+                        return
+                    path = REPORTED_READY_MARKER_FILE
+                    LOG.info(
+                        "Creating a marker file to report ready: %s", path)
+                    util.write_file(path, "{pid}: {time}\n".format(
+                        pid=os.getpid(), time=time()))
+                    self._report_ready(lease=lease)
+                    report_ready = False
+                    try:
+                        netlink.wait_for_media_disconnect_connect(
+                            nl_sock, lease['interface'])
+                    except AssertionError as error:
+                        LOG.error(error)
+                        return
+                    self._ephemeral_dhcp_ctx.clean_network()
+                else:
                     return readurl(url, timeout=1, headers=headers,
-                                   exception_cb=exc_cb, infinite=True).contents
+                                   exception_cb=exc_cb, infinite=True,
+                                   log_req_resp=False).contents
             except UrlError:
+                # Teardown our EphemeralDHCPv4 context on failure as we retry
+                self._ephemeral_dhcp_ctx.clean_network()
                 pass
+            finally:
+                if nl_sock:
+                    nl_sock.close()
 
     def _report_ready(self, lease):
-        """Tells the fabric provisioning has completed
-           before we go into our polling loop."""
+        """Tells the fabric provisioning has completed """
         try:
             get_metadata_from_fabric(None, lease['unknown-245'])
         except Exception:
@@ -619,7 +663,11 @@ class DataSourceAzure(sources.DataSource):
               the blacklisted devices.
         """
         if not self._network_config:
-            self._network_config = parse_network_config(self._metadata_imds)
+            if self.ds_cfg.get('apply_network_config'):
+                nc_src = self._metadata_imds
+            else:
+                nc_src = None
+            self._network_config = parse_network_config(nc_src)
         return self._network_config
 
 
@@ -700,7 +748,7 @@ def can_dev_be_reformatted(devpath, preserve_ntfs):
         file_count = util.mount_cb(cand_path, count_files, mtype="ntfs",
                                    update_env_for_mount={'LANG': 'C'})
     except util.MountFailedError as e:
-        if "mount: unknown filesystem type 'ntfs'" in str(e):
+        if "unknown filesystem type 'ntfs'" in str(e):
             return True, (bmsg + ' but this system cannot mount NTFS,'
                           ' assuming there are no important files.'
                           ' Formatting allowed.')
@@ -928,12 +976,12 @@ def read_azure_ovf(contents):
                             lambda n:
                             n.localName == "LinuxProvisioningConfigurationSet")
 
-    if len(results) == 0:
+    if len(lpcs_nodes) == 0:
         raise NonAzureDataSource("No LinuxProvisioningConfigurationSet")
-    if len(results) > 1:
+    if len(lpcs_nodes) > 1:
         raise BrokenAzureDataSource("found '%d' %ss" %
-                                    ("LinuxProvisioningConfigurationSet",
-                                     len(results)))
+                                    (len(lpcs_nodes),
+                                     "LinuxProvisioningConfigurationSet"))
     lpcs = lpcs_nodes[0]
 
     if not lpcs.hasChildNodes():
@@ -1162,17 +1210,12 @@ def get_metadata_from_imds(fallback_nic, retries):
 
 def _get_metadata_from_imds(retries):
 
-    def retry_on_url_error(msg, exception):
-        if isinstance(exception, UrlError) and exception.code == 404:
-            return True  # Continue retries
-        return False  # Stop retries on all other exceptions
-
     url = IMDS_URL + "instance?api-version=2017-12-01"
     headers = {"Metadata": "true"}
     try:
         response = readurl(
             url, timeout=1, headers=headers, retries=retries,
-            exception_cb=retry_on_url_error)
+            exception_cb=retry_on_url_exc)
     except Exception as e:
         LOG.debug('Ignoring IMDS instance metadata: %s', e)
         return {}
@@ -1195,7 +1238,7 @@ def maybe_remove_ubuntu_network_config_scripts(paths=None):
     additional interfaces which get attached by a customer at some point
     after initial boot. Since the Azure datasource can now regenerate
     network configuration as metadata reports these new devices, we no longer
-    want the udev rules or netplan's 90-azure-hotplug.yaml to configure
+    want the udev rules or netplan's 90-hotplug-azure.yaml to configure
     networking on eth1 or greater as it might collide with cloud-init's
     configuration.
 
diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py
index 9010f06..6860f0c 100644
--- a/cloudinit/sources/DataSourceNoCloud.py
+++ b/cloudinit/sources/DataSourceNoCloud.py
@@ -311,6 +311,35 @@ def parse_cmdline_data(ds_id, fill, cmdline=None):
     return True
 
 
+def _maybe_remove_top_network(cfg):
+    """If network-config contains top level 'network' key, then remove it.
+
+    Some providers of network configuration may provide a top level
+    'network' key (LP: #1798117) even though it is not necessary.
+
+    Be friendly and remove it if it really seems so.
+
+    Return the original value if no change or the updated value if changed."""
+    nullval = object()
+    network_val = cfg.get('network', nullval)
+    if network_val is nullval:
+        return cfg
+    bmsg = 'Top level network key in network-config %s: %s'
+    if not isinstance(network_val, dict):
+        LOG.debug(bmsg, "was not a dict", cfg)
+        return cfg
+    if len(list(cfg.keys())) != 1:
+        LOG.debug(bmsg, "had multiple top level keys", cfg)
+        return cfg
+    if network_val.get('config') == "disabled":
+        LOG.debug(bmsg, "was config/disabled", cfg)
+    elif not all(('config' in network_val, 'version' in network_val)):
+        LOG.debug(bmsg, "but missing 'config' or 'version'", cfg)
+        return cfg
+    LOG.debug(bmsg, "fixed by removing shifting network.", cfg)
+    return network_val
+
+
 def _merge_new_seed(cur, seeded):
     ret = cur.copy()
 
@@ -320,7 +349,8 @@ def _merge_new_seed(cur, seeded):
     ret['meta-data'] = util.mergemanydict([cur['meta-data'], newmd])
 
     if seeded.get('network-config'):
-        ret['network-config'] = util.load_yaml(seeded['network-config'])
+        ret['network-config'] = _maybe_remove_top_network(
+            util.load_yaml(seeded.get('network-config')))
 
     if 'user-data' in seeded:
         ret['user-data'] = seeded['user-data']
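Condensed, the unwrap rule `_maybe_remove_top_network` adds is: only a dict-valued, sole top-level 'network' key that either says `config: disabled` or carries both 'config' and 'version' gets shifted up; everything else passes through unchanged. A compact restatement of those accept/reject rules:

```python
def maybe_remove_top_network(cfg):
    # Mirror of _maybe_remove_top_network's accept/reject rules.
    net = cfg.get('network')
    if not isinstance(net, dict) or list(cfg.keys()) != ['network']:
        return cfg  # absent, non-dict, or extra top level keys
    if net.get('config') == 'disabled' or (
            'config' in net and 'version' in net):
        return net  # shift the wrapped config up one level
    return cfg
```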
diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
index 045291e..3a3fcdf 100644
--- a/cloudinit/sources/DataSourceOVF.py
+++ b/cloudinit/sources/DataSourceOVF.py
@@ -232,11 +232,11 @@ class DataSourceOVF(sources.DataSource):
                 GuestCustErrorEnum.GUESTCUST_ERROR_SUCCESS)
 
         else:
-            np = {'iso': transport_iso9660,
-                  'vmware-guestd': transport_vmware_guestd, }
+            np = [('com.vmware.guestInfo', transport_vmware_guestinfo),
+                  ('iso', transport_iso9660)]
             name = None
-            for (name, transfunc) in np.items():
-                (contents, _dev, _fname) = transfunc()
+            for name, transfunc in np:
+                contents = transfunc()
                 if contents:
                     break
             if contents:
@@ -464,8 +464,8 @@ def maybe_cdrom_device(devname):
     return cdmatch.match(devname) is not None
 
 
-# Transport functions take no input and return
-# a 3 tuple of content, path, filename
+# Transport functions are called with no arguments and return
+# either None (indicating not present) or string content of an ovf-env.xml
 def transport_iso9660(require_iso=True):
 
     # Go through mounts to see if it was already mounted
@@ -477,9 +477,9 @@ def transport_iso9660(require_iso=True):
         if not maybe_cdrom_device(dev):
             continue
         mp = info['mountpoint']
-        (fname, contents) = get_ovf_env(mp)
+        (_fname, contents) = get_ovf_env(mp)
         if contents is not False:
-            return (contents, dev, fname)
+            return contents
 
     if require_iso:
         mtype = "iso9660"
@@ -492,29 +492,33 @@ def transport_iso9660(require_iso=True):
             if maybe_cdrom_device(dev)]
     for dev in devs:
         try:
-            (fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype)
+            (_fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype)
         except util.MountFailedError:
             LOG.debug("%s not mountable as iso9660", dev)
             continue
 
         if contents is not False:
-            return (contents, dev, fname)
-
-    return (False, None, None)
-
-
-def transport_vmware_guestd():
-    # http://blogs.vmware.com/vapp/2009/07/ \
-    #    selfconfiguration-and-the-ovf-environment.html
-    # try:
-    #     cmd = ['vmware-guestd', '--cmd', 'info-get guestinfo.ovfEnv']
-    #     (out, err) = subp(cmd)
-    #     return(out, 'guestinfo.ovfEnv', 'vmware-guestd')
-    # except:
-    #     # would need to error check here and see why this failed
-    #     # to know if log/error should be raised
-    #     return(False, None, None)
-    return (False, None, None)
+            return contents
+
+    return None
+
+
+def transport_vmware_guestinfo():
+    rpctool = "vmware-rpctool"
+    not_found = None
+    if not util.which(rpctool):
+        return not_found
+    cmd = [rpctool, "info-get guestinfo.ovfEnv"]
+    try:
+        out, _err = util.subp(cmd)
+        if out:
+            return out
+        LOG.debug("cmd %s exited 0 with empty stdout: %s", cmd, out)
+    except util.ProcessExecutionError as e:
+        if e.exit_code != 1:
+            LOG.warning("%s exited with code %d", rpctool, e.exit_code)
+            LOG.debug(e)
+    return not_found
 
 
 def find_child(node, filter_func):
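Under the new transport contract (no arguments in; ovf-env.xml content or None out), a probe like `transport_vmware_guestinfo` can be sketched standalone. Here `shutil.which` and `subprocess` stand in for the `util.which`/`util.subp` calls the patch actually uses:

```python
import shutil
import subprocess


def transport_vmware_guestinfo(rpctool="vmware-rpctool"):
    # Returns ovf-env.xml content, or None when the tool is absent,
    # the guestinfo key is unset (exit code 1), or stdout is empty.
    if shutil.which(rpctool) is None:
        return None
    try:
        out = subprocess.check_output(
            [rpctool, "info-get guestinfo.ovfEnv"], text=True)
    except subprocess.CalledProcessError:
        return None  # the real code warns when the exit code is not 1
    return out or None
```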
diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
index e62e972..6e1d04b 100644
--- a/cloudinit/sources/DataSourceOpenNebula.py
+++ b/cloudinit/sources/DataSourceOpenNebula.py
@@ -337,7 +337,7 @@ def parse_shell_config(content, keylist=None, bash=None, asuser=None,
     (output, _error) = util.subp(cmd, data=bcmd)
 
     # exclude vars in bash that change on their own or that we used
-    excluded = ("RANDOM", "LINENO", "SECONDS", "_", "__v")
+    excluded = ("EPOCHREALTIME", "RANDOM", "LINENO", "SECONDS", "_", "__v")
     preset = {}
     ret = {}
     target = None
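Why EPOCHREALTIME joins the exclusion list: bash 5.0 updates it on every expansion, so it would surface as a spurious "changed" variable when `parse_shell_config` diffs the shell environment before and after sourcing the context. A condensed sketch of that diffing step (hypothetical helper name):

```python
def changed_vars(before, after,
                 excluded=("EPOCHREALTIME", "RANDOM", "LINENO",
                           "SECONDS", "_", "__v")):
    # Keep only variables the sourced context actually set or changed,
    # ignoring ones the shell mutates on its own.
    return {k: v for k, v in after.items()
            if k not in excluded and before.get(k) != v}
```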
diff --git a/cloudinit/sources/DataSourceScaleway.py b/cloudinit/sources/DataSourceScaleway.py
index 9dc4ab2..b573b38 100644
--- a/cloudinit/sources/DataSourceScaleway.py
+++ b/cloudinit/sources/DataSourceScaleway.py
@@ -253,7 +253,16 @@ class DataSourceScaleway(sources.DataSource):
         return self.metadata['id']
 
     def get_public_ssh_keys(self):
-        return [key['key'] for key in self.metadata['ssh_public_keys']]
+        ssh_keys = [key['key'] for key in self.metadata['ssh_public_keys']]
+
+        akeypre = "AUTHORIZED_KEY="
+        plen = len(akeypre)
+        for tag in self.metadata.get('tags', []):
+            if not tag.startswith(akeypre):
+                continue
+            ssh_keys.append(tag[plen:].replace("_", " "))
+
+        return ssh_keys
 
     def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
         return self.metadata['hostname']
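The tag convention parsed above encodes extra SSH keys as instance tags of the form `AUTHORIZED_KEY=<key-with-spaces-as-underscores>`; stripping the prefix and mapping underscores back to spaces recovers the key. Condensed:

```python
def keys_from_tags(tags, prefix="AUTHORIZED_KEY="):
    # Tags carry SSH keys with spaces encoded as underscores.
    return [tag[len(prefix):].replace("_", " ")
            for tag in tags if tag.startswith(prefix)]
```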
diff --git a/cloudinit/sources/helpers/netlink.py b/cloudinit/sources/helpers/netlink.py
new file mode 100644
index 0000000..d377ae3
--- /dev/null
+++ b/cloudinit/sources/helpers/netlink.py
@@ -0,0 +1,250 @@
+# Author: Tamilmani Manoharan <tamanoha@xxxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit import log as logging
+from cloudinit import util
+from collections import namedtuple
+
+import os
+import select
+import socket
+import struct
+
+LOG = logging.getLogger(__name__)
+
+# http://man7.org/linux/man-pages/man7/netlink.7.html
+RTMGRP_LINK = 1
+NLMSG_NOOP = 1
+NLMSG_ERROR = 2
+NLMSG_DONE = 3
+RTM_NEWLINK = 16
+RTM_DELLINK = 17
+RTM_GETLINK = 18
+RTM_SETLINK = 19
+MAX_SIZE = 65535
+RTA_DATA_OFFSET = 32
+MSG_TYPE_OFFSET = 16
+SELECT_TIMEOUT = 60
+
+NLMSGHDR_FMT = "IHHII"
+IFINFOMSG_FMT = "BHiII"
+NLMSGHDR_SIZE = struct.calcsize(NLMSGHDR_FMT)
+IFINFOMSG_SIZE = struct.calcsize(IFINFOMSG_FMT)
+RTATTR_START_OFFSET = NLMSGHDR_SIZE + IFINFOMSG_SIZE
+RTA_DATA_START_OFFSET = 4
+PAD_ALIGNMENT = 4
+
+IFLA_IFNAME = 3
+IFLA_OPERSTATE = 16
+
+# https://www.kernel.org/doc/Documentation/networking/operstates.txt
+OPER_UNKNOWN = 0
+OPER_NOTPRESENT = 1
+OPER_DOWN = 2
+OPER_LOWERLAYERDOWN = 3
+OPER_TESTING = 4
+OPER_DORMANT = 5
+OPER_UP = 6
+
+RTAAttr = namedtuple('RTAAttr', ['length', 'rta_type', 'data'])
+InterfaceOperstate = namedtuple('InterfaceOperstate', ['ifname', 'operstate'])
+NetlinkHeader = namedtuple('NetlinkHeader', ['length', 'type', 'flags', 'seq',
+                                             'pid'])
+
+
+class NetlinkCreateSocketError(RuntimeError):
+    '''Raised if netlink socket fails during create or bind.'''
+    pass
+
+
+def create_bound_netlink_socket():
+    '''Creates a netlink socket and binds it to the netlink group used to
+    catch interface down/up events. The socket is bound only on RTMGRP_LINK
+    (which only includes RTM_NEWLINK/RTM_DELLINK/RTM_GETLINK events) and is
+    set to non-blocking mode since we're only receiving messages.
+
+    :returns: netlink socket in non-blocking mode
+    :raises: NetlinkCreateSocketError
+    '''
+    try:
+        netlink_socket = socket.socket(socket.AF_NETLINK,
+                                       socket.SOCK_RAW,
+                                       socket.NETLINK_ROUTE)
+        netlink_socket.bind((os.getpid(), RTMGRP_LINK))
+        netlink_socket.setblocking(0)
+    except socket.error as e:
+        msg = "Exception during netlink socket create: %s" % e
+        raise NetlinkCreateSocketError(msg)
+    LOG.debug("Created netlink socket")
+    return netlink_socket
+
+
+def get_netlink_msg_header(data):
+    '''Gets netlink message type and length
+
+    :param: data read from netlink socket
+    :returns: NetlinkHeader namedtuple (length, type, flags, seq, pid)
+    :raises: AssertionError if data is None or data is not >= NLMSGHDR_SIZE
+    struct nlmsghdr {
+               __u32 nlmsg_len;    /* Length of message including header */
+               __u16 nlmsg_type;   /* Type of message content */
+               __u16 nlmsg_flags;  /* Additional flags */
+               __u32 nlmsg_seq;    /* Sequence number */
+               __u32 nlmsg_pid;    /* Sender port ID */
+    };
+    '''
+    assert (data is not None), ("data is none")
+    assert (len(data) >= NLMSGHDR_SIZE), (
+        "data is smaller than netlink message header")
+    msg_len, msg_type, flags, seq, pid = struct.unpack(NLMSGHDR_FMT,
+                                                       data[:NLMSGHDR_SIZE])
+    LOG.debug("Got netlink msg of type %d", msg_type)
+    return NetlinkHeader(msg_len, msg_type, flags, seq, pid)
+
+
+def read_netlink_socket(netlink_socket, timeout=None):
+    '''Select and read from the netlink socket if ready.
+
+    :param: netlink_socket: specify which socket object to read from
+    :param: timeout: specify a timeout value (integer) to wait while reading,
+            if none, it will block indefinitely until socket ready for read
+    :returns: string of data read (max length = <MAX_SIZE>) from socket,
+              if no data read, returns None
+    :raises: AssertionError if netlink_socket is None
+    '''
+    assert (netlink_socket is not None), ("netlink socket is none")
+    read_set, _, _ = select.select([netlink_socket], [], [], timeout)
+    # In case of timeout, read_set doesn't contain the netlink socket;
+    # just return from this function.
+    if netlink_socket not in read_set:
+        return None
+    LOG.debug("netlink socket ready for read")
+    data = netlink_socket.recv(MAX_SIZE)
+    if data is None:
+        LOG.error("Reading from Netlink socket returned no data")
+    return data
+
+
+def unpack_rta_attr(data, offset):
+    '''Unpack a single rta attribute.
+
+    :param: data: string of data read from netlink socket
+    :param: offset: starting offset of RTA Attribute
+    :return: RTAAttr object with length, type and data. On error, return None.
+    :raises: AssertionError if data is None or offset is not integer.
+    '''
+    assert (data is not None), ("data is none")
+    assert (type(offset) == int), ("offset is not integer")
+    assert (offset >= RTATTR_START_OFFSET), (
+        "rta offset is less than expected length")
+    length = rta_type = 0
+    attr_data = None
+    try:
+        length = struct.unpack_from("H", data, offset=offset)[0]
+        rta_type = struct.unpack_from("H", data, offset=offset+2)[0]
+    except struct.error:
+        return None  # Should mean our offset is >= remaining data
+
+    # Unpack just the attribute's data. Offset by 4 to skip length/type header
+    attr_data = data[offset+RTA_DATA_START_OFFSET:offset+length]
+    return RTAAttr(length, rta_type, attr_data)
+
+
+def read_rta_oper_state(data):
+    '''Reads Interface name and operational state from RTA Data.
+
+    :param: data: string of data read from netlink socket
+    :returns: InterfaceOperstate object containing if_name and oper_state.
+              None if data does not contain valid IFLA_OPERSTATE and
+              IFLA_IFNAME messages.
+    :raises: AssertionError if data is None or length of data is
+             smaller than RTATTR_START_OFFSET.
+    '''
+    assert (data is not None), ("data is none")
+    assert (len(data) > RTATTR_START_OFFSET), (
+        "length of data is smaller than RTATTR_START_OFFSET")
+    ifname = operstate = None
+    offset = RTATTR_START_OFFSET
+    while offset <= len(data):
+        attr = unpack_rta_attr(data, offset)
+        if not attr or attr.length == 0:
+            break
+        # Each attribute is 4-byte aligned. Determine pad length.
+        padlen = (PAD_ALIGNMENT -
+                  (attr.length % PAD_ALIGNMENT)) % PAD_ALIGNMENT
+        offset += attr.length + padlen
+
+        if attr.rta_type == IFLA_OPERSTATE:
+            operstate = ord(attr.data)
+        elif attr.rta_type == IFLA_IFNAME:
+            interface_name = util.decode_binary(attr.data, 'utf-8')
+            ifname = interface_name.strip('\0')
+    if not ifname or operstate is None:
+        return None
+    LOG.debug("rta attrs: ifname %s operstate %d", ifname, operstate)
+    return InterfaceOperstate(ifname, operstate)
+
+
+def wait_for_media_disconnect_connect(netlink_socket, ifname):
+    '''Block until media disconnect and connect has happened on an interface.
+    Listens on the netlink socket for link events and, once the carrier has
+    changed from 0 back to 1, considers the disconnect/connect cycle complete
+    and returns.
+
+    :param: netlink_socket: netlink_socket to receive events
+    :param: ifname: Interface name to lookout for netlink events
+    :raises: AssertionError if netlink_socket is None or ifname is None.
+    '''
+    assert (netlink_socket is not None), ("netlink socket is none")
+    assert (ifname is not None), ("interface name is none")
+    assert (len(ifname) > 0), ("interface name cannot be empty")
+    carrier = OPER_UP
+    prevCarrier = OPER_UP
+    data = bytes()
+    LOG.debug("Wait for media disconnect and reconnect to happen")
+    while True:
+        recv_data = read_netlink_socket(netlink_socket, SELECT_TIMEOUT)
+        if recv_data is None:
+            continue
+        LOG.debug('read %d bytes from socket', len(recv_data))
+        data += recv_data
+        LOG.debug('Length of data after concat %d', len(data))
+        offset = 0
+        datalen = len(data)
+        while offset < datalen:
+            nl_msg = data[offset:]
+            if len(nl_msg) < NLMSGHDR_SIZE:
+                LOG.debug("Data is smaller than netlink header")
+                break
+            nlheader = get_netlink_msg_header(nl_msg)
+            if len(nl_msg) < nlheader.length:
+                LOG.debug("Partial data. Smaller than netlink message")
+                break
+            # nlheader.length rounded up to the 4-byte alignment boundary
+            aligned_len = ((nlheader.length + PAD_ALIGNMENT - 1) &
+                           ~(PAD_ALIGNMENT - 1))
+            offset = offset + aligned_len
+            LOG.debug('offset to next netlink message: %d', offset)
+            # Ignore any messages not new link or del link
+            if nlheader.type not in [RTM_NEWLINK, RTM_DELLINK]:
+                continue
+            interface_state = read_rta_oper_state(nl_msg)
+            if interface_state is None:
+                LOG.debug('Failed to read rta attributes: %s', interface_state)
+                continue
+            if interface_state.ifname != ifname:
+                LOG.debug(
+                    "Ignored netlink event on interface %s. Waiting for %s.",
+                    interface_state.ifname, ifname)
+                continue
+            if interface_state.operstate not in [OPER_UP, OPER_DOWN]:
+                continue
+            prevCarrier = carrier
+            carrier = interface_state.operstate
+            # check for carrier down, up sequence
+            isVnetSwitch = (prevCarrier == OPER_DOWN) and (carrier == OPER_UP)
+            if isVnetSwitch:
+                LOG.debug("Media switch happened on %s.", ifname)
+                return
+        data = data[offset:]
+
+# vi: ts=4 expandtab
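The carrier/prevCarrier tracking in wait_for_media_disconnect_connect is a small two-state machine; the sketch below re-creates just that logic over a plain list of operstates, with the OPER_* values restated from this module (they mirror the kernel's IF_OPER_* constants) so it runs on its own:

```python
# Operstate values as defined in cloudinit.sources.helpers.netlink
OPER_DOWN, OPER_UP = 2, 6


def saw_disconnect_connect(operstates):
    """Return True once a DOWN is followed (eventually) by an UP."""
    carrier = prev_carrier = OPER_UP
    for state in operstates:
        if state not in (OPER_UP, OPER_DOWN):
            continue  # dormant/unknown/etc. never update the carrier
        prev_carrier, carrier = carrier, state
        if prev_carrier == OPER_DOWN and carrier == OPER_UP:
            return True
    return False


print(saw_disconnect_connect([OPER_UP, OPER_DOWN, 5, OPER_UP]))  # True
print(saw_disconnect_connect([OPER_UP, OPER_UP]))                # False
```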
diff --git a/cloudinit/sources/helpers/tests/test_netlink.py b/cloudinit/sources/helpers/tests/test_netlink.py
new file mode 100644
index 0000000..c2898a1
--- /dev/null
+++ b/cloudinit/sources/helpers/tests/test_netlink.py
@@ -0,0 +1,373 @@
+# Author: Tamilmani Manoharan <tamanoha@xxxxxxxxxxxxx>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit.tests.helpers import CiTestCase, mock
+import socket
+import struct
+import codecs
+from cloudinit.sources.helpers.netlink import (
+    NetlinkCreateSocketError, create_bound_netlink_socket, read_netlink_socket,
+    read_rta_oper_state, unpack_rta_attr, wait_for_media_disconnect_connect,
+    OPER_DOWN, OPER_UP, OPER_DORMANT, OPER_LOWERLAYERDOWN, OPER_NOTPRESENT,
+    OPER_TESTING, OPER_UNKNOWN, RTATTR_START_OFFSET, RTM_NEWLINK, RTM_SETLINK,
+    RTM_GETLINK, MAX_SIZE)
+
+
+def int_to_bytes(i):
+    '''Convert an integer to its byte representation, e.g. 1 to \x01'''
+    hex_value = '{0:x}'.format(i)
+    hex_value = '0' * (len(hex_value) % 2) + hex_value
+    return codecs.decode(hex_value, 'hex_codec')
+
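The codecs round-trip in int_to_bytes above most likely exists for Python 2 compatibility; on Python 3 alone, int.to_bytes produces the same result. A quick equivalence check (the helper is restated so the snippet is self-contained):

```python
import codecs


def int_to_bytes(i):
    # same helper as above: pad to an even digit count, then hex-decode
    hex_value = '{0:x}'.format(i)
    hex_value = '0' * (len(hex_value) % 2) + hex_value
    return codecs.decode(hex_value, 'hex_codec')


for i in (1, 6, 255, 256, 70000):
    width = max(1, (i.bit_length() + 7) // 8)
    assert int_to_bytes(i) == i.to_bytes(width, 'big')
print(int_to_bytes(6))  # b'\x06', the encoding of OPER_UP
```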
+
+class TestCreateBoundNetlinkSocket(CiTestCase):
+
+    @mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
+    def test_socket_error_on_create(self, m_socket):
+        '''create_bound_netlink_socket raises NetlinkCreateSocketError
+        when socket creation fails.'''
+        m_socket.side_effect = socket.error("Fake socket failure")
+        with self.assertRaises(NetlinkCreateSocketError) as ctx_mgr:
+            create_bound_netlink_socket()
+        self.assertEqual(
+            'Exception during netlink socket create: Fake socket failure',
+            str(ctx_mgr.exception))
+
+
+class TestReadNetlinkSocket(CiTestCase):
+
+    @mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
+    @mock.patch('cloudinit.sources.helpers.netlink.select.select')
+    def test_read_netlink_socket(self, m_select, m_socket):
+        '''read_netlink_socket able to receive data'''
+        data = 'netlinktest'
+        m_select.return_value = [m_socket], None, None
+        m_socket.recv.return_value = data
+        recv_data = read_netlink_socket(m_socket, 2)
+        m_select.assert_called_with([m_socket], [], [], 2)
+        m_socket.recv.assert_called_with(MAX_SIZE)
+        self.assertIsNotNone(recv_data)
+        self.assertEqual(recv_data, data)
+
+    @mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
+    @mock.patch('cloudinit.sources.helpers.netlink.select.select')
+    def test_netlink_read_timeout(self, m_select, m_socket):
+        '''read_netlink_socket should timeout if nothing to read'''
+        m_select.return_value = [], None, None
+        data = read_netlink_socket(m_socket, 1)
+        m_select.assert_called_with([m_socket], [], [], 1)
+        self.assertEqual(m_socket.recv.call_count, 0)
+        self.assertIsNone(data)
+
+    def test_read_invalid_socket(self):
+        '''read_netlink_socket raises assert error if socket is invalid'''
+        socket = None
+        with self.assertRaises(AssertionError) as context:
+            read_netlink_socket(socket, 1)
+        self.assertTrue('netlink socket is none' in str(context.exception))
+
+
+class TestParseNetlinkMessage(CiTestCase):
+
+    def test_read_rta_oper_state(self):
+        '''read_rta_oper_state parses ifname and operstate from a message'''
+        ifname = "eth0"
+        bytes = ifname.encode("utf-8")
+        buf = bytearray(48)
+        struct.pack_into("HH4sHHc", buf, RTATTR_START_OFFSET, 8, 3, bytes, 5,
+                         16, int_to_bytes(OPER_DOWN))
+        interface_state = read_rta_oper_state(buf)
+        self.assertEqual(interface_state.ifname, ifname)
+        self.assertEqual(interface_state.operstate, OPER_DOWN)
+
+    def test_read_none_data(self):
+        '''read_rta_oper_state raises assert error if data is none'''
+        data = None
+        with self.assertRaises(AssertionError) as context:
+            read_rta_oper_state(data)
+        self.assertIn('data is none', str(context.exception))
+
+    def test_read_invalid_rta_operstate_none(self):
+        '''read_rta_oper_state returns none if operstate is none'''
+        ifname = "eth0"
+        buf = bytearray(40)
+        bytes = ifname.encode("utf-8")
+        struct.pack_into("HH4s", buf, RTATTR_START_OFFSET, 8, 3, bytes)
+        interface_state = read_rta_oper_state(buf)
+        self.assertIsNone(interface_state)
+
+    def test_read_invalid_rta_ifname_none(self):
+        '''read_rta_oper_state returns none if ifname is none'''
+        buf = bytearray(40)
+        struct.pack_into("HHc", buf, RTATTR_START_OFFSET, 5, 16,
+                         int_to_bytes(OPER_DOWN))
+        interface_state = read_rta_oper_state(buf)
+        self.assertIsNone(interface_state)
+
+    def test_read_invalid_data_len(self):
+        '''raise assert error if data size is smaller than required size'''
+        buf = bytearray(32)
+        with self.assertRaises(AssertionError) as context:
+            read_rta_oper_state(buf)
+        self.assertTrue('length of data is smaller than RTATTR_START_OFFSET' in
+                        str(context.exception))
+
+    def test_unpack_rta_attr_none_data(self):
+        '''unpack_rta_attr raises assert error if data is none'''
+        data = None
+        with self.assertRaises(AssertionError) as context:
+            unpack_rta_attr(data, RTATTR_START_OFFSET)
+        self.assertTrue('data is none' in str(context.exception))
+
+    def test_unpack_rta_attr_invalid_offset(self):
+        '''unpack_rta_attr raises assert error if offset is invalid'''
+        data = bytearray(48)
+        with self.assertRaises(AssertionError) as context:
+            unpack_rta_attr(data, "offset")
+        self.assertTrue('offset is not integer' in str(context.exception))
+        with self.assertRaises(AssertionError) as context:
+            unpack_rta_attr(data, 31)
+        self.assertTrue('rta offset is less than expected length' in
+                        str(context.exception))
+
+
+@mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
+@mock.patch('cloudinit.sources.helpers.netlink.read_netlink_socket')
+class TestWaitForMediaDisconnectConnect(CiTestCase):
+    with_logs = True
+
+    def _media_switch_data(self, ifname, msg_type, operstate):
+        '''construct netlink data with specified fields'''
+        if ifname and operstate is not None:
+            data = bytearray(48)
+            bytes = ifname.encode("utf-8")
+            struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3,
+                             bytes, 5, 16, int_to_bytes(operstate))
+        elif ifname:
+            data = bytearray(40)
+            bytes = ifname.encode("utf-8")
+            struct.pack_into("HH4s", data, RTATTR_START_OFFSET, 8, 3, bytes)
+        elif operstate:
+            data = bytearray(40)
+            struct.pack_into("HHc", data, RTATTR_START_OFFSET, 5, 16,
+                             int_to_bytes(operstate))
+        struct.pack_into("=LHHLL", data, 0, len(data), msg_type, 0, 0, 0)
+        return data
+
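_media_switch_data packs a 16-byte nlmsghdr followed by two RTA attributes at RTATTR_START_OFFSET. A self-contained round-trip of that layout (the values 32 for RTATTR_START_OFFSET, 16 for RTM_NEWLINK, 3 for IFLA_IFNAME and 16 for IFLA_OPERSTATE are the module/kernel constants, restated here so the sketch runs on its own):

```python
import struct

RTATTR_START_OFFSET = 32   # nlmsghdr (16 bytes) + ifinfomsg (16 bytes)
RTM_NEWLINK, IFLA_IFNAME, IFLA_OPERSTATE = 16, 3, 16

buf = bytearray(48)
# nlmsghdr: length, type, flags, seq, pid
struct.pack_into("=LHHLL", buf, 0, len(buf), RTM_NEWLINK, 0, 0, 0)
# two RTA attributes: (len=8, IFLA_IFNAME, "eth0"), (len=5, IFLA_OPERSTATE, 2)
struct.pack_into("HH4sHHc", buf, RTATTR_START_OFFSET,
                 8, IFLA_IFNAME, b"eth0", 5, IFLA_OPERSTATE, b"\x02")

length, msg_type = struct.unpack_from("=LH", buf, 0)
rta_len, rta_type, name = struct.unpack_from("HH4s", buf, RTATTR_START_OFFSET)
assert (length, msg_type) == (48, RTM_NEWLINK)
assert (rta_len, rta_type, name) == (8, IFLA_IFNAME, b"eth0")
print(name.decode())  # eth0
```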
+    def test_media_down_up_scenario(self, m_read_netlink_socket,
+                                    m_socket):
+        '''Test for media down up sequence for required interface name'''
+        ifname = "eth0"
+        # construct data for Oper State down
+        data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
+        # construct data for Oper State up
+        data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
+        m_read_netlink_socket.side_effect = [data_op_down, data_op_up]
+        wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertEqual(m_read_netlink_socket.call_count, 2)
+
+    def test_wait_for_media_switch_diff_interface(self, m_read_netlink_socket,
+                                                  m_socket):
+        '''wait_for_media_disconnect_connect ignores unexpected interfaces.
+
+        The first two messages are for other interfaces and the last two
+        are for the expected interface, so the function exits only after
+        all four messages are read and the call count for
+        m_read_netlink_socket is 4.
+        '''
+        other_ifname = "eth1"
+        expected_ifname = "eth0"
+        data_op_down_eth1 = self._media_switch_data(
+                                other_ifname, RTM_NEWLINK, OPER_DOWN)
+        data_op_up_eth1 = self._media_switch_data(
+                                other_ifname, RTM_NEWLINK, OPER_UP)
+        data_op_down_eth0 = self._media_switch_data(
+                                expected_ifname, RTM_NEWLINK, OPER_DOWN)
+        data_op_up_eth0 = self._media_switch_data(
+                                expected_ifname, RTM_NEWLINK, OPER_UP)
+        m_read_netlink_socket.side_effect = [data_op_down_eth1,
+                                             data_op_up_eth1,
+                                             data_op_down_eth0,
+                                             data_op_up_eth0]
+        wait_for_media_disconnect_connect(m_socket, expected_ifname)
+        self.assertIn('Ignored netlink event on interface %s' % other_ifname,
+                      self.logs.getvalue())
+        self.assertEqual(m_read_netlink_socket.call_count, 4)
+
+    def test_invalid_msgtype_getlink(self, m_read_netlink_socket, m_socket):
+        '''wait_for_media_disconnect_connect ignores GETLINK events.
+
+        The first two messages are oper down and up events of type
+        RTM_GETLINK, which the netlink module ignores. The last two are
+        RTM_NEWLINK oper down and up events. All four messages are read,
+        so the call count for m_read_netlink_socket is 4.
+        '''
+        ifname = "eth0"
+        data_getlink_down = self._media_switch_data(
+                                    ifname, RTM_GETLINK, OPER_DOWN)
+        data_getlink_up = self._media_switch_data(
+                                    ifname, RTM_GETLINK, OPER_UP)
+        data_newlink_down = self._media_switch_data(
+                                    ifname, RTM_NEWLINK, OPER_DOWN)
+        data_newlink_up = self._media_switch_data(
+                                    ifname, RTM_NEWLINK, OPER_UP)
+        m_read_netlink_socket.side_effect = [data_getlink_down,
+                                             data_getlink_up,
+                                             data_newlink_down,
+                                             data_newlink_up]
+        wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertEqual(m_read_netlink_socket.call_count, 4)
+
+    def test_invalid_msgtype_setlink(self, m_read_netlink_socket, m_socket):
+        '''wait_for_media_disconnect_connect ignores SETLINK events.
+
+        The first two messages are oper down and up events of type
+        RTM_SETLINK, which are ignored. The 3rd and 4th messages are
+        RTM_NEWLINK down and up events; the function exits after the 4th
+        because it has seen the down -> up sequence. The call count for
+        m_read_netlink_socket is therefore 4: the two RTM_SETLINK messages
+        are skipped and the two extra RTM_NEWLINK messages are never read.
+        '''
+        ifname = "eth0"
+        data_setlink_down = self._media_switch_data(
+                                    ifname, RTM_SETLINK, OPER_DOWN)
+        data_setlink_up = self._media_switch_data(
+                                    ifname, RTM_SETLINK, OPER_UP)
+        data_newlink_down = self._media_switch_data(
+                                    ifname, RTM_NEWLINK, OPER_DOWN)
+        data_newlink_up = self._media_switch_data(
+                                    ifname, RTM_NEWLINK, OPER_UP)
+        m_read_netlink_socket.side_effect = [data_setlink_down,
+                                             data_setlink_up,
+                                             data_newlink_down,
+                                             data_newlink_up,
+                                             data_newlink_down,
+                                             data_newlink_up]
+        wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertEqual(m_read_netlink_socket.call_count, 4)
+
+    def test_netlink_invalid_switch_scenario(self, m_read_netlink_socket,
+                                             m_socket):
+        '''returns only if it receives UP event after a DOWN event'''
+        ifname = "eth0"
+        data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
+        data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
+        data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK,
+                                                  OPER_DORMANT)
+        data_op_notpresent = self._media_switch_data(ifname, RTM_NEWLINK,
+                                                     OPER_NOTPRESENT)
+        data_op_lowerdown = self._media_switch_data(ifname, RTM_NEWLINK,
+                                                    OPER_LOWERLAYERDOWN)
+        data_op_testing = self._media_switch_data(ifname, RTM_NEWLINK,
+                                                  OPER_TESTING)
+        data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK,
+                                                  OPER_UNKNOWN)
+        m_read_netlink_socket.side_effect = [data_op_up, data_op_up,
+                                             data_op_dormant, data_op_up,
+                                             data_op_notpresent, data_op_up,
+                                             data_op_lowerdown, data_op_up,
+                                             data_op_testing, data_op_up,
+                                             data_op_unknown, data_op_up,
+                                             data_op_down, data_op_up]
+        wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertEqual(m_read_netlink_socket.call_count, 14)
+
+    def test_netlink_valid_inbetween_transitions(self, m_read_netlink_socket,
+                                                 m_socket):
+        '''wait_for_media_disconnect_connect handles in between transitions'''
+        ifname = "eth0"
+        data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
+        data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
+        data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK,
+                                                  OPER_DORMANT)
+        data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK,
+                                                  OPER_UNKNOWN)
+        m_read_netlink_socket.side_effect = [data_op_down, data_op_dormant,
+                                             data_op_unknown, data_op_up]
+        wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertEqual(m_read_netlink_socket.call_count, 4)
+
+    def test_netlink_invalid_operstate(self, m_read_netlink_socket, m_socket):
+        '''wait_for_media_disconnect_connect should handle invalid operstates.
+
+        The function should not fail and return even if it receives invalid
+        operstates. It always should wait for down up sequence.
+        '''
+        ifname = "eth0"
+        data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
+        data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
+        data_op_invalid = self._media_switch_data(ifname, RTM_NEWLINK, 7)
+        m_read_netlink_socket.side_effect = [data_op_invalid, data_op_up,
+                                             data_op_down, data_op_invalid,
+                                             data_op_up]
+        wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertEqual(m_read_netlink_socket.call_count, 5)
+
+    def test_wait_invalid_socket(self, m_read_netlink_socket, m_socket):
+        '''wait_for_media_disconnect_connect asserts on None netlink socket'''
+        socket = None
+        ifname = "eth0"
+        with self.assertRaises(AssertionError) as context:
+            wait_for_media_disconnect_connect(socket, ifname)
+        self.assertTrue('netlink socket is none' in str(context.exception))
+
+    def test_wait_invalid_ifname(self, m_read_netlink_socket, m_socket):
+        '''wait_for_media_disconnect_connect asserts on None/empty ifname'''
+        ifname = None
+        with self.assertRaises(AssertionError) as context:
+            wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertTrue('interface name is none' in str(context.exception))
+        ifname = ""
+        with self.assertRaises(AssertionError) as context:
+            wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertTrue('interface name cannot be empty' in
+                        str(context.exception))
+
+    def test_wait_invalid_rta_attr(self, m_read_netlink_socket, m_socket):
+        '''wait_for_media_disconnect_connect handles invalid rta data'''
+        ifname = "eth0"
+        data_invalid1 = self._media_switch_data(None, RTM_NEWLINK, OPER_DOWN)
+        data_invalid2 = self._media_switch_data(ifname, RTM_NEWLINK, None)
+        data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
+        data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
+        m_read_netlink_socket.side_effect = [data_invalid1, data_invalid2,
+                                             data_op_down, data_op_up]
+        wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertEqual(m_read_netlink_socket.call_count, 4)
+
+    def test_read_multiple_netlink_msgs(self, m_read_netlink_socket, m_socket):
+        '''Read multiple messages in single receive call'''
+        ifname = "eth0"
+        bytes = ifname.encode("utf-8")
+        data = bytearray(96)
+        struct.pack_into("=LHHLL", data, 0, 48, RTM_NEWLINK, 0, 0, 0)
+        struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3,
+                         bytes, 5, 16, int_to_bytes(OPER_DOWN))
+        struct.pack_into("=LHHLL", data, 48, 48, RTM_NEWLINK, 0, 0, 0)
+        struct.pack_into("HH4sHHc", data, 48 + RTATTR_START_OFFSET, 8,
+                         3, bytes, 5, 16, int_to_bytes(OPER_UP))
+        m_read_netlink_socket.return_value = data
+        wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertEqual(m_read_netlink_socket.call_count, 1)
+
+    def test_read_partial_netlink_msgs(self, m_read_netlink_socket, m_socket):
+        '''Read partial messages in receive call'''
+        ifname = "eth0"
+        bytes = ifname.encode("utf-8")
+        data1 = bytearray(112)
+        data2 = bytearray(32)
+        struct.pack_into("=LHHLL", data1, 0, 48, RTM_NEWLINK, 0, 0, 0)
+        struct.pack_into("HH4sHHc", data1, RTATTR_START_OFFSET, 8, 3,
+                         bytes, 5, 16, int_to_bytes(OPER_DOWN))
+        struct.pack_into("=LHHLL", data1, 48, 48, RTM_NEWLINK, 0, 0, 0)
+        struct.pack_into("HH4sHHc", data1, 80, 8, 3, bytes, 5, 16,
+                         int_to_bytes(OPER_DOWN))
+        struct.pack_into("=LHHLL", data1, 96, 48, RTM_NEWLINK, 0, 0, 0)
+        struct.pack_into("HH4sHHc", data2, 16, 8, 3, bytes, 5, 16,
+                         int_to_bytes(OPER_UP))
+        m_read_netlink_socket.side_effect = [data1, data2]
+        wait_for_media_disconnect_connect(m_socket, ifname)
+        self.assertEqual(m_read_netlink_socket.call_count, 2)
diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py
index e1890e2..77cbf3b 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_nic.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py
@@ -165,9 +165,8 @@ class NicConfigurator(object):
 
         # Add routes if there is no primary nic
         if not self._primaryNic and v4.gateways:
-            route_list.extend(self.gen_ipv4_route(nic,
-                                                  v4.gateways,
-                                                  v4.netmask))
+            subnet.update(
+                {'routes': self.gen_ipv4_route(nic, v4.gateways, v4.netmask)})
 
         return ([subnet], route_list)
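The hunk above attaches the generated gateway routes to the subnet dict instead of the top-level route list. The resulting per-subnet shape in v1 network config looks roughly like this; addresses and the exact route keys are illustrative, not taken from the patch:

```python
# Illustrative only: the per-subnet shape after this change. Field names
# follow cloud-init's v1 network config; the values are made up.
subnet = {
    'type': 'static',
    'address': '10.20.87.154',
    'netmask': '255.255.252.0',
    'routes': [
        {'type': 'route', 'destination': '10.20.84.0/22',
         'gateway': '10.20.87.253'},
    ],
}
# gateway routes no longer land in the top-level list when there is
# no primary nic
route_list = []
print(len(subnet['routes']))  # 1
```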
 
diff --git a/cloudinit/temp_utils.py b/cloudinit/temp_utils.py
index c98a1b5..346276e 100644
--- a/cloudinit/temp_utils.py
+++ b/cloudinit/temp_utils.py
@@ -81,7 +81,7 @@ def ExtendedTemporaryFile(**kwargs):
 
 
 @contextlib.contextmanager
-def tempdir(**kwargs):
+def tempdir(rmtree_ignore_errors=False, **kwargs):
     # This seems like it was only added in python 3.2
     # Make it since its useful...
     # See: http://bugs.python.org/file12970/tempdir.patch
@@ -89,7 +89,7 @@ def tempdir(**kwargs):
     try:
         yield tdir
     finally:
-        shutil.rmtree(tdir)
+        shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors)
 
 
 def mkdtemp(**kwargs):
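The change above threads an ignore_errors flag through to shutil.rmtree. A minimal standalone re-creation of the modified context manager, showing why the flag matters when the body removes the directory itself:

```python
import contextlib
import os
import shutil
import tempfile


@contextlib.contextmanager
def tempdir(rmtree_ignore_errors=False, **kwargs):
    # mirrors cloudinit.temp_utils.tempdir after this change
    tdir = tempfile.mkdtemp(**kwargs)
    try:
        yield tdir
    finally:
        shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors)


with tempdir(rmtree_ignore_errors=True) as tdir:
    os.rmdir(tdir)  # directory already gone; cleanup no longer raises
print("cleanup suppressed")
```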
diff --git a/cloudinit/tests/test_dhclient_hook.py b/cloudinit/tests/test_dhclient_hook.py
new file mode 100644
index 0000000..7aab8dd
--- /dev/null
+++ b/cloudinit/tests/test_dhclient_hook.py
@@ -0,0 +1,105 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Tests for cloudinit.dhclient_hook."""
+
+from cloudinit import dhclient_hook as dhc
+from cloudinit.tests.helpers import CiTestCase, dir2dict, populate_dir
+
+import argparse
+import json
+import mock
+import os
+
+
+class TestDhclientHook(CiTestCase):
+
+    ex_env = {
+        'interface': 'eth0',
+        'new_dhcp_lease_time': '3600',
+        'new_host_name': 'x1',
+        'new_ip_address': '10.145.210.163',
+        'new_subnet_mask': '255.255.255.0',
+        'old_host_name': 'x1',
+        'PATH': '/usr/sbin:/usr/bin:/sbin:/bin',
+        'pid': '614',
+        'reason': 'BOUND',
+    }
+
+    # some older versions of dhclient put the same content,
+    # but in upper case with DHCP4_ instead of new_
+    ex_env_dhcp4 = {
+        'REASON': 'BOUND',
+        'DHCP4_dhcp_lease_time': '3600',
+        'DHCP4_host_name': 'x1',
+        'DHCP4_ip_address': '10.145.210.163',
+        'DHCP4_subnet_mask': '255.255.255.0',
+        'INTERFACE': 'eth0',
+        'PATH': '/usr/sbin:/usr/bin:/sbin:/bin',
+        'pid': '614',
+    }
+
+    expected = {
+        'dhcp_lease_time': '3600',
+        'host_name': 'x1',
+        'ip_address': '10.145.210.163',
+        'subnet_mask': '255.255.255.0'}
+
+    def setUp(self):
+        super(TestDhclientHook, self).setUp()
+        self.tmp = self.tmp_dir()
+
+    def test_handle_args(self):
+        """quick test of call to handle_args."""
+        nic = 'eth0'
+        args = argparse.Namespace(event=dhc.UP, interface=nic)
+        with mock.patch.dict("os.environ", clear=True, values=self.ex_env):
+            dhc.handle_args(dhc.NAME, args, data_d=self.tmp)
+        found = dir2dict(self.tmp + os.path.sep)
+        self.assertEqual([nic + ".json"], list(found.keys()))
+        self.assertEqual(self.expected, json.loads(found[nic + ".json"]))
+
+    def test_run_hook_up_creates_dir(self):
+        """If dir does not exist, run_hook should create it."""
+        subd = self.tmp_path("subdir", self.tmp)
+        nic = 'eth1'
+        dhc.run_hook(nic, 'up', data_d=subd, env=self.ex_env)
+        self.assertEqual(
+            set([nic + ".json"]), set(dir2dict(subd + os.path.sep)))
+
+    def test_run_hook_up(self):
+        """Test expected use of run_hook_up."""
+        nic = 'eth0'
+        dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env)
+        found = dir2dict(self.tmp + os.path.sep)
+        self.assertEqual([nic + ".json"], list(found.keys()))
+        self.assertEqual(self.expected, json.loads(found[nic + ".json"]))
+
+    def test_run_hook_up_dhcp4_prefix(self):
+        """Test run_hook filters correctly with older DHCP4_ data."""
+        nic = 'eth0'
+        dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env_dhcp4)
+        found = dir2dict(self.tmp + os.path.sep)
+        self.assertEqual([nic + ".json"], list(found.keys()))
+        self.assertEqual(self.expected, json.loads(found[nic + ".json"]))
+
+    def test_run_hook_down_deletes(self):
+        """down should delete the created json file."""
+        nic = 'eth1'
+        populate_dir(
+            self.tmp, {nic + ".json": "{'abcd'}", 'myfile.txt': 'text'})
+        dhc.run_hook(nic, 'down', data_d=self.tmp, env={'old_host_name': 'x1'})
+        self.assertEqual(
+            set(['myfile.txt']),
+            set(dir2dict(self.tmp + os.path.sep)))
+
+    def test_get_parser(self):
+        """Smoke test creation of get_parser."""
+        # cloud-init main uses 'action'.
+        event, interface = (dhc.UP, 'mynic0')
+        self.assertEqual(
+            argparse.Namespace(event=event, interface=interface,
+                               action=(dhc.NAME, dhc.handle_args)),
+            dhc.get_parser().parse_args([event, interface]))
+
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/tests/test_temp_utils.py b/cloudinit/tests/test_temp_utils.py
index ffbb92c..4a52ef8 100644
--- a/cloudinit/tests/test_temp_utils.py
+++ b/cloudinit/tests/test_temp_utils.py
@@ -2,8 +2,9 @@
 
 """Tests for cloudinit.temp_utils"""
 
-from cloudinit.temp_utils import mkdtemp, mkstemp
+from cloudinit.temp_utils import mkdtemp, mkstemp, tempdir
 from cloudinit.tests.helpers import CiTestCase, wrap_and_call
+import os
 
 
 class TestTempUtils(CiTestCase):
@@ -98,4 +99,19 @@ class TestTempUtils(CiTestCase):
         self.assertEqual('/fake/return/path', retval)
         self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls)
 
+    def test_tempdir_error_suppression(self):
+        """test tempdir suppresses errors during directory removal."""
+
+        with self.assertRaises(OSError):
+            with tempdir(prefix='cloud-init-dhcp-') as tdir:
+                os.rmdir(tdir)
+                # As a result, the directory is already gone,
+                # so shutil.rmtree should raise OSError
+
+        with tempdir(rmtree_ignore_errors=True,
+                     prefix='cloud-init-dhcp-') as tdir:
+            os.rmdir(tdir)
+            # Since the directory is already gone, shutil.rmtree would raise
+            # OSError, but we suppress that
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/tests/test_url_helper.py b/cloudinit/tests/test_url_helper.py
index 113249d..aa9f3ec 100644
--- a/cloudinit/tests/test_url_helper.py
+++ b/cloudinit/tests/test_url_helper.py
@@ -1,10 +1,12 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.url_helper import oauth_headers, read_file_or_url
+from cloudinit.url_helper import (
+    NOT_FOUND, UrlError, oauth_headers, read_file_or_url, retry_on_url_exc)
 from cloudinit.tests.helpers import CiTestCase, mock, skipIf
 from cloudinit import util
 
 import httpretty
+import requests
 
 
 try:
@@ -64,3 +66,24 @@ class TestReadFileOrUrl(CiTestCase):
         result = read_file_or_url(url)
         self.assertEqual(result.contents, data)
         self.assertEqual(str(result), data.decode('utf-8'))
+
+
+class TestRetryOnUrlExc(CiTestCase):
+
+    def test_do_not_retry_non_urlerror(self):
+        """When exception is not UrlError return False."""
+        myerror = IOError('something unexpected')
+        self.assertFalse(retry_on_url_exc(msg='', exc=myerror))
+
+    def test_perform_retries_on_not_found(self):
+        """When exception is UrlError with a 404 status code return True."""
+        myerror = UrlError(cause=RuntimeError(
+            'something was not found'), code=NOT_FOUND)
+        self.assertTrue(retry_on_url_exc(msg='', exc=myerror))
+
+    def test_perform_retries_on_timeout(self):
+        """When exception is a requests.Timout return True."""
+        myerror = UrlError(cause=requests.Timeout('something timed out'))
+        self.assertTrue(retry_on_url_exc(msg='', exc=myerror))
+
+# vi: ts=4 expandtab
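retry_on_url_exc is a predicate handed to readurl-style retry loops; the loop itself is not part of this hunk. A self-contained sketch of how such a predicate drives retries (the UrlError/NOT_FOUND definitions here are minimal stand-ins for cloudinit.url_helper, and fetch_with_retries is a hypothetical driver, not cloud-init's):

```python
NOT_FOUND = 404


class UrlError(IOError):
    # minimal stand-in for cloudinit.url_helper.UrlError
    def __init__(self, cause, code=None):
        super(UrlError, self).__init__(str(cause))
        self.code = code


def retry_on_url_exc(msg, exc):
    # retry on 404s (resource may not exist yet); the real helper also
    # retries UrlErrors whose cause is a requests.Timeout
    return isinstance(exc, UrlError) and exc.code == NOT_FOUND


def fetch_with_retries(fetch, retries=3, exc_cb=retry_on_url_exc):
    # hypothetical driver: keep retrying while the predicate approves
    for _ in range(retries - 1):
        try:
            return fetch()
        except UrlError as e:
            if not exc_cb('fetch failed', e):
                raise  # non-retryable: surface immediately
    return fetch()  # last attempt: let any exception propagate


attempts = []


def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise UrlError(RuntimeError('metadata not ready'), code=NOT_FOUND)
    return 'payload'


print(fetch_with_retries(flaky))  # payload
```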
diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
index 749a384..e3d2dba 100644
--- a/cloudinit/tests/test_util.py
+++ b/cloudinit/tests/test_util.py
@@ -18,25 +18,51 @@ MOUNT_INFO = [
 ]
 
 OS_RELEASE_SLES = dedent("""\
-    NAME="SLES"\n
-    VERSION="12-SP3"\n
-    VERSION_ID="12.3"\n
-    PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"\n
-    ID="sles"\nANSI_COLOR="0;32"\n
-    CPE_NAME="cpe:/o:suse:sles:12:sp3"\n
+    NAME="SLES"
+    VERSION="12-SP3"
+    VERSION_ID="12.3"
+    PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
+    ID="sles"
+    ANSI_COLOR="0;32"
+    CPE_NAME="cpe:/o:suse:sles:12:sp3"
 """)
 
 OS_RELEASE_OPENSUSE = dedent("""\
-NAME="openSUSE Leap"
-VERSION="42.3"
-ID=opensuse
-ID_LIKE="suse"
-VERSION_ID="42.3"
-PRETTY_NAME="openSUSE Leap 42.3"
-ANSI_COLOR="0;32"
-CPE_NAME="cpe:/o:opensuse:leap:42.3"
-BUG_REPORT_URL="https://bugs.opensuse.org";
-HOME_URL="https://www.opensuse.org/";
+    NAME="openSUSE Leap"
+    VERSION="42.3"
+    ID=opensuse
+    ID_LIKE="suse"
+    VERSION_ID="42.3"
+    PRETTY_NAME="openSUSE Leap 42.3"
+    ANSI_COLOR="0;32"
+    CPE_NAME="cpe:/o:opensuse:leap:42.3"
+    BUG_REPORT_URL="https://bugs.opensuse.org";
+    HOME_URL="https://www.opensuse.org/";
+""")
+
+OS_RELEASE_OPENSUSE_L15 = dedent("""\
+    NAME="openSUSE Leap"
+    VERSION="15.0"
+    ID="opensuse-leap"
+    ID_LIKE="suse opensuse"
+    VERSION_ID="15.0"
+    PRETTY_NAME="openSUSE Leap 15.0"
+    ANSI_COLOR="0;32"
+    CPE_NAME="cpe:/o:opensuse:leap:15.0"
+    BUG_REPORT_URL="https://bugs.opensuse.org"
+    HOME_URL="https://www.opensuse.org/"
+""")
+
+OS_RELEASE_OPENSUSE_TW = dedent("""\
+    NAME="openSUSE Tumbleweed"
+    ID="opensuse-tumbleweed"
+    ID_LIKE="opensuse suse"
+    VERSION_ID="20180920"
+    PRETTY_NAME="openSUSE Tumbleweed"
+    ANSI_COLOR="0;32"
+    CPE_NAME="cpe:/o:opensuse:tumbleweed:20180920"
+    BUG_REPORT_URL="https://bugs.opensuse.org"
+    HOME_URL="https://www.opensuse.org/"
 """)
 
 OS_RELEASE_CENTOS = dedent("""\
@@ -447,12 +473,35 @@ class TestGetLinuxDistro(CiTestCase):
 
     @mock.patch('cloudinit.util.load_file')
     def test_get_linux_opensuse(self, m_os_release, m_path_exists):
-        """Verify we get the correct name and machine arch on OpenSUSE."""
+        """Verify we get the correct name and machine arch on openSUSE
+           prior to openSUSE Leap 15.
+        """
         m_os_release.return_value = OS_RELEASE_OPENSUSE
         m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
         dist = util.get_linux_distro()
         self.assertEqual(('opensuse', '42.3', platform.machine()), dist)
 
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_opensuse_l15(self, m_os_release, m_path_exists):
+        """Verify we get the correct name and machine arch on openSUSE
+           for openSUSE Leap 15.0 and later.
+        """
+        m_os_release.return_value = OS_RELEASE_OPENSUSE_L15
+        m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('opensuse-leap', '15.0', platform.machine()), dist)
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_opensuse_tw(self, m_os_release, m_path_exists):
+        """Verify we get the correct name and machine arch on openSUSE
+           for openSUSE Tumbleweed.
+        """
+        m_os_release.return_value = OS_RELEASE_OPENSUSE_TW
+        m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(
+            ('opensuse-tumbleweed', '20180920', platform.machine()), dist)
+
     @mock.patch('platform.dist')
     def test_get_linux_distro_no_data(self, m_platform_dist, m_path_exists):
         """Verify we get no information if os-release does not exist"""
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 8067979..396d69a 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -199,7 +199,7 @@ def _get_ssl_args(url, ssl_details):
 def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
             headers=None, headers_cb=None, ssl_details=None,
             check_status=True, allow_redirects=True, exception_cb=None,
-            session=None, infinite=False):
+            session=None, infinite=False, log_req_resp=True):
     url = _cleanurl(url)
     req_args = {
         'url': url,
@@ -256,9 +256,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
                 continue
             filtered_req_args[k] = v
         try:
-            LOG.debug("[%s/%s] open '%s' with %s configuration", i,
-                      "infinite" if infinite else manual_tries, url,
-                      filtered_req_args)
+
+            if log_req_resp:
+                LOG.debug("[%s/%s] open '%s' with %s configuration", i,
+                          "infinite" if infinite else manual_tries, url,
+                          filtered_req_args)
 
             if session is None:
                 session = requests.Session()
@@ -294,8 +296,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
                 break
             if (infinite and sec_between > 0) or \
                (i + 1 < manual_tries and sec_between > 0):
-                LOG.debug("Please wait %s seconds while we wait to try again",
-                          sec_between)
+
+                if log_req_resp:
+                    LOG.debug(
+                        "Please wait %s seconds while we wait to try again",
+                        sec_between)
                 time.sleep(sec_between)
     if excps:
         raise excps[-1]
@@ -549,4 +554,18 @@ def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret,
     _uri, signed_headers, _body = client.sign(url)
     return signed_headers
 
+
+def retry_on_url_exc(msg, exc):
+    """readurl exception_cb that will retry on NOT_FOUND and Timeout.
+
+    Returns False to raise the exception from readurl, True to retry.
+    """
+    if not isinstance(exc, UrlError):
+        return False
+    if exc.code == NOT_FOUND:
+        return True
+    if exc.cause and isinstance(exc.cause, requests.Timeout):
+        return True
+    return False
+
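``retry_on_url_exc`` is intended as an ``exception_cb`` for ``readurl``. The predicate can be sketched standalone as below; the ``UrlError`` and ``Timeout`` classes here are minimal stand-ins for ``cloudinit.url_helper.UrlError`` and ``requests.Timeout`` so the example runs outside cloud-init:

```python
# Standalone sketch of the retry_on_url_exc predicate, with minimal
# stand-ins for cloudinit's UrlError and requests.Timeout.

NOT_FOUND = 404


class Timeout(Exception):
    """Stand-in for requests.Timeout."""


class UrlError(IOError):
    """Stand-in for cloudinit.url_helper.UrlError."""

    def __init__(self, cause, code=None):
        self.cause = cause
        self.code = code


def retry_on_url_exc(msg, exc):
    """Return True to retry the request, False to raise the exception."""
    if not isinstance(exc, UrlError):
        return False
    if exc.code == NOT_FOUND:
        return True
    if exc.cause and isinstance(exc.cause, Timeout):
        return True
    return False


print(retry_on_url_exc('', UrlError(RuntimeError('gone'), code=NOT_FOUND)))
print(retry_on_url_exc('', UrlError(Timeout('slow'))))
print(retry_on_url_exc('', IOError('unexpected')))  # plain IOError: no retry
```

With the real helper, the same behavior is reached by passing it as ``exception_cb=retry_on_url_exc`` to ``readurl``.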
 # vi: ts=4 expandtab
diff --git a/cloudinit/util.py b/cloudinit/util.py
index c67d6be..a8a232b 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -615,8 +615,8 @@ def get_linux_distro():
         distro_name = os_release.get('ID', '')
         distro_version = os_release.get('VERSION_ID', '')
         if 'sles' in distro_name or 'suse' in distro_name:
-            # RELEASE_BLOCKER: We will drop this sles ivergent behavior in
-            # before 18.4 so that get_linux_distro returns a named tuple
+            # RELEASE_BLOCKER: We will drop this sles divergent behavior in
+            # the future so that get_linux_distro returns a named tuple
             # which will include both version codename and architecture
             # on all distributions.
             flavor = platform.machine()
@@ -668,7 +668,8 @@ def system_info():
             var = 'ubuntu'
         elif linux_dist == 'redhat':
             var = 'rhel'
-        elif linux_dist in ('opensuse', 'sles'):
+        elif linux_dist in (
+                'opensuse', 'opensuse-tumbleweed', 'opensuse-leap', 'sles'):
             var = 'suse'
         else:
             var = 'linux'
@@ -2875,4 +2876,21 @@ def udevadm_settle(exists=None, timeout=None):
     return subp(settle_cmd)
 
 
+def get_proc_ppid(pid):
+    """
+    Return the parent pid of a process.
+    """
+    ppid = 0
+    contents = ''
+    try:
+        contents = load_file("/proc/%s/stat" % pid, quiet=True)
+    except IOError as e:
+        LOG.warning('Failed to load /proc/%s/stat. %s', pid, e)
+    if contents:
+        parts = contents.split(" ", 4)
+        # man proc says
+        #  ppid %d     (4) The PID of the parent.
+        ppid = int(parts[3])
+    return ppid
+
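The field-(4) parsing in ``get_proc_ppid`` can be exercised standalone. The stat line below is a fabricated example of ``/proc/<pid>/stat`` content; note the naive space-split would misparse a comm field that itself contains spaces:

```python
# Standalone sketch of the ppid extraction used by get_proc_ppid above.
# The stat line is a made-up example of /proc/<pid>/stat content.

def parse_ppid(stat_contents):
    # man proc: field (4) is the PID of the parent, so split off the
    # first four space-separated fields and take index 3.
    parts = stat_contents.split(" ", 4)
    return int(parts[3])


print(parse_ppid("1234 (bash) S 987 1234 987 34816 1234 4194304"))  # 987
```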
 # vi: ts=4 expandtab
diff --git a/cloudinit/version.py b/cloudinit/version.py
index 844a02e..a2c5d43 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "18.4"
+__VERSION__ = "18.5"
 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
 
 FEATURES = [
diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
index 1fef133..7513176 100644
--- a/config/cloud.cfg.tmpl
+++ b/config/cloud.cfg.tmpl
@@ -167,7 +167,17 @@ system_info:
            - http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/
            - http://%(region)s.clouds.archive.ubuntu.com/ubuntu/
          security: []
-     - arches: [armhf, armel, default]
+     - arches: [arm64, armel, armhf]
+       failsafe:
+         primary: http://ports.ubuntu.com/ubuntu-ports
+         security: http://ports.ubuntu.com/ubuntu-ports
+       search:
+         primary:
+           - http://%(ec2_region)s.ec2.ports.ubuntu.com/ubuntu-ports/
+           - http://%(availability_zone)s.clouds.ports.ubuntu.com/ubuntu-ports/
+           - http://%(region)s.clouds.ports.ubuntu.com/ubuntu-ports/
+         security: []
+     - arches: [default]
        failsafe:
          primary: http://ports.ubuntu.com/ubuntu-ports
          security: http://ports.ubuntu.com/ubuntu-ports
diff --git a/debian/changelog b/debian/changelog
index 117fd16..f5bb1fa 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,73 @@
+cloud-init (18.5-17-gd1a2fe73-0ubuntu1~18.10.1) cosmic; urgency=medium
+
+  * New upstream snapshot. (LP: #1813346)
+    - opennebula: exclude EPOCHREALTIME as known bash env variable with a
+      delta
+    - tox: fix disco httpretty dependencies for py37
+    - run-container: uncomment baseurl in yum.repos.d/*.repo when using a
+      proxy [Paride Legovini]
+    - lxd: install zfs-linux instead of zfs meta package
+      [Johnson Shi]
+    - net/sysconfig: do not write a resolv.conf file with only the header.
+      [Robert Schweikert]
+    - net: Make sysconfig renderer compatible with Network Manager.
+      [Eduardo Otubo]
+    - cc_set_passwords: Fix regex when parsing hashed passwords
+      [Marlin Cremers]
+    - net: Wait for dhclient to daemonize before reading lease file
+      [Jason Zions]
+    - [Azure] Increase retries when talking to Wireserver during metadata walk
+      [Jason Zions]
+    - Add documentation on adding a datasource.
+    - doc: clean up some datasource documentation.
+    - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo.
+    - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc]
+    - OVF: simplify expected return values of transport functions.
+    - Vmware: Add support for the com.vmware.guestInfo OVF transport.
+    - HACKING.rst: change contact info to Josh Powers
+    - Update to pylint 2.2.2.
+    - Release 18.5
+    - tests: add Disco release [Joshua Powers]
+    - net: render 'metric' values in per-subnet routes
+    - write_files: add support for appending to files. [James Baxter]
+    - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64.
+    - dhclient-hook: cleanups, tests and fix a bug on 'down' event.
+    - NoCloud: Allow top level 'network' key in network-config.
+    - ovf: Fix ovf network config generation gateway/routes
+    - azure: detect vnet migration via netlink media change event
+      [Tamilmani Manoharan]
+    - Azure: fix copy/paste error in error handling when reading azure ovf.
+      [Adam DePue]
+    - tests: fix incorrect order of mocks in test_handle_zfs_root.
+    - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen]
+    - OVF: identify iso9660 filesystems with label 'OVF ENV'.
+    - logs: collect-logs ignore instance-data-sensitive.json on non-root user
+    - net: Ephemeral*Network: add connectivity check via URL
+    - azure: _poll_imds only retry on 404. Fail on Timeout
+    - resizefs: Prefix discovered devpath with '/dev/' when path does not
+      exist [Igor Galić]
+    - azure: retry imds polling on requests.Timeout
+    - azure: Accept variation in error msg from mount for ntfs volumes
+      [Jason Zions]
+    - azure: fix regression introduced when persisting ephemeral dhcp lease
+      [Aswin Rajamannar]
+    - azure: add udev rules to create cloud-init Gen2 disk name symlinks
+    - tests: ec2 mock missing httpretty user-data and instance-identity routes
+    - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS
+    - azure: report ready to fabric after reprovision and reduce logging
+      [Aswin Rajamannar]
+    - query: better error when missing read permission on instance-data
+    - instance-data: fallback to instance-data.json if sensitive is absent.
+    - docs: remove colon from network v1 config example. [Tomer Cohen]
+    - Add cloud-id binary to packages for SUSE [Jason Zions]
+    - systemd: On SUSE ensure cloud-init.service runs before wicked
+      [Robert Schweikert]
+    - update detection of openSUSE variants [Robert Schweikert]
+    - azure: Add apply_network_config option to disable network from IMDS
+    - Correct spelling in an error message (udevadm). [Katie McLaughlin]
+
+ -- Chad Smith <chad.smith@xxxxxxxxxxxxx>  Sat, 26 Jan 2019 13:57:43 -0700
+
 cloud-init (18.4-7-g4652b196-0ubuntu1) cosmic; urgency=medium
 
   * New upstream snapshot.
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index e34f145..648c606 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -18,7 +18,7 @@ single way to access the different cloud systems methods to provide this data
 through the typical usage of subclasses.
 
 Any metadata processed by cloud-init's datasources is persisted as
-``/run/cloud0-init/instance-data.json``. Cloud-init provides tooling
+``/run/cloud-init/instance-data.json``. Cloud-init provides tooling
 to quickly introspect some of that data. See :ref:`instance_metadata` for
 more information.
 
@@ -80,6 +80,65 @@ The current interface that a datasource object must provide is the following:
     def get_package_mirror_info(self)
 
 
+Adding a new Datasource
+-----------------------
+The datasource objects have a few touch points with cloud-init.  If you
+are interested in adding a new datasource for your cloud platform you'll
+need to take care of the following items:
+
+* **Identify a mechanism for positive identification of the platform**:
+  It is good practice for a cloud platform to positively identify itself
+  to the guest.  This allows the guest to make educated decisions based
+  on the platform on which it is running. On the x86 and arm64 architectures,
+  many clouds identify themselves through DMI data.  For example,
+  Oracle's public cloud provides the string 'OracleCloud.com' in the
+  DMI chassis-asset field.
+
+  cloud-init enabled images produce a log file with details about the
+  platform.  Reading through this log in ``/run/cloud-init/ds-identify.log``
+  may provide the information needed to uniquely identify the platform.
+  If the log is not present, you can generate it by running
+  ``./tools/ds-identify`` from a source checkout, or the installed
+  ``/usr/lib/cloud-init/ds-identify``.
+
+  The mechanism used to identify the platform will be required for the
+  ds-identify and datasource module sections below.
+
+* **Add datasource module ``cloudinit/sources/DataSource<CloudPlatform>.py``**:
+  It is suggested that you start by copying one of the simpler datasources
+  such as DataSourceHetzner.
+
+* **Add tests for datasource module**:
+  Add a new file with some tests for the module to
+  ``cloudinit/sources/tests/test_<yourplatform>.py``.  For example, see
+  ``cloudinit/sources/tests/test_oracle.py``.
+
+* **Update ds-identify**:  On systemd systems, ds-identify is used to detect
+  which datasource should be enabled or if cloud-init should run at all.
+  You'll need to make changes to ``tools/ds-identify``.
+
+* **Add tests for ds-identify**: Add relevant tests in a new class to
+  ``tests/unittests/test_ds_identify.py``.  You can use ``TestOracle`` as an
+  example.
+
+* **Add your datasource name to the builtin list of datasources:** Add
+  your datasource module name to the end of the ``datasource_list``
+  entry in ``cloudinit/settings.py``.
+
+* **Add your cloud platform to apport collection prompts:** Update the
+  list of cloud platforms in ``cloudinit/apport.py``.  This list will be
+  provided to the user who invokes ``ubuntu-bug cloud-init``.
+
+* **Enable datasource by default in ubuntu packaging branches:**
+  Ubuntu packaging branches contain a template file
+  ``debian/cloud-init.templates`` that ultimately sets the default
+  datasource_list when installed via package.  This file needs updating when
+  the commit gets into a package.
+
+* **Add documentation for your datasource**: You should add a new
+  file in ``doc/rtd/topics/datasources/<cloudplatform>.rst``.
+
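The steps above can be sketched as a minimal datasource module. This is a standalone illustration, not code from this branch: the stub base class stands in for ``cloudinit.sources.DataSource``, and the platform name, DMI value, and metadata fields are invented.

```python
# Standalone sketch of a minimal datasource module. A stub base class
# stands in for cloudinit.sources.DataSource so the example runs on its
# own; the 'MyCloud' platform and its DMI value are hypothetical.


class DataSource:
    """Stand-in for cloudinit.sources.DataSource."""

    def __init__(self, sys_cfg, distro, paths):
        self.metadata = {}
        self.userdata_raw = None


class DataSourceMyCloud(DataSource):
    """Hypothetical datasource for a cloud platform named 'MyCloud'."""

    dsname = 'MyCloud'  # the name added to datasource_list in settings.py

    def _read_chassis_asset(self):
        # Stand-in for reading the DMI chassis-asset field, the positive
        # platform identification mechanism described above.
        return 'MyCloud.example'

    def _get_data(self):
        if self._read_chassis_asset() != 'MyCloud.example':
            return False  # not running on this platform
        # A real datasource would crawl the platform metadata service here.
        self.metadata = {'instance-id': 'i-abc123', 'local-hostname': 'vm1'}
        self.userdata_raw = b'#cloud-config\n{}'
        return True


ds = DataSourceMyCloud(sys_cfg={}, distro=None, paths=None)
print(ds._get_data(), ds.metadata['instance-id'])  # True i-abc123
```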
+
 Datasource Documentation
 ========================
 The following is a list of the implemented datasources.
diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst
index 559011e..720a475 100644
--- a/doc/rtd/topics/datasources/azure.rst
+++ b/doc/rtd/topics/datasources/azure.rst
@@ -23,18 +23,18 @@ information in json format to /run/cloud-init/dhclient.hook/<interface>.json.
 In order for cloud-init to leverage this method to find the endpoint, the
 cloud.cfg file must contain:
 
-datasource:
-  Azure:
-    set_hostname: False
-    agent_command: __builtin__
+.. sourcecode:: yaml
+
+  datasource:
+    Azure:
+      set_hostname: False
+      agent_command: __builtin__
 
 If those files are not available, the fallback is to check the leases file
 for the endpoint server (again option 245).
 
 You can define the path to the lease file with the 'dhclient_lease_file'
-configuration.  The default value is /var/lib/dhcp/dhclient.eth0.leases.
-
-    dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases
+configuration.
 
 walinuxagent
 ------------
@@ -57,6 +57,64 @@ in order to use waagent.conf with cloud-init, the following settings are recomme
    ResourceDisk.MountPoint=/mnt
 
 
+Configuration
+-------------
+The following configuration can be set for the datasource in system
+configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``).
+
+The settings that may be configured are:
+
+ * **agent_command**: Either __builtin__ (default) or a command to run to get
+   metadata. If __builtin__, get metadata from walinuxagent. Otherwise run the
+   provided command to obtain metadata.
+ * **apply_network_config**: Boolean set to True to use network configuration
+   described by Azure's IMDS endpoint instead of fallback network config of
+   dhcp on eth0. Default is True. For Ubuntu 16.04 or earlier, default is False.
+ * **data_dir**: Path used to read metadata files and write crawled data.
+ * **dhclient_lease_file**: The fallback lease file to source when looking for
+   custom DHCP option 245 from Azure fabric.
+ * **disk_aliases**: A dictionary defining which device paths should be
+   interpreted as ephemeral images. See cc_disk_setup module for more info.
+ * **hostname_bounce**: A dictionary describing Azure hostname bounce behavior
+   used to react to metadata changes. Azure will throttle ifup/down in some
+   cases after metadata has been updated to inform the dhcp server about
+   updated hostnames.  The '``hostname_bounce: command``' entry can be either
+   the literal string 'builtin' or a command to execute.  The command will be
+   invoked after the hostname is set, and will have the 'interface' in its
+   environment.  If ``set_hostname`` is not true, then ``hostname_bounce``
+   will be ignored.  An example might be:
+
+     ``command:  ["sh", "-c", "killall dhclient; dhclient $interface"]``
+
+ * **set_hostname**: Boolean set to True when we want Azure to set the hostname
+   based on metadata.
+
+Configuration for the datasource can also be read from a
+``dscfg`` entry in the ``LinuxProvisioningConfigurationSet``.  Content in
+the dscfg node is expected to be base64 encoded yaml content, which will be
+merged into the 'datasource: Azure' entry.
+
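The base64 encoding step for the dscfg payload can be sketched as follows; the yaml settings shown are illustrative only:

```python
# Sketch: producing a base64-encoded dscfg payload from yaml-formatted
# datasource configuration. The settings shown are illustrative.
import base64

yaml_cfg = "set_hostname: false\nagent_command: __builtin__\n"
encoded = base64.b64encode(yaml_cfg.encode('utf-8')).decode('ascii')
print(encoded)

# Decoding recovers the original yaml text that cloud-init would merge
# into the 'datasource: Azure' entry.
print(base64.b64decode(encoded).decode('utf-8') == yaml_cfg)
```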
+An example configuration with the default values is provided below:
+
+.. sourcecode:: yaml
+
+  datasource:
+   Azure:
+    agent_command: __builtin__
+    apply_network_config: true
+    data_dir: /var/lib/waagent
+    dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases
+    disk_aliases:
+        ephemeral0: /dev/disk/cloud/azure_resource
+    hostname_bounce:
+        interface: eth0
+        command: builtin
+        policy: true
+        hostname_command: hostname
+    set_hostname: true
+
+
 Userdata
 --------
 Userdata is provided to cloud-init inside the ovf-env.xml file. Cloud-init
@@ -97,37 +155,6 @@ Example:
    </LinuxProvisioningConfigurationSet>
  </wa:ProvisioningSection>
 
-Configuration
--------------
-Configuration for the datasource can be read from the system config's or set
-via the `dscfg` entry in the `LinuxProvisioningConfigurationSet`.  Content in
-dscfg node is expected to be base64 encoded yaml content, and it will be
-merged into the 'datasource: Azure' entry.
-
-The '``hostname_bounce: command``' entry can be either the literal string
-'builtin' or a command to execute.  The command will be invoked after the
-hostname is set, and will have the 'interface' in its environment.  If
-``set_hostname`` is not true, then ``hostname_bounce`` will be ignored.
-
-An example might be:
-  command:  ["sh", "-c", "killall dhclient; dhclient $interface"]
-
-.. code:: yaml
-
-  datasource:
-   agent_command
-   Azure:
-    agent_command: [service, walinuxagent, start]
-    set_hostname: True
-    hostname_bounce:
-     # the name of the interface to bounce
-     interface: eth0
-     # policy can be 'on', 'off' or 'force'
-     policy: on
-     # the method 'bounce' command.
-     command: "builtin"
-     hostname_command: "hostname"
-
 hostname
 --------
 When the user launches an instance, they provide a hostname for that instance.
diff --git a/doc/rtd/topics/network-config-format-v1.rst b/doc/rtd/topics/network-config-format-v1.rst
index 3b0148c..9723d68 100644
--- a/doc/rtd/topics/network-config-format-v1.rst
+++ b/doc/rtd/topics/network-config-format-v1.rst
@@ -384,7 +384,7 @@ Valid keys for ``subnets`` include the following:
 - ``address``: IPv4 or IPv6 address.  It may include CIDR netmask notation.
 - ``netmask``: IPv4 subnet mask in dotted format or CIDR notation.
 - ``gateway``: IPv4 address of the default gateway for this subnet.
-- ``dns_nameserver``: Specify a list of IPv4 dns server IPs to end up in
+- ``dns_nameservers``: Specify a list of IPv4 dns server IPs to end up in
   resolv.conf.
 - ``dns_search``: Specify a list of search paths to be included in
   resolv.conf.
diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
index a3a6d1e..6b2022b 100644
--- a/packages/redhat/cloud-init.spec.in
+++ b/packages/redhat/cloud-init.spec.in
@@ -191,6 +191,7 @@ fi
 
 # Program binaries
 %{_bindir}/cloud-init*
+%{_bindir}/cloud-id*
 
 # Docs
 %doc LICENSE ChangeLog TODO.rst requirements.txt
diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
index e781d74..26894b3 100644
--- a/packages/suse/cloud-init.spec.in
+++ b/packages/suse/cloud-init.spec.in
@@ -93,6 +93,7 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
 
 # Program binaries
 %{_bindir}/cloud-init*
+%{_bindir}/cloud-id*
 
 # systemd files
 /usr/lib/systemd/system-generators/*
diff --git a/systemd/cloud-init.service.tmpl b/systemd/cloud-init.service.tmpl
index b92e8ab..5cb0037 100644
--- a/systemd/cloud-init.service.tmpl
+++ b/systemd/cloud-init.service.tmpl
@@ -14,8 +14,7 @@ After=networking.service
 After=network.service
 {% endif %}
 {% if variant in ["suse"] %}
-Requires=wicked.service
-After=wicked.service
+Before=wicked.service
 # setting hostname via hostnamectl depends on dbus, which otherwise
 # would not be guaranteed at this point.
 After=dbus.service
diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml
index defae02..ec5da72 100644
--- a/tests/cloud_tests/releases.yaml
+++ b/tests/cloud_tests/releases.yaml
@@ -129,6 +129,22 @@ features:
 
 releases:
     # UBUNTU =================================================================
+    disco:
+        # EOL: Jan 2020
+        default:
+            enabled: true
+            release: disco
+            version: 19.04
+            os: ubuntu
+            feature_groups:
+                - base
+                - debian_base
+                - ubuntu_specific
+        lxd:
+            sstreams_server: https://cloud-images.ubuntu.com/daily
+            alias: disco
+            setup_overrides: null
+            override_templates: false
     cosmic:
         # EOL: Jul 2019
         default:
diff --git a/tests/unittests/test_builtin_handlers.py b/tests/unittests/test_builtin_handlers.py
index abe820e..b92ffc7 100644
--- a/tests/unittests/test_builtin_handlers.py
+++ b/tests/unittests/test_builtin_handlers.py
@@ -3,6 +3,7 @@
 """Tests of the built-in user data handlers."""
 
 import copy
+import errno
 import os
 import shutil
 import tempfile
@@ -202,6 +203,30 @@ class TestJinjaTemplatePartHandler(CiTestCase):
             os.path.exists(script_file),
             'Unexpected file created %s' % script_file)
 
+    def test_jinja_template_handle_errors_on_unreadable_instance_data(self):
+        """If instance-data is unreadable, raise an error from handle_part."""
+        script_handler = ShellScriptPartHandler(self.paths)
+        instance_json = os.path.join(self.run_dir, 'instance-data.json')
+        util.write_file(instance_json, util.json_dumps({}))
+        h = JinjaTemplatePartHandler(
+            self.paths, sub_handlers=[script_handler])
+        with mock.patch(self.mpath + 'load_file') as m_load:
+            with self.assertRaises(RuntimeError) as context_manager:
+                m_load.side_effect = OSError(errno.EACCES, 'Not allowed')
+                h.handle_part(
+                    data='data', ctype="!" + handlers.CONTENT_START,
+                    filename='part01',
+                    payload='## template: jinja  \n#!/bin/bash\necho himom',
+                    frequency='freq', headers='headers')
+        script_file = os.path.join(script_handler.script_dir, 'part01')
+        self.assertEqual(
+            'Cannot render jinja template vars. No read permission on'
+            " '{rdir}/instance-data.json'. Try sudo".format(rdir=self.run_dir),
+            str(context_manager.exception))
+        self.assertFalse(
+            os.path.exists(script_file),
+            'Unexpected file created %s' % script_file)
+
     @skipUnlessJinja()
     def test_jinja_template_handle_renders_jinja_content(self):
         """When present, render jinja variables from instance-data.json."""
diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
index 199d69b..d283f13 100644
--- a/tests/unittests/test_cli.py
+++ b/tests/unittests/test_cli.py
@@ -246,18 +246,18 @@ class TestCLI(test_helpers.FilesystemMockingTestCase):
         self.assertEqual('cc_ntp', parseargs.name)
         self.assertFalse(parseargs.report)
 
-    @mock.patch('cloudinit.cmd.main.dhclient_hook')
-    def test_dhclient_hook_subcommand(self, m_dhclient_hook):
+    @mock.patch('cloudinit.cmd.main.dhclient_hook.handle_args')
+    def test_dhclient_hook_subcommand(self, m_handle_args):
         """The subcommand 'dhclient-hook' calls dhclient_hook with args."""
-        self._call_main(['cloud-init', 'dhclient-hook', 'net_action', 'eth0'])
-        (name, parseargs) = m_dhclient_hook.call_args_list[0][0]
-        self.assertEqual('dhclient_hook', name)
+        self._call_main(['cloud-init', 'dhclient-hook', 'up', 'eth0'])
+        (name, parseargs) = m_handle_args.call_args_list[0][0]
+        self.assertEqual('dhclient-hook', name)
         self.assertEqual('dhclient-hook', parseargs.subcommand)
-        self.assertEqual('dhclient_hook', parseargs.action[0])
+        self.assertEqual('dhclient-hook', parseargs.action[0])
         self.assertFalse(parseargs.debug)
         self.assertFalse(parseargs.force)
-        self.assertEqual('net_action', parseargs.net_action)
-        self.assertEqual('eth0', parseargs.net_interface)
+        self.assertEqual('up', parseargs.event)
+        self.assertEqual('eth0', parseargs.interface)
 
     @mock.patch('cloudinit.cmd.main.main_features')
     def test_features_hook_subcommand(self, m_features):
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 0f4b7bf..417d86a 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -17,6 +17,7 @@ import crypt
 import httpretty
 import json
 import os
+import requests
 import stat
 import xml.etree.ElementTree as ET
 import yaml
@@ -184,6 +185,35 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
             "Crawl of Azure Instance Metadata Service (IMDS) took",  # log_time
             self.logs.getvalue())
 
+    @mock.patch('requests.Session.request')
+    @mock.patch('cloudinit.url_helper.time.sleep')
+    @mock.patch(MOCKPATH + 'net.is_up')
+    def test_get_metadata_from_imds_retries_on_timeout(
+            self, m_net_is_up, m_sleep, m_request):
+        """Retry IMDS network metadata on timeout errors."""
+
+        self.attempt = 0
+        m_request.side_effect = requests.Timeout('Fake Connection Timeout')
+
+        def retry_callback(request, uri, headers):
+            self.attempt += 1
+            raise requests.Timeout('Fake connection timeout')
+
+        httpretty.register_uri(
+            httpretty.GET,
+            dsaz.IMDS_URL + 'instance?api-version=2017-12-01',
+            body=retry_callback)
+
+        m_net_is_up.return_value = True  # skips dhcp
+
+        self.assertEqual({}, dsaz.get_metadata_from_imds('eth9', retries=3))
+
+        m_net_is_up.assert_called_with('eth9')
+        self.assertEqual([mock.call(1)]*3, m_sleep.call_args_list)
+        self.assertIn(
+            "Crawl of Azure Instance Metadata Service (IMDS) took",  # log_time
+            self.logs.getvalue())
+
 
 class TestAzureDataSource(CiTestCase):
 
@@ -256,7 +286,8 @@ scbus-1 on xpt0 bus 0
         ])
         return dsaz
 
-    def _get_ds(self, data, agent_command=None, distro=None):
+    def _get_ds(self, data, agent_command=None, distro=None,
+                apply_network=None):
 
         def dsdevs():
             return data.get('dsdevs', [])
@@ -312,6 +343,8 @@ scbus-1 on xpt0 bus 0
             data.get('sys_cfg', {}), distro=distro, paths=self.paths)
         if agent_command is not None:
             dsrc.ds_cfg['agent_command'] = agent_command
+        if apply_network is not None:
+            dsrc.ds_cfg['apply_network_config'] = apply_network
 
         return dsrc
 
@@ -434,14 +467,26 @@ fdescfs            /dev/fd          fdescfs rw              0 0
 
     def test_get_data_on_ubuntu_will_remove_network_scripts(self):
         """get_data will remove ubuntu net scripts on Ubuntu distro."""
+        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
         odata = {'HostName': "myhost", 'UserName': "myuser"}
         data = {'ovfcontent': construct_valid_ovf_env(data=odata),
-                'sys_cfg': {}}
+                'sys_cfg': sys_cfg}
 
         dsrc = self._get_ds(data, distro='ubuntu')
         dsrc.get_data()
         self.m_remove_ubuntu_network_scripts.assert_called_once_with()
 
+    def test_get_data_on_ubuntu_will_not_remove_network_scripts_disabled(self):
+        """When apply_network_config false, do not remove scripts on Ubuntu."""
+        sys_cfg = {'datasource': {'Azure': {'apply_network_config': False}}}
+        odata = {'HostName': "myhost", 'UserName': "myuser"}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': sys_cfg}
+
+        dsrc = self._get_ds(data, distro='ubuntu')
+        dsrc.get_data()
+        self.m_remove_ubuntu_network_scripts.assert_not_called()
+
     def test_crawl_metadata_returns_structured_data_and_caches_nothing(self):
         """Return all structured metadata and cache no class attributes."""
         yaml_cfg = "{agent_command: my_command}\n"
@@ -498,6 +543,61 @@ fdescfs            /dev/fd          fdescfs rw              0 0
             dsrc.crawl_metadata()
         self.assertEqual(str(cm.exception), error_msg)
 
+    @mock.patch('cloudinit.sources.DataSourceAzure.EphemeralDHCPv4')
+    @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
+    @mock.patch(
+        'cloudinit.sources.DataSourceAzure.DataSourceAzure._report_ready')
+    @mock.patch('cloudinit.sources.DataSourceAzure.DataSourceAzure._poll_imds')
+    def test_crawl_metadata_on_reprovision_reports_ready(
+                            self, poll_imds_func,
+                            report_ready_func,
+                            m_write, m_dhcp):
+        """If reprovisioning, report ready at the end"""
+        ovfenv = construct_valid_ovf_env(
+                            platform_settings={"PreprovisionedVm": "True"})
+
+        data = {'ovfcontent': ovfenv,
+                'sys_cfg': {}}
+        dsrc = self._get_ds(data)
+        poll_imds_func.return_value = ovfenv
+        dsrc.crawl_metadata()
+        self.assertEqual(1, report_ready_func.call_count)
+
+    @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
+    @mock.patch('cloudinit.sources.helpers.netlink.'
+                'wait_for_media_disconnect_connect')
+    @mock.patch(
+        'cloudinit.sources.DataSourceAzure.DataSourceAzure._report_ready')
+    @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    @mock.patch('cloudinit.sources.DataSourceAzure.readurl')
+    def test_crawl_metadata_on_reprovision_reports_ready_using_lease(
+                            self, m_readurl, m_dhcp,
+                            m_net, report_ready_func,
+                            m_media_switch, m_write):
+        """If reprovisioning, report ready using the obtained lease"""
+        ovfenv = construct_valid_ovf_env(
+                            platform_settings={"PreprovisionedVm": "True"})
+
+        data = {'ovfcontent': ovfenv,
+                'sys_cfg': {}}
+        dsrc = self._get_ds(data)
+
+        lease = {
+            'interface': 'eth9', 'fixed-address': '192.168.2.9',
+            'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
+            'unknown-245': '624c3620'}
+        m_dhcp.return_value = [lease]
+        m_media_switch.return_value = None
+
+        reprovision_ovfenv = construct_valid_ovf_env()
+        m_readurl.return_value = url_helper.StringResponse(
+            reprovision_ovfenv.encode('utf-8'))
+
+        dsrc.crawl_metadata()
+        self.assertEqual(2, report_ready_func.call_count)
+        report_ready_func.assert_called_with(lease=lease)
+
     def test_waagent_d_has_0700_perms(self):
         # we expect /var/lib/waagent to be created 0700
         dsrc = self._get_ds({'ovfcontent': construct_valid_ovf_env()})
@@ -523,8 +623,10 @@ fdescfs            /dev/fd          fdescfs rw              0 0
 
     def test_network_config_set_from_imds(self):
         """Datasource.network_config returns IMDS network data."""
+        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
         odata = {}
-        data = {'ovfcontent': construct_valid_ovf_env(data=odata)}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': sys_cfg}
         expected_network_config = {
             'ethernets': {
                 'eth0': {'set-name': 'eth0',
@@ -803,9 +905,10 @@ fdescfs            /dev/fd          fdescfs rw              0 0
     @mock.patch('cloudinit.net.generate_fallback_config')
     def test_imds_network_config(self, mock_fallback):
         """Network config is generated from IMDS network data when present."""
+        sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
         odata = {'HostName': "myhost", 'UserName': "myuser"}
         data = {'ovfcontent': construct_valid_ovf_env(data=odata),
-                'sys_cfg': {}}
+                'sys_cfg': sys_cfg}
 
         dsrc = self._get_ds(data)
         ret = dsrc.get_data()
@@ -825,6 +928,36 @@ fdescfs            /dev/fd          fdescfs rw              0 0
     @mock.patch('cloudinit.net.get_devicelist')
     @mock.patch('cloudinit.net.device_driver')
     @mock.patch('cloudinit.net.generate_fallback_config')
+    def test_imds_network_ignored_when_apply_network_config_false(
+            self, mock_fallback, mock_dd, mock_devlist, mock_get_mac):
+        """When apply_network_config is False, use fallback instead of IMDS."""
+        sys_cfg = {'datasource': {'Azure': {'apply_network_config': False}}}
+        odata = {'HostName': "myhost", 'UserName': "myuser"}
+        data = {'ovfcontent': construct_valid_ovf_env(data=odata),
+                'sys_cfg': sys_cfg}
+        fallback_config = {
+            'version': 1,
+            'config': [{
+                'type': 'physical', 'name': 'eth0',
+                'mac_address': '00:11:22:33:44:55',
+                'params': {'driver': 'hv_netsvc'},
+                'subnets': [{'type': 'dhcp'}],
+            }]
+        }
+        mock_fallback.return_value = fallback_config
+
+        mock_devlist.return_value = ['eth0']
+        mock_dd.return_value = ['hv_netsvc']
+        mock_get_mac.return_value = '00:11:22:33:44:55'
+
+        dsrc = self._get_ds(data)
+        self.assertTrue(dsrc.get_data())
+        self.assertEqual(dsrc.network_config, fallback_config)
+
+    @mock.patch('cloudinit.net.get_interface_mac')
+    @mock.patch('cloudinit.net.get_devicelist')
+    @mock.patch('cloudinit.net.device_driver')
+    @mock.patch('cloudinit.net.generate_fallback_config')
     def test_fallback_network_config(self, mock_fallback, mock_dd,
                                      mock_devlist, mock_get_mac):
         """On absent IMDS network data, generate network fallback config."""
@@ -1411,21 +1544,20 @@ class TestCanDevBeReformatted(CiTestCase):
                     '/dev/sda1': {'num': 1, 'fs': 'ntfs', 'files': []}
                 }}})
 
-        err = ("Unexpected error while running command.\n",
-               "Command: ['mount', '-o', 'ro,sync', '-t', 'auto', ",
-               "'/dev/sda1', '/fake-tmp/dir']\n"
-               "Exit code: 32\n"
-               "Reason: -\n"
-               "Stdout: -\n"
-               "Stderr: mount: unknown filesystem type 'ntfs'")
-        self.m_mount_cb.side_effect = MountFailedError(
-            'Failed mounting %s to %s due to: %s' %
-            ('/dev/sda', '/fake-tmp/dir', err))
-
-        value, msg = dsaz.can_dev_be_reformatted('/dev/sda',
-                                                 preserve_ntfs=False)
-        self.assertTrue(value)
-        self.assertIn('cannot mount NTFS, assuming', msg)
+        error_msgs = [
+            "Stderr: mount: unknown filesystem type 'ntfs'",  # RHEL
+            "Stderr: mount: /dev/sdb1: unknown filesystem type 'ntfs'"  # SLES
+        ]
+
+        for err_msg in error_msgs:
+            self.m_mount_cb.side_effect = MountFailedError(
+                "Failed mounting %s to %s due to: \nUnexpected.\n%s" %
+                ('/dev/sda', '/fake-tmp/dir', err_msg))
+
+            value, msg = dsaz.can_dev_be_reformatted('/dev/sda',
+                                                     preserve_ntfs=False)
+            self.assertTrue(value)
+            self.assertIn('cannot mount NTFS, assuming', msg)
 
     def test_never_destroy_ntfs_config_false(self):
         """Normally formattable situation with never_destroy_ntfs set."""
@@ -1547,6 +1679,8 @@ class TestPreprovisioningShouldReprovision(CiTestCase):
 
 @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
 @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+@mock.patch('cloudinit.sources.helpers.netlink.'
+            'wait_for_media_disconnect_connect')
 @mock.patch('requests.Session.request')
 @mock.patch(MOCKPATH + 'DataSourceAzure._report_ready')
 class TestPreprovisioningPollIMDS(CiTestCase):
@@ -1558,25 +1692,49 @@ class TestPreprovisioningPollIMDS(CiTestCase):
         self.paths = helpers.Paths({'cloud_dir': self.tmp})
         dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
 
-    @mock.patch(MOCKPATH + 'util.write_file')
-    def test_poll_imds_calls_report_ready(self, write_f, report_ready_func,
-                                          fake_resp, m_dhcp, m_net):
-        """The poll_imds will call report_ready after creating marker file."""
-        report_marker = self.tmp_path('report_marker', self.tmp)
+    @mock.patch(MOCKPATH + 'EphemeralDHCPv4')
+    def test_poll_imds_re_dhcp_on_timeout(self, m_dhcpv4, report_ready_func,
+                                          fake_resp, m_media_switch, m_dhcp,
+                                          m_net):
+        """The poll_imds will retry DHCP on IMDS timeout."""
+        report_file = self.tmp_path('report_marker', self.tmp)
         lease = {
             'interface': 'eth9', 'fixed-address': '192.168.2.9',
             'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
             'unknown-245': '624c3620'}
         m_dhcp.return_value = [lease]
+        m_media_switch.return_value = None
+        dhcp_ctx = mock.MagicMock(lease=lease)
+        dhcp_ctx.obtain_lease.return_value = lease
+        m_dhcpv4.return_value = dhcp_ctx
+
+        self.tries = 0
+
+        def fake_timeout_once(**kwargs):
+            self.tries += 1
+            if self.tries == 1:
+                raise requests.Timeout('Fake connection timeout')
+            elif self.tries == 2:
+                response = requests.Response()
+                response.status_code = 404
+                raise requests.exceptions.HTTPError(
+                    "fake 404", response=response)
+            # Third try should succeed and stop the retry/re-DHCP loop
+            return mock.MagicMock(status_code=200, text="good", content="good")
+
+        fake_resp.side_effect = fake_timeout_once
+
         dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
-        mock_path = (MOCKPATH + 'REPORTED_READY_MARKER_FILE')
-        with mock.patch(mock_path, report_marker):
+        with mock.patch(MOCKPATH + 'REPORTED_READY_MARKER_FILE', report_file):
             dsa._poll_imds()
         self.assertEqual(report_ready_func.call_count, 1)
         report_ready_func.assert_called_with(lease=lease)
+        self.assertEqual(3, m_dhcpv4.call_count, 'Expected 3 DHCP calls')
+        self.assertEqual(3, self.tries, 'Expected 3 total reads from IMDS')
 
-    def test_poll_imds_report_ready_false(self, report_ready_func,
-                                          fake_resp, m_dhcp, m_net):
+    def test_poll_imds_report_ready_false(self,
+                                          report_ready_func, fake_resp,
+                                          m_media_switch, m_dhcp, m_net):
         """The poll_imds should not call report ready
            when the flag is false."""
         report_file = self.tmp_path('report_marker', self.tmp)
@@ -1585,6 +1743,7 @@ class TestPreprovisioningPollIMDS(CiTestCase):
             'interface': 'eth9', 'fixed-address': '192.168.2.9',
             'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
             'unknown-245': '624c3620'}]
+        m_media_switch.return_value = None
         dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
         with mock.patch(MOCKPATH + 'REPORTED_READY_MARKER_FILE', report_file):
             dsa._poll_imds()
@@ -1594,6 +1753,8 @@ class TestPreprovisioningPollIMDS(CiTestCase):
 @mock.patch(MOCKPATH + 'util.subp')
 @mock.patch(MOCKPATH + 'util.write_file')
 @mock.patch(MOCKPATH + 'util.is_FreeBSD')
+@mock.patch('cloudinit.sources.helpers.netlink.'
+            'wait_for_media_disconnect_connect')
 @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
 @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
 @mock.patch('requests.Session.request')
@@ -1606,10 +1767,13 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
         self.paths = helpers.Paths({'cloud_dir': tmp})
         dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
 
-    def test_poll_imds_returns_ovf_env(self, fake_resp, m_dhcp, m_net,
+    def test_poll_imds_returns_ovf_env(self, fake_resp,
+                                       m_dhcp, m_net,
+                                       m_media_switch,
                                        m_is_bsd, write_f, subp):
         """The _poll_imds method should return the ovf_env.xml."""
         m_is_bsd.return_value = False
+        m_media_switch.return_value = None
         m_dhcp.return_value = [{
             'interface': 'eth9', 'fixed-address': '192.168.2.9',
             'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0'}]
@@ -1627,16 +1791,19 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
                                              'Cloud-Init/%s' % vs()
                                              }, method='GET', timeout=1,
                                     url=full_url)])
-        self.assertEqual(m_dhcp.call_count, 1)
+        self.assertEqual(m_dhcp.call_count, 2)
         m_net.assert_any_call(
             broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
             prefix_or_mask='255.255.255.0', router='192.168.2.1')
-        self.assertEqual(m_net.call_count, 1)
+        self.assertEqual(m_net.call_count, 2)
 
-    def test__reprovision_calls__poll_imds(self, fake_resp, m_dhcp, m_net,
+    def test__reprovision_calls__poll_imds(self, fake_resp,
+                                           m_dhcp, m_net,
+                                           m_media_switch,
                                            m_is_bsd, write_f, subp):
         """The _reprovision method should call poll IMDS."""
         m_is_bsd.return_value = False
+        m_media_switch.return_value = None
         m_dhcp.return_value = [{
             'interface': 'eth9', 'fixed-address': '192.168.2.9',
             'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
@@ -1660,11 +1827,11 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
                                              'User-Agent':
                                              'Cloud-Init/%s' % vs()},
                                     method='GET', timeout=1, url=full_url)])
-        self.assertEqual(m_dhcp.call_count, 1)
+        self.assertEqual(m_dhcp.call_count, 2)
         m_net.assert_any_call(
             broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
             prefix_or_mask='255.255.255.0', router='192.168.2.1')
-        self.assertEqual(m_net.call_count, 1)
+        self.assertEqual(m_net.call_count, 2)
 
 
 class TestRemoveUbuntuNetworkConfigScripts(CiTestCase):
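The Azure tests above repeatedly hand `_poll_imds` a DHCP lease whose `'unknown-245'` option carries a hex-encoded value ('624c3620'). As a rough sketch of why that value is interesting (the helper name here is illustrative, not cloud-init's actual API; the real decoding lives in cloud-init's Azure helper modules and handles more input formats), the hex string decodes to a dotted-quad endpoint address:

```python
def endpoint_from_lease_option(hex_value):
    """Decode a DHCP option-245 hex string into a dotted-quad IP.

    Illustrative helper only: takes four hex octet pairs and joins
    their decimal values with dots.
    """
    octets = [str(int(hex_value[i:i + 2], 16)) for i in range(0, 8, 2)]
    return '.'.join(octets)

# The '624c3620' value used throughout these tests decodes to '98.76.54.32'.
endpoint = endpoint_from_lease_option('624c3620')
```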
diff --git a/tests/unittests/test_datasource/test_ec2.py b/tests/unittests/test_datasource/test_ec2.py
index 9f81255..1a5956d 100644
--- a/tests/unittests/test_datasource/test_ec2.py
+++ b/tests/unittests/test_datasource/test_ec2.py
@@ -211,9 +211,9 @@ class TestEc2(test_helpers.HttprettyTestCase):
         self.metadata_addr = self.datasource.metadata_urls[0]
         self.tmp = self.tmp_dir()
 
-    def data_url(self, version):
+    def data_url(self, version, data_item='meta-data'):
         """Return a metadata url based on the version provided."""
-        return '/'.join([self.metadata_addr, version, 'meta-data', ''])
+        return '/'.join([self.metadata_addr, version, data_item])
 
     def _patch_add_cleanup(self, mpath, *args, **kwargs):
         p = mock.patch(mpath, *args, **kwargs)
@@ -238,10 +238,18 @@ class TestEc2(test_helpers.HttprettyTestCase):
             all_versions = (
                 [ds.min_metadata_version] + ds.extended_metadata_versions)
             for version in all_versions:
-                metadata_url = self.data_url(version)
+                metadata_url = self.data_url(version) + '/'
                 if version == md_version:
                     # Register all metadata for desired version
-                    register_mock_metaserver(metadata_url, md)
+                    register_mock_metaserver(
+                        metadata_url, md.get('md', DEFAULT_METADATA))
+                    userdata_url = self.data_url(
+                        version, data_item='user-data')
+                    register_mock_metaserver(userdata_url, md.get('ud', ''))
+                    identity_url = self.data_url(
+                        version, data_item='dynamic/instance-identity')
+                    register_mock_metaserver(
+                        identity_url, md.get('id', DYNAMIC_METADATA))
                 else:
                     instance_id_url = metadata_url + 'instance-id'
                     if version == ds.min_metadata_version:
@@ -261,7 +269,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         find_fallback_path = (
             'cloudinit.sources.DataSourceEc2.net.find_fallback_nic')
         with mock.patch(find_fallback_path) as m_find_fallback:
@@ -293,7 +301,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         find_fallback_path = (
             'cloudinit.sources.DataSourceEc2.net.find_fallback_nic')
         with mock.patch(find_fallback_path) as m_find_fallback:
@@ -322,7 +330,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         ds._network_config = {'cached': 'data'}
         self.assertEqual({'cached': 'data'}, ds.network_config)
 
@@ -338,7 +346,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
-            md=old_metadata)
+            md={'md': old_metadata})
         self.assertTrue(ds.get_data())
         # Provide new revision of metadata that contains network data
         register_mock_metaserver(
@@ -372,7 +380,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         # Mock 404s on all versions except latest
         all_versions = (
             [ds.min_metadata_version] + ds.extended_metadata_versions)
@@ -399,7 +407,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         ret = ds.get_data()
         self.assertTrue(ret)
         self.assertEqual(0, m_dhcp.call_count)
@@ -412,7 +420,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         ret = ds.get_data()
         self.assertTrue(ret)
 
@@ -422,7 +430,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data={'uuid': uuid, 'uuid_source': 'dmi', 'serial': ''},
             sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         ret = ds.get_data()
         self.assertFalse(ret)
 
@@ -432,7 +440,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data={'uuid': uuid, 'uuid_source': 'dmi', 'serial': ''},
             sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         ret = ds.get_data()
         self.assertTrue(ret)
 
@@ -442,7 +450,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         platform_attrs = [
             attr for attr in ec2.CloudNames.__dict__.keys()
             if not attr.startswith('__')]
@@ -469,7 +477,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
         ret = ds.get_data()
         self.assertFalse(ret)
         self.assertIn(
@@ -499,7 +507,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
         ds = self._setup_ds(
             platform_data=self.valid_platform_data,
             sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
-            md=DEFAULT_METADATA)
+            md={'md': DEFAULT_METADATA})
 
         ret = ds.get_data()
         self.assertTrue(ret)
diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py
index b6468b6..3429272 100644
--- a/tests/unittests/test_datasource/test_nocloud.py
+++ b/tests/unittests/test_datasource/test_nocloud.py
@@ -1,7 +1,10 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 from cloudinit import helpers
-from cloudinit.sources import DataSourceNoCloud
+from cloudinit.sources.DataSourceNoCloud import (
+    DataSourceNoCloud as dsNoCloud,
+    _maybe_remove_top_network,
+    parse_cmdline_data)
 from cloudinit import util
 from cloudinit.tests.helpers import CiTestCase, populate_dir, mock, ExitStack
 
@@ -40,9 +43,7 @@ class TestNoCloudDataSource(CiTestCase):
             'datasource': {'NoCloud': {'fs_label': None}}
         }
 
-        ds = DataSourceNoCloud.DataSourceNoCloud
-
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         ret = dsrc.get_data()
         self.assertEqual(dsrc.userdata_raw, ud)
         self.assertEqual(dsrc.metadata, md)
@@ -63,9 +64,7 @@ class TestNoCloudDataSource(CiTestCase):
             'datasource': {'NoCloud': {'fs_label': None}}
         }
 
-        ds = DataSourceNoCloud.DataSourceNoCloud
-
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         self.assertTrue(dsrc.get_data())
         self.assertEqual(dsrc.platform_type, 'nocloud')
         self.assertEqual(
@@ -73,8 +72,6 @@ class TestNoCloudDataSource(CiTestCase):
 
     def test_fs_label(self, m_is_lxd):
         # find_devs_with should not be called if fs_label is None
-        ds = DataSourceNoCloud.DataSourceNoCloud
-
         class PsuedoException(Exception):
             pass
 
@@ -84,12 +81,12 @@ class TestNoCloudDataSource(CiTestCase):
 
         # by default, NoCloud should search for filesystems by label
         sys_cfg = {'datasource': {'NoCloud': {}}}
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         self.assertRaises(PsuedoException, dsrc.get_data)
 
         # but disabling searching should just end up with None found
         sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         ret = dsrc.get_data()
         self.assertFalse(ret)
 
@@ -97,13 +94,10 @@ class TestNoCloudDataSource(CiTestCase):
         # no source should be found if no cmdline, config, and fs_label=None
         sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
 
-        ds = DataSourceNoCloud.DataSourceNoCloud
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         self.assertFalse(dsrc.get_data())
 
     def test_seed_in_config(self, m_is_lxd):
-        ds = DataSourceNoCloud.DataSourceNoCloud
-
         data = {
             'fs_label': None,
             'meta-data': yaml.safe_dump({'instance-id': 'IID'}),
@@ -111,7 +105,7 @@ class TestNoCloudDataSource(CiTestCase):
         }
 
         sys_cfg = {'datasource': {'NoCloud': data}}
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         ret = dsrc.get_data()
         self.assertEqual(dsrc.userdata_raw, b"USER_DATA_RAW")
         self.assertEqual(dsrc.metadata.get('instance-id'), 'IID')
@@ -130,9 +124,7 @@ class TestNoCloudDataSource(CiTestCase):
             'datasource': {'NoCloud': {'fs_label': None}}
         }
 
-        ds = DataSourceNoCloud.DataSourceNoCloud
-
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         ret = dsrc.get_data()
         self.assertEqual(dsrc.userdata_raw, ud)
         self.assertEqual(dsrc.metadata, md)
@@ -145,9 +137,7 @@ class TestNoCloudDataSource(CiTestCase):
 
         sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
 
-        ds = DataSourceNoCloud.DataSourceNoCloud
-
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         ret = dsrc.get_data()
         self.assertEqual(dsrc.userdata_raw, b"ud")
         self.assertFalse(dsrc.vendordata)
@@ -174,9 +164,7 @@ class TestNoCloudDataSource(CiTestCase):
 
         sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
 
-        ds = DataSourceNoCloud.DataSourceNoCloud
-
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         ret = dsrc.get_data()
         self.assertTrue(ret)
         # very simple check just for the strings above
@@ -195,9 +183,23 @@ class TestNoCloudDataSource(CiTestCase):
 
         sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
 
-        ds = DataSourceNoCloud.DataSourceNoCloud
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        ret = dsrc.get_data()
+        self.assertTrue(ret)
+        self.assertEqual(netconf, dsrc.network_config)
+
+    def test_metadata_network_config_with_toplevel_network(self, m_is_lxd):
+        """network-config may have 'network' top level key."""
+        netconf = {'config': 'disabled'}
+        populate_dir(
+            os.path.join(self.paths.seed_dir, "nocloud"),
+            {'user-data': b"ud",
+             'meta-data': "instance-id: IID\n",
+             'network-config': yaml.dump({'network': netconf}) + "\n"})
+
+        sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
 
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         ret = dsrc.get_data()
         self.assertTrue(ret)
         self.assertEqual(netconf, dsrc.network_config)
@@ -228,9 +230,7 @@ class TestNoCloudDataSource(CiTestCase):
 
         sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
 
-        ds = DataSourceNoCloud.DataSourceNoCloud
-
-        dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
+        dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
         ret = dsrc.get_data()
         self.assertTrue(ret)
         self.assertEqual(netconf, dsrc.network_config)
@@ -258,8 +258,7 @@ class TestParseCommandLineData(CiTestCase):
         for (fmt, expected) in pairs:
             fill = {}
             cmdline = fmt % {'ds_id': ds_id}
-            ret = DataSourceNoCloud.parse_cmdline_data(ds_id=ds_id, fill=fill,
-                                                       cmdline=cmdline)
+            ret = parse_cmdline_data(ds_id=ds_id, fill=fill, cmdline=cmdline)
             self.assertEqual(expected, fill)
             self.assertTrue(ret)
 
@@ -276,10 +275,43 @@ class TestParseCommandLineData(CiTestCase):
 
         for cmdline in cmdlines:
             fill = {}
-            ret = DataSourceNoCloud.parse_cmdline_data(ds_id=ds_id, fill=fill,
-                                                       cmdline=cmdline)
+            ret = parse_cmdline_data(ds_id=ds_id, fill=fill, cmdline=cmdline)
             self.assertEqual(fill, {})
             self.assertFalse(ret)
 
 
+class TestMaybeRemoveToplevelNetwork(CiTestCase):
+    """test _maybe_remove_top_network function."""
+    basecfg = [{'type': 'physical', 'name': 'interface0',
+                'subnets': [{'type': 'dhcp'}]}]
+
+    def test_should_remove_safely(self):
+        mcfg = {'config': self.basecfg, 'version': 1}
+        self.assertEqual(mcfg, _maybe_remove_top_network({'network': mcfg}))
+
+    def test_no_remove_if_other_keys(self):
+        """should not shift if other keys at top level."""
+        mcfg = {'network': {'config': self.basecfg, 'version': 1},
+                'unknown_keyname': 'keyval'}
+        self.assertEqual(mcfg, _maybe_remove_top_network(mcfg))
+
+    def test_no_remove_if_non_dict(self):
+        """should not shift if not a dict."""
+        mcfg = {'network': '"content here'}
+        self.assertEqual(mcfg, _maybe_remove_top_network(mcfg))
+
+    def test_no_remove_if_missing_config_or_version(self):
+        """should not shift unless network entry has config and version."""
+        mcfg = {'network': {'config': self.basecfg}}
+        self.assertEqual(mcfg, _maybe_remove_top_network(mcfg))
+
+        mcfg = {'network': {'version': 1}}
+        self.assertEqual(mcfg, _maybe_remove_top_network(mcfg))
+
+    def test_remove_with_config_disabled(self):
+        """network/config=disabled should be shifted."""
+        mcfg = {'config': 'disabled'}
+        self.assertEqual(mcfg, _maybe_remove_top_network({'network': mcfg}))
+
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_datasource/test_ovf.py b/tests/unittests/test_datasource/test_ovf.py
index a226c03..349d54c 100644
--- a/tests/unittests/test_datasource/test_ovf.py
+++ b/tests/unittests/test_datasource/test_ovf.py
@@ -17,6 +17,10 @@ from cloudinit.sources import DataSourceOVF as dsovf
 from cloudinit.sources.helpers.vmware.imc.config_custom_script import (
     CustomScriptNotFound)
 
+MPATH = 'cloudinit.sources.DataSourceOVF.'
+
+NOT_FOUND = None
+
 OVF_ENV_CONTENT = """<?xml version="1.0" encoding="UTF-8"?>
 <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1";
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
@@ -125,8 +129,8 @@ class TestDatasourceOVF(CiTestCase):
         retcode = wrap_and_call(
             'cloudinit.sources.DataSourceOVF',
             {'util.read_dmi_data': None,
-             'transport_iso9660': (False, None, None),
-             'transport_vmware_guestd': (False, None, None)},
+             'transport_iso9660': NOT_FOUND,
+             'transport_vmware_guestinfo': NOT_FOUND},
             ds.get_data)
         self.assertFalse(retcode, 'Expected False return from ds.get_data')
         self.assertIn(
@@ -141,8 +145,8 @@ class TestDatasourceOVF(CiTestCase):
         retcode = wrap_and_call(
             'cloudinit.sources.DataSourceOVF',
             {'util.read_dmi_data': 'vmware',
-             'transport_iso9660': (False, None, None),
-             'transport_vmware_guestd': (False, None, None)},
+             'transport_iso9660': NOT_FOUND,
+             'transport_vmware_guestinfo': NOT_FOUND},
             ds.get_data)
         self.assertFalse(retcode, 'Expected False return from ds.get_data')
         self.assertIn(
@@ -189,12 +193,11 @@ class TestDatasourceOVF(CiTestCase):
 
         self.assertEqual('ovf', ds.cloud_name)
         self.assertEqual('ovf', ds.platform_type)
-        MPATH = 'cloudinit.sources.DataSourceOVF.'
         with mock.patch(MPATH + 'util.read_dmi_data', return_value='!VMware'):
-            with mock.patch(MPATH + 'transport_vmware_guestd') as m_guestd:
+            with mock.patch(MPATH + 'transport_vmware_guestinfo') as m_guestd:
                 with mock.patch(MPATH + 'transport_iso9660') as m_iso9660:
-                    m_iso9660.return_value = (None, 'ignored', 'ignored')
-                    m_guestd.return_value = (None, 'ignored', 'ignored')
+                    m_iso9660.return_value = NOT_FOUND
+                    m_guestd.return_value = NOT_FOUND
                     self.assertTrue(ds.get_data())
                     self.assertEqual(
                         'ovf (%s/seed/ovf-env.xml)' % self.tdir,
@@ -211,12 +214,11 @@ class TestDatasourceOVF(CiTestCase):
 
         self.assertEqual('ovf', ds.cloud_name)
         self.assertEqual('ovf', ds.platform_type)
-        MPATH = 'cloudinit.sources.DataSourceOVF.'
         with mock.patch(MPATH + 'util.read_dmi_data', return_value='VMWare'):
-            with mock.patch(MPATH + 'transport_vmware_guestd') as m_guestd:
+            with mock.patch(MPATH + 'transport_vmware_guestinfo') as m_guestd:
                 with mock.patch(MPATH + 'transport_iso9660') as m_iso9660:
-                    m_iso9660.return_value = (None, 'ignored', 'ignored')
-                    m_guestd.return_value = (None, 'ignored', 'ignored')
+                    m_iso9660.return_value = NOT_FOUND
+                    m_guestd.return_value = NOT_FOUND
                     self.assertTrue(ds.get_data())
                     self.assertEqual(
                         'vmware (%s/seed/ovf-env.xml)' % self.tdir,
@@ -246,10 +248,7 @@ class TestTransportIso9660(CiTestCase):
         }
         self.m_mounts.return_value = mounts
 
-        (contents, fullp, fname) = dsovf.transport_iso9660()
-        self.assertEqual("mycontent", contents)
-        self.assertEqual("/dev/sr9", fullp)
-        self.assertEqual("myfile", fname)
+        self.assertEqual("mycontent", dsovf.transport_iso9660())
 
     def test_find_already_mounted_skips_non_iso9660(self):
         """Check we call get_ovf_env ignoring non iso9660"""
@@ -272,10 +271,7 @@ class TestTransportIso9660(CiTestCase):
         self.m_mounts.return_value = (
             OrderedDict(sorted(mounts.items(), key=lambda t: t[0])))
 
-        (contents, fullp, fname) = dsovf.transport_iso9660()
-        self.assertEqual("mycontent", contents)
-        self.assertEqual("/dev/xvdc", fullp)
-        self.assertEqual("myfile", fname)
+        self.assertEqual("mycontent", dsovf.transport_iso9660())
 
     def test_find_already_mounted_matches_kname(self):
         """Check we don't regex match on basename of the device"""
@@ -289,10 +285,7 @@ class TestTransportIso9660(CiTestCase):
         # we're skipping an entry which fails to match.
         self.m_mounts.return_value = mounts
 
-        (contents, fullp, fname) = dsovf.transport_iso9660()
-        self.assertEqual(False, contents)
-        self.assertIsNone(fullp)
-        self.assertIsNone(fname)
+        self.assertEqual(NOT_FOUND, dsovf.transport_iso9660())
 
     def test_mount_cb_called_on_blkdevs_with_iso9660(self):
         """Check we call mount_cb on blockdevs with iso9660 only"""
@@ -300,13 +293,9 @@ class TestTransportIso9660(CiTestCase):
         self.m_find_devs_with.return_value = ['/dev/sr0']
         self.m_mount_cb.return_value = ("myfile", "mycontent")
 
-        (contents, fullp, fname) = dsovf.transport_iso9660()
-
+        self.assertEqual("mycontent", dsovf.transport_iso9660())
         self.m_mount_cb.assert_called_with(
             "/dev/sr0", dsovf.get_ovf_env, mtype="iso9660")
-        self.assertEqual("mycontent", contents)
-        self.assertEqual("/dev/sr0", fullp)
-        self.assertEqual("myfile", fname)
 
     def test_mount_cb_called_on_blkdevs_with_iso9660_check_regex(self):
         """Check we call mount_cb on blockdevs with iso9660 and match regex"""
@@ -315,25 +304,17 @@ class TestTransportIso9660(CiTestCase):
             '/dev/abc', '/dev/my-cdrom', '/dev/sr0']
         self.m_mount_cb.return_value = ("myfile", "mycontent")
 
-        (contents, fullp, fname) = dsovf.transport_iso9660()
-
+        self.assertEqual("mycontent", dsovf.transport_iso9660())
         self.m_mount_cb.assert_called_with(
             "/dev/sr0", dsovf.get_ovf_env, mtype="iso9660")
-        self.assertEqual("mycontent", contents)
-        self.assertEqual("/dev/sr0", fullp)
-        self.assertEqual("myfile", fname)
 
     def test_mount_cb_not_called_no_matches(self):
         """Check we don't call mount_cb if nothing matches"""
         self.m_mounts.return_value = {}
         self.m_find_devs_with.return_value = ['/dev/vg/myovf']
 
-        (contents, fullp, fname) = dsovf.transport_iso9660()
-
+        self.assertEqual(NOT_FOUND, dsovf.transport_iso9660())
         self.assertEqual(0, self.m_mount_cb.call_count)
-        self.assertEqual(False, contents)
-        self.assertIsNone(fullp)
-        self.assertIsNone(fname)
 
     def test_mount_cb_called_require_iso_false(self):
         """Check we call mount_cb on blockdevs with require_iso=False"""
@@ -341,13 +322,11 @@ class TestTransportIso9660(CiTestCase):
         self.m_find_devs_with.return_value = ['/dev/xvdz']
         self.m_mount_cb.return_value = ("myfile", "mycontent")
 
-        (contents, fullp, fname) = dsovf.transport_iso9660(require_iso=False)
+        self.assertEqual(
+            "mycontent", dsovf.transport_iso9660(require_iso=False))
 
         self.m_mount_cb.assert_called_with(
             "/dev/xvdz", dsovf.get_ovf_env, mtype=None)
-        self.assertEqual("mycontent", contents)
-        self.assertEqual("/dev/xvdz", fullp)
-        self.assertEqual("myfile", fname)
 
     def test_maybe_cdrom_device_none(self):
         """Test maybe_cdrom_device returns False for none/empty input"""
@@ -384,5 +363,62 @@ class TestTransportIso9660(CiTestCase):
         self.assertTrue(dsovf.maybe_cdrom_device('/dev/xvda1'))
         self.assertTrue(dsovf.maybe_cdrom_device('xvdza1'))
 
+
+@mock.patch(MPATH + "util.which")
+@mock.patch(MPATH + "util.subp")
+class TestTransportVmwareGuestinfo(CiTestCase):
+    """Test the com.vmware.guestInfo transport implemented in
+       transport_vmware_guestinfo."""
+
+    rpctool = 'vmware-rpctool'
+    with_logs = True
+    rpctool_path = '/not/important/vmware-rpctool'
+
+    def test_without_vmware_rpctool_returns_notfound(self, m_subp, m_which):
+        m_which.return_value = None
+        self.assertEqual(NOT_FOUND, dsovf.transport_vmware_guestinfo())
+        self.assertEqual(0, m_subp.call_count,
+                         "subp should not be called if no rpctool in path.")
+
+    def test_notfound_on_exit_code_1(self, m_subp, m_which):
+        """If vmware-rpctool exits 1, the transport must return not-found."""
+        m_which.return_value = self.rpctool_path
+        m_subp.side_effect = util.ProcessExecutionError(
+            stdout="", stderr="No value found", exit_code=1, cmd=["unused"])
+        self.assertEqual(NOT_FOUND, dsovf.transport_vmware_guestinfo())
+        self.assertEqual(1, m_subp.call_count)
+        self.assertNotIn("WARNING", self.logs.getvalue(),
+                         "exit code of 1 by rpctool should not cause warning.")
+
+    def test_notfound_if_no_content_but_exit_zero(self, m_subp, m_which):
+        """An exit of 0 with no stdout is treated as a normal not-found.
+
+        This isn't actually a case I've seen. Normally on "not found",
+        rpctool would exit 1 with 'No value found' on stderr.  But cover
+        the case where it exited 0 and just wrote nothing to stdout.
+        """
+        m_which.return_value = self.rpctool_path
+        m_subp.return_value = ('', '')
+        self.assertEqual(NOT_FOUND, dsovf.transport_vmware_guestinfo())
+        self.assertEqual(1, m_subp.call_count)
+
+    def test_notfound_and_warns_on_unexpected_exit_code(self, m_subp, m_which):
+        """If vmware-rpctool exits with a code other than 0 or 1, warn."""
+        m_which.return_value = self.rpctool_path
+        m_subp.side_effect = util.ProcessExecutionError(
+            stdout=None, stderr="No value found", exit_code=2, cmd=["unused"])
+        self.assertEqual(NOT_FOUND, dsovf.transport_vmware_guestinfo())
+        self.assertEqual(1, m_subp.call_count)
+        self.assertIn("WARNING", self.logs.getvalue(),
+                      "exit code of 2 by rpctool should log WARNING.")
+
+    def test_found_when_guestinfo_present(self, m_subp, m_which):
+        """When there is ovf guestinfo content, the transport returns it."""
+        m_which.return_value = self.rpctool_path
+        content = fill_properties({})
+        m_subp.return_value = (content, '')
+        self.assertEqual(content, dsovf.transport_vmware_guestinfo())
+        self.assertEqual(1, m_subp.call_count)
+
 #
 # vi: ts=4 expandtab
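[Editor's note] The new `TestTransportVmwareGuestinfo` cases above encode an exit-code policy for `vmware-rpctool`: exit 1 with 'No value found' is a normal miss, any other nonzero exit warrants a warning, and exit 0 with empty stdout is also a miss. A minimal sketch of that policy follows; the injected `which`/`subp`/`warn` parameters and `ProcessError` class are illustrative stand-ins, not cloud-init's actual signatures.

```python
NOT_FOUND = None


class ProcessError(Exception):
    """Stand-in for a subprocess failure carrying an exit code."""
    def __init__(self, exit_code):
        self.exit_code = exit_code


def transport_vmware_guestinfo(which, subp, warn):
    """Return OVF env content via vmware-rpctool, or None when absent."""
    if which('vmware-rpctool') is None:
        return NOT_FOUND  # no tool in PATH: quiet not-found, subp unused
    try:
        out, _err = subp(['vmware-rpctool', 'info-get guestinfo.ovfEnv'])
    except ProcessError as e:
        if e.exit_code != 1:  # exit 1 ('No value found') is a normal miss
            warn('vmware-rpctool exited %d' % e.exit_code)
        return NOT_FOUND
    return out if out else NOT_FOUND  # exit 0, empty stdout: not-found
```

Returning a bare `NOT_FOUND` (rather than the old `(False, None, None)` 3-tuple the `-` lines remove) is what lets every assertion above collapse to a single `assertEqual`.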
diff --git a/tests/unittests/test_datasource/test_scaleway.py b/tests/unittests/test_datasource/test_scaleway.py
index c2bc7a0..f96bf0a 100644
--- a/tests/unittests/test_datasource/test_scaleway.py
+++ b/tests/unittests/test_datasource/test_scaleway.py
@@ -49,6 +49,9 @@ class MetadataResponses(object):
     FAKE_METADATA = {
         'id': '00000000-0000-0000-0000-000000000000',
         'hostname': 'scaleway.host',
+        'tags': [
+            "AUTHORIZED_KEY=ssh-rsa_AAAAB3NzaC1yc2EAAAADAQABDDDDD",
+        ],
         'ssh_public_keys': [{
             'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA',
             'fingerprint': '2048 06:ae:...  login (RSA)'
@@ -204,10 +207,11 @@ class TestDataSourceScaleway(HttprettyTestCase):
 
         self.assertEqual(self.datasource.get_instance_id(),
                          MetadataResponses.FAKE_METADATA['id'])
-        self.assertEqual(self.datasource.get_public_ssh_keys(), [
-            elem['key'] for elem in
-            MetadataResponses.FAKE_METADATA['ssh_public_keys']
-        ])
+        self.assertEqual(
+            sorted(self.datasource.get_public_ssh_keys()), [
+                u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA',
+                u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABDDDDD',
+            ])
         self.assertEqual(self.datasource.get_hostname(),
                          MetadataResponses.FAKE_METADATA['hostname'])
         self.assertEqual(self.datasource.get_userdata_raw(),
@@ -218,6 +222,70 @@ class TestDataSourceScaleway(HttprettyTestCase):
         self.assertIsNone(self.datasource.region)
         self.assertEqual(sleep.call_count, 0)
 
+    def test_ssh_keys_empty(self):
+        """
+        get_public_ssh_keys() should return an empty list if no ssh keys
+        are available
+        """
+        self.datasource.metadata['tags'] = []
+        self.datasource.metadata['ssh_public_keys'] = []
+        self.assertEqual(self.datasource.get_public_ssh_keys(), [])
+
+    def test_ssh_keys_only_tags(self):
+        """
+        get_public_ssh_keys() should return list of keys available in tags
+        """
+        self.datasource.metadata['tags'] = [
+            "AUTHORIZED_KEY=ssh-rsa_AAAAB3NzaC1yc2EAAAADAQABDDDDD",
+            "AUTHORIZED_KEY=ssh-rsa_AAAAB3NzaC1yc2EAAAADAQABCCCCC",
+        ]
+        self.datasource.metadata['ssh_public_keys'] = []
+        self.assertEqual(sorted(self.datasource.get_public_ssh_keys()), [
+            u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
+            u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABDDDDD',
+        ])
+
+    def test_ssh_keys_only_conf(self):
+        """
+        get_public_ssh_keys() should return list of keys available in
+        ssh_public_keys field
+        """
+        self.datasource.metadata['tags'] = []
+        self.datasource.metadata['ssh_public_keys'] = [{
+            'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA',
+            'fingerprint': '2048 06:ae:...  login (RSA)'
+        }, {
+            'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
+            'fingerprint': '2048 06:ff:...  login2 (RSA)'
+        }]
+        self.assertEqual(
+            sorted(self.datasource.get_public_ssh_keys()), [
+                u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA',
+                u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
+            ])
+
+    def test_ssh_keys_both(self):
+        """
+        get_public_ssh_keys() should return a merge of keys available
+        in ssh_public_keys and tags
+        """
+        self.datasource.metadata['tags'] = [
+            "AUTHORIZED_KEY=ssh-rsa_AAAAB3NzaC1yc2EAAAADAQABDDDDD",
+        ]
+
+        self.datasource.metadata['ssh_public_keys'] = [{
+            'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA',
+            'fingerprint': '2048 06:ae:...  login (RSA)'
+        }, {
+            'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
+            'fingerprint': '2048 06:ff:...  login2 (RSA)'
+        }]
+        self.assertEqual(sorted(self.datasource.get_public_ssh_keys()), [
+            u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA',
+            u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
+            u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABDDDDD',
+        ])
+
     @mock.patch('cloudinit.sources.DataSourceScaleway.EphemeralDHCPv4')
     @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
                 get_source_address_adapter)
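[Editor's note] A pitfall worth flagging for assertions in this file: `list.sort()` sorts in place and returns `None`, so comparing the results of two `.sort()` calls compares `None` with `None` and passes regardless of the lists' contents. `sorted()` returns a new list and gives a real order-independent check. A tiny demonstration:

```python
# list.sort() mutates and returns None; the comparison below is always
# True even though the lists differ, which silently defeats the test.
a = ['ddd', 'ccc']
b = ['zzz']
silently_true = (a.sort() == b.sort())  # None == None

# sorted() returns a new list, so this actually compares contents
# independent of the original ordering.
real_check = sorted(['ddd', 'ccc']) == sorted(['ccc', 'ddd'])
```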
diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py
index 46778e9..756b4fb 100644
--- a/tests/unittests/test_ds_identify.py
+++ b/tests/unittests/test_ds_identify.py
@@ -138,6 +138,9 @@ class DsIdentifyBase(CiTestCase):
             {'name': 'detect_virt', 'RET': 'none', 'ret': 1},
             {'name': 'uname', 'out': UNAME_MYSYS},
             {'name': 'blkid', 'out': BLKID_EFI_ROOT},
+            {'name': 'ovf_vmware_transport_guestinfo',
+             'out': 'No value found', 'ret': 1},
+
         ]
 
         written = [d['name'] for d in mocks]
@@ -475,6 +478,10 @@ class TestDsIdentify(DsIdentifyBase):
         """OVF is identified when iso9660 cdrom path contains ovf schema."""
         self._test_ds_found('OVF')
 
+    def test_ovf_on_vmware_guestinfo_found(self):
+        """OVF guest info is found on vmware."""
+        self._test_ds_found('OVF-guestinfo')
+
     def test_ovf_on_vmware_iso_found_when_vmware_customization(self):
         """OVF is identified when vmware customization is enabled."""
         self._test_ds_found('OVF-vmware-customization')
@@ -499,7 +506,7 @@ class TestDsIdentify(DsIdentifyBase):
 
         # Add recognized labels
         valid_ovf_labels = ['ovf-transport', 'OVF-TRANSPORT',
-                            "OVFENV", "ovfenv"]
+                            "OVFENV", "ovfenv", "OVF ENV", "ovf env"]
         for valid_ovf_label in valid_ovf_labels:
             ovf_cdrom_by_label['mocks'][0]['out'] = blkid_out([
                 {'DEVNAME': 'sda1', 'TYPE': 'ext4', 'LABEL': 'rootfs'},
@@ -773,6 +780,14 @@ VALID_CFG = {
             'dev/sr0': 'pretend ovf iso has ' + OVF_MATCH_STRING + '\n',
         }
     },
+    'OVF-guestinfo': {
+        'ds': 'OVF',
+        'mocks': [
+            {'name': 'ovf_vmware_transport_guestinfo', 'ret': 0,
+             'out': '<?xml version="1.0" encoding="UTF-8"?>\n<Environment'},
+            MOCK_VIRT_IS_VMWARE,
+        ],
+    },
     'ConfigDrive': {
         'ds': 'ConfigDrive',
         'mocks': [
diff --git a/tests/unittests/test_handler/test_handler_lxd.py b/tests/unittests/test_handler/test_handler_lxd.py
index 2478ebc..b63db61 100644
--- a/tests/unittests/test_handler/test_handler_lxd.py
+++ b/tests/unittests/test_handler/test_handler_lxd.py
@@ -62,7 +62,7 @@ class TestLxd(t_help.CiTestCase):
         cc_lxd.handle('cc_lxd', self.lxd_cfg, cc, self.logger, [])
         self.assertFalse(m_maybe_clean.called)
         install_pkg = cc.distro.install_packages.call_args_list[0][0][0]
-        self.assertEqual(sorted(install_pkg), ['lxd', 'zfs'])
+        self.assertEqual(sorted(install_pkg), ['lxd', 'zfsutils-linux'])
 
     @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
     @mock.patch("cloudinit.config.cc_lxd.util")
diff --git a/tests/unittests/test_handler/test_handler_resizefs.py b/tests/unittests/test_handler/test_handler_resizefs.py
index feca56c..3518784 100644
--- a/tests/unittests/test_handler/test_handler_resizefs.py
+++ b/tests/unittests/test_handler/test_handler_resizefs.py
@@ -151,9 +151,9 @@ class TestResizefs(CiTestCase):
                          _resize_ufs(mount_point, devpth))
 
     @mock.patch('cloudinit.util.is_container', return_value=False)
-    @mock.patch('cloudinit.util.get_mount_info')
-    @mock.patch('cloudinit.util.get_device_info_from_zpool')
     @mock.patch('cloudinit.util.parse_mount')
+    @mock.patch('cloudinit.util.get_device_info_from_zpool')
+    @mock.patch('cloudinit.util.get_mount_info')
     def test_handle_zfs_root(self, mount_info, zpool_info, parse_mount,
                              is_container):
         devpth = 'vmzroot/ROOT/freebsd'
@@ -173,6 +173,38 @@ class TestResizefs(CiTestCase):
 
         self.assertEqual(('zpool', 'online', '-e', 'vmzroot', disk), ret)
 
+    @mock.patch('cloudinit.util.is_container', return_value=False)
+    @mock.patch('cloudinit.util.get_mount_info')
+    @mock.patch('cloudinit.util.get_device_info_from_zpool')
+    @mock.patch('cloudinit.util.parse_mount')
+    def test_handle_modern_zfsroot(self, mount_info, zpool_info, parse_mount,
+                                   is_container):
+        devpth = 'zroot/ROOT/default'
+        disk = 'da0p3'
+        fs_type = 'zfs'
+        mount_point = '/'
+
+        mount_info.return_value = (devpth, fs_type, mount_point)
+        zpool_info.return_value = disk
+        parse_mount.return_value = (devpth, fs_type, mount_point)
+
+        cfg = {'resize_rootfs': True}
+
+        def fake_stat(devpath):
+            if devpath == disk:
+                raise OSError("not here")
+            FakeStat = namedtuple(
+                'FakeStat', ['st_mode', 'st_size', 'st_mtime'])  # minimal stat
+            return FakeStat(25008, 0, 1)  # fake block device mode
+
+        with mock.patch('cloudinit.config.cc_resizefs.do_resize') as dresize:
+            with mock.patch('cloudinit.config.cc_resizefs.os.stat') as m_stat:
+                m_stat.side_effect = fake_stat
+                handle('cc_resizefs', cfg, _cloud=None, log=LOG, args=[])
+
+        self.assertEqual(('zpool', 'online', '-e', 'zroot', '/dev/' + disk),
+                         dresize.call_args[0][0])
+
 
 class TestRootDevFromCmdline(CiTestCase):
 
@@ -246,39 +278,39 @@ class TestMaybeGetDevicePathAsWritableBlock(CiTestCase):
 
     def test_maybe_get_writable_device_path_does_not_exist(self):
         """When devpath does not exist, a warning is logged."""
-        info = 'dev=/I/dont/exist mnt_point=/ path=/dev/none'
+        info = 'dev=/dev/I/dont/exist mnt_point=/ path=/dev/none'
         devpath = wrap_and_call(
             'cloudinit.config.cc_resizefs.util',
             {'is_container': {'return_value': False}},
-            maybe_get_writable_device_path, '/I/dont/exist', info, LOG)
+            maybe_get_writable_device_path, '/dev/I/dont/exist', info, LOG)
         self.assertIsNone(devpath)
         self.assertIn(
-            "WARNING: Device '/I/dont/exist' did not exist."
+            "WARNING: Device '/dev/I/dont/exist' did not exist."
             ' cannot resize: %s' % info,
             self.logs.getvalue())
 
     def test_maybe_get_writable_device_path_does_not_exist_in_container(self):
         """When devpath does not exist in a container, log a debug message."""
-        info = 'dev=/I/dont/exist mnt_point=/ path=/dev/none'
+        info = 'dev=/dev/I/dont/exist mnt_point=/ path=/dev/none'
         devpath = wrap_and_call(
             'cloudinit.config.cc_resizefs.util',
             {'is_container': {'return_value': True}},
-            maybe_get_writable_device_path, '/I/dont/exist', info, LOG)
+            maybe_get_writable_device_path, '/dev/I/dont/exist', info, LOG)
         self.assertIsNone(devpath)
         self.assertIn(
-            "DEBUG: Device '/I/dont/exist' did not exist in container."
+            "DEBUG: Device '/dev/I/dont/exist' did not exist in container."
             ' cannot resize: %s' % info,
             self.logs.getvalue())
 
     def test_maybe_get_writable_device_path_raises_oserror(self):
         """When an unexpected OSError is raised by os.stat it is reraised."""
-        info = 'dev=/I/dont/exist mnt_point=/ path=/dev/none'
+        info = 'dev=/dev/I/dont/exist mnt_point=/ path=/dev/none'
         with self.assertRaises(OSError) as context_manager:
             wrap_and_call(
                 'cloudinit.config.cc_resizefs',
                 {'util.is_container': {'return_value': True},
                  'os.stat': {'side_effect': OSError('Something unexpected')}},
-                maybe_get_writable_device_path, '/I/dont/exist', info, LOG)
+                maybe_get_writable_device_path, '/dev/I/dont/exist', info, LOG)
         self.assertEqual(
             'Something unexpected', str(context_manager.exception))
 
diff --git a/tests/unittests/test_handler/test_handler_write_files.py b/tests/unittests/test_handler/test_handler_write_files.py
index 7fa8fd2..bc8756c 100644
--- a/tests/unittests/test_handler/test_handler_write_files.py
+++ b/tests/unittests/test_handler/test_handler_write_files.py
@@ -52,6 +52,18 @@ class TestWriteFiles(FilesystemMockingTestCase):
             "test_simple", [{"content": expected, "path": filename}])
         self.assertEqual(util.load_file(filename), expected)
 
+    def test_append(self):
+        self.patchUtils(self.tmp)
+        existing = "hello "
+        added = "world\n"
+        expected = existing + added
+        filename = "/tmp/append.file"
+        util.write_file(filename, existing)
+        write_files(
+            "test_append",
+            [{"content": added, "path": filename, "append": "true"}])
+        self.assertEqual(util.load_file(filename), expected)
+
     def test_yaml_binary(self):
         self.patchUtils(self.tmp)
         data = util.load_yaml(YAML_TEXT)
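[Editor's note] The new `test_append` exercises the write_files append support added in this snapshot: an entry with `append: true` adds content to the end of an existing file instead of replacing it. A minimal sketch of that behaviour, assuming a hypothetical `write_file` helper (not cloud-init's implementation):

```python
import os
import tempfile


def write_file(path, content, append=False):
    """Write content to path; with append=True, add to the end."""
    with open(path, 'a' if append else 'w') as f:
        f.write(content)


# Mirrors the test: write 'hello ', then append 'world\n'.
path = os.path.join(tempfile.mkdtemp(), 'append.file')
write_file(path, 'hello ')
write_file(path, 'world\n', append=True)
```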
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index 8e38373..5313d2d 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -22,6 +22,7 @@ import os
 import textwrap
 import yaml
 
+
 DHCP_CONTENT_1 = """
 DEVICE='eth0'
 PROTO='dhcp'
@@ -488,8 +489,8 @@ NETWORK_CONFIGS = {
                 address 192.168.21.3/24
                 dns-nameservers 8.8.8.8 8.8.4.4
                 dns-search barley.maas sach.maas
-                post-up route add default gw 65.61.151.37 || true
-                pre-down route del default gw 65.61.151.37 || true
+                post-up route add default gw 65.61.151.37 metric 10000 || true
+                pre-down route del default gw 65.61.151.37 metric 10000 || true
         """).rstrip(' '),
         'expected_netplan': textwrap.dedent("""
             network:
@@ -513,7 +514,8 @@ NETWORK_CONFIGS = {
                             - barley.maas
                             - sach.maas
                         routes:
-                        -   to: 0.0.0.0/0
+                        -   metric: 10000
+                            to: 0.0.0.0/0
                             via: 65.61.151.37
                         set-name: eth99
         """).rstrip(' '),
@@ -537,6 +539,7 @@ NETWORK_CONFIGS = {
                 HWADDR=c0:d6:9f:2c:e8:80
                 IPADDR=192.168.21.3
                 NETMASK=255.255.255.0
+                METRIC=10000
                 NM_CONTROLLED=no
                 ONBOOT=yes
                 TYPE=Ethernet
@@ -561,7 +564,7 @@ NETWORK_CONFIGS = {
                           - gateway: 65.61.151.37
                             netmask: 0.0.0.0
                             network: 0.0.0.0
-                            metric: 2
+                            metric: 10000
                 - type: physical
                   name: eth1
                   mac_address: "cf:d6:af:48:e8:80"
@@ -1161,6 +1164,13 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                      - gateway: 192.168.0.3
                        netmask: 255.255.255.0
                        network: 10.1.3.0
+                     - gateway: 2001:67c:1562:1
+                       network: 2001:67c:1
+                       netmask: ffff:ffff:0
+                     - gateway: 3001:67c:1562:1
+                       network: 3001:67c:1
+                       netmask: ffff:ffff:0
+                       metric: 10000
                   - type: static
                     address: 192.168.1.2/24
                   - type: static
@@ -1197,6 +1207,11 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                      routes:
                      -   to: 10.1.3.0/24
                          via: 192.168.0.3
+                     -   to: 2001:67c:1/32
+                         via: 2001:67c:1562:1
+                     -   metric: 10000
+                         to: 3001:67c:1/32
+                         via: 3001:67c:1562:1
         """),
         'yaml-v2': textwrap.dedent("""
             version: 2
@@ -1228,6 +1243,11 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                 routes:
                 -   to: 10.1.3.0/24
                     via: 192.168.0.3
+                -   to: 2001:67c:1562:8007::1/64
+                    via: 2001:67c:1562:8007::aac:40b2
+                -   metric: 10000
+                    to: 3001:67c:1562:8007::1/64
+                    via: 3001:67c:1562:8007::aac:40b2
             """),
         'expected_netplan-v2': textwrap.dedent("""
          network:
@@ -1249,6 +1269,11 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
                      routes:
                      -   to: 10.1.3.0/24
                          via: 192.168.0.3
+                     -   to: 2001:67c:1562:8007::1/64
+                         via: 2001:67c:1562:8007::aac:40b2
+                     -   metric: 10000
+                         to: 3001:67c:1562:8007::1/64
+                         via: 3001:67c:1562:8007::aac:40b2
              ethernets:
                  eth0:
                      match:
@@ -1349,6 +1374,10 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
         USERCTL=no
         """),
             'route6-bond0': textwrap.dedent("""\
+        # Created by cloud-init on instance boot automatically, do not edit.
+        #
+        2001:67c:1/ffff:ffff:0 via 2001:67c:1562:1  dev bond0
+        3001:67c:1/ffff:ffff:0 via 3001:67c:1562:1 metric 10000 dev bond0
             """),
             'route-bond0': textwrap.dedent("""\
         ADDRESS0=10.1.3.0
@@ -1852,6 +1881,7 @@ class TestRhelSysConfigRendering(CiTestCase):
 
     with_logs = True
 
+    nm_cfg_file = "/etc/NetworkManager/NetworkManager.conf"
     scripts_dir = '/etc/sysconfig/network-scripts'
     header = ('# Created by cloud-init on instance boot automatically, '
               'do not edit.\n#\n')
@@ -1879,14 +1909,24 @@ class TestRhelSysConfigRendering(CiTestCase):
         return dir2dict(dir)
 
     def _compare_files_to_expected(self, expected, found):
+
+        def _try_load(f):
+            ''' Attempt to load shell content, otherwise return as-is '''
+            try:
+                return util.load_shell_content(f)
+            except ValueError:
+                pass
+            # route6- * files aren't shell content, but iproute2 params
+            return f
+
         orig_maxdiff = self.maxDiff
         expected_d = dict(
-            (os.path.join(self.scripts_dir, k), util.load_shell_content(v))
+            (os.path.join(self.scripts_dir, k), _try_load(v))
             for k, v in expected.items())
 
         # only compare the files in scripts_dir
         scripts_found = dict(
-            (k, util.load_shell_content(v)) for k, v in found.items()
+            (k, _try_load(v)) for k, v in found.items()
             if k.startswith(self.scripts_dir))
         try:
             self.maxDiff = None
@@ -2058,6 +2098,10 @@ TYPE=Ethernet
 USERCTL=no
 """
         self.assertEqual(expected, found[nspath + 'ifcfg-interface0'])
+        # The configuration has no nameserver information, so make sure
+        # we do not write the resolv.conf file.
+        respath = '/etc/resolv.conf'
+        self.assertNotIn(respath, found.keys())
 
     def test_config_with_explicit_loopback(self):
         ns = network_state.parse_net_config_data(CONFIG_V1_EXPLICIT_LOOPBACK)
@@ -2136,6 +2180,75 @@ USERCTL=no
         self._compare_files_to_expected(entry[self.expected_name], found)
         self._assert_headers(found)
 
+    def test_check_ifcfg_rh(self):
+        """ifcfg-rh plugin is added to NetworkManager.conf if present."""
+        render_dir = self.tmp_dir()
+        nm_cfg = util.target_path(render_dir, path=self.nm_cfg_file)
+        util.ensure_dir(os.path.dirname(nm_cfg))
+
+        # write a template nm.conf, note plugins is a list here
+        with open(nm_cfg, 'w') as fh:
+            fh.write('# test_check_ifcfg_rh\n[main]\nplugins=foo,bar\n')
+        self.assertTrue(os.path.exists(nm_cfg))
+
+        # render and read
+        entry = NETWORK_CONFIGS['small']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']),
+                                      dir=render_dir)
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+
+        # check ifcfg-rh is in the 'plugins' list
+        config = sysconfig.ConfigObj(nm_cfg)
+        self.assertIn('ifcfg-rh', config['main']['plugins'])
+
+    def test_check_ifcfg_rh_plugins_string(self):
+        """ifcfg-rh plugin is appended when plugins is a string."""
+        render_dir = self.tmp_path("render")
+        os.makedirs(render_dir)
+        nm_cfg = util.target_path(render_dir, path=self.nm_cfg_file)
+        util.ensure_dir(os.path.dirname(nm_cfg))
+
+        # write a template nm.conf, note plugins is a value here
+        util.write_file(nm_cfg, '# test_check_ifcfg_rh\n[main]\nplugins=foo\n')
+
+        # render and read
+        entry = NETWORK_CONFIGS['small']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']),
+                                      dir=render_dir)
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+
+        # check raw content has plugin
+        nm_file_content = util.load_file(nm_cfg)
+        self.assertIn('ifcfg-rh', nm_file_content)
+
+        # check ifcfg-rh is in the 'plugins' list
+        config = sysconfig.ConfigObj(nm_cfg)
+        self.assertIn('ifcfg-rh', config['main']['plugins'])
+
+    def test_check_ifcfg_rh_plugins_no_plugins(self):
+        """enable_ifcfg_plugin creates plugins value if missing."""
+        render_dir = self.tmp_path("render")
+        os.makedirs(render_dir)
+        nm_cfg = util.target_path(render_dir, path=self.nm_cfg_file)
+        util.ensure_dir(os.path.dirname(nm_cfg))
+
+        # write a template nm.conf, note plugins is missing
+        util.write_file(nm_cfg, '# test_check_ifcfg_rh\n[main]\n')
+        self.assertTrue(os.path.exists(nm_cfg))
+
+        # render and read
+        entry = NETWORK_CONFIGS['small']
+        found = self._render_and_read(network_config=yaml.load(entry['yaml']),
+                                      dir=render_dir)
+        self._compare_files_to_expected(entry[self.expected_name], found)
+        self._assert_headers(found)
+
+        # check ifcfg-rh is in the 'plugins' list
+        config = sysconfig.ConfigObj(nm_cfg)
+        self.assertIn('ifcfg-rh', config['main']['plugins'])
+
 
 class TestOpenSuseSysConfigRendering(CiTestCase):
 
@@ -2347,6 +2460,10 @@ TYPE=Ethernet
 USERCTL=no
 """
         self.assertEqual(expected, found[nspath + 'ifcfg-interface0'])
+        # The configuration has no nameserver information, so make sure
+        # the resolv.conf file is not written
+        respath = '/etc/resolv.conf'
+        self.assertNotIn(respath, found.keys())
 
     def test_config_with_explicit_loopback(self):
         ns = network_state.parse_net_config_data(CONFIG_V1_EXPLICIT_LOOPBACK)
diff --git a/tests/unittests/test_util.py b/tests/unittests/test_util.py
index 5a14479..0e71db8 100644
--- a/tests/unittests/test_util.py
+++ b/tests/unittests/test_util.py
@@ -1171,4 +1171,10 @@ class TestGetProcEnv(helpers.TestCase):
         self.assertEqual({}, util.get_proc_env(1))
         self.assertEqual(1, m_load_file.call_count)
 
+    def test_get_proc_ppid(self):
+        """get_proc_ppid returns correct parent pid value."""
+        my_pid = os.getpid()
+        my_ppid = os.getppid()
+        self.assertEqual(my_ppid, util.get_proc_ppid(my_pid))
+
 # vi: ts=4 expandtab
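The new `test_get_proc_ppid` above asserts that `util.get_proc_ppid(os.getpid())` matches `os.getppid()`. A minimal standalone sketch of such a helper (the name `get_proc_ppid_sketch` and the parsing details are illustrative, not cloud-init's actual implementation) can read the ppid field from `/proc/<pid>/stat` on Linux:

```python
import os

def get_proc_ppid_sketch(pid):
    """Return the parent pid by parsing /proc/<pid>/stat (Linux only).

    The stat format is: pid (comm) state ppid ...  The comm field may
    itself contain spaces or parentheses, so split on the LAST ')'
    before reading the whitespace-separated fields that follow.
    """
    with open('/proc/%d/stat' % pid) as stat_file:
        stat = stat_file.read()
    fields = stat.rsplit(')', 1)[1].split()  # ['<state>', '<ppid>', ...]
    return int(fields[1])

# mirrors the assertion in the new unit test (True on Linux)
print(get_proc_ppid_sketch(os.getpid()) == os.getppid())
```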
diff --git a/tests/unittests/test_vmware_config_file.py b/tests/unittests/test_vmware_config_file.py
index 602dedb..f47335e 100644
--- a/tests/unittests/test_vmware_config_file.py
+++ b/tests/unittests/test_vmware_config_file.py
@@ -263,7 +263,7 @@ class TestVmwareConfigFile(CiTestCase):
         nicConfigurator = NicConfigurator(config.nics, False)
         nics_cfg_list = nicConfigurator.generate()
 
-        self.assertEqual(5, len(nics_cfg_list), "number of elements")
+        self.assertEqual(2, len(nics_cfg_list), "number of elements")
 
         nic1 = {'name': 'NIC1'}
         nic2 = {'name': 'NIC2'}
@@ -275,8 +275,6 @@ class TestVmwareConfigFile(CiTestCase):
                     nic1.update(cfg)
                 elif cfg.get('name') == nic2.get('name'):
                     nic2.update(cfg)
-            elif cfg_type == 'route':
-                route_list.append(cfg)
 
         self.assertEqual('physical', nic1.get('type'), 'type of NIC1')
         self.assertEqual('NIC1', nic1.get('name'), 'name of NIC1')
@@ -297,6 +295,9 @@ class TestVmwareConfigFile(CiTestCase):
                 static6_subnet.append(subnet)
             else:
                 self.assertEqual(True, False, 'Unknown type')
+            if 'routes' in subnet:
+                for route in subnet.get('routes'):
+                    route_list.append(route)
 
         self.assertEqual(1, len(static_subnet), 'Number of static subnet')
         self.assertEqual(1, len(static6_subnet), 'Number of static6 subnet')
@@ -351,6 +352,8 @@ class TestVmwareConfigFile(CiTestCase):
 class TestVmwareNetConfig(CiTestCase):
     """Test conversion of vmware config to cloud-init config."""
 
+    maxDiff = None
+
     def _get_NicConfigurator(self, text):
         fp = None
         try:
@@ -420,9 +423,52 @@ class TestVmwareNetConfig(CiTestCase):
               'mac_address': '00:50:56:a6:8c:08',
               'subnets': [
                   {'control': 'auto', 'type': 'static',
-                   'address': '10.20.87.154', 'netmask': '255.255.252.0'}]},
-             {'type': 'route', 'destination': '10.20.84.0/22',
-              'gateway': '10.20.87.253', 'metric': 10000}],
+                   'address': '10.20.87.154', 'netmask': '255.255.252.0',
+                   'routes':
+                       [{'type': 'route', 'destination': '10.20.84.0/22',
+                         'gateway': '10.20.87.253', 'metric': 10000}]}]}],
+            nc.generate())
+
+    def test_cust_non_primary_nic_with_gateway_(self):
+        """A customized non-primary nic can have a gateway."""
+        config = textwrap.dedent("""\
+            [NETWORK]
+            NETWORKING = yes
+            BOOTPROTO = dhcp
+            HOSTNAME = static-debug-vm
+            DOMAINNAME = cluster.local
+
+            [NIC-CONFIG]
+            NICS = NIC1
+
+            [NIC1]
+            MACADDR = 00:50:56:ac:d1:8a
+            ONBOOT = yes
+            IPv4_MODE = BACKWARDS_COMPATIBLE
+            BOOTPROTO = static
+            IPADDR = 100.115.223.75
+            NETMASK = 255.255.255.0
+            GATEWAY = 100.115.223.254
+
+
+            [DNS]
+            DNSFROMDHCP=no
+
+            NAMESERVER|1 = 8.8.8.8
+
+            [DATETIME]
+            UTC = yes
+            """)
+        nc = self._get_NicConfigurator(config)
+        self.assertEqual(
+            [{'type': 'physical', 'name': 'NIC1',
+              'mac_address': '00:50:56:ac:d1:8a',
+              'subnets': [
+                  {'control': 'auto', 'type': 'static',
+                   'address': '100.115.223.75', 'netmask': '255.255.255.0',
+                   'routes':
+                       [{'type': 'route', 'destination': '100.115.223.0/24',
+                         'gateway': '100.115.223.254', 'metric': 10000}]}]}],
             nc.generate())
 
     def test_a_primary_nic_with_gateway(self):
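The customization tests above expect the generated route's destination to be the NIC's own network. A quick standalone check with the stdlib `ipaddress` module (the variable names are illustrative, not part of the patch) shows how `IPADDR` plus `NETMASK` yields that destination:

```python
import ipaddress

# IPADDR 100.115.223.75 with NETMASK 255.255.255.0 yields the
# network 100.115.223.0/24, which the expected route targets.
iface = ipaddress.ip_interface('100.115.223.75/255.255.255.0')
destination = str(iface.network)  # the on-link network in CIDR form

# shape of the route dict the test expects in the subnet's 'routes'
route = {'type': 'route', 'destination': destination,
         'gateway': '100.115.223.254', 'metric': 10000}
print(route['destination'])
```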
diff --git a/tools/ds-identify b/tools/ds-identify
index 5afe5aa..b78b273 100755
--- a/tools/ds-identify
+++ b/tools/ds-identify
@@ -237,7 +237,7 @@ read_fs_info() {
         case "${line}" in
             DEVNAME=*)
                 [ -n "$dev" -a "$ftype" = "iso9660" ] &&
-                    isodevs="${isodevs} ${dev}=$label"
+                    isodevs="${isodevs},${dev}=$label"
                 ftype=""; dev=""; label="";
                 dev=${line#DEVNAME=};;
             LABEL=*) label="${line#LABEL=}";
@@ -247,11 +247,11 @@ read_fs_info() {
         esac
     done
     [ -n "$dev" -a "$ftype" = "iso9660" ] &&
-        isodevs="${isodevs} ${dev}=$label"
+        isodevs="${isodevs},${dev}=$label"
 
     DI_FS_LABELS="${labels%${delim}}"
     DI_FS_UUIDS="${uuids%${delim}}"
-    DI_ISO9660_DEVS="${isodevs# }"
+    DI_ISO9660_DEVS="${isodevs#,}"
 }
 
 cached() {
@@ -726,6 +726,25 @@ ovf_vmware_guest_customization() {
     return 1
 }
 
+ovf_vmware_transport_guestinfo() {
+    [ "${DI_VIRT}" = "vmware" ] || return 1
+    command -v vmware-rpctool >/dev/null 2>&1 || return 1
+    local out="" ret=""
+    out=$(vmware-rpctool "info-get guestinfo.ovfEnv" 2>&1)
+    ret=$?
+    if [ $ret -ne 0 ]; then
+        debug 1 "Running on vmware but rpctool query returned $ret: $out"
+        return 1
+    fi
+    case "$out" in
+        "<?xml"*|"<?XML"*) :;;
+        *) debug 1 "guestinfo.ovfEnv had non-xml content: $out";
+           return 1;;
+    esac
+    debug 1 "Found guestinfo transport."
+    return 0
+}
+
 is_cdrom_ovf() {
     local dev="$1" label="$2"
     # skip devices that don't look like cdrom paths.
@@ -735,9 +754,10 @@ is_cdrom_ovf() {
            return 1;;
     esac
 
+    debug 1 "got label=$label"
     # fast path known 'OVF' labels
     case "$label" in
-        OVF-TRANSPORT|ovf-transport|OVFENV|ovfenv) return 0;;
+        OVF-TRANSPORT|ovf-transport|OVFENV|ovfenv|OVF\ ENV|ovf\ env) return 0;;
     esac
 
     # explicitly skip known labels of other types. rd_rdfe is azure.
@@ -757,9 +777,15 @@ dscheck_OVF() {
     # Azure provides ovf. Skip false positive by dis-allowing.
     is_azure_chassis && return $DS_NOT_FOUND
 
-    # DI_ISO9660_DEVS is <device>=label, like /dev/sr0=OVF-TRANSPORT
+    ovf_vmware_transport_guestinfo && return "${DS_FOUND}"
+
+    # DI_ISO9660_DEVS is <device>=label,<device>=label2
+    # like /dev/sr0=OVF-TRANSPORT,/dev/other=with spaces
     if [ "${DI_ISO9660_DEVS#${UNAVAILABLE}:}" = "${DI_ISO9660_DEVS}" ]; then
-        for tok in ${DI_ISO9660_DEVS}; do
+        local oifs="$IFS"
+        # shellcheck disable=2086
+        { IFS=","; set -- ${DI_ISO9660_DEVS}; IFS="$oifs"; }
+        for tok in "$@"; do
             is_cdrom_ovf "${tok%%=*}" "${tok#*=}" && return $DS_FOUND
         done
     fi
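The IFS-based splitting introduced in `dscheck_OVF` above exists because labels can now contain spaces (e.g. "OVF ENV"), so device=label pairs are joined with commas instead of spaces. A standalone sketch with a sample `DI_ISO9660_DEVS` value (illustrative, not a real device list):

```shell
#!/bin/sh
# Split comma-delimited device=label pairs; a space inside a label
# survives because only "," is in IFS during word splitting.
DI_ISO9660_DEVS="/dev/sr0=OVF-TRANSPORT,/dev/sr1=OVF ENV"

oifs="$IFS"
# shellcheck disable=2086
{ IFS=","; set -- ${DI_ISO9660_DEVS}; IFS="$oifs"; }

for tok in "$@"; do
    # ${tok%%=*} is the device, ${tok#*=} is the (possibly spacey) label
    echo "dev=${tok%%=*} label=${tok#*=}"
done
```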
diff --git a/tools/run-container b/tools/run-container
index 6dedb75..852f4d1 100755
--- a/tools/run-container
+++ b/tools/run-container
@@ -373,6 +373,7 @@ wait_for_boot() {
             inside "$name" sh -c "echo proxy=$http_proxy >> /etc/yum.conf"
             inside "$name" sed -i s/enabled=1/enabled=0/ \
                 /etc/yum/pluginconf.d/fastestmirror.conf
+            inside "$name" sh -c "sed -i '/^#baseurl=/s/#//' /etc/yum.repos.d/*.repo"
         else
             debug 1 "do not know how to configure proxy on $OS_NAME"
         fi
diff --git a/tox.ini b/tox.ini
index 2fb3209..d371720 100644
--- a/tox.ini
+++ b/tox.ini
@@ -21,7 +21,7 @@ setenv =
 basepython = python3
 deps =
     # requirements
-    pylint==1.8.1
+    pylint==2.2.2
     # test-requirements because unit tests are now present in cloudinit tree
     -r{toxinidir}/test-requirements.txt
 commands = {envpython} -m pylint {posargs:cloudinit tests tools}
@@ -75,7 +75,7 @@ deps =
     jsonpatch==1.16
     six==1.10.0
     # test-requirements
-    httpretty==0.8.6
+    httpretty==0.9.6
     mock==1.3.0
     nose==1.3.7
     unittest2==1.1.0
diff --git a/udev/66-azure-ephemeral.rules b/udev/66-azure-ephemeral.rules
index b9c5c3e..3032f7e 100644
--- a/udev/66-azure-ephemeral.rules
+++ b/udev/66-azure-ephemeral.rules
@@ -4,10 +4,26 @@ SUBSYSTEM!="block", GOTO="cloud_init_end"
 ATTRS{ID_VENDOR}!="Msft", GOTO="cloud_init_end"
 ATTRS{ID_MODEL}!="Virtual_Disk", GOTO="cloud_init_end"
 
-# Root has a GUID of 0000 as the second value
+# Root has a GUID of 0000 as the second value on Gen1 instances
 # The resource/resource has GUID of 0001 as the second value
 ATTRS{device_id}=="?00000000-0000-*", ENV{fabric_name}="azure_root", GOTO="ci_azure_names"
 ATTRS{device_id}=="?00000000-0001-*", ENV{fabric_name}="azure_resource", GOTO="ci_azure_names"
+
+# Azure well known SCSI controllers on Gen2 instances
+ATTRS{device_id}=="{f8b3781a-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi0", GOTO="azure_datadisk"
+# Do not create symlinks for scsi[1-3] or unmatched device_ids
+ATTRS{device_id}=="{f8b3781b-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi1", GOTO="cloud_init_end"
+ATTRS{device_id}=="{f8b3781c-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi2", GOTO="cloud_init_end"
+ATTRS{device_id}=="{f8b3781d-1e82-4818-a1c3-63d806ec15bb}", ENV{fabric_scsi_controller}="scsi3", GOTO="cloud_init_end"
+GOTO="cloud_init_end"
+
+# Map scsi#/lun# fabric_name to azure_root|resource on Gen2 instances
+LABEL="azure_datadisk"
+ENV{DEVTYPE}=="partition", PROGRAM="/bin/sh -c 'readlink /sys/class/block/%k/../device|cut -d: -f4'", ENV{fabric_name}="$env{fabric_scsi_controller}/lun$result"
+ENV{DEVTYPE}=="disk", PROGRAM="/bin/sh -c 'readlink /sys/class/block/%k/device|cut -d: -f4'", ENV{fabric_name}="$env{fabric_scsi_controller}/lun$result"
+
+ENV{fabric_name}=="scsi0/lun0", ENV{fabric_name}="azure_root", GOTO="ci_azure_names"
+ENV{fabric_name}=="scsi0/lun1", ENV{fabric_name}="azure_resource", GOTO="ci_azure_names"
 GOTO="cloud_init_end"
 
 # Create the symlinks
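The Gen2 udev `PROGRAM` lines above derive the LUN from the sysfs `device` symlink, whose target ends in a SCSI address of the form host:channel:target:lun. A standalone sketch of that extraction (the sample link target is illustrative, not a real sysfs path):

```shell
#!/bin/sh
# The device symlink target ends in host:channel:target:lun;
# cut -d: -f4 keeps only the lun number, which is then combined
# with the controller name into the fabric_name used by the rules.
link_target="../../../0:0:0:1"
lun=$(printf '%s\n' "$link_target" | cut -d: -f4)
fabric_scsi_controller="scsi0"
fabric_name="${fabric_scsi_controller}/lun${lun}"
echo "$fabric_name"
```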
